# Optimizing DDPM Sampling with Shortcut Fine-Tuning

**Authors:** Ying Fan, Kangwook Lee. **Published:** 2023-01-31. **Link:** [arXiv:2301.13362v4](http://arxiv.org/abs/2301.13362v4).
###### Abstract
In this study, we propose _Shortcut Fine-Tuning (SFT)_, a new approach for addressing the challenge of fast sampling of pretrained Denoising Diffusion Probabilistic Models (DDPMs). SFT advocates for the fine-tuning of DDPM samplers through the direct minimization of Integral Probability Metrics (IPM), instead of learning the backward diffusion process. This enables samplers to discover an alternative and more efficient sampling shortcut, deviating from the backward diffusion process. We also propose a new algorithm that is similar to the policy gradient method for fine-tuning DDPMs by proving that under certain assumptions, the gradient descent of diffusion models is equivalent to the policy gradient approach. Through empirical evaluation, we demonstrate that our fine-tuning method can further enhance existing fast DDPM samplers, resulting in sample quality comparable to or even surpassing that of the full-step model across various datasets.
## 1 Introduction
Denoising diffusion probabilistic models (DDPMs) (Ho et al., 2020) are parameterized stochastic Markov chains, which are learned by gradually adding Gaussian noise to the data as the forward process, computing the backward process via the posterior, and then training the DDPM sampler to match the backward process. Advances in DDPM (Nichol and Dhariwal, 2021; Dhariwal and Nichol, 2021) have shown its potential to rival GANs (Goodfellow et al., 2014) in generative tasks. One drawback of DDPM is that a large number of steps \(T\) is needed. As a result, there is a line of work dedicated to sampling with fewer steps \(T^{\prime}\ll T\) while retaining good performance. Most works aim to better approximate the backward process, viewed as a discretized stochastic differential equation (SDE), with fewer steps, generally via better noise estimation and sub-sequence scheduling (Kong and Ping, 2021; San-Roman et al., 2021; Lam et al., 2021; Watson et al., 2021; Jolicoeur-Martineau et al., 2021; Bao et al., 2021, 2022). Other works aim at approximating the backward process with fewer steps by changing the noise distribution to non-Gaussian (Nachmani et al., 2021; Xiao et al., 2021).1
Footnote 1: There is another line of work dedicated to fast sampling DDIM (Song et al., 2020) that uses deterministic Markov chains, which we will discuss in Section 5.
To the best of our knowledge, existing fast samplers of DDPM stick to imitating the backward process. If we view data generation as a reinforcement learning (RL) task and the backward process as a demonstration of how to generate data from noise, then imitating the backward process can be viewed as imitation learning (Hussein et al., 2017), which is one way to learn a generative model as a policy. Naturally, one may wonder if we can do better than pure imitation, since learning via imitation is generally useful but rarely optimal. It generally takes extra interactions with the "environment" to find an optimal policy (Vecerik et al., 2017).
Motivated by the above observation, we study the following underexplored question:
_Can we improve DDPM sampling by **not** following the backward process?_
In this work, we show that this is indeed possible. We fine-tune pretrained DDPM samplers by directly minimizing an integral probability metric (IPM) and show that the fine-tuned DDPM samplers have significantly better generation quality. In this way, we can still enjoy diffusion models' multistep capabilities with no need to change the noise distribution, while improving the performance with fewer steps.
More concretely, we first show that performing gradient descent of the DDPM sampler w.r.t. the IPM is equivalent to policy gradient, which echoes the aforementioned RL view but with a changing reward from the optimal critic function
in IPM. In addition, we provide a surrogate function that can give insights for monotonic improvements. Finally, we provide a fine-tuning algorithm with alternative updates between the critic and the generator.
We summarize our main contributions as follows:
* (Section 4.1) We propose a novel algorithm to fine-tune DDPM samplers with direct IPM minimization, and we show that performing gradient descent of diffusion models w.r.t. IPM is equivalent to policy gradient.
* (Section 4.2) We present a surrogate function of IPM in theory, which provides insights on conditions for monotonic improvement and algorithm design.
* (Section 4.3.2) We propose a regularization for the critic based on the baseline function, which shows benefits for the policy gradient training.
* (Section 6) Empirically, we show that our fine-tuning can improve DDPM sampling performance in two cases: when \(T\) itself is small, and when \(T\) is large but using a fast sampler where \(T^{\prime}\ll T\). In both cases, our fine-tuning achieves comparable or even higher sample quality than the DDPM with 1000 steps using 10 sampling steps.
## 2 Background
### Denoising Diffusion Probabilistic Models (DDPM)
Here we consider denoising diffusion probabilistic models (DDPMs) as introduced in Ho et al. (2020). Consider a data distribution \(x_{0}\sim q_{0},x_{0}\in\mathbb{R}^{n}\). Define the forward noising process: for \(t\in\{0,\dots,T-1\}\),
\[q(x_{t+1}|x_{t}):=\mathcal{N}(\sqrt{1-\beta_{t}}x_{t},\beta_{t}I), \tag{1}\]
where \(x_{1},\dots,x_{T}\) are variables of the same dimensionality as \(x_{0}\), and \(\beta_{1:T}\) is the variance schedule.
We can compute the posterior as a backward process:
\[q(x_{t}|x_{t+1},x_{0})=\mathcal{N}(\tilde{\mu}_{t+1}(x_{t+1},x_{0}),\tilde{ \beta}_{t+1}I), \tag{2}\]
where \(\tilde{\mu}_{t+1}(x_{t+1},x_{0})=\frac{\sqrt{\bar{\alpha}_{t}}\,\beta_{t+1}}{1-\bar{\alpha}_{t+1}}x_{0}+\frac{\sqrt{\alpha_{t+1}}\,(1-\bar{\alpha}_{t})}{1-\bar{\alpha}_{t+1}}x_{t+1}\), \(\alpha_{t+1}=1-\beta_{t+1}\), \(\bar{\alpha}_{t+1}=\prod_{s=1}^{t+1}\alpha_{s}\), and \(\tilde{\beta}_{t+1}=\frac{1-\bar{\alpha}_{t}}{1-\bar{\alpha}_{t+1}}\beta_{t+1}\).
We define a DDPM sampler parameterized by \(\theta\), which generates data starting from some pure noise \(x_{T}\sim p_{T}\):
\[\begin{split}& x_{T}\sim p_{T}=\mathcal{N}(0,I),\\ & x_{t}\sim p_{t}^{\theta}(x_{t}|x_{t+1}),\\ & p_{t}^{\theta}(x_{t}|x_{t+1}):=\mathcal{N}\big{(}\mu_{t+1}^{ \theta}(x_{t+1}),\Sigma_{t+1}\big{)},\end{split} \tag{3}\]
Figure 1: A visual illustration of the key idea of Shortcut Fine-Tuning (SFT). DDPMs aim at learning the backward diffusion model, but this approach is limited with a small number of steps. We propose the idea of _not_ following the backward process and exploring other unexplored paths that can lead to improved data generation. To this end, we directly minimize an IPM and develop a policy gradient-like optimization algorithm. Our experimental results show that one can significantly improve data generation quality by fine-tuning a pretrained DDPM model with SFT.
where \(\Sigma_{t+1}\) is generally chosen as \(\beta_{t+1}I\) or \(\tilde{\beta}_{t+1}I\). 2
Footnote 2: In this work we consider a DDPM sampler with a fixed variance schedule \(\beta_{1:T}\), while it could also be learned as in Nichol and Dhariwal (2021).
Define
\[p_{x_{0:T}}^{\theta}:=p_{T}(x_{T})\prod_{t=0}^{T-1}p_{t}^{\theta}(x_{t}|x_{t+1}), \tag{4}\]
and we have \(p_{0}^{\theta}(x_{0})=\int\!p_{x_{0:T}}^{\theta}(x_{0:T})dx_{1:T}\).
The sampler is trained via minimizing the ELBO:
\[\mathbb{E}_{q_{0}}\left[-\log p_{0}^{\theta}(x_{0})\right]\leqslant\mathbb{E}_{q}\left[-\log\frac{p_{x_{0:T}}^{\theta}(x_{0:T})}{q(x_{1:T}|x_{0})}\right], \tag{5}\]
which is equivalent to minimizing the sum of KL divergence below:
\[J=\sum_{t=0}^{T-1}D_{KL}(q(x_{t}|x_{t+1},x_{0}),p_{t}^{\theta}(x_{t}|x_{t+1})). \tag{6}\]
Optimizing the above loss can be viewed as matching the conditional generator \(p_{t}^{\theta}(x_{t}|x_{t+1})\) with the posterior distribution \(q(x_{t}|x_{t+1},x_{0})\) for each step. Song et al. (2020) have also shown that \(J\) is equivalent to score-matching loss when formulating the forward and backward process as a discrete version of stochastic differential equations.
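For concreteness, the ancestral sampling recursion in Eq. (3) can be sketched in a few lines of PyTorch. This is only an illustrative sketch: `mu_theta` stands for the mean-prediction network \(\mu_{t+1}^{\theta}\) and `sigmas` for the chosen per-step standard deviations (so \(\Sigma_{t+1}\) corresponds to `sigmas[t+1]**2 * I`); neither name comes from the paper's released code.

```python
import torch

@torch.no_grad()
def ddpm_sample(mu_theta, sigmas, T, shape):
    # Ancestral sampling of Eq. (3): start from x_T ~ N(0, I), then draw
    # x_t ~ N(mu^theta_{t+1}(x_{t+1}), Sigma_{t+1}) for t = T-1, ..., 0.
    x = torch.randn(shape)                               # x_T
    for t in reversed(range(T)):
        mean = mu_theta(x, t + 1)                        # mu^theta_{t+1}(x_{t+1})
        x = mean + sigmas[t + 1] * torch.randn_like(x)   # Sigma_{t+1} = sigmas[t+1]^2 * I
    return x                                             # approximate sample from p_0^theta
```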
### Integral Probability Metrics (IPM)
Let \(\mathcal{A}\) be a set of parameters such that each \(\alpha\in\mathcal{A}\) defines a critic \(f_{\alpha}:\mathbb{R}^{n}\rightarrow\mathbb{R}\). Given a critic \(f_{\alpha}\) and two distributions \(p_{0}^{\theta}\) and \(q_{0}\), we define
\[g(p_{0}^{\theta},f_{\alpha},q_{0}):=\underset{x_{0}\sim p_{0}^{\theta}}{\mathbb{ E}}[f_{\alpha}(x_{0})]-\underset{x_{0}\sim q_{0}}{\mathbb{E}}[f_{\alpha}(x_{0})]. \tag{7}\]
Let
\[\Phi(p_{0}^{\theta},q_{0}):=\sup_{\alpha\in\mathcal{A}}g(p_{0}^{\theta},f_{ \alpha},q_{0}). \tag{8}\]
If \(\mathcal{A}\) satisfies that \(\forall\alpha\in\mathcal{A}\), \(\exists\alpha^{\prime}\in\mathcal{A}\) s.t. \(f_{\alpha^{\prime}}=-f_{\alpha}\), then \(\Phi(p_{0}^{\theta},q_{0})\) is a pseudo-metric over the probability space of \(\mathbb{R}^{n}\), making it a so-called integral probability metric (IPM).
In this paper, we consider \(\mathcal{A}\) that makes \(\Phi(p_{0}^{\theta},q_{0})\) an IPM. For example, when \(\mathcal{A}=\{\alpha:||f_{\alpha}||_{L}\leqslant 1\}\), \(\Phi(p_{0}^{\theta},q_{0})\) is the Wasserstein-1 distance; when \(\mathcal{A}=\{\alpha:||f_{\alpha}||_{\infty}\leqslant 1\}\), \(\Phi(p_{0}^{\theta},q_{0})\) is the total variation distance; it also includes the maximum mean discrepancy (MMD) when \(\mathcal{A}\) parameterizes the unit ball of a reproducing kernel Hilbert space (RKHS).
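As an illustration of Eqs. (7)-(8), the critic gap \(g(p_{0}^{\theta},f_{\alpha},q_{0})\) is estimated in practice from minibatches. The following minimal sketch assumes `critic` is some parameterized \(f_{\alpha}\) (suitably regularized, e.g. approximately 1-Lipschitz for the Wasserstein-1 case); it is not code from the paper.

```python
import torch

def ipm_gap(critic, x_gen, x_data):
    # Monte Carlo estimate of g(p_0^theta, f_alpha, q_0) in Eq. (7):
    # E_{x ~ p_0^theta}[f_alpha(x)] - E_{x ~ q_0}[f_alpha(x)].
    return critic(x_gen).mean() - critic(x_data).mean()

# The IPM of Eq. (8) is the supremum of this gap over the critic family A; in practice
# it is approximated by (regularized) gradient ascent on the critic parameters alpha.
```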
## 3 Motivation
### Issues with Existing DDPM Samplers
Here we review the existing issues with DDPM samplers 1) when \(T\) is not large enough, and 2) when the number of sampling steps \(T^{\prime}\ll T\), which inspires us to design our fine-tuning algorithm.
**Case 1. Issues caused by training DDPM with a small \(T\) (Fig 1).** Given a score-matching loss \(J\), the upper bound on the Wasserstein-2 distance is given by Kwon et al. (2022):
\[W_{2}(p_{0}^{\theta},q_{0})\leqslant\mathcal{O}(\sqrt{J})+I(T)W_{2}(p_{T},q_{ T}), \tag{9}\]
where \(I(T)\) is non-exploding and \(W_{2}(p_{T},q_{T})\) decays exponentially in \(T\) as \(T\rightarrow\infty\). From the inequality above, one sufficient condition for the score-matching loss \(J\) to be viewed as optimizing the Wasserstein distance is that \(T\) is large enough such that \(I(T)W_{2}(p_{T},q_{T})\to 0\). Now we consider the case when \(T\) is small and \(p_{T}\not\approx q_{T}\). The upper bound in Eq. (9) can then be high, since \(W_{2}(p_{T},q_{T})\) is not negligible. As shown in Fig 1, pure imitation \(p_{t}^{\theta}(x_{t}|x_{t+1})\approx q(x_{t}|x_{t+1},x_{0})\) would not lead the model exactly to \(q_{0}\) when \(p_{T}\) and \(q_{T}\) are not close enough.
**Case 2. Issues caused by a smaller number of sub-sampling steps (\(T^{\prime}\)) (Fig 6 in Appendix A).** We consider DDPM sub-sampling and other fast sampling techniques, where \(T\) is large enough s.t. \(p_{T}\approx q_{T}\), but we try to sample with fewer sampling steps (\(T^{\prime}\)). This is generally done by choosing \(\tau\) to be an increasing sub-sequence of \(T^{\prime}\) steps in \([0,T]\) starting from \(0\). Many works have been dedicated to finding a subsequence and variance schedule that make the sub-sampling steps match the full-step backward process as closely as possible (Kong and Ping, 2021; Bao et al., 2021, 2022). However, this inevitably causes degraded sample quality if each step is Gaussian: as discussed in Salimans and Ho (2021) and Xiao et al. (2021), a multi-step Gaussian sampler cannot be distilled into a one-step Gaussian sampler without loss of fidelity.
### Problem Formulation
In both cases mentioned above, there might exist paths other than imitating the backward process that can reach the data distribution with fewer Gaussian steps. Thus one may expect to overcome these issues by minimizing the IPM.
Here we present the formulation of our problem setting. We assume that there is a target data distribution \(q_{0}\). Given a set of critic parameters \(\mathcal{A}\) s.t. \(\Phi(p_{0}^{\theta},q_{0})=\sup_{\alpha\in\mathcal{A}}g(p_{0}^{\theta},f_{ \alpha},q_{0})\) is an IPM, and given a DDPM sampler with \(T\) steps parameterized by \(\theta\), our goal is to solve:
\[\min_{\theta}\Phi(p_{0}^{\theta},q_{0}). \tag{10}\]
### Pathwise Derivative Estimation for Shortcut Fine-Tuning: Properties and Potential Issues
One straightforward approach is to optimize \(\Phi(p_{0}^{\theta},q_{0})\) using pathwise derivative estimation (Rezende et al., 2014) like GAN training, which we denote as **SFT** (shortcut fine-tuning). We can recursively define the stochastic mappings:
\[h_{\theta,T}(x_{T}):=x_{T}, \tag{11}\]
\[h_{\theta,t}(x_{T}):=\mu_{t+1}^{\theta}(h_{\theta,t+1}(x_{T}))+\epsilon_{t+1}, \tag{12}\]
\[x_{0}=h_{\theta,0}(x_{T}), \tag{13}\]
where \(x_{T}\sim\mathcal{N}(0,I),\epsilon_{t+1}\sim\mathcal{N}(0,\Sigma_{t+1}),t=0,...,T-1\).
Then we can write the objective function as:
\[\Phi(p_{0}^{\theta},q_{0})=\sup_{\alpha\in\mathcal{A}}\mathop{\mathbb{E}}_{x_{ T},\epsilon_{1:T}}[f_{\alpha}(h_{\theta,0}(x_{T}))]-\mathop{\mathbb{E}}_{x_{0} \sim q_{0}}[f_{\alpha}(x_{0})] \tag{14}\]
Assume that \(\exists\alpha\in\mathcal{A}\) s.t. \(g(p_{0}^{\theta},f_{\alpha},q_{0})=\Phi(p_{0}^{\theta},q_{0})\), and let \(\alpha^{*}(p_{0}^{\theta},q_{0})\in\{\alpha:g(p_{0}^{\theta},f_{\alpha},q_{0})=\Phi(p_{0}^{\theta},q_{0})\}\). When \(f_{\alpha}\) is 1-Lipschitz, we can compute the gradient in a way similar to WGAN (Arjovsky et al., 2017):
\[\nabla_{\theta}\Phi(p_{0}^{\theta},q_{0})=\mathop{\mathbb{E}}_{x_{T},\epsilon_ {1:T}}\Big{[}\nabla_{\theta}f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(h_{\theta,0}( x_{T}))\Big{]}\,. \tag{15}\]
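A hedged sketch of how the pathwise estimator in Eq. (15) is typically implemented (WGAN-style backpropagation through the whole chain) is given below; all names (`mu_theta`, `sigmas`, `opt_theta`, the batch shape) are illustrative placeholders rather than the authors' code.

```python
import torch

def sft_generator_step(mu_theta, critic, sigmas, T, batch_shape, opt_theta):
    # Pathwise (reparameterized) estimator of Eq. (15): generate x_0 = h_{theta,0}(x_T)
    # keeping the computation graph through all T steps, then backpropagate the critic
    # value through the whole chain, as in WGAN generator training.
    x = torch.randn(batch_shape)                     # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = torch.randn_like(x)                    # epsilon_{t+1} ~ N(0, Sigma_{t+1})
        x = mu_theta(x, t + 1) + sigmas[t + 1] * eps
    loss = critic(x).mean()                          # E[f_alpha(h_{theta,0}(x_T))]
    opt_theta.zero_grad()
    loss.backward()                                  # differentiates through the T-step composition
    opt_theta.step()
    return loss.item()
```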
**Requirements on the family of critics \(\mathcal{A}\).** In Eq. (15), we can observe that the critic \(f_{\alpha^{*}}\) needs to provide meaningful gradients (w.r.t. the input) for the generator. If the gradient of the critic happens to be 0 at some generated data points, then even if the critic's value still makes sense, the critic provides no signal for the generator at these points.4 Thus GANs trained with IPMs generally need to choose \(\mathcal{A}\) such that the gradient of the critic is regularized: for example, Lipschitz constraints like weight clipping (Arjovsky et al., 2017) and gradient penalty (Gulrajani et al., 2017) have been used for WGAN, and relatively wide kernel widths are used in MMD GAN (Li et al., 2017).
Footnote 4: For example, MMD with very narrow kernels can produce such critic functions, where each data point defines the center of the corresponding kernel which yields gradient 0.
**Potential issues.** There might be some issues when computing Eq. (15) in practice. It involves differentiating a composite function with \(T\) steps, which faces problems similar to those encountered when training RNNs:
* Gradient vanishing may result in long-distance dependency being lost;
* Gradient explosion may occur;
* Memory usage is high.
## 4 Method: Shortcut Fine-Tuning with Policy Gradient (SFT-PG)
We note that Eq. (15) is not the only way to estimate the gradient w.r.t. IPM. In this section, we show that performing gradient descent of \(\Phi(p_{0}^{\theta},q_{0})\) can be equivalent to policy gradient (Section 4.1), provide analysis towards monotonic improvement (Section 4.2) and algorithm design (Section 4.3).
### Policy Gradient Equivalence
By modeling the conditional probability through the trajectory, we provide an alternative way for gradient estimation which is equivalent to policy gradient, without differentiating through the composite functions.
**Theorem 4.1**.: _(Policy gradient equivalence) Assume that both \(p_{x_{0:T}}^{\theta}(x_{0:T})f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})\) and \(\nabla_{\theta}p_{x_{0:T}}^{\theta}(x_{0:T})f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})\) are continuous functions w.r.t. \(\theta\) and \(x_{0:T}\). Then_
\[\nabla_{\theta}\Phi(p_{0}^{\theta},q_{0})=\mathop{\mathbb{E}}_{p_{x_{0:T}}^{\theta}}\Big{[}f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})\sum_{t=0}^{T-1}\nabla_{\theta}\log p_{t}^{\theta}(x_{t}|x_{t+1})\Big{]}. \tag{16}\]
Proof.: \[\begin{split}&\nabla_{\theta}\Phi(p_{0}^{\theta},q_{0})\\ &=\nabla_{\theta}\int p_{0}^{\theta}(x_{0})f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})dx_{0}+\nabla_{\theta}\alpha^{*}(p_{0}^{\theta},q_{0})\,\nabla_{\alpha^{*}(p_{0}^{\theta},q_{0})}\int p_{0}^{\theta}(x_{0})f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})dx_{0},\end{split} \tag{17}\]
where \(\nabla_{\alpha^{*}(p_{0}^{\theta},q_{0})}\int p_{0}^{\theta}(x_{0})f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})dx_{0}\) is \(0\) by the envelope theorem. Then we have
\[\begin{split}\nabla_{\theta}\int p_{0}^{\theta}(x_{0})f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})dx_{0}&=\nabla_{\theta}\int\left(\int p_{x_{0:T}}^{\theta}(x_{0:T})dx_{1:T}\right)f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})dx_{0}\\ &=\nabla_{\theta}\int p_{x_{0:T}}^{\theta}(x_{0:T})f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})dx_{0:T}\\ &=\mathop{\mathbb{E}}_{p_{x_{0:T}}^{\theta}}\left[f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})\sum_{t=0}^{T-1}\nabla_{\theta}\log p_{t}^{\theta}(x_{t}|x_{t+1})\right],\end{split} \tag{18}\]
where the last equality follows from the continuity assumptions, which allow exchanging the integral and the derivative, together with the log-derivative trick.
**MDP construction for policy gradient equivalence.** Here we explain why Eq. (16) can be viewed as policy gradient. We can construct an MDP with a finite horizon \(T\): treat \(p_{t}^{\theta}(x_{t}|x_{t+1})\) as a policy, and assume that the transition is the identity mapping, so that the action is to choose the next state. Consider the reward to be \(f_{\alpha^{*}(p_{0}^{\theta},q_{0})}(x_{0})\) at the final step, and \(0\) at all other steps. Then Eq. (16) is equivalent to performing policy gradient [20].
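The following is a minimal sketch of a score-function (REINFORCE-style) implementation of Eq. (16) for the Gaussian sampler of Eq. (3): the trajectory is sampled without a computation graph and the Gaussian log-densities are recomputed with gradients (constant terms are dropped since they do not depend on \(\theta\)). Names and the `(batch, features)` tensor layout are assumptions, and the baseline subtraction of Section 4.3.1 is omitted here.

```python
import torch

def sft_pg_loss(mu_theta, critic, sigmas, T, batch_shape):
    # Score-function (REINFORCE-style) surrogate whose gradient is Eq. (16).
    with torch.no_grad():                                   # sample the chain without a graph
        xs = [torch.randn(batch_shape)]                     # xs[0] = x_T, batch_shape = (B, d)
        for t in reversed(range(T)):
            mean = mu_theta(xs[-1], t + 1)
            xs.append(mean + sigmas[t + 1] * torch.randn_like(mean))
        reward = critic(xs[-1]).flatten()                   # f_alpha(x_0), one scalar per sample
    logp = 0.0
    for k, t in enumerate(reversed(range(T))):              # xs[k] = x_{t+1}, xs[k+1] = x_t
        mean = mu_theta(xs[k], t + 1)                       # recomputed WITH gradients
        var = sigmas[t + 1] ** 2
        logp = logp - ((xs[k + 1] - mean) ** 2).flatten(1).sum(dim=1) / (2 * var)
    # Minimizing this surrogate performs gradient descent along Eq. (16).
    return (reward * logp).mean()
```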
**Comparing Eq. (15) and Eq. (16):**
* Eq. (15) uses the gradient of the critic, while Eq. (16) only uses the value of the critic. This indicates that for policy gradient, weaker conditions are required for critics to provide meaningful guidance for the generator, which means more choices of \(\mathcal{A}\) can be applied here.
* We compute the sum of gradients for each step in Eq. (16), which does not suffer from exploding or vanishing gradients. Also, we do not need to track gradients of the generated sequence during \(T\) steps.
* However, policy gradient methods usually suffer from higher variance [15]. Thanks to similar techniques in RL, we can reduce the variance via a baseline trick, which will be discussed in Section 4.3.1.
In conclusion, Eq. (16) matches Eq. (15) in expectation, with benefits such as numerical stability, memory efficiency, and more choices of \(\mathcal{A}\). It may suffer from higher variance, but the baseline trick helps. We denote this kind of method by **SFT-PG** (shortcut fine-tuning with policy gradient).
**Empirical comparison.** We also conduct experiments on toy datasets (Fig 3), where we show that the performance of Eq. (16) with the baseline trick is at least comparable to that of Eq. (15) at convergence when both use the same gradient penalty (GP) for critic regularization. We further observe that SFT-PG with the new baseline regularization (B) has a noticeably better final performance than SFT with gradient penalty. The regularization methods are introduced in Section 4.3.2, and details are in Section 6.2.2.
### Towards Monotonic Improvement
The gradient updates discussed in Eq. (15) and Eq. (16) only justify a single gradient step, given a fixed critic \(f_{\alpha^{*}(p_{0}^{\theta},q_{0})}\) that is optimal for the current \(\theta\). Some questions remain: When is our update guaranteed to yield an improvement? Can we take more than one gradient step and still obtain a descent? We answer these questions by providing a surrogate function of the IPM.
**Theorem 4.2**.: _(**The surrogate function of IPM**) Assume that \(g(p_{0}^{\theta},f_{\alpha},q_{0})\) is Lipschitz w.r.t. \(\theta\), given \(q_{0}\) and \(\alpha\in\mathcal{A}\). Given a fixed critic \(f_{\alpha*(p_{0}^{\theta},q_{0})}\), there exists \(l\geq 0\) such that \(\Phi(p_{0}^{\theta^{\prime}},q_{0})\) is upper bounded by the surrogate function below:_
\[\Phi(p_{0}^{\theta^{\prime}},q_{0})\leq g(p_{0}^{\theta^{\prime}},f_{\alpha*( p_{0}^{\theta},q_{0})},q_{0})+2l||\theta^{\prime}-\theta||. \tag{19}\]
Proof of Theorem 4.2 can be found in Appendix B, and an illustration is given in Fig 2. Given a critic that is optimal w.r.t. \(\theta\), \(\Phi(p_{0}^{\theta^{\prime}},q_{0})\) is unknown for \(\theta^{\prime}\neq\theta\). But if we obtain a descent of the surrogate function, we are also guaranteed a descent of \(\Phi(p_{0}^{\theta^{\prime}},q_{0})\), which justifies further updates even when \(\theta^{\prime}\neq\theta\).
Moreover, using a Lagrange multiplier, we can convert minimizing the surrogate function into a constrained optimization problem: optimize \(g(p_{0}^{\theta^{\prime}},f_{\alpha^{*}(p_{0}^{\theta},q_{0})},q_{0})\) under the constraint \(||\theta^{\prime}-\theta||\leq\delta\) for some \(\delta>0\). Following this idea, one simple trick is to perform \(n_{\text{generator}}\) gradient updates with a small learning rate and clip the gradient norm with threshold \(\gamma\). We present the empirical effect of this simple modification in Section 6.2.3, Table 2.
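A small sketch of this trick under the stated assumptions: several generator updates against the same fixed critic, with the gradient norm clipped at \(\gamma\). Here `surrogate_loss` is a placeholder for whichever generator objective (pathwise or policy-gradient) is being minimized.

```python
import torch

def clipped_generator_updates(surrogate_loss, generator_params, opt_theta,
                              n_generator=5, gamma=0.1):
    # Several generator updates against the SAME fixed critic, with the gradient norm
    # clipped at gamma, as a practical proxy for the constraint ||theta' - theta|| <= delta.
    for _ in range(n_generator):
        loss = surrogate_loss()          # generator objective for the current (fixed) critic
        opt_theta.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(generator_params, max_norm=gamma)
        opt_theta.step()
```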
**Discussion.** One may notice that Theorem 4.2 is similar in spirit to Theorem 1 in TRPO (Schulman et al., 2015a), which provides a surrogate function for a fixed but unknown reward function. In our case, the reward function \(f_{\alpha^{*}(p_{0}^{\theta},q_{0})}\) is known for the current \(\theta\) but changing: it depends on the current \(\theta\), so it remains unknown for \(\theta^{\prime}\neq\theta\). The proof techniques are also different, but both estimate an unknown part of the objective function.
### Algorithm Design
In the previous sections, we only considered the case where we have an optimal critic function given \(\theta\). In training, we adopt techniques similar to WGAN (Arjovsky et al., 2017) and perform alternating updates of the critic and the generator in order to approximate the optimal critic.

Figure 2: Illustration of the surrogate function given a fixed critic (red), and the actual objective \(\Phi(p_{0}^{\theta^{\prime}},q_{0})\) (dark). The horizontal axis represents the variable \(\theta^{\prime}\). Starting from \(\theta\), a descent in the surrogate function is a sufficient condition for a descent in \(\Phi(p_{0}^{\theta^{\prime}},q_{0})\).

Consider the objective function below:
\[\min_{\theta}\max_{\alpha\in\mathcal{A}}g(p_{0}^{\theta},f_{\alpha},q_{0}). \tag{20}\]
Now we discuss techniques to reduce the variance of the gradient estimation and regularize the critic, and then give an overview of our algorithm.
#### 4.3.1 Baseline Function for Variance Reduction
Given a critic \(\alpha\), we can adopt a technique widely used in RL to reduce the variance of the gradient estimation in Eq. (16). Similar to Schulman et al. (2015b), we can subtract a baseline function \(V_{t+1}^{\omega}(x_{t+1})\) from the cumulative reward \(f_{\alpha}(x_{0})\), without changing the expectation:
\[\begin{split}\nabla_{\theta}g(p_{0}^{\theta},f_{\alpha},q_{0})&=\mathop{\mathbb{E}}_{p_{x_{0:T}}^{\theta}}\left[f_{\alpha}(x_{0})\sum_{t=0}^{T-1}\nabla_{\theta}\log p_{t}^{\theta}(x_{t}|x_{t+1})\right]\\ &=\mathop{\mathbb{E}}_{p_{x_{0:T}}^{\theta}}\left[\sum_{t=0}^{T-1}\big{(}f_{\alpha}(x_{0})-V_{t+1}^{\omega}(x_{t+1})\big{)}\nabla_{\theta}\log p_{t}^{\theta}(x_{t}|x_{t+1})\right],\end{split} \tag{21}\]
where the optimal choice of \(V_{t+1}^{\omega}(x_{t+1})\) to minimize the variance would be \(V_{t+1}(x_{t+1},\alpha):=\mathop{\mathbb{E}}_{p_{x_{0:T}}^{\theta}}\left[f_{\alpha}(x_{0})\,|\,x_{t+1}\right]\).
Detailed derivation of Eq (21) can be found in Appendix C. Thus, given a critic \(\alpha\) and a generator \(\theta\), we can train a value function \(V_{t+1}^{\omega}\) by minimizing the objective below:
\[R_{B}(\alpha,\omega,\theta)=\mathop{\mathbb{E}}_{p_{x_{0:T}}^{\theta}}\left[\sum_{t=0}^{T-1}(V_{t+1}^{\omega}(x_{t+1})-V_{t+1}(x_{t+1},\alpha))^{2}\right]. \tag{22}\]
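In practice \(V_{t+1}(x_{t+1},\alpha)\) is unknown, so one regresses a network \(V_{t+1}^{\omega}\) onto Monte Carlo targets \(f_{\alpha}(x_{0})\), whose conditional expectation is exactly the minimizer of the squared loss. A minimal sketch, assuming a hypothetical `value_net(x, t)` that takes the state and the time index:

```python
import torch

def baseline_loss(value_net, critic, trajectory):
    # Monte Carlo surrogate of R_B in Eq. (22): regress V^omega_{t+1}(x_{t+1}) onto the
    # terminal critic value f_alpha(x_0); the conditional expectation
    # V_{t+1}(x_{t+1}, alpha) = E[f_alpha(x_0) | x_{t+1}] minimizes this squared loss.
    xs = trajectory                                  # [x_T, x_{T-1}, ..., x_0], no gradients needed
    target = critic(xs[-1]).detach().flatten()       # f_alpha(x_0), one scalar per sample
    loss = 0.0
    for k, x_next in enumerate(xs[:-1]):             # x_next plays the role of x_{t+1}, t = T-1, ..., 0
        t_plus_1 = len(xs) - 1 - k                   # the time index t+1
        pred = value_net(x_next, t_plus_1).flatten() # V^omega_{t+1}(x_{t+1})
        loss = loss + ((pred - target) ** 2).mean()
    return loss
```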
#### 4.3.2 Choices of \(\mathcal{A}\): Regularizing the Critic
Here we discuss different choices of \(\mathcal{A}\), which indicates different regularization methods for the critic.
**Lipschitz regularization.** If we choose \(\mathcal{A}\) to include the parameters of all 1-Lipschitz functions, we can adopt the gradient-penalty regularization of WGAN-GP (Gulrajani et al., 2017):
\[R_{GP}(\alpha,\theta)=\mathop{\mathbb{E}}_{\hat{x}_{0}}\left[(||\nabla_{\hat{x}_{0}}f_{\alpha}(\hat{x}_{0})||-1)^{2}\right], \tag{23}\]
where \(\hat{x}_{0}\) is sampled uniformly on the line segment between \(x_{0}^{\prime}\sim p_{0}^{\theta}\) and \(x_{0}^{\prime\prime}\sim q_{0}\). The critic \(f_{\alpha}\) can then be trained to maximize \(g(p_{0}^{\theta},f_{\alpha},q_{0})-\eta R_{GP}(\alpha,\theta)\), where \(\eta>0\) is the regularization coefficient.
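A minimal sketch of this penalty (standard WGAN-GP style), assuming batched tensors with the batch dimension first; this is an illustration rather than the authors' implementation:

```python
import torch

def gradient_penalty(critic, x_gen, x_data):
    # Gradient penalty of Eq. (23): (||grad f_alpha(x_hat)|| - 1)^2 with x_hat drawn
    # uniformly on line segments between generated and real samples (WGAN-GP style).
    eps = torch.rand(x_gen.size(0), *([1] * (x_gen.dim() - 1)))   # one mixing weight per sample
    x_hat = (eps * x_gen + (1 - eps) * x_data).requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    return ((grad.flatten(1).norm(dim=1) - 1) ** 2).mean()
```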
**Baseline as critic regularization.** As discussed in Section 4.1, since we only use the critic value during updates, we can now consider a potentially wider range of choices for \(\mathcal{A}\). Some regularization on \(f_{\alpha}\) is still needed; otherwise, its value can explode. Regularization has also been shown to be beneficial for local convergence (Mescheder et al., 2018). So we consider regularization that can be weaker than gradient constraints, such that the critic is more sensitive to changes of the generator, which can be favorable when the critic is updated for only a fixed number of training steps.
We find, interestingly, that the loss \(R_{B}(\alpha,\omega,\theta)\) can be _reused_ to regularize the value of \(f_{\alpha}\); this implicitly defines a set \(\mathcal{A}\) that shows empirical benefits in practice.
Let
\[L(\alpha,\omega,\theta)=g(p_{0}^{\theta},f_{\alpha},q_{0})-\lambda R_{B}( \alpha,\omega,\theta). \tag{24}\]
Given \(\theta\), our critic \(\alpha\) and baseline \(\omega\) can be trained together to maximize \(L(\alpha,\omega,\theta)\).
We provide an explanation of this implicit regularization. During the update, we can view \(V_{t+1}^{\omega}\) as an approximation of the expected value of \(f_{\alpha}\) from the previous step. The regularization provides a trade-off between maximizing \(g(p_{0}^{\theta},f_{\alpha},q_{0})\) and minimizing changes in the expected value of \(f_{\alpha}\), preventing drastic changes in the critic and stabilizing the training. Intuitively, it helps local convergence when both the critic and the generator are already near-optimal: there is an extra cost for the critic value to diverge away from the optimal value. As a byproduct, it also makes the baseline function easier to fit.
**Empirical comparison: baseline regularization and gradient penalty.** We present a comparison of gradient penalty (GP) and baseline regularization (B) during policy gradient training (SFT-PG) in Section 6.2.2, Fig 3 on toy datasets, which shows that in policy gradient training, the baseline regularization performs comparably well or better than gradient penalty.
#### 4.3.3 Putting Together: Algorithm Overview
Now we are ready to present our algorithm. Our critic \(\alpha\) and baseline \(\omega\) are trained to maximize \(L(\alpha,\omega,\theta)\), and the generator is trained to minimize \(g(p_{0}^{\theta},f_{\alpha},q_{0})\) via Eq. (21). These steps are performed alternately. See details in Alg 1.
```
Input: \(n_{\text{critic}}\), \(n_{\text{generator}}\), batch size \(m\), critic parameters \(\alpha\), baseline function parameter \(\omega\), pretrained generator \(\theta\), regularization hyperparameter \(\lambda\)
while \(\theta\) not converged do
    Initialize trajectory buffer \(\mathcal{B}\) as \(\emptyset\)
    for t = 0, ..., \(n_{\text{critic}}\) do
        Obtain \(m\) i.i.d. samples from \(p_{x_{0:T}}^{\theta}\) and add to \(\mathcal{B}\)
        Obtain \(m\) i.i.d. samples from \(q_{0}\)
        Update \(\alpha\) and \(\omega\) via maximizing Eq. (24)
    end for
    for t = 0, ..., \(n_{\text{generator}}\) do
        Obtain \(m\) samples according to \(p_{x_{0:T}}^{\theta}\) from \(\mathcal{B}\)
        Update \(\theta\) via policy gradient according to Eq. (21)
    end for
end while
```
**Algorithm 1** Shortcut Fine-Tuning with Policy Gradient and Baseline regularization: SFT-PG (B)
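Putting the pieces together, a hedged end-to-end sketch of Algorithm 1 might look as follows. It reuses the `ipm_gap`, `baseline_loss` and `sft_pg_loss` sketches above, assumes `opt_critic` optimizes the parameters of both the critic (\(\alpha\)) and the baseline network (\(\omega\)) and that `sample_data(m)` returns \(m\) samples from \(q_{0}\), and omits the trajectory buffer and the baseline subtraction in the generator step for brevity; it is not the authors' released implementation.

```python
import torch

@torch.no_grad()
def sample_trajectory(mu_theta, sigmas, T, batch_shape):
    # Roll out the sampler of Eq. (3) and store [x_T, ..., x_0] without gradients.
    xs = [torch.randn(batch_shape)]
    for t in reversed(range(T)):
        xs.append(mu_theta(xs[-1], t + 1) + sigmas[t + 1] * torch.randn_like(xs[-1]))
    return xs

def sft_pg_train(mu_theta, critic, value_net, sample_data, opt_critic, opt_theta,
                 T, sigmas, n_critic=5, n_generator=5, lam=1.0, gamma=0.1,
                 iters=1000, batch_shape=(64, 2)):
    # Alternating updates of Algorithm 1: critic/baseline steps on Eq. (24),
    # then clipped policy-gradient steps on Eq. (21).
    for _ in range(iters):
        for _ in range(n_critic):
            traj = sample_trajectory(mu_theta, sigmas, T, batch_shape)
            loss_c = -(ipm_gap(critic, traj[-1], sample_data(batch_shape[0]))
                       - lam * baseline_loss(value_net, critic, traj))
            opt_critic.zero_grad()
            loss_c.backward()
            opt_critic.step()
        for _ in range(n_generator):
            loss_g = sft_pg_loss(mu_theta, critic, sigmas, T, batch_shape)
            opt_theta.zero_grad()
            loss_g.backward()
            torch.nn.utils.clip_grad_norm_(mu_theta.parameters(), gamma)
            opt_theta.step()
```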
## 5 Related Works
**GAN and RL.** There are works using ideas from RL to train GANs (Yu et al., 2017; Wang et al., 2017; Sarmad et al., 2019; Bai et al., 2019). The most relevant work is SeqGAN (Yu et al., 2017), and our work differs in the following ways: in SeqGAN the next token depends on all previous tokens, which is not the case in diffusion models; the critic takes the whole sequence as input in SeqGAN, while we only care about the final state. Besides, in our work, rewards are derived from performing gradient descent w.r.t. the IPM, while in SeqGAN the rewards are designed manually.
**Diffusion and GAN.** There are other works combining diffusion and GAN training: Xiao et al. (2021) consider a more complicated noise distribution generated by a GAN to enable fast sampling; Diffusion GAN (Wang et al., 2022) perturbs the data with an adjustable number of steps and minimizes the JS divergence for each step using GAN training. To the best of our knowledge, there is no existing work using GAN training to directly fine-tune a pretrained DDPM sampler.
**Deterministic fast samplers of DDIM.** There is another line of work on fast sampling of DDIM (Song et al., 2020), for example, knowledge distillation (Luhman and Luhman, 2021; Salimans and Ho, 2021) and solving ordinary differential equations (ODEs) (Watson et al., 2021; Liu et al., 2022; Lu et al., 2022). Fast sampling is easier for DDIM samplers (with deterministic Markov chains) than for DDPM samplers (with conditional Gaussian Markov chains), since it is possible to combine multiple deterministic steps into one step without loss of fidelity, but not for stochastic steps, especially when combining multiple Gaussian steps into one Gaussian (Salimans and Ho, 2021). There is also a contemporary work that fine-tunes a DDIM sampler using MMD calculated with a pretrained feature space and a cubic kernel (Aiello et al., 2023), which is similar to Section 3.3 but with a fixed critic and a deterministic sampler. Besides the potential issues discussed in Section 3.3, we also note that adversarially trained critics provide stronger signals than fixed ones when \(p_{0}^{\theta}\neq q_{0}\) (Li et al., 2017). As a result, the latter may yield sub-optimal results when \(p_{0}^{\theta}\) is not close enough to \(q_{0}\) at initialization, and it is also highly dependent on the choice of the critic.
## 6 Experiments
In this section, we aim to answer the following questions:
* (Section 6.2.1) Does the proposed algorithm SFT-PG (B) work in practice?
* (Section 6.2.2) How does SFT-PG (Eq. (16)) perform compared to SFT (Eq. (15)) with the same regularization (GP), and how does baseline regularization (B) compare to gradient penalty (GP) in SFT-PG?
* (Section 6.2.3) Do more generator steps with gradient clipping improve the performance, as discussed in Section 4.2?
* (Section 6.3) Can the proposed objective improve existing fast samplers of DDPM on benchmark datasets?
Code is available at [https://github.com/UW-Madison-Lee-Lab/SFT-PG](https://github.com/UW-Madison-Lee-Lab/SFT-PG).
### Setup
Here we provide the setup of our training algorithm on different datasets. Model architectures and more training details can be found in Appendix D.
**Toy datasets.** The toy datasets we use are Swiss roll and two moons (Pedregosa et al., 2011). We use \(\lambda=0.1\), \(n_{\text{critic}}=5,n_{\text{generator}}=1\) with no gradient clipping. For evaluation, we use the Wasserstein-2 distance between 10K samples from \(p_{0}^{\theta}\) and \(q_{0}\) respectively, calculated by POT (Flamary et al., 2021).
**Image datasets.** We use MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al., 2009) and CelebA (Liu et al., 2015). For hyperparameters, we choose \(\lambda=1.0\), \(n_{\text{critic}}=5,n_{\text{generator}}=5\), \(\gamma=0.1\), except when testing different choices of \(n_{\text{generator}}\) and \(\gamma\). For evaluation, we use the FID (Heusel et al., 2017) measured between 50K samples generated from \(p_{0}^{\theta}\) and samples from \(q_{0}\).
### Proof-of-concept Results
In this section, we fine-tune pretrained DDPMs with \(T=10\) and present the effect of the proposed algorithm on toy datasets, showing the results of the different gradient estimations discussed in Section 4.1, the different critic regularization methods discussed in Section 4.3.2, and the training technique with more generator steps discussed in Section 4.2.
#### 6.2.1 Improvement from Fine-Tuning
On the Swiss roll dataset, we first train a DDPM with \(T=10\) till convergence, and then use it as the initialization of our fine-tuning. As shown in Table 1, our fine-tuned sampler with 10 steps achieves a better Wasserstein distance not only compared to the DDPM with \(T=10\), but even outperforms the DDPM with \(T=1000\), which is reasonable since we directly optimize the IPM objective. The training curve and the data visualization can be found in Fig 3(a) and Fig 3(d).
Figure 3: Training curves (3(a), 3(e)) and 10K randomly generated samples from SFT (GP) (3(b), 3(f)), SFT-PG (GP) (3(c), 3(g)), and SFT-PG (B) (3(d), 3(h)) at convergence. In the visualizations, red dots indicate the ground truth distribution, and blue dots indicate the generated distribution. We can observe that SFT-PG (B) generates noticeably better distributions.
#### 6.2.2 Effect of Different Gradient Estimations and Regularizations
On the toy datasets, we compare the gradient estimators SFT-PG and SFT, both with gradient penalty (GP).6 We also compare them to our proposed algorithm SFT-PG (B). All methods are initialized with the pretrained DDPM, \(T=10\), and then trained till convergence. As shown in Fig 3, all methods converge and the training curves are almost comparable, while SFT-PG (B) enjoys a slightly better final performance.
Footnote 6: For the gradient penalty coefficient, we tested different choices in \([0.001,10]\) and picked the best choice \(0.001\). We also tried spectral normalization for Lipschitz constraints, but found that its performance is worse than gradient penalty on these datasets.
#### 6.2.3 Effect of Gradient Clipping with More Steps
In Section 4.2, we discussed that performing more generator steps with the same fixed critic while clipping the gradient norm can improve the training of our algorithm. Here we present the effect of \(n_{\text{generator}}=1\) or \(5\) with different gradient clipping thresholds \(\gamma\) on MNIST, initialized with a pretrained DDPM with \(T=10\), FID \(=7.34\). From Table 2, we find that a small \(\gamma\) with more steps can improve the final performance, but can hurt the performance if it is too small. Randomly generated samples from the model with the best FID are shown in Fig 4. We also conducted similar experiments on the toy datasets, but found no significant difference; we believe this is because the task is too simple.
### Benchmark Results
In this section, we take pretrained DDPMs with \(T=1000\) and fine-tune them with sampling steps \(T^{\prime}=10\) to compare with existing fast samplers of DDPM on image benchmark datasets CIFAR-10 and CelebA.
Our baselines include various fast samplers of DDPM with Gaussian noises: naive DDPM sub-sampling, Fast-DPM (Kong and Ping, 2021), and recently advanced DDPM samplers like Analytic DPM (Bao et al., 2021) and SN-DPM (Bao et al., 2022). For fine-tuning, we use the fixed variance and sub-sampling schedules computed by FastDPM with \(T^{\prime}=10\) and only train the mean prediction model. From Table 3, we can observe that the performance of fine-tuning with \(T^{\prime}=10\) is comparable to the pretrained model with \(T=1000\), outperforming the existing DDPM samplers. Randomly generated images before and after fine-tuning are in Fig 5.
| **Method** | \(W_{2}(p_{0}^{\theta},q_{0})\) (\(\times 10^{-2}\)) (\(\downarrow\)) |
| --- | --- |
| T = 10, DDPM | 8.29 |
| T = 100, DDPM | 2.36 |
| T = 1000, DDPM | 1.78 |
| T = 10, SFT-PG (B) | **0.64** |

Table 1: Comparison of DDPM models and our fine-tuned model on the Swiss roll dataset.
| **Method** | **FID (\(\downarrow\))** |
| --- | --- |
| 1 step | 1.35 |
| 5 steps, \(\gamma=10\) | 0.83 |
| 5 steps, \(\gamma=1.0\) | **0.82** |
| 5 steps, \(\gamma=0.1\) | 0.89 |
| 5 steps, \(\gamma=0.001\) | 1.46 |

Table 2: Effect of \(n_{\text{generator}}\) and \(\gamma\), trained on MNIST.

Figure 4: Randomly generated samples, trained on MNIST.
### Discussions and Limitations
In our experiments, we only train the mean prediction model given a pretrained DDPM. It is also possible to learn the variance via fine-tuning with the same objective, and we leave it as future work. We also note that although we do not need to track the gradients during all sampling steps, we still need to run \(T^{\prime}\) inference steps to collect the sequence, which is inevitably slower than the 1-step GAN training.
## 7 Conclusion
In this work, we fine-tune DDPM samplers to minimize IPMs via policy gradient. We show that performing gradient descent of stochastic Markov chains w.r.t. an IPM is equivalent to policy gradient, and present a surrogate function of the IPM which sheds light on conditions for monotonic improvement. Our fine-tuning improves the existing fast samplers of DDPM, achieving comparable or even higher sample quality than the full-step model on various datasets.
---

# Optimal control of SPDEs driven by time-space Brownian motion

**Authors:** Nacira Agram, Bernt Øksendal, Frank Proske, Olena Tymoshenko. **Published:** 2023-07-31. **Link:** [arXiv:2308.00173v2](http://arxiv.org/abs/2308.00173v2).
###### Abstract
In this paper, we study a Pontryagin type stochastic maximum principle for the optimal control of a system, where the state dynamics satisfy a stochastic partial differential equation (SPDE) driven by a two-parameter (time-space) Brownian motion (also called Brownian sheet). We first discuss some properties of a Brownian sheet driven linear SPDE which models the growth of an ecosystem.
Associated to the maximum principle there is an adjoint process represented by a linear backward stochastic partial differential equation (BSPDE) in the plane driven by the Brownian sheet. We give a closed solution formula for general linear BSPDEs in the plane and also for the particular type coming from the adjoint equation. Further, applying time-space white noise calculus we derive sufficient conditions and necessary conditions of optimality of the control. Finally, we illustrate our results by solving a linear quadratic control problem in the plane. We also study possible applications to machine learning.
**Keywords:** SPDE, two-parameter Brownian motion, optimal control, maximum principles, BSPDE in the plane, linear-quadratic control, machine learning.
Footnote 1: Department of Mathematics, KTH Royal Institute of Technology 100 44, Stockholm, Sweden. Email: [email protected]. Work supported by the Swedish Research Council grant (2020-04697).

Footnote 2: Department of Mathematics, University of Oslo, Norway. Emails: [email protected], [email protected], [email protected]
## 1 Introduction
The purpose of this paper is to study optimal control of systems driven by the Brownian sheet.
Throughout this work, we denote by \(\{B(t,x):t\geq 0,x\in\mathbb{R}\}\) a Brownian sheet and by \((\Omega,\mathcal{F},P)\) a complete probability space on which we define the (completed) \(\sigma\)-field \(\mathcal{F}_{t,x}\) generated by \(B(s,a),s\leq t,a\leq x\). Wong & Zakai [WZ] generalized the notion of stochastic integrals with respect to 1-parameter Brownian motion to stochastic integrals driven by the two-parameter Brownian sheet. Let us denote by \(\mathbb{R}^{2}_{+}\) the positive quadrant of the plane and let \(z\in\mathbb{R}^{2}_{+}\). Following Cairoli [C72], we define a stochastic integral of the first type with respect to the two-parameter Brownian motion, denoted by:
\[\int\phi(z)B(dz)\]
and a second type [WZ74] stochastic integral denoted by
\[\int\psi(z,z^{\prime})B(dz)B(dz^{\prime}).\]
An Ito formula for stochastic integrals in the plane is given in Wong & Zakai [WZ]. Before moving to our main concern, let us state three motivating examples; we will return to these examples later in the paper:
**Example 1.1**: _(An optimal harvesting problem) A classical model for the growth of an ecosystem (e.g. a population or a forest) with value \(Y(t)\) at time \(t\) in a random environment is the geometric Brownian motion, defined by the Ito stochastic differential equation (SDE)_
\[dY(t)=\kappa Y(t)dt+\gamma Y(t)dB(t),\quad t\geq 0,\quad Y(0)>0,\]
_where \(\kappa\) and \(\gamma\) are given constants. Equivalently, in terms of white noise \(\stackrel{{\bullet}}{{B}}\) and Wick product \(\diamond\), the equation can be written_
\[\frac{d}{dt}Y(t)=\kappa Y(t)+\gamma Y(t)\diamond\stackrel{{ \bullet}}{{B}}(t),\quad Y(0)>0,\]
_where_
\[\stackrel{{\bullet}}{{B}}(t)=\frac{d}{dt}B(t)\text{ is (time) white noise,}\]
_A natural extension of this model to the case where the noise of the environment depends on both time \(t\) and position \(x\), is the following SPDE in the value \(Y(t,x)\) of the ecosystem at time \(t\) and position \(x\):_
\[\frac{\partial^{2}}{\partial t\partial x}Y(t,x)=\alpha_{0}(t,x)Y(t,x)+\beta_{0 }(t,x)Y(t,x)\diamond\overset{\bullet}{B}(t,x),\quad Y(0,0)>0, \tag{1.1}\]
_where \(\alpha_{0}(t,x)\) and \(\beta_{0}(t,x)\) are given bounded deterministic functions, and_
\[\overset{\bullet}{B}(t,x)=\frac{\partial^{2}}{\partial t\partial x}B(t,x) \mbox{ is time-space white noise.}\]
_Assume for simplicity that \(\alpha_{0}\) and \(\beta_{0}\) are constants. If at \((t,x)\) we harvest from \(Y(t,x)\) at the rate \(u(t,x)\), the dynamics (1.1) becomes_
\[\frac{\partial^{2}}{\partial t\partial x}Y_{u}(t,x)=\alpha_{0}Y_{u}(t,x)-u(t, x)+\beta_{0}Y_{u}(t,x)\diamond\overset{\bullet}{B}(t,x),\]
_or, in integral form,_
\[Y_{u}(t,x) =Y(0,0)+\int_{0}^{t}\int_{0}^{x}\{\alpha_{0}Y_{u}(s,a)-u(s,a)\} dsda\] \[+\int_{0}^{t}\int_{0}^{x}\beta_{0}Y_{u}(s,a)B(ds,da).\]
_For given utility functions \(U_{1},U_{2}\) and given constants \(T>0,X>0\) such that \(T>t,X>x\), define the combined utility of the harvesting and the terminal population by_
\[J(u)=E\left[\int_{0}^{T}\int_{0}^{X}U_{1}(u(s,a))dsda+U_{2}(Y_{u}(T,X))\right].\]
_We want to find the harvesting strategy \(u^{*}(s,x)\) which maximizes the utility of the harvest, i.e._
\[J(u^{*})=\sup_{u\in\mathcal{A}}J(u).\]
_Here the set of admissible controls is denoted by \(\mathcal{A}\)._
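As a numerical illustration of this example (not part of the paper), the controlled field can be simulated with an explicit scheme on a time-space grid, using the rectangle increments of \(Y\) over each cell; the coefficients, grid sizes and the constant harvesting rate below are arbitrary choices.

```python
import numpy as np

def simulate_harvested_field(alpha0, beta0, harvest, y0, T, X, n=100, m=100, seed=0):
    # Explicit scheme for Y(t,x) = y0 + int (alpha0*Y - u) ds da + int beta0*Y B(ds,da),
    # using the rectangle identity Y(i+1,j+1) = Y(i+1,j) + Y(i,j+1) - Y(i,j) + cell increment
    # and independent Brownian-sheet increments B(ds,da) ~ N(0, ds*da) on each cell.
    rng = np.random.default_rng(seed)
    dt, dx = T / n, X / m
    dB = rng.normal(0.0, np.sqrt(dt * dx), size=(n, m))
    Y = np.full((n + 1, m + 1), float(y0))          # boundary values Y(t,0) = Y(0,x) = y0
    for i in range(n):
        for j in range(m):
            incr = (alpha0 * Y[i, j] - harvest(i * dt, j * dx)) * dt * dx \
                   + beta0 * Y[i, j] * dB[i, j]
            Y[i + 1, j + 1] = Y[i + 1, j] + Y[i, j + 1] - Y[i, j] + incr
    return Y

# example: constant harvesting rate u(t,x) = 0.5
Y = simulate_harvested_field(alpha0=0.2, beta0=0.1, harvest=lambda t, x: 0.5,
                             y0=1.0, T=1.0, X=1.0)
```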
**Example 1.2**: _(A linear-quadratic (LQ) problem) Consider the following linear-quadratic (LQ) control problem for time-space random fields: Suppose the state \(Y(t,x)\) is given by_
\[Y(t,x)=Y(0,0)+\int_{0}^{t}\int_{0}^{x}u(s,a)dsda+\beta B(t,x),\quad t\geq 0,x \in\mathbb{R}.\]
_We want to drive the state \(Y\) to 0 at time-space \((T,X)\) with minimal use of energy. Hence we put_
\[J(u)=-\tfrac{1}{2}E\Big{[}\int_{0}^{T}\int_{0}^{X}u^{2}(s,a)dsda+\theta Y^{2}(T,X )\Big{]},\]
_where \(\theta>0\) is a given constant. The problem is to find \(u^{*}\in\mathcal{A}\) such that_
\[J(u^{*})=\sup_{u\in\mathcal{A}}J(u). \tag{1.2}\]
**Example 1.3**: _(A machine learning problem) Consider the following hyperbolic SPDE:_
\[Y(t,x)=y-\int_{0}^{t}\int_{0}^{x}u(s,a)\nabla f(Y(s,a))dsda+\sigma B(t,x),y\in \mathbb{R}^{d},t,x\geq 0\text{,} \tag{1.3}\]
_where \(B\) is a Brownian sheet in \(\mathbb{R}^{d}\), \(\sigma\in\mathbb{R}^{d\times d}\) and \(\nabla f\) is the gradient of a function \(f\in C^{1}(\mathbb{R}^{d};\mathbb{R})\). Further, \(u:\Omega\times\left[0,\infty\right)^{2}\longrightarrow\left[0,\infty\right)\) is a stochastic learning rate in time and space, which is assumed to be an adapted random field. By formally setting \(u=\eta\delta_{x}\) in (1.3), for the Dirac delta function \(\delta_{x}\) at a fixed point \(x\) and \(\eta\geq 0\), we get (as a special case of (1.3)) the SDE_
\[dY_{t}=-\eta\nabla f(Y_{t})dt+\sigma dB_{t},Y_{0}=x\in\mathbb{R}^{d},t\geq 0 \text{.} \tag{1.4}\]
_We mention that the latter type of SDE (1.4) is used in machine learning in connection with the stochastic gradient descent method (SGD) to minimize or maximize the objective or loss function \(f\). Since the dynamics (1.3) is more general than that of (1.4), one may, for the sake of a deeper understanding of the classical SGD approach (at the possible expense of numerical tractability), replace equation (1.4) by (1.3) and study an optimal control problem with respect to the (time-space) stochastic learning rate \(u\). In order to illustrate this application in a simplified framework, we approximate \(\nabla f\) (for smooth \(f\)) by its Taylor expansion in the case \(d=1\) and consider in this paper the controlled process_
\[Y_{u}(t,x)=Y(0,0)-\int_{0}^{t}\int_{0}^{x}u(s,a)Y_{u}(s,a)dsda+\beta_{0}B(t,x),Y(0,0),\beta_{0}\in\mathbb{R},t,x\geq 0\]
_with respect to (in this context) natural performance functional_
\[J(u)=-E\left[\int_{0}^{T}\int_{0}^{X}u^{2}(s,a)dsda+\theta Y^{2}(T,X)\right].\]
_for \(\theta>0\)._
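To make this concrete, a hedged Monte Carlo sketch (not from the paper) that simulates the controlled field on a grid and estimates \(J(u)\) for a given learning-rate field \(u\) might look as follows; all numerical values are illustrative.

```python
import numpy as np

def estimate_J(u_rate, y0=1.0, beta0=0.5, theta=1.0, T=1.0, X=1.0,
               n=50, m=50, paths=200, seed=0):
    # Monte Carlo estimate of J(u) = -E[ int u^2 ds da + theta * Y(T,X)^2 ] for the field
    # Y(t,x) = y0 - int u*Y ds da + beta0 * B(t,x), simulated on an n x m grid.
    rng = np.random.default_rng(seed)
    dt, dx = T / n, X / m
    # the running cost int u^2 ds da is deterministic, so compute it once
    cost = sum(u_rate(i * dt, j * dx) ** 2 for i in range(n) for j in range(m)) * dt * dx
    total = 0.0
    for _ in range(paths):
        dB = rng.normal(0.0, np.sqrt(dt * dx), size=(n, m))   # Brownian-sheet cell increments
        Y = np.full((n + 1, m + 1), float(y0))
        for i in range(n):
            for j in range(m):
                incr = -u_rate(i * dt, j * dx) * Y[i, j] * dt * dx + beta0 * dB[i, j]
                Y[i + 1, j + 1] = Y[i + 1, j] + Y[i, j + 1] - Y[i, j] + incr
        total += -(cost + theta * Y[n, m] ** 2)
    return total / paths

# example: constant learning-rate field u(t,x) = 1.0
print(estimate_J(lambda t, x: 1.0))
```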
**A general formulation:** The examples mentioned above are special cases of the following general optimal stochastic control problem:
We study optimal control of solutions \(Y(t,x),t\geq 0,x\in\mathbb{R}\) of SPDEs of the form
\[Y_{u}(t,x)=Y(t_{0},x_{0})+\int_{R(t,x)}\alpha_{u}(Y_{u}(s,a))dsda+\int_{R(t,x)} \beta_{u}(Y_{u}(s,a))B(ds,da), \tag{1.5}\]
where
\[R(t,x)=R^{(t_{0},x_{0})}(t,x)=[t_{0},t]\times[x_{0},x],t\geq t_{0},x\geq x_{0},\]
and
\[B(t,x)=\mbox{Brownian sheet}.\]
The differential form of (1.5) is
\[\frac{\partial^{2}}{\partial t\partial x}Y_{u}(t,x)=\alpha_{u}(Y_{u}(t,x))+ \beta_{u}(Y_{u}(t,x))\diamond\stackrel{{\bullet}}{{B}}(t,x). \tag{1.6}\]
The identity of (1.5) and (1.6) comes from the fact that
\[\int_{R(t,x)}\varphi(s,a)B(ds,da)=\int_{R(t,x)}\varphi(s,a)\diamond\stackrel{{ \bullet}}{{B}}(s,a)dsda,\ \ \ \ \forall\varphi,t,x. \tag{1.7}\]
See e.g. Holden et al [HOUZ].
**Remark 1.4**: _Let us mention here that hyperbolic SPDEs of the type (1.1) have been studied over the years by several authors. See e.g. Cairoli [C72] and Yeh [Y], who established strong existence and pathwise uniqueness of solutions \(Y\) to_
\[Y(t,x)=y_{0}+\int_{R(t,x)}b(s,a,Y(s,a))dsda+\int_{R(t,x)}\sigma(s,a,Y(s,a))B( ds,da),y_{0}\in\mathbb{R}^{d}, \tag{1.8}\]
_when \(b\) and \(\sigma\) are Lipschitz continuous vector fields of linear growth. Further, smoothness of solutions to (1.8) in the sense of Malliavin differentiability for sufficiently regular \(b\) and \(\sigma\) was analyzed in Nualart & Sanz [NS]. See also Bogso et al [BDMPP], where the authors construct Malliavin differentiable unique solutions to (1.8), when the drift vector field \(b\) is merely bounded and measurable and \(\sigma\) is given by the unit matrix. As for other works in this direction (in the case of both weak and strong solutions), we also refer to Yeh [Y87] and [Y85]. In addition, the reader may consult the book of Nualart [N] in connection with other references. Finally, we want to point out the interesting link between hyperbolic SPDEs and non-linear (random) wave equations, when \(d=1\) and \(b\), \(\sigma:\mathbb{R}\longrightarrow\mathbb{R}\): By applying the orthogonal transformation \(u=x+t,v=x-t\) to the SPDE (1.6) we see
that the corresponding differential version of equation (1.8) can be transformed into the following non-linear stochastic wave equation:_
\[\frac{\partial^{2}}{\partial t^{2}}Y(t,x)-\frac{\partial^{2}}{\partial x^{2}}Y(t,x)=\sigma(Y(t,x))\frac{\partial^{2}}{\partial t\partial x}\widetilde{B}(t,x)+b (Y(t,x)),\]
_where \(\widetilde{B}\) is another Brownian sheet, obtained by applying the inverse orthogonal transformation to \(B\). See e.g. Walsh [W] for further details._
In the sequel we assume that the performance functional \(J_{u}(t_{0},x_{0})\) has the form
\[J_{u}(t_{0},x_{0})=E^{(t_{0},x_{0})}\left[\int_{t_{0}}^{T}\int_{x_{0}}^{X}f_{u }(Y_{u}(s,a))dsda+g(Y_{u}(T,X))\right], \tag{1.9}\]
and consider the following problem:
**Problem 1.5**: _Let \({\cal A}\) be a given family of admissible controls. Find an optimal control \(u^{*}\in{\cal A}\) and the value function \(\Phi\) such that_
\[\Phi(t_{0},x_{0})=J_{u^{*}}(t_{0},x_{0})=\sup_{u\in{\cal A}}J_{u}(t_{0},x_{0}). \tag{1.10}\]
_The control \(u^{*}\) is called an optimal control and the function \(\Phi\) is referred to as the value function of this problem._
We tackle these problems by using a maximum principle approach. Therefore we need to study the adjoint equation, which is given by a BSPDE in the plane. The existence and uniqueness of solutions to such a BSPDE were proven in Zaidi & Nualart [ZN] for a particular Lipschitz constant.
Here is an outline of the rest of the paper:
* In the next section we discuss properties of the solution of the SPDE (1.1).
* In section 3 we introduce some background about stochastic calculus of time-space white noise.
* Then in Section 4 we derive a closed formula for the solution of two types of linear BSPDEs in the plane. The first of these appears naturally in the maximum principles.
* In Section 5 we prove two types of maximum principles, a sufficient (verification theorem) and a necessary maximum principle.
* Finally, in Section 6 we apply the results of Section 5 to solve the problem mentioned in the Introduction.
## 2 A discussion of the solution of the SPDE (1.1)
Suppose that the coefficients \(\alpha_{0},\beta_{0}\) are bounded deterministic functions. Then it follows from (1.1) that
\[\widetilde{Y(t,x)}(z)=Y(0,0)+\int_{R(t,x)}\widetilde{K(s,a)}(z)\widetilde{Y(s,a)}(z)\,dsda,\]
where \(\widetilde{Y(t,x)}(z)\) denotes the Hermite transform of \(Y(t,x)\) for \(z\in\left(\mathbb{C}^{\mathbb{N}}\right)_{c}\) (set of all finite sequences in \(\mathbb{C}^{\mathbb{N}}\)) and where \(R(t,x)=[0,t]\times[0,x]\). Further,
\[K(s,a)=\alpha_{0}(s,a)+\beta_{0}(s,a)\overset{\bullet}{B}(s,a),\]
where
\[\overset{\bullet}{B}(t,x)=\frac{\partial^{2}}{\partial t\partial x}B(t,x)\text { is white noise}. \tag{2.1}\]
See Holden et al [HOUZ] for the properties of the Hermite transform.
Then, using Picard iteration, we find, with \(y_{0}=Y(0,0)>0\), the semi-explicit solution
\[\widetilde{Y(t,x)}(z) \tag{2.2}\] \[= y_{0}\sum_{n=0}^{\infty}\int_{R(t,x)}\int_{R(s_{1},a_{1})}... \int_{R(s_{n-1},a_{n-1})}\prod_{j=1}^{n}\widetilde{K(s_{j},a_{j})}(z)ds_{n}da _{n}...ds_{1}da_{1},\]
for \(z\in\left(\mathbb{C}^{\mathbb{N}}\right)_{c}\), where \(s_{1}>...>s_{n}\), \(a_{1}>...>a_{n}\).
It is natural to ask if the random field \(Y(t,x)\) in (1.1) is positive. By Theorem 2.11.4 in Holden et al [HOUZ] this is equivalent to asking whether for all \(m\) the function
\[g(y):=\widetilde{Y(t,x)}(iy)e^{-\frac{1}{2}|y|^{2}};\quad i=\sqrt{-1},y=(y_{1},y_{2},...,y_{m})\in\mathbb{R}^{m} \tag{2.3}\]
is positive definite. In this context, we mention that
\[\widetilde{K(s,a)}(iy) =\alpha_{0}(s,a)+\beta_{0}(s,a)\widetilde{\overset{\bullet}{B}(s,a)}(iy)\] \[=\alpha_{0}(s,a)+\beta_{0}(s,a)\sum_{k=1}^{m}\mu_{k}(s,a)iy_{k}. \tag{2.4}\]
Here \(\mu_{k}(s,a)\) is the \(k\)'th element in an orthonormal basis of \(L^{2}(\mathbb{R}^{2})\) consisting of tensor products of Hermite functions. See Holden et al [HOUZ], Section 2.2.1.
Combining the above we get
\[\widetilde{Y(t,x)}(iy) \tag{2.5}\] \[= y_{0}\sum_{n=0}^{\infty}\int_{R(t,x)}\int_{R(s_{1},a_{1})}...\int_ {R(s_{n-1},a_{n-1})}\] \[\prod_{j=1}^{n}\left(\alpha_{0}(s_{j},a_{j})+i\beta_{0}(s_{j},a_{ j})\sum_{k=1}^{m}\mu_{k}(s_{j},a_{j})y_{k}\right)ds_{n}da_{n}...ds_{1}da_{1}.\]
Therefore the positivity question is equivalent to the following:
_Is for all \(m=1,2,\dots\) the function \(g:\mathbb{R}^{m}\mapsto\mathbb{C}\) given by_
\[g(y):=y_{0}\sum_{n=0}^{\infty}\int_{R(t,x)}\int_{R(s_{1},a_{1})}...\int_{R(s_{n-1},a_{n-1})}\] \[\prod_{j=1}^{n}\left(\alpha_{0}(s_{j},a_{j})+i\beta_{0}(s_{j},a_ {j})\sum_{k=1}^{m}\mu_{k}(s_{j},a_{j})y_{k}\right)ds_{n}da_{n}...ds_{1}da_{1} e^{-\frac{1}{2}|y|^{2}}\]
_positive definite_?
It turns out that the latter is not true, in general. In what follows, we want to give an explanation for this in the case of \(\alpha_{0}=0\) and \(\beta_{0}\) given by
\[\beta_{0}(s,a)=\beta_{1}(s)\beta_{2}(a),\]
where \(\beta_{1}\) and \(\beta_{2}\) are bounded measurable functions. Assume that \(m=1\). We also note that we can write
\[\mu_{1}(s,a)=\xi_{1}(s)\xi_{2}(a)\]
with elements \(\xi_{1}\) and \(\xi_{2}\) of an orthonormal basis of \(L^{2}(\mathbb{R})\). In this case, we obtain the representation
\[\widetilde{Y(t,x)}(iy_{1}) \tag{2.6}\] \[= y_{0}\sum_{n=0}^{\infty}\int_{R(t,x)}\int_{R(s_{1},a_{1})}...\int _{R(s_{n-1},a_{n-1})}\prod_{j=1}^{n}\left(i\beta_{0}(s_{j},a_{j})\mu_{1}(s_{j},a_{j})y_{1}\right)ds_{n}da_{n}...ds_{1}da_{1}\] \[= y_{0}\sum_{n=0}^{\infty}\frac{1}{(n!)^{2}}\left(i\eta(t,x)y_{1} \right)^{n},\]
where \(\eta(t,x):=\int_{R(t,x)}\beta_{0}(s,a)\mu_{1}(s,a)dsda\). Let us now have a look at the expression
\[i^{k}\left(\eta(t,x)y_{1}\right)^{k}\exp(-\frac{1}{2}\left|y_{1}\right|^{2}).\]
Then, for a standard normally distributed random variable \(Z\), the inverse Fourier transform of the latter is given by
\[\begin{split} i^{k}\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\left(\eta(t,x)y_{1}\right)^{k}\exp(-\tfrac{1}{2}\left|y_{1}\right|^{2})\exp(iu_{1}y_{1})dy_{1}&=i^{k}E\left[\left(\eta(t,x)Z\right)^{k}\exp(iu_{1}Z)\right]\\ &=\left(\eta(t,x)\right)^{k}\frac{\partial^{k}}{\partial u_{1}^{k}}\varphi_{Z}(u_{1})=\left(\eta(t,x)\right)^{k}\frac{\partial^{k}}{\partial u_{1}^{k}}\big{(}\exp(-\tfrac{1}{2}u_{1}^{2})\big{)}\\ &=\left(\eta(t,x)\right)^{k}(-1)^{k}h_{k}(u_{1})\exp(-\tfrac{1}{2}u_{1}^{2}),\end{split}\]
where \(\varphi_{Z}\) denotes the characteristic function of \(Z\) and where \(h_{k}\) is the \(k\)'th Hermite polynomial.
So, using dominated convergence, the inverse Fourier transform of the function \(g\) (for \(m=1\)) is
\[b(u_{1})\exp(-\frac{1}{2}u_{1}^{2}), \tag{2.7}\]
where
\[b(u_{1}):=y_{0}\sum_{n=0}^{\infty}\frac{1}{(n!)^{2}}\left(\eta(t,x)\right)^{n} (-1)^{n}h_{n}(u_{1}).\]
Let us now show that the function \(b\) cannot be non-negative. For this purpose let us recall the following properties of Hermite polynomials:
\[h_{n}(x+y)=\sum_{k=0}^{n}\frac{n!}{(n-k)!k!}x^{n-k}h_{k}(y)\]
and
\[h_{k}\left(\int_{\mathbb{R}^{2}}\phi(s,a)B(ds,da)\right)=\left(\int_{\mathbb{ R}^{2}}\phi(s,a)B(ds,da)\right)^{\diamond k},\]
where \(\diamond\) denotes the Wick product and where \(\phi\in L^{2}(\mathbb{R}^{2})\) with \(\left\|\phi\right\|_{L^{2}(\mathbb{R}^{2})}=1\) (see [HOUZ]). So we obtain that
\[h_{n}(c+\int_{\mathbb{R}^{2}}\phi(s,a)B(ds,da)) = \sum_{k=0}^{n}\frac{n!}{(n-k)!k!}c^{n-k}h_{k}(\int_{\mathbb{R}^{2 }}\phi(s,a)B(ds,da))\] \[= \left(c+\int_{\mathbb{R}^{2}}\phi(s,a)B(ds,da)\right)^{\diamond n}\]
for all \(c\in\mathbb{R}\). Assume that the above function \(b\) is non-negative. Then the random variable
\[b(c+\int_{\mathbb{R}^{2}}\phi(s,a)B(ds,da)) = y_{0}\sum_{n=0}^{\infty}\frac{1}{(n!)^{2}}\left(\eta(t,x)\right)^{n}(-1)^{n}h_{n}(c+\int_{\mathbb{R}^{2}}\phi(s,a)B(ds,da))\] \[= y_{0}\sum_{n=0}^{\infty}\frac{1}{(n!)^{2}}\left(-\eta(t,x)\left(c+\int_{\mathbb{R}^{2}}\phi(s,a)B(ds,da)\right)\right)^{\diamond n}.\]
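Before moving on, the claim that \(b\) takes negative values can also be checked numerically. The following is a sketch with assumed values \(y_{0}=1\) and \(\eta(t,x)=1\), using the probabilists' Hermite polynomials for the \(h_{n}\) above.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

# b(u) = y_0 * sum_n (1/(n!)^2) * eta^n * (-1)^n * He_n(u); the series converges fast.
def b(u, eta=1.0, y0=1.0, n_terms=40):
    total = np.zeros_like(np.asarray(u, dtype=float))
    for n in range(n_terms):
        coeffs = np.zeros(n + 1); coeffs[n] = 1.0
        total += ((-eta) ** n / factorial(n) ** 2) * hermeval(u, coeffs)
    return y0 * total

u = np.linspace(-6.0, 6.0, 1201)
print(b(u).min())   # negative: e.g. b(3) is about -0.45, so b is not non-negative
```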
## 3 Background
To simplify the notation we sometimes put \(z=(t,x),\zeta=(s,a)\) in the following:
Let \({\cal P}\) be the predictable \(\sigma\)-algebra of subsets of \(\Omega\times R_{z_{0}}\) generated by the sets \((z,z^{\prime}]\times A\) where \(A\in{\cal F}_{z}\) and we denote by \({\cal D}\) the \(\sigma\)-algebra of \(\Omega\times R_{z_{0}}\times R_{z_{0}}\) generated by the sets \((z_{1},z_{1}^{\prime}]\times(z_{2},z_{2}^{\prime}]\times A\) where \((z_{1},z_{1}^{\prime}]\bar{\wedge}(z_{2},z_{2}^{\prime}]\) and \(A\in{\cal F}_{z_{1}\lor z_{2}}\).
The solutions of BSPDE in the plane, which we want to discuss in connection with the stochastic maximum principle in Section 5 and 6 in more detail, will live in the following spaces:
* \(L^{2}_{a,1}\) is the space of predictable processes \(\{\phi(z),z\in R_{z_{0}}\}\), such that \(E\left[\int_{R_{z}}\phi(z)^{2}dz\right]<\infty\),
* \(L^{2}_{a,2}\) is the space of processes \(\{\psi(z,z^{\prime}),(z,z^{\prime})\in R_{z_{0}}\times R_{z_{0}}\}\), such that 1. \(\psi(z,z^{\prime})=0\) unless \(z\bar{\wedge}z^{\prime}\), 2. \(\psi\) is \({\cal D}\)-measurable, 3. \(E\left[\int_{R_{z}}\int_{R_{z}}\psi(z,z^{\prime})^{2}dzdz^{\prime}\right]<\infty\).
### The Ito formula
To study such optimal control problems, we will use a version of the Ito formula for such systems. First we introduce some notation from Wong & Zakai [WZ].
* We put \(\zeta=(\zeta_{1},\zeta_{2})=(s,a)\in\mathbb{R}\times\mathbb{R}\) and \(d\zeta=d\zeta_{1}d\zeta_{2}=dsda\),
* \(B(t,x)\) is a Brownian sheet; \(t\geq 0,x\in\mathbb{R}\),
* \(z=(z_{1},z_{2})=(t,x),R_{z}=[0,z_{1}]\times[0,z_{2}]\),
* \(\int_{R_{z}}\varphi(\zeta)B(d\zeta)\) denotes the Ito integral with respect to \(B(\cdot)\) over \(R_{z}\),
* \(\int_{R_{z}}\psi(\zeta)d\zeta\) is 2-dimensional Lebesgue integral of \(\psi\),
* If \(a=(a_{1},a_{2}),b=(b_{1},b_{2})\), then \(a\lor b=(\max(a_{1},b_{1}),\max(a_{2},b_{2}))\).
**Theorem 3.1** (Ito formula, Wong & Zakai [WZ]): _Suppose_
\[Y(z)=Y_{0}+\int_{R_{z}}\alpha(\zeta)d\zeta+\int_{R_{z}}\beta(\zeta)B(d\zeta). \tag{3.1}\]
_Then, if \(f:\mathbb{R}\to\mathbb{R}\) is smooth, we have_
\[f(Y(z)) =f(Y_{0})+\int_{R_{z}}f^{\prime}(Y(\zeta))[\alpha(\zeta)d\zeta+\beta (\zeta)B(d\zeta)]+\tfrac{1}{2}\int_{R_{z}}f^{\prime\prime}(Y(\zeta))\beta^{2}( \zeta)d\zeta\] \[+\iint_{R_{z}\times R_{z}}f^{\prime\prime}(Y(\zeta\vee\zeta^{ \prime}))\beta(\zeta^{\prime})\beta(\zeta)B(d\zeta)B(d\zeta^{\prime})+\iint_{R_ {z}\times R_{z}}\Bigl{\{}f^{\prime\prime}(Y(\zeta\vee\zeta^{\prime}))\beta( \zeta^{\prime})\alpha(\zeta)\] \[+\tfrac{1}{2}f^{(3)}(Y(\zeta\vee\zeta^{\prime}))\beta(\zeta^{ \prime})\beta^{2}(\zeta)\Bigr{\}}d\zeta B(d\zeta^{\prime})\] \[+\iint_{R_{z}\times R_{z}}\Bigl{\{}f^{\prime\prime}(Y(\zeta\vee \zeta^{\prime}))\beta(\zeta^{\prime})\alpha(\zeta)+\tfrac{1}{2}f^{(3)}(Y( \zeta\vee\zeta^{\prime}))\beta(\zeta^{\prime})\beta^{2}(\zeta)\Bigr{\}}B(d \zeta)d\zeta^{\prime}\] \[+\iint_{R_{z}\times R_{z}}\Bigl{\{}f^{\prime\prime}(Y(\zeta\vee \zeta^{\prime}))\beta(\zeta^{\prime})\beta(\zeta)+\tfrac{1}{2}f^{(3)}(Y(\zeta \vee\zeta^{\prime}))\left[\alpha(\zeta^{\prime})\beta^{2}(\zeta)+\alpha(\zeta) \beta^{2}(\zeta^{\prime})\right]\] \[+\tfrac{1}{4}f^{(4)}(Y(\zeta\vee\zeta^{\prime}))\beta^{2}(\zeta^{ \prime})\beta^{2}(\zeta)\Bigr{\}}d\zeta d\zeta^{\prime}.\]
**Remark 3.2**: _Except for a deleted factor \(\frac{1}{4}\) in the beginning of the term (3.1), this formula agrees with Proposition 5.1 in Wong & Zakai [WZ]. In the case \(\alpha=0\) it is in agreement with the formula given by Imkeller [1], p. 35._
It is proved in [WZ] that the double \(B(d\zeta)B(d\zeta^{\prime})\)-integrals, and the mixed \(d\zeta B(d\zeta^{\prime})\) and \(B(d\zeta)d\zeta^{\prime}\)-integrals are all weak martingales and hence have expectation \(0\). Therefore, by the Ito formula above we get the following:
**Theorem 3.3**: _(Dynkin formula)_
\[E[f(Y(z))] =f(Y_{0})+E\Bigl{[}\int_{R_{z}}\Bigl{\{}\alpha(\zeta)f^{\prime}( Y(\zeta))+\frac{1}{2}\beta^{2}(\zeta)f^{{}^{\prime\prime}}(Y(\zeta))\Bigr{\}}d\zeta\] \[+\iint_{R_{z}\times R_{z}}\Bigl{\{}f^{\prime\prime}(Y(\zeta\vee \zeta^{\prime}))\beta(\zeta^{\prime})\beta(\zeta)+\tfrac{1}{2}f^{(3)}(Y(\zeta \vee\zeta^{\prime}))\Bigl{(}\alpha(\zeta^{\prime})\beta^{2}(\zeta)+\alpha( \zeta)\beta^{2}(\zeta^{\prime})\Bigr{)}\] \[+\tfrac{1}{4}f^{(4)}(Y(\zeta\vee\zeta^{\prime}))\beta^{2}(\zeta^ {\prime})\beta^{2}(\zeta)\Bigr{\}}d\zeta d\zeta^{\prime}\Bigr{]}.\]
**Lemma 3.4** (Integration by parts): _Suppose that_
\[Y_{k}(z)=Y_{k}(0)+\int_{R_{z}}\alpha_{k}(\zeta)d\zeta+\int_{R_{z}}\beta_{k}( \zeta)B(d\zeta),\quad k=1,2.\]
_Then_
\[E[Y_{1}(z)Y_{2}(z)] =Y_{1}(0)Y_{2}(0)+E\Bigl{[}\int_{R_{z}}\Bigl{\{}Y_{1}(\zeta)\alpha _{2}(\zeta)+Y_{2}(\zeta)\alpha_{1}(\zeta)+\beta_{1}(\zeta)\beta_{2}(\zeta)\] \[+2\int_{R_{z}}\beta_{1}(\zeta^{\prime})d\zeta^{\prime}\beta_{2}( \zeta)\Bigr{\}}d\zeta\Bigr{]}.\]
Proof. The proof follows from the Ito formula, Proposition 5.1 in Wong & Zakai [WZ]. \(\Box\)
## 4 BSPDEs in the plane
Let us recall now the representation of square integrable martingales.
**Theorem 4.1** (Wong & Zakai [WZ]): _If \(M=\{M(z),{\cal F}_{z},z\in{\mathbb{R}}_{+}^{2}\}\) is a square integrable martingale, then for each \(z\in{\mathbb{R}}_{+}^{2}\)_
\[M(z)=M(0)+\int_{R_{z}}\phi(\zeta)B(d\zeta)+\iint\limits_{R_{z}\times R_{z}}\psi (\zeta,\zeta^{\prime})B(d\zeta)B(d\zeta^{\prime}), \tag{4.1}\]
_where \(\phi,\psi\) are adapted processes._
Let \(Z=(T,X)\), fix a rectangle \(R_{Z}=[0,T]\times[0,X]\), let \(\xi\) be an \({\cal F}_{Z}\)-measurable random variable, and let \(h(\omega,\zeta,p,q)\) be a \({\cal P}\times{\cal B}_{\mathbb{R}}\times{\cal B}_{\mathbb{R}}\)-measurable function such that \(\int_{R_{z}}|h(\zeta,p(\zeta),q(\zeta))|d\zeta<\infty\). Then we can define a triple of processes \((p,q,r)\in L_{a,1}^{2}\times L_{a,1}^{2}\times L_{a,2}^{2}\) as a solution of the BSPDE in the plane
\[p(z) =\xi-\int_{R_{Z}\setminus R_{z}}h(\zeta,p(\zeta),q(\zeta))d\zeta- \int_{R_{Z}\setminus R_{z}}q(\zeta)B(d\zeta)\] \[-\int_{R_{Z}\setminus R_{z}}\int_{R_{Z}\setminus R_{z}}r(\zeta, \zeta^{\prime})B(d\zeta)B(d\zeta^{\prime}). \tag{4.2}\]
Alternatively, let us introduce the notation
\[M_{r}(z)=M_{r}(t,x)=\iint\limits_{R_{z}\times R_{z}}r(\zeta,\zeta^{\prime})B(d\zeta)B(d\zeta^{\prime}),\quad r\in L^{2}_{a,2}. \tag{4.3}\]
Then \(M_{r}(z)\) is a martingale, and we can write the equation for \((p,q,r)\) above in differential form as follows
\[p(dz) =h(z,p(z),q(z))dz+q(z)B(dz)+M_{r}(dz),\quad z\leq Z, \tag{4.4}\] \[p(Z) =\xi.\]
**Assumptions** We impose the following set of assumptions:
1. \(\xi\in L^{2}(\Omega,\mathcal{F}_{z_{0}},P)\),
2. \(h(\cdot,p,q)\in L^{2}_{a,1}\) for all \(p,q\in\mathbb{R}\),
3. \(|h(\zeta,p,q)-h(\zeta^{\prime},p^{\prime},q^{\prime})|^{2}\leq K_{1}|p-p^{ \prime}|^{2}+K_{2}|q-q^{\prime}|^{2}\), for all \(p,q,p^{\prime},q^{\prime}\in\mathbb{R}\) and \(\zeta\in R_{Z}\).
Let \(J_{0}\) be the Bessel function of order zero and \(r_{0}\approx 1.4458\) be the first positive zero of \(t\mapsto J_{0}(2\sqrt{t})\):
\[r_{0}=\inf\left\{t>0:J_{0}(2\sqrt{t})=\sum_{j=0}^{\infty}\frac{(-1)^{j}}{j!^{2}}t^{j}=0\right\}.\]
**Theorem 4.2** (Existence and Uniqueness Zaidi & Nualart [ZN]): _Under the above assumptions (i)-(iii) and if the Lipschitz constants satisfy \(K_{1}|z_{0}|<\sqrt{r_{0}}\) and \(K_{2}|z_{0}|<\sqrt{r_{0}}\), there exists a unique solution of the BSPDE (4.2)._
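As a quick numerical sanity check (a sketch), \(r_{0}\) is the square of half the first positive zero \(j_{0,1}\) of \(J_{0}\), since \(J_{0}(2\sqrt{t})=\sum_{j\geq 0}\frac{(-1)^{j}}{j!^{2}}t^{j}\):

```python
from scipy.special import jn_zeros

j01 = jn_zeros(0, 1)[0]      # first positive zero of J_0, about 2.40483
print((j01 / 2.0) ** 2)      # about 1.4458, matching the value of r_0 quoted above
```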
### Closed formula (1) for solutions of linear BSPDEs
In particular, let us consider the linear BSPDE in the plane of the form
\[\begin{cases}p(dz)&=[\alpha_{0}(z)p(z)+\alpha_{1}(z)(q(z)+2\int_{R_{z}}q(\zeta ^{\prime})d\zeta^{\prime})+\alpha_{2}(z)]dz\\ &+q(z)B(dz)+M_{r}(dz),\quad 0\leq z\leq Z,\\ p(Z)&=\xi,\end{cases} \tag{4.5}\]
where \(\alpha_{0},\alpha_{1},\alpha_{2}\) are given bounded deterministic functions.
We define the linear forward SDE of the form
\[\begin{cases}\Gamma(dz)&=-\Gamma(z)[\alpha_{0}(z)dz+\alpha_{1}(z)B(dz)],\quad 0 \leq z\leq Z,\\ \Gamma(0)&=1.\end{cases} \tag{4.6}\]
**Remark 4.3**: _Note that for each given \(z>0\) the random variable \(\Gamma(z)\) has a probability density. This follows from Theorem 2.4.2 in Nualart [N]. In particular, we have that_
\[\Gamma(z)\neq 0\text{ a.s.}\]
**Remark 4.4**: _As mentioned before a semi-explicit expression for the solution \(\Gamma\) of this equation is given by (2.2) for the corresponding coefficients._
Applying the chain rule and taking conditional expectation, we get
\[p(Z)\Gamma(Z) =E\Big{[}p(z)\Gamma(z)+\int_{z}^{Z}\Big{\{}\alpha_{0}(\zeta)\Gamma( \zeta)p(\zeta)-\alpha_{0}(\zeta)\Gamma(\zeta)p(\zeta)\] \[-\alpha_{1}(\zeta)\Gamma(\zeta)\left(q(\zeta)+2\int_{z}^{Z}q( \zeta^{\prime})d\zeta^{\prime}\right)\] \[+\alpha_{1}(\zeta)\Gamma(\zeta)\left(q(\zeta)+2\int_{z}^{Z}q( \zeta^{\prime})d\zeta^{\prime}\right)\Big{\}}d\zeta\Big{|}\mathcal{F}_{z}\Big{]}.\]
Therefore
\[p(z)=\frac{1}{\Gamma(z)}E\Big{[}\xi\Gamma(Z)+\int_{z}^{Z}\Gamma( \zeta)\alpha_{2}(\zeta)d\zeta\Big{|}\mathcal{F}_{z}\Big{]},\]
with \(\Gamma(z)\) given by the (semi-explicit) representation (2.2) for the corresponding coefficients.
We summarize as follows:
**Theorem 4.5** (Closed formula (1) for linear BSPDEs in the plane): _Assume that the coefficients \((\alpha_{i})_{i=0,1,2}\) are bounded deterministic processes. Then_
\[p(z)=\frac{1}{\Gamma(z)}E\Big{[}\xi\Gamma(Z)+\int_{z}^{Z}\Gamma (\zeta)\alpha_{2}(\zeta)d\zeta\Big{|}\mathcal{F}_{z}\Big{]}. \tag{4.7}\]
### Closed formula (2) for solutions of linear BSPDEs
The BSPDE (4.5) appears naturally as the adjoint equation in our time-space maximum principle. But it is also of interest to consider a general linear BSPDE in the unknowns \((u,v,w)\) of the form
\[u(dz) =-[b_{0}(z)u(z)+b_{1}(z)v(z)+b_{2}(z)]dz\] \[+v(z)B(dz)+M_{w}(dz),\quad 0\leq z\leq Z, \tag{4.8}\] \[u(Z) =\xi,\]
where \(b_{0},b_{1},b_{2}\) are given bounded deterministic functions.
To find the solution of this BSPDE we introduce a process \(\Gamma(z)\) of the form
\[\Gamma(dz) =\Gamma(z)b_{0}(z)dz+\kappa(z)B(dz),\quad 0\leq z\leq Z, \tag{4.9}\] \[\Gamma(0) =1.\]
By Lemma 3.4 we have
\[u(Z)\Gamma(Z) =u(z)\Gamma(z)+\int_{z}^{Z}\Big{\{}u(\zeta)\Gamma(\zeta)b_{0}(\zeta)\] \[+\Gamma(\zeta)[-b_{0}(\zeta)u(\zeta)-b_{1}(\zeta)v(\zeta)-b_{2}( \zeta)]+\kappa(\zeta)v(\zeta)\] \[+2(\int_{0}^{\zeta}\kappa(\zeta^{\prime})d\zeta^{\prime})v(\zeta )\Big{\}}d\zeta+G(Z)-G(z), \tag{4.10}\]
where \(G\) is a martingale. Rearranging the terms we get
\[u(z)\Gamma(z) =\xi\Gamma(Z)+\int_{z}^{Z}\Big{\{}\Gamma(\zeta)b_{1}(\zeta)-\kappa(\zeta)-2\int_{0}^{\zeta}\kappa(\zeta^{\prime})d\zeta^{\prime}\Big{\}}v(\zeta)d\zeta\] \[+\int_{z}^{Z}\Gamma(\zeta)b_{2}(\zeta)d\zeta+G(Z)-G(z). \tag{4.11}\]
Now choose \(\kappa(\zeta)\) such that
\[\kappa(\zeta)+2\int_{0}^{\zeta}\kappa(\zeta^{\prime})d\zeta^{ \prime}=\Gamma(\zeta)b_{1}(\zeta). \tag{4.12}\]
**Remark 4.6**: _It is easy to see, for example by Picard iteration, that a unique solution \(\kappa\) of (4.12) exists with \(\int_{0}^{Z}\kappa^{2}(\zeta)d\zeta<\infty\) for all \(Z<\infty\). Specifically, if we put_
\[g(\zeta):=\Gamma(\zeta)b_{1}(\zeta), \tag{4.13}\]
_we can write the equation on the form_
\[\kappa(z) =g(z)-2\int_{0}^{z}\kappa(\zeta_{1})d\zeta_{1}\] \[=g(z)-2\int_{0}^{z}\left\{g(\zeta_{1})-2\int_{0}^{\zeta_{1}}\kappa(\zeta_{2})d\zeta_{2}\right\}d\zeta_{1}\] \[=g(z)-2\int_{0}^{z}g(\zeta_{1})d\zeta_{1}+(-2)^{2}\int_{0}^{z}\int_{0}^{\zeta_{1}}\left\{g(\zeta_{2})-2\int_{0}^{\zeta_{2}}\kappa(\zeta_{3})d\zeta_{3}\right\}d\zeta_{2}d\zeta_{1}\] \[=g(z)-2\int_{0}^{z}g(\zeta_{1})d\zeta_{1}+(-2)^{2}\int_{0}^{z}\int_{0}^{\zeta_{1}}g(\zeta_{2})d\zeta_{2}d\zeta_{1}\] \[+(-2)^{3}\int_{0}^{z}\int_{0}^{\zeta_{1}}\int_{0}^{\zeta_{2}}\left\{g(\zeta_{3})-2\int_{0}^{\zeta_{3}}\kappa(\zeta_{4})d\zeta_{4}\right\}d\zeta_{3}d\zeta_{2}d\zeta_{1}.\]
_Proceeding like this we get by induction the solution_
\[\kappa(z)=g(z)+\sum_{m=1}^{\infty}(-2)^{m}J_{m}^{(g)}(z), \tag{4.14}\]
_where, for \(m=1,2,...\),_
\[J_{m}^{(g)}(z)=\int_{0}^{z}\int_{0}^{\zeta_{1}}\int_{0}^{\zeta_{2}}\cdots\int_{0}^{\zeta_{m-1}}g(\zeta_{m})\,d\zeta_{m}\,d\zeta_{m-1}\cdots d\zeta_{2}\,d\zeta_{1}. \tag{4.15}\]
_Note that since \(|J_{m}^{(g)}(z)|\) has the order of magnitude \((m!)^{-2}\), the series converges absolutely for all \(z\)._
Summarizing this we get
**Theorem 4.7** (Closed formula (2) for linear BSPDEs in the plane): _The solution \(u(z)\) of the BSPDE (4.8) is_
\[u(z)=\frac{1}{\Gamma(z)}E\Big{[}\xi\Gamma(Z)+\int_{z}^{Z}\Gamma(\zeta)b_{2}( \zeta)d\zeta\Big{|}\mathcal{F}_{z}\Big{]}, \tag{4.16}\]
_where_
\[\Gamma(dz) =\Gamma(z)b_{0}(z)dz+\kappa(z)B(dz),\quad 0\leq z\leq Z, \tag{4.17}\] \[\Gamma(0) =1,\]
_and \(\kappa\) given by (4.14)-(4.15) and (4.13), is the unique solution of the equation_
\[\kappa(z)+2\int_{0}^{z}\kappa(\zeta)d\zeta=\Gamma(z)b_{1}(z). \tag{4.18}\]
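To illustrate how the series (4.14)-(4.15) behaves in practice, the following is a minimal numerical sketch that builds the partial sums of (4.14) on a grid, for an assumed bounded test function \(g\) (in (4.13), \(g=\Gamma b_{1}\)), and verifies that the result solves the plane Volterra equation (4.18).

```python
import numpy as np

n = 201
ts = np.linspace(0.0, 1.0, n)
xs = np.linspace(0.0, 1.0, n)
dt, dx = ts[1] - ts[0], xs[1] - xs[0]
T, X = np.meshgrid(ts, xs, indexing="ij")
g = np.cos(T + X)                          # assumed test function in place of Gamma * b_1

def rect_integral(f):
    # Discrete integral of f over the rectangle [0, t] x [0, x] (cumulative Riemann sums).
    return np.cumsum(np.cumsum(f, axis=0), axis=1) * dt * dx

kappa, Jm = g.copy(), g.copy()
for m in range(1, 40):
    Jm = rect_integral(Jm)                 # the m-fold iterated integral J_m^{(g)}
    kappa += (-2.0) ** m * Jm              # partial sum of the series (4.14)

residual = kappa + 2.0 * rect_integral(kappa) - g
print(np.abs(residual).max())              # essentially zero: kappa solves (4.18) on the grid
```

The fast convergence reflects the \((m!)^{-2}\) bound on \(|J_{m}^{(g)}|\) noted in Remark 4.6.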
## 5 Maximum principle approaches
Given a subset \(U\) of \(\mathbb{R}\), we denote by \(\mathcal{U}\) the set of all \(\mathcal{F}_{t,x}\)-adapted control processes \(u=\{u(t,x),t<T,x<X\}\) valued in \(U\). We therefore define the set of admissible control processes \(\mathcal{A}\subset\mathcal{U}\) to be the collection of all \(\mathcal{F}_{t,x}\)-adapted processes with values in \(U\).
Let \(f\) and \(g\) be given functions and consider the performance functional
\[J(u)=E\Big{[}\int_{R_{Z}}f(\zeta,Y(\zeta),u(\zeta))d\zeta+g(Y(Z))\Big{]},\]
where
\[R_{Z}=[0,T]\times[0,X],\]
with \(Z=(T,X)\) for some given \(T>0,X>0\), and the state \(Y\) of the system is described by the equation
\[Y(z)=Y(t,x)=Y(0)+\int_{R_{z}}\alpha(\zeta,Y(\zeta),u(\zeta))d\zeta+\int_{R_{z}} \beta(\zeta,Y(\zeta),u(\zeta))B(d\zeta),\quad z\leq Z, \tag{5.1}\]
where \(R_{z}=[0,t]\times[0,x]\) when \(z=(t,x)\), and \(u\) denotes a control process.
**Problem 5.1**: _We want to find \(\widehat{u}\in\mathcal{A}\) such that_
\[J(\widehat{u})=\sup_{u\in\mathcal{A}}J(u). \tag{5.2}\]
The maximum principle approach to this problem is to introduce the following associated Hamiltonian:
\[H(z,y,u,p,q,\overline{q})=f(z,y,u)+\alpha(z,y,u)p+\beta(z,y,u)[q+2\overline{ q}], \tag{5.3}\]
where \(\overline{q}(z)=\int_{R_{z}}q(\zeta)d\zeta\) and the adjoint processes \((p,q,r)=(p(t,x),q(t,x),r(t,x,t^{\prime},x^{\prime}))\) are given by the equation
\[\begin{cases}p(dz)&=-\frac{\partial H}{\partial y}(z,Y(z),u(z),p(z),q(z),\overline{q}(z))dz\\ &+q(z)B(dz)+M_{r}(dz),\quad 0\leq(t,x)\leq(T,X),\\ p(Z)&=\frac{\partial g}{\partial y}(Y(Z)).\end{cases} \tag{5.4}\]
or, in integrated form,
\[p(z) =\frac{\partial g}{\partial y}(Y(Z))-\int_{R_{z}}\frac{\partial H }{\partial y}(\zeta,Y(\zeta),p(\zeta),q(\zeta),\overline{q}(\zeta))d\zeta\] \[+\int_{R_{z}}q(\zeta)B(d\zeta)+\iint\limits_{R_{z}\times R_{z}}r (\zeta,\zeta^{\prime})B(d\zeta)B(d\zeta^{\prime}),\quad z\leq Z. \tag{5.5}\]
There are two versions of the maximum principle for this problem, namely the so-called _sufficient maximum principle_ and the _necessary maximum principle_. We present them both below.
### The sufficient maximum principle
**Theorem 5.2** (Sufficient maximum principle): _Suppose \(\widehat{u}\in\mathcal{A}\) with corresponding solutions \(\widehat{Y},(\widehat{p},\widehat{q})\) of the equations above. Moreover, suppose that \(y\longmapsto g(y)\) is concave and \(y,u\longmapsto H(z,y,u,p,q,\overline{q})\) is concave for all \(p,q,\overline{q}\) and that_
\[\sup_{v\in U}H(z,\widehat{Y}(z),v,\widehat{p}(z),\widehat{q}(z),\widehat{\overline{q}}(z))=H(z,\widehat{Y}(z),\widehat{u}(z),\widehat{p}(z),\widehat{q}(z),\widehat{\overline{q}}(z)), \tag{5.6}\]
_for some \(\widehat{u}\in\mathcal{A}\). Then \(\widehat{u}\) is an optimal control for problem (5.2)._
Proof. Suppose \(\widehat{u}\in\mathcal{A}\) satisfies (5.6) with corresponding \(\widehat{Y}\). Choose another \(u\in\mathcal{A}\). Then
\[J(u)-J(\widehat{u})=I_{1}+I_{2}, \tag{5.7}\]
where
\[I_{1}=E\Big{[}\int_{R_{Z}}\Big{\{}f(\zeta,Y(\zeta),u(\zeta))-f( \zeta,\widehat{Y}(\zeta),\widehat{u}(\zeta))\Big{\}}d\zeta\Big{]}=E\Big{[} \int_{R_{Z}}\tilde{f}(\zeta)d\zeta\Big{]} \tag{5.8}\]
and
\[I_{2}=E[g(Y(Z))-g(\widehat{Y}(Z))]. \tag{5.9}\]
Using the definition of \(H\) we can write
\[I_{1}=E\Big{[}\int_{R_{Z}}\Big{\{}H(\zeta)-\widehat{H}(\zeta)- \tilde{\alpha}(\zeta)\widehat{p}(\zeta)-\tilde{\beta}(\zeta)[\widehat{q}(\zeta )+2\int_{R_{z}}\widehat{q}(\zeta^{\prime})d\zeta^{\prime}]\Big{\}}d\zeta\Big{]}, \tag{5.10}\]
where \(\tilde{\alpha}=\alpha-\widehat{\alpha},\widehat{\alpha}=\alpha(\zeta, \widehat{Y}(\zeta),\widehat{u}(\zeta))\) etc.
Using the concavity of \(g\) and Lemma 3.4, and the fact that the \(B(dz)\)-integrals and the \(B(dz)B(dz^{\prime})\)-integrals are orthogonal (see [CW], Theorem 2.5), we get
\[I_{2} \leq E\Big{[}\frac{\partial g}{\partial y}(\widehat{Y}(Z)) \tilde{Y}(Z)\Big{]}=E\Big{[}\widehat{p}(Z)\tilde{Y}(Z)\Big{]}\] \[=E\Big{[}\int_{R_{Z}}\Big{\{}\widehat{p}(\zeta)\tilde{\alpha}( \zeta)-\frac{\partial H}{\partial y}(\zeta)\tilde{Y}(\zeta)+\tilde{\beta}( \zeta)[\widehat{q}(\zeta)+2\int_{R_{z}}\widehat{q}(\zeta^{\prime})d\zeta^{ \prime}]\Big{\}}d\zeta\Big{]}. \tag{5.11}\]
Adding (5.10) and (5.11) we get, using the concavity of \(H(y,u)\)
\[J(u)-J(\widehat{u}) \leq E\Big{[}\int_{R_{Z}}\Big{\{}H(\zeta)-\widehat{H}(\zeta)-\frac{\partial\widehat{H}}{\partial y}(\zeta)\tilde{Y}(\zeta)\Big{\}}d\zeta\Big{]} \tag{5.12}\] \[\leq E\Big{[}\int_{R_{Z}}\frac{\partial\widehat{H}}{\partial u}(\zeta)\tilde{u}(\zeta)d\zeta\Big{]}\leq 0\text{ by condition (5.6).} \tag{5.13}\]
This proves that
\[J(u)-J(\widehat{u})\leq 0\text{ for all }u\in\mathcal{A}, \tag{5.14}\]
and therefore \(\widehat{u}\) is optimal. \(\square\)
### The necessary maximum principle
It is a drawback of the sufficient maximum principle that we have to assume that \(y\longmapsto g(y)\) and \((y,u)\longmapsto H(z,y,u,p,q,\overline{q})\) are concave. The following result does not need concavity, but we have to add conditions on the set \(\mathcal{A}\) of admissible controls instead, as follows:
1. \(\mathcal{A}\) is a convex set
2. For all \(z_{0}=(t_{0},x_{0})<Z=(T,X)\) and all bounded \(\mathcal{F}_{z_{0}}\)-measurable random variables \(\theta_{z_{0}}\), the control \[u_{z_{0}}(\zeta)=\theta_{z_{0}}\mathbf{1}_{R_{z_{0}}}(\zeta)\] is admissible, where \[\mathbf{1}_{R_{z_{0}}}(\zeta)=\begin{cases}1\text{ if }\zeta\in R_{z_{0}}\\ 0\text{ if }\zeta\notin R_{z_{0}}\end{cases}\] (5.15)
is the indicator function of the rectangle \(R_{z_{0}}=[t_{0},T]\times[x_{0},X]\).
**Lemma 5.3**: _For all \(u,v\in\mathcal{A}\) the derivative process_
\[G(\zeta):=\lim_{\epsilon\to 0}\frac{1}{\epsilon}(Y^{u+\epsilon v}(\zeta)-Y^{u}(\zeta))\]
_satisfies the equation_
\[G(z) =G(0)+\int_{R_{z}}\Big{\{}\frac{\partial\alpha}{\partial y}(\zeta)G(\zeta)+\frac{\partial\alpha}{\partial u}(\zeta)v(\zeta)\Big{\}}d\zeta\] \[+\int_{R_{z}}\Big{\{}\frac{\partial\beta}{\partial y}(\zeta)G(\zeta)+\frac{\partial\beta}{\partial u}(\zeta)v(\zeta)\Big{\}}B(d\zeta),\]
_where \(\frac{\partial\alpha}{\partial y}(\zeta)=\frac{\partial\alpha}{\partial y}( \zeta,Y^{u}(\zeta),u(\zeta))\) etc._
Proof. This follows by the chain rule. \(\Box\)
**Lemma 5.4**: _For all \(u,v\in\mathcal{A}\), we have_
\[\frac{d}{d\epsilon}J(u+\epsilon v)_{\epsilon=0}=E\Big{[}\int_{R_{Z}}\frac{ \partial H}{\partial u}(\zeta,Y^{u}(\zeta),u(\zeta),p(\zeta),q(\zeta), \overline{q}(\zeta))v(\zeta)d\zeta\Big{]}.\]
Proof.
\[\frac{d}{d\epsilon}J(u+\epsilon v)_{\epsilon=0} =\lim_{\epsilon\to 0}\frac{1}{\epsilon}E\Big{[}\int_{R_{Z}}\Big{\{}f(\zeta,Y^{u+\epsilon v}(\zeta),(u+\epsilon v)(\zeta))-f(\zeta,Y^{u}(\zeta),u(\zeta))\Big{\}}d\zeta\] \[+g(Y^{u+\epsilon v}(Z))-g(Y^{u}(Z))\Big{]}\] \[=E\Big{[}\int_{R_{Z}}\Big{\{}\frac{\partial f}{\partial y}(\zeta,Y^{u}(\zeta),u(\zeta))G(\zeta)+\frac{\partial f}{\partial u}(\zeta,Y^{u}(\zeta),u(\zeta))v(\zeta)\Big{\}}d\zeta\] \[+\frac{\partial g}{\partial y}(Y^{u}(Z))G(Z)\Big{]}=I_{1}+I_{2},\]
where
\[I_{1} =E\Big{[}\int_{R_{Z}}\Big{\{}\frac{\partial H}{\partial y}(\zeta )-\frac{\partial\alpha}{\partial y}(\zeta)p(\zeta)-\frac{\partial\beta}{ \partial y}(\zeta)(q(\zeta)+2\overline{q}(\zeta))\Big{\}}G(\zeta)d\zeta\] \[+\int_{R_{Z}}\Big{\{}\frac{\partial H}{\partial u}(\zeta)-\frac{ \partial\alpha}{\partial u}(\zeta)p(\zeta)-\frac{\partial\beta}{\partial u}( \zeta)(q(\zeta)+2\overline{q}(\zeta))\Big{\}}v(\zeta)d\zeta\Big{]},\]
and
\[I_{2} =E\Big{[}\frac{\partial g}{\partial y}(Y^{u}(Z))G(Z)\Big{]}=E[p(Z) G(Z)]\] \[=E\Big{[}\int_{R_{Z}}\Big{(}p(\zeta)\{\frac{\partial\alpha}{ \partial y}(\zeta)G(\zeta)+\frac{\partial\alpha}{\partial u}(\zeta)v(\zeta) \}-\frac{\partial H}{\partial y}(\zeta)G(\zeta)\] \[+(q(\zeta)+2\overline{q}(\zeta))(\frac{\partial\beta}{\partial y }(\zeta)G(\zeta)+\frac{\partial\beta}{\partial u}(\zeta)v(\zeta))\Big{)}d \zeta\Big{]}.\]
Adding \(I_{1}\) and \(I_{2}\), we get
\[\frac{d}{d\epsilon}J(u+\epsilon v)_{\epsilon=0}=E\Big{[}\int_{R_{Z}}\frac{ \partial H}{\partial u}(\zeta)v(\zeta)d\zeta\Big{]}.\]
\(\Box\)
From Lemma 5.4, we deduce the following:
**Theorem 5.5** (Necessary maximum principle): _Suppose \(\widehat{u}\in\mathcal{A}\) is optimal for Problem 5.1. Then_
\[\frac{\partial H}{\partial u}(\zeta,\widehat{Y}(\zeta),\widehat{u}(\zeta), \widehat{p}(\zeta),\widehat{q}(\zeta),\widehat{\overline{q}}(\zeta))=0\text{ for a.a. }\zeta.\]
Proof. Since \(J(\widehat{u}+\epsilon v)\leq J(\widehat{u})\) for all \(\epsilon,v\), we get by Lemma 5.4 that
\[E\Big{[}\int_{R_{Z}}\frac{\partial H}{\partial u}(\zeta)v(\zeta)d\zeta\Big{]} \leq 0,\text{ for all }v\in\mathcal{A}.\]
In particular, applying this to
\[v(\zeta)=\theta_{z_{0}}\mathbf{1}_{R_{z_{0}}}(\zeta)\]
as in A2, this gives
\[E\Big{[}\int_{R_{z_{0}}}\frac{\partial H}{\partial u}(\zeta)\theta_{z_{0}}d \zeta\Big{]}\leq 0.\]
Since this holds for all \(z_{0}\) we deduce that
\[\frac{\partial^{2}}{\partial t_{0}\partial x_{0}}\left(E\left[\int_{R_{z_{0}}}\frac{\partial H}{\partial u}(\zeta)\theta_{z_{0}}d\zeta\right]\right)=E\left[\frac{\partial H}{\partial u}(z_{0})\theta_{z_{0}}\right]\leq 0.\]
Since this holds for all bounded \(\mathcal{F}_{z_{0}}\)-measurable \(\theta_{z_{0}}\), we conclude that
\[\frac{\partial H}{\partial u}(z_{0})=0.\]
\(\Box\)
## 6 Applications
### Return to the optimal harvesting problem in the plane
Suppose that the growth of a population at time \(t\) and position \(x\) with density \(Y(t,x)\) satisfies
\[Y_{u}(t,x)=Y(0,0)+\int_{0}^{t}\int_{0}^{x}\{\alpha_{0}Y_{u}(s,a)-u(s,a)\}dsda \quad+\int_{0}^{t}\int_{0}^{x}\beta_{0}Y_{u}(s,a)B(ds,da),\]
where \(\alpha_{0},\beta_{0}\) are given constants and \(Y(0,0)>0\).
For given constants \(T>0,X>0\) such that \(T>t,X>x\), define the combined utility of the harvesting and the terminal population by
\[J(u)=E\left[\int_{0}^{T}\int_{0}^{X}ln(u^{2}(s,a))dsda+\theta Y_{u}(T,X)\right],\]
where \(\theta\) is a given bounded, \(\mathcal{F}_{Z}\)-measurable random variable.
We want to find the harvesting strategy \(u^{*}(s,x)\) which maximizes the utility of the harvest, i.e.
\[J(u^{*})=\sup_{u\in\mathcal{A}}J(u).\]
**Problem 6.1**: _We want to find \(u^{*}\in\mathcal{A}\) such that_
\[J(u^{*})=\sup_{u\in\mathcal{A}}J(u).\]
The associated Hamiltonian to this case is
\[H(t,x,y,u,p,q,\overline{q})=\ln(u^{2})+(\alpha_{0}y-u)p+\beta_{0}y[q+2\overline{q}].\]
The Hamiltonian has a maximum at \(u\) given by the equation
\[\frac{\partial H}{\partial u}=\frac{2}{u}-p=0.\]
Therefore
\[u=\frac{2}{p}.\]
By Theorem 5.5 the component \(p\) of the solution \((p,q,r)\) of the BSDE
\[p(t,x) =\theta-\int_{R_{z}}(\alpha_{0}p(\zeta)+\beta_{0}[q(\zeta)+2\overline{q}(\zeta)])d\zeta\] \[+\int_{R_{z}}q(\zeta)B(d\zeta)+\iint\limits_{R_{z}\times R_{z}}r(\zeta,\zeta^{\prime})B(d\zeta)B(d\zeta^{\prime});\quad z\leq Z,\]
can be written
\[p(z)=\frac{1}{\Gamma(z)}E\Big{[}\frac{\Gamma(Z)}{\theta}\Big{|}\mathcal{F}_{z }\Big{]},\]
where \(\Gamma\) satisfies
\[\begin{cases}\Gamma(dz)&=-\Gamma(z)[-\alpha_{0}dz-\beta_{0}B(dz)];\quad 0 \leq z\leq Z,\\ \Gamma(0)&=1,\end{cases}\]
See also Remark 4.4.
We have proved:
**Theorem 6.2**: _Let \(z>0\) and assume that \(E\left[\frac{\Gamma(Z)}{\theta}\left|\mathcal{F}_{z}\right.\right]\neq 0\). Then the optimal harvesting rate \(u^{*}\) for Problem 6.1 is given by_
\[u^{*}(t,x)=u^{*}(z)=\frac{2}{p(z)}=\frac{2\Gamma(z)}{E\Big{[}\frac{\Gamma(Z)}{ \theta}\Big{|}\mathcal{F}_{z}\Big{]}}.\]
**Remark 6.3**: _If \(\alpha_{0}=0\), then using the martingale property we get that_
\[E\left[\frac{\Gamma(Z)}{\theta}\left|\mathcal{F}_{z}\right.\right]=\frac{ \Gamma(z)}{\theta}.\]
_We know, however, from Remark 4.3 that \(\Gamma(z)\) has a probability density. So in this case, we see that \(u^{*}(z)=2\theta\) a.e._
### Return to the linear-quadratic (LQ) problem in the plane
To illustrate the sufficient maximum principle we apply it to solve the linear-quadratic (LQ) control problem for time-space random fields discussed in the introduction:
Suppose the state \(Y(t,x)\) is given by
\[Y(t,x)=Y(0,0)+\int_{0}^{t}\int_{0}^{x}u(s,a)dsda+\beta B(t,x);\quad t\geq 0,x \in\mathbb{R}. \tag{6.1}\]
We want to drive the state \(Y(t,x)\) to \(0\) at time \(T\) and point \(X\) with minimal use of energy. Hence we put
\[J(u)=-\tfrac{1}{2}E\left[\int_{0}^{T}\int_{0}^{X}u^{2}(s,a)dsda+\theta Y^{2}(T,X)\right], \tag{6.2}\]
where \(\theta>0\) is a given constant.
**Problem 6.4**: _We want to find \(u^{*}\in\mathcal{A}\) such that_
\[J(u^{*})=\sup_{u\in\mathcal{A}}J(u). \tag{6.3}\]
The Hamiltonian in this case is
\[H(t,x,y,u,p,q,\overline{q})=-\tfrac{1}{2}u^{2}+up+\beta[q+2\overline{q}] \tag{6.4}\]
The maximum of \(u\mapsto H(u)\) is obtained when \(\frac{\partial H}{\partial u}=-u+p=0\), i.e. when
\[u=p. \tag{6.5}\]
The adjoint equation is
\[p(dz) =q(z)B(dz);\quad z<Z=(T,X)\] \[p(T,X) =\theta Y(T,X) \tag{6.8}\]
Let us try to put
\[p(t,x)=\lambda(t,x)Y(t,x) \tag{6.9}\]
for some deterministic function \(\lambda\). Then by the Ito formula
\[\begin{split}\lambda(t,x)Y(t,x)&=\lambda(0,0)Y(0,0)\\ &+\int_{0}^{t}\int_{0}^{x}\Big{\{}Y(s,a)\frac{\partial^{2}\lambda}{\partial s\partial a}(s,a)+\lambda(s,a)u(s,a)\Big{\}}dsda\\ &+\text{ terms containing }B(ds,da).\end{split} \tag{6.10}\]
Using the concept of quadratic variation of \(2-\)parameter martingales (see e.g. Imkeller [I2]), one finds that the decomposition of a (continuous) \(2-\)parameter "semimartingale", which is given by a sum of a \(2-\)parameter process of bounded variation and a \(2-\)parameter martingale, is unique. So, comparing the latter equation with the adjoint equation, we see that we must have
\[Y(t,x)\frac{\partial^{2}\lambda}{\partial t\partial x}(t,x)+\lambda(t,x)u(t,x) =0\text{ for all }t,x. \tag{6.11}\]
Combining this with (6.5) and (6.9) we get
\[Y(t,x)\Big{[}\frac{\partial^{2}\lambda}{\partial t\partial x}(t,x)+\lambda^{ 2}(t,x)\Big{]}=0, \tag{6.12}\]
with terminal condition
\[\lambda(T,X)=\theta. \tag{6.13}\]
In addition we get from (6.9) the other boundary condition
\[\lambda(0,0)=\frac{E[\theta Y(T,X)]}{Y(0,0)}. \tag{6.14}\]
With this choice of \(u,p,\lambda\) we see that all the conditions of the sufficient maximum principle are satisfied, and we have proved the following:
**Theorem 6.5**: _The optimal control \(\widehat{u}\) for the LQ problem (6.4) is given in feedback form by_
\[\widehat{u}(t,x)=\lambda(t,x)Y(t,x);\quad t\leq T,x\leq X, \tag{6.15}\]
_where \(\lambda(t,x)\) solves the time-space Riccati equation_
\[\begin{cases}\frac{\partial^{2}\lambda}{\partial t\partial x}(t,x)+\lambda^{2 }(t,x)=0;\quad 0\leq t\leq T,0\leq x\leq X,\\ \lambda(T,X)=\theta,\\ \lambda(0,0)=\frac{E[\theta Y(T,X)]}{Y(0,0)}.\end{cases} \tag{6.16}\]
**Remark 6.6**: _Let \(\varphi_{1}\) be a solution to the Riccati equation_
\[\dot{\varphi}_{1}(t)=(\varphi_{1}(t))^{2},\varphi_{1}(0)=1,0\leq t\leq T\]
_and \(\varphi_{2}\) be a solution to_
\[\dot{\varphi}_{2}(x)=-(\varphi_{2}(x))^{2},\varphi_{2}(0)=\theta,0\leq x\leq X.\]
_Define \(\lambda(t,x)=\alpha_{1}(t)\alpha_{2}(x)\), where \(\alpha_{1}(t):=\varphi_{1}(T-t)\) and \(\alpha_{2}(x):=\varphi_{2}(X-x)\). Then_
\[\frac{\partial^{2}\lambda(t,x)}{\partial t\partial x}=\dot{\varphi}_{1}(T-t)\dot{\varphi}_{2}(X-x)=(\alpha_{1}(t))^{2}(-(\alpha_{2}(x))^{2})=-(\lambda(t,x))^{2}\]
_with \(\lambda(T,X)=\varphi_{1}(0)\varphi_{2}(0)=\theta.\) By solving the Riccati equations, we find that \(\lambda\) given by_
\[\lambda(t,x)=\frac{1}{(1-T+t)(\theta^{-1}+X-x)}\]
_is an explicit solution to the above hyperbolic PDE with boundary condition \(\lambda(T,X)=\theta\) for \(0<T<1\). Let us now have a look at the other condition \(\lambda(0,0)=\theta E\left[Y(T,X)\right]/Y(0,0)\): we observe that_
\[E\left[Y(t,x)\right]=Y(0,0)+\int_{0}^{t}\int_{0}^{x}\lambda(s,a)E\left[Y(s,a) \right]dsda.\]
_So, if we use Picard iteration combined with the fact that \(\lambda\) can be written as a product of a function in \(t\) and another function in \(x\), we see that the solution of the latter equation has the representation_
\[E\left[Y(t,x)\right]=Y(0,0)f\left(\int_{0}^{t}\int_{0}^{x}\lambda(s,a)dsda \right),\]
_where the function \(f:\mathbb{R}\longrightarrow\mathbb{R}\) is defined by_
\[f(y)=\sum_{n\geq 0}\frac{y^{n}}{(n!)^{2}}.\]
_On the other hand,_
\[\int_{0}^{T}\int_{0}^{X}\lambda(s,a)dsda = \int_{0}^{T}\int_{0}^{X}\frac{1}{(1-T+s)(\theta^{-1}+X-a)}dsda\] \[= -\log(1-T)\log(1+X\theta).\]
_So the condition \(\lambda(0,0)=\theta E\left[Y(T,X)\right]/Y(0,0)\) is equivalent to_
\[\frac{1}{(1-T)(\theta^{-1}+X)}=\theta f(-\log(1-T)\log(1+X\theta))\]
_or, equivalently,_
\[1=(1-T)(1+X\theta)f(-\log(1-T)\log(1+X\theta)). \tag{6.17}\]
_For given \(T<1\) and \(\theta>0\) the expression on the right hand side of the latter equation converges to \((1-T)\) for \(X\to 0\). For \(X\to\infty\), this expression converges to \(\infty\). Because of continuity, we then see that there exists an \(X=X(T,\theta)>0\) such that the equation (6.17) is satisfied. Using such a horizon \(X\) gives the other boundary condition._
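The claims in Remark 6.6 can be illustrated numerically. The following sketch (with assumed values \(T=0.5\) and \(\theta=1\)) checks that the explicit \(\lambda\) solves \(\frac{\partial^{2}\lambda}{\partial t\partial x}+\lambda^{2}=0\), and locates a horizon \(X\) satisfying (6.17) by bisection.

```python
import numpy as np

T, theta = 0.5, 1.0            # assumed values for illustration

def lam(t, x, X):
    return 1.0 / ((1.0 - T + t) * (1.0 / theta + X - x))

# Mixed derivative by central finite differences at a sample point.
h, t0, x0, X0 = 1e-4, 0.2, 0.3, 1.0
mixed = (lam(t0 + h, x0 + h, X0) - lam(t0 + h, x0 - h, X0)
         - lam(t0 - h, x0 + h, X0) + lam(t0 - h, x0 - h, X0)) / (4 * h * h)
print(mixed + lam(t0, x0, X0) ** 2)           # ~0, i.e. the Riccati PDE holds

def f(y, n_terms=80):
    # f(y) = sum_{n >= 0} y^n / (n!)^2
    total, term = 0.0, 1.0
    for n in range(n_terms):
        total += term
        term *= y / ((n + 1.0) ** 2)
    return total

def rhs(X):
    return (1.0 - T) * (1.0 + X * theta) * f(-np.log(1.0 - T) * np.log(1.0 + X * theta))

lo, hi = 1e-6, 50.0                           # rhs(0) = 1 - T < 1 and rhs increases to infinity
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rhs(mid) < 1.0 else (lo, mid)
print(0.5 * (lo + hi), rhs(0.5 * (lo + hi)))  # the horizon X(T, theta) and a value ~1
```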
### Example related to machine learning
In machine learning the (continuous-time) stochastic gradient descent method (see e.g. [MS] and the references therein) is used to minimize an objective function \(f:\mathbb{R}^{d}\longrightarrow\mathbb{R}\). Compared to the classical gradient descent method without noise, this approach is especially computationally efficient when the dimension \(d\) in practical optimization problems is high. If the objective function is sufficiently smooth, the critical points of \(f\) corresponding to local or global minima may be found by means of solutions to SDEs of the type
\[dY_{t}=-\eta\nabla f(Y_{t})dt+\beta_{0}dB_{t},Y_{0}=x\in\mathbb{R}^{d},t\geq 0, \tag{6.18}\]
where \(\eta\geq 0\) is the learning rate (or step size), \(\beta_{0}\in\mathbb{R}^{d\times d}\), \(B_{t},t\geq 0\) a Brownian motion and where \(\nabla\) denotes the gradient of a function. In general, the selection of an "optimal" learning rate \(\eta\), which determines the best step size towards a minimum in the sense of speed, is difficult. If \(\eta\) is chosen too small, the solution may converge too slowly to a critical point. On the other hand, an \(\eta\) that is too large could result in overshoot or divergence. In order to gain a deeper understanding of the latter problem, one may consider, instead of the SDE (6.18), a more general framework (at the possible expense of computational cost) in connection with the following type of hyperbolic SPDE:
\[Y(t,x)=y-\int_{0}^{t}\int_{0}^{x}u(s,a)\nabla f(Y(s,a))dsda+\beta_{0}B(t,x),y \in\mathbb{R}^{d},t,x\geq 0, \tag{6.19}\]
where \(u:\Omega\times\left[0,\infty\right)^{2}\longrightarrow\left[0,\infty\right)\) is a stochastic learning rate in time and space given by an adapted random field and where \(B\) is a Brownian sheet in \(\mathbb{R}^{d}\). Formally, by choosing in (6.19) \(u=\eta\delta_{x}\) for the Dirac delta function \(\delta_{x}\) in a fixed point \(x\) and \(\eta\geq 0\) we obtain an SDE of the type (6.18). So the random field dynamics (6.19) provides a more general framework than that in the one-parameter case (6.18) for finding the critical points of \(f\). On the other hand, we may view the integral term
\[\int_{0}^{x}u(s,a)\nabla f(Y(s,a))da\]
in (6.19) for a fixed \(x\) and a certain class of stochastic \(2-\)parameter learning rate processes as an (weighted) average of \(\nabla f(Y(s,a))\), \(0\leq a\leq x\) in (6.18). Here \(Y(s,a),0\leq a\leq x\) can
be interpreted as a group of mountain hikers in the optimisation landscape who communicate with each other with respect to (average) gradient information in order to find the descent to the valley (i.e. minimum). The latter, combined with the "exploration ability" of the Brownian sheet with respect to the spatial parameter direction in the optimisation landscape, suggests a solution that converges to rather flat minima, while escaping from sharp minima. The convergence to flat minima, however, is in many applications a favourable feature from a machine learning point of view (see [HS]).
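As a concrete illustration, the following is a minimal simulation sketch of (6.19) for \(d=1\), with \(\nabla f(y)=y\) (a quadratic objective), a constant learning rate \(u\) and constant \(\beta_{0}\); all numerical values are assumed for illustration. The Brownian sheet is generated from independent Gaussian increments on a grid, and the state is propagated through the rectangular increments of the equation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_x = 200, 200
dt, dx = 1.0 / n_t, 1.0 / n_x
u, beta0, y0 = 2.0, 0.1, 1.0                 # assumed learning rate, noise level, start value

dB = rng.normal(0.0, np.sqrt(dt * dx), size=(n_t, n_x))   # Brownian-sheet cell increments
Y = np.full((n_t + 1, n_x + 1), y0)          # Y equals y0 on the axes since B vanishes there
for i in range(n_t):
    for j in range(n_x):
        # Rectangular increment of (6.19) with grad f(Y) = Y:
        Y[i + 1, j + 1] = (Y[i + 1, j] + Y[i, j + 1] - Y[i, j]
                           - u * Y[i, j] * dt * dx + beta0 * dB[i, j])
print(Y[-1, -1])                             # one sample of Y(T, X) at (T, X) = (1, 1)
```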
In order to construct optimal stochastic \(2-\)parameter learning rate processes, one may, e.g., study stochastic control problems based on the stochastic maximum principle for SPDEs driven by a Brownian sheet, with respect to certain performance functionals such as
\[J(u)=-E\left[\int_{0}^{T}\int_{0}^{X}u^{2}(s,a)dsda+f(Y(T,X))\right], \tag{6.20}\]
where one minimizes the expected value of \(f(Y(T,X))\), while the "energy invested" in \(u\) is kept minimal.
Using the first order Taylor expansion, we can also approximate \(\nabla f\) in (6.19) by an affine function \(g\) given by \(g(x)=a+Ax\) for \(a\in\mathbb{R}^{d}\), \(A\in\mathbb{R}^{d\times d}\) and obtain a simpler framework for our stochastic control problem with respect to \(u\). In this setting, let us now consider the case \(d=1\) and the following controlled process:
\[Y_{u}(t,x)=Y(0,0)-\int_{0}^{t}\int_{0}^{x}u(s,a)Y_{u}(s,a)dsda+\int_{0}^{t} \int_{0}^{x}\beta_{0}B(ds,da).\]
We want to study the performance functional
\[J(u)=-E\Big{[}\int_{0}^{T}\int_{0}^{X}u^{2}(s,a)dsda+\theta Y^{2}(T,X)\Big{]}.\]
In this case the associated Hamiltonian is
\[H(t,x,y,u,p,q,\overline{q})=-u^{2}-yup+\beta_{0}[q+2\overline{q}],\]
and the adjoint BSDE is
\[p(z) =-2\theta Y(T,X)+\int_{R_{z}}u(\zeta)p(\zeta)d\zeta\] \[-\int_{R_{z}}q(\zeta)B(d\zeta)-\iint\limits_{R_{z}\times R_{z}}r(\zeta,\zeta^{\prime})B(d\zeta)B(d\zeta^{\prime}),\quad z\leq(T,X).\]
Maximising \(H\) with respect to \(u\), we get
\[u=-\tfrac{1}{2}yp.\]
We have proved:
**Theorem 6.7**: _The optimal control is \(u^{*}(z)=-\frac{1}{2}Y(z)p(z)\), where \((Y(z),p(z))\) is the solution of the following system of fully coupled forward-backward SPDEs driven by the Brownian sheet:_
\[Y_{u}(t,x)=Y(0,0)-\int_{0}^{t}\int_{0}^{x}u(s,a)Y_{u}(s,a)dsda+ \int_{0}^{t}\int_{0}^{x}\beta_{0}B(ds,da),\] \[p(z)=-2\theta Y(T,X)+\int_{R_{z}}u(\zeta)p(\zeta)d\zeta-\int_{R_ {z}}q(\zeta)B(d\zeta)-\iint\limits_{R_{z}\times R_{z}}r(\zeta,\zeta^{\prime})B (d\zeta)B(d\zeta^{\prime}).\]
**Remark 6.8**: _In the more general case, when \(\nabla f(x)=Ax\) for \(A\in\mathbb{R}^{d\times d}\), one shows that the optimal control \(u^{*}\) with respect to the controlled process (6.19) and performance functional (6.20) is given by \(u^{*}(t,x)=-\frac{1}{2}(\nabla f(Y(t,x)))^{*}p(t,x)\), where \((Y,p)\) solves a corresponding forward-backward system of SPDEs (here \(*\) denotes the transpose)._
|
2309.09518 | NOMAD: A Natural, Occluded, Multi-scale Aerial Dataset, for Emergency
Response Scenarios | With the increasing reliance on small Unmanned Aerial Systems (sUAS) for
Emergency Response Scenarios, such as Search and Rescue, the integration of
computer vision capabilities has become a key factor in mission success.
Nevertheless, computer vision performance for detecting humans severely
degrades when shifting from ground to aerial views. Several aerial datasets
have been created to mitigate this problem, however, none of them has
specifically addressed the issue of occlusion, a critical component in
Emergency Response Scenarios. Natural Occluded Multi-scale Aerial Dataset
(NOMAD) presents a benchmark for human detection under occluded aerial views,
with five different aerial distances and rich imagery variance. NOMAD is
composed of 100 different Actors, all performing sequences of walking, laying
and hiding. It includes 42,825 frames, extracted from 5.4k resolution videos,
and manually annotated with a bounding box and a label describing 10 different
visibility levels, categorized according to the percentage of the human body
visible inside the bounding box. This allows computer vision models to be
evaluated on their detection performance across different ranges of occlusion.
NOMAD is designed to improve the effectiveness of aerial search and rescue and
to enhance collaboration between sUAS and humans, by providing a new benchmark
dataset for human detection under occluded aerial views. | Arturo Miguel Russell Bernal, Walter Scheirer, Jane Cleland-Huang | 2023-09-18T06:57:00Z | http://arxiv.org/abs/2309.09518v1 | # NOMAD: A Natural, Occluded, Multi-scale Aerial Dataset, for Emergency Response Scenarios
###### Abstract
With the increasing reliance on small Unmanned Aerial Systems (sUAS) for Emergency Response Scenarios, such as Search and Rescue, the integration of computer vision capabilities has become a key factor in mission success. Nevertheless, computer vision performance for detecting humans severely degrades when shifting from ground to aerial views. Several aerial datasets have been created to mitigate this problem, however, none of them has specifically addressed the issue of occlusion, a critical component in Emergency Response Scenarios. Natural Occluded Multi-scale Aerial Dataset (NOMAD) presents a benchmark for human detection under occluded aerial views, with five different aerial distances and rich imagery variance. NOMAD is composed of 100 different Actors, all performing sequences of walking, laying and hiding. It includes 42,825 frames, extracted from 5.4k resolution videos, and manually annotated with a bounding box and a label describing 10 different visibility levels, categorized according to the percentage of the human body visible inside the bounding box. This allows computer vision models to be evaluated on their detection performance across different ranges of occlusion. NOMAD is designed to improve the effectiveness of aerial search and rescue and to enhance collaboration between sUAS and humans, by providing a new benchmark dataset for human detection under occluded aerial views.
## 1 Introduction
Advances in technology, including improvements in edge computing and Artificial Intelligence (AI), have led to increased use of small Unmanned Aerial Systems (sUAS) across a broad range of applications [52, 53, 69], such as emergency response [6, 26, 53, 69]. sUAS are empowered to perform Computer Vision (CV) tasks, such as aerial surveillance and autonomous person detection and tracking, where timely and efficient performance can potentially make the difference between life and death [21, 27, 32]. Higher levels of sUAS autonomy, supported by CV, increase collaboration between humans and sUAS, allowing emergency responders to focus attention on mission level goals [1, 20] while sUAS perform lower-level person detection tasks.
However, there are many open challenges in deploying CV on sUAS for emergency response [19, 51]. These challenges include the non-trivial, highly prevalent problem of occlusion, which occurs when targets of aerial search are often partially hidden from view. For example, a drowning victim who is partially submerged in water, people buried in debris following an earthquake, hidden by smoke in a fire, or laying behind trees and rocks in search and rescue missions. Occlusion could also be intentional when a suspect is hiding from law-enforcement, caused by pose and image perspective, or introduced in far-distance aerial views due to glare, shades, blur, and low resolution. Prior CV research on occlusion has focused on generic object detection [47, 66], as well as on pedestrian detection [59], demonstrating how occlusion drastically affects model performance [59, 79, 82]. However, the occlusion problem is exacerbated even further when shifting from ground to aerial views [58], where additional challenges surrounding the incorporation of CV capable sUAS into emergency response scenarios include the biased training datasets, coupled with real-life challenges such as vibration, wind and atmospheric turbulence, harsh weather and low visibility conditions, diverse scenery, and the need for generalization at different distances and resolutions. CV systems deployed for emergency response must be able to reliably handle person detection under all of these conditions.
We therefore address these challenges through presenting NOMAD (Natural Occluded Multi-scale Aerial Dataset), a benchmark dataset aimed at human detection under occluded aerial views, as summarized in Fig. 1. NOMAD is composed of 100 different actors, each performing sequences of walking, laying and hiding. It includes 42,825 frames, extracted from 5.4k resolution videos. Actors are manually annotated with a bounding box and a label describing 10 different visibility levels, categorized according to the percentage of the human body visible inside the bounding box, allowing the detection performance of CV models to be evaluated across 10 different ranges of
occlusion. Figure 1 summarizes the key characteristics of our dataset, including: _Natural_: representing a variety of natural and man-made locations; cross-seasonal imagery, ranging from summer to winter scenarios; demographic variety on age and race, ranging from 18 to 78 years old, and including White Caucasians, Latinos, African descent, Asians, South Asians, Middle Eastern and Pacific Islander; _Occluded_: with routines created to include occlusion and a visibility label assigned to every bounding box annotated; _Multi-scale_: with five different distances, ranging from 10m to 90m altitude, and a ground reference view for every actor.
The remainder of this article is organized as follows. Section 2 presents related work. Section 3 describes the data collection process. Section 4 describes the data curation, key-frame selection and data annotation. Section 5 discusses NOMAD characteristics and its potential uses. Section 6 reports baseline results achieved using state-of-the-art CV detection models under different levels of occlusion, and Sec. 7 summarizes the contributions of the work.
## 2 Related Work
### Mobile Robotics for Emergency Response
There are numerous challenges associated with integrating mobile robotics into emergency response missions [19, 51, 53, 57, 69, 70]. Researchers, focusing on ground mobile robots, have explored mapping of emergency scenes [60, 64, 73], improved communication networks [49], and specialized architectures [33, 41]. User studies have demonstrated the benefits of including aerial robots in emergency response [74], potentially working in collaboration with ground robots [17], to enhance surveying and mapping capabilities [63]. Other studies have explored the integration of additional sensors, such as ground penetrating radar [62], or cellphone tracking for missing person search [3]. Finally, several researchers have explored efficient collaborations between humans and sUAS at the intersection of software engineering and human computer interaction [1, 2, 14, 20].
### Real-World Object Detection
There are numerous challenges related to utilizing aerial CV for real-time emergency response [47]. Real-time CV applications tend to leverage the latest versions of the YOLO family [39, 72], as well as their modifications [35, 45, 55, 56], while other methods explore attention for object detection [54] and multimodal techniques [4]. The most recent work has focused on incremental learning of unknown classes, in the modality known as Open World Object Detection [40, 80, 50, 84], as well as its variations [77, 81]. The challenges of object detection under occlusion have also been studied [16, 59, 66]. Finally, techniques incorporating human perception have been explored for object detection [61] and other machine learning tasks [8, 9, 24, 30, 34, 37, 67], demonstrating a plausible approach to handling occlusion [18, 29, 65, 82].
Figure 1: Development and characteristics of NOMAD. Integration of sUAS into emergency response scenarios has aided first responders and rescued victims [12, 15, 21, 27, 31, 32, 36, 42] (first column). Nevertheless, multiple challenges inherent to these situations degrade CV performance and hinder full sUAS integration, including the highly prevalent problem of occlusion (second column). We present NOMAD, Natural Occluded Multi-scale Aerial Dataset, providing the research community with emergency response related videos and selected frames, as well as rich metadata and annotations, including a visibility label (third column). Facing emergency response scenarios, key characteristics of our dataset are: _Natural_: diversity of filming locations, cross-seasonal imagery, including winter scenarios, and demographic diversity in gender, age and race, ranging from 18 to 78 years old, and including White Caucasians, Latinos, African descent, Asians, South Asians, Middle Eastern and Pacific Islander; _Occluded_: 10 defined ranges of occlusion, with a visibility label assigned to every bounding box; _Multi-scale_: five different distances, ranging from 10m to 90m altitude, and a ground reference view for every actor.
### Aerial Datasets
While many datasets have been collected to aid aerial detection of humans in search and rescue (SAR), none of them have addressed the critical issue of occlusion. HERIDAL [10] comprises approximately 500 labelled 4,000 by 3,000 pixel images suitable for object detection tasks. SARD comprises 1,981 manually labeled images extracted from video frames of persons simulating search and rescue situations in roads, quarries, grass land, and forested areas, under diverse weather conditions. However, both datasets lack rich generalization characteristics and environmental diversity. The recently published WiSARD [11] dataset comprises the richest set of images associated with wilderness SAR scenarios, with 33,786 labeled RGB images, 22,156 labeled thermal images, and a subset consisting of 15,453 temporally synchronized visual-thermal image pairs. In addition to the useful multimodal imagery, the dataset includes environmental diversity across seasons and times of the day and night. WiSARD represents the richest dataset for _blind search_ in wilderness scenarios, that is, search for any person in an area rather than the search for a specific person; NOMAD provides richer demographic diversity, includes man-made scenarios, provides rich metadata of actors, controlled multi-scales, and provides a new benchmark for occlusion. It is the only dataset, to our knowledge, to systematically address the issue of occlusion.
The BIRDSAI, VisDrone and UAVDT datasets [7, 83, 28] incorporate occlusion labels into their annotations; however, they lack rich metadata of humans. BIRDSAI is a long-wave thermal infrared dataset containing nighttime images of animals and humans in Southern Africa. While suitable for improving _blind search_ of persons in emergency scenarios, it only provides two levels of occlusion and lacks person metadata. VisDrone consists of 288 video clips formed by 261,908 frames and 10,209 static images, and is captured by various drone-mounted cameras, covering diverse locations, environments, objects (pedestrian, vehicles, bicycles, etc.), and density. However, it provides only three levels of occlusion and also lacks person metadata. Finally, while UAVDT provides four levels of occlusion, it focuses purely on vehicles and not people.
BRIAR, MEVID, UAV-Human, P-DESTRE and PRAI-1581 [22, 25, 43, 78, 46] provide rich metadata and are well suited for person re-identification. BRIAR and MEVID datasets offer great diversity of camera views, with BRIAR providing long range imagery of up to 1000m. BRIAR, so far, includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects; MEVID is part of the very-large-scale MEVA person activities dataset [23] and comprises 158 unique people wearing 598 outfits collected from 33 camera views. UAV-Human includes 67,428 annotated video sequences of 119 subjects for action recognition, 22,476 annotated frames for pose estimation, 41,290 annotated frames of 1,144 identities for person re-identification, and 22,263 annotated frames for attribute recognition. While these three datasets represent the most complete datasets for their given purposes, none of them reference occlusion and all lack representation of emergency response scenarios. Finally, PRAI-1581 provides 1,581 identities and P-DESTRE provides rich metadata for 269 different identities; however, filming distances are only up to 60m and 6.7m, respectively. Additional categorized datasets can be found in [58].
Overall, NOMAD provides the demographic and environmental diversity needed to tackle the person detection task of emergency response scenarios from aerial views, while being the first dataset to include an occlusion metric for person detection, and to provide detailed metadata and controlled multi-scale, making it suitable for many other CV tasks as described in Sec. 5.
## 3 Data Collection Process
Our data collection process followed our IRB approved protocol 21-11-6913. In a preliminary pilot study, our data collection procedure included strict instructions regarding the percentage of the body that the actor should expose to the sUAS' camera at each step. However, we observed that these instructions were difficult to follow causing disconnected movements, and so we replaced the instructions with simpler ones that led to more natural behavior.
### Recruitment
As per our IRB protocol, all participants were at least 18 years old. Further, as the recruitment process evolved, participants from already well represented demographic groups were excluded in order to achieve a balanced gender distribution, a variety of age ranges, and a rich race distribution.
### Location Selection
Approval for use of premises was obtained from owners and responsible agencies for all locations filmed in the dataset. The 12 locations included: 3 different schools, 2 paintball courts, 1 forest park, 1 golf course, 1 lake shore, 1 quarry, 2 farms, and 1 AMA flying field. This resulted in a diversity of locations, both natural and man-made, and provided a variety of different types of obstacles for occlusion purposes.
### Filming Sessions
All filming sessions followed IRB protocol guidelines with participants being informed of the purpose of their performance, the activities to be completed, and consent forms being signed. All flights were conducted by a certified FAA Part 107 remote pilot, and all FAA protocols were followed, with air space reserved through LAANC systems such as
AirMap and DroneUp. Although efforts were made to isolate the selected locations during filming, unexpected persons appeared during a few of the sessions. In most cases we paused the filming until the person exited the scene; however, in a few cases, these persons agreed to appear in the dataset, signing a consent form. From here on, we use the term actors for the participants performing the designated routine, while non-actors are other participants who agreed to appear in the dataset but were otherwise not engaged in the study.
Once the study introduction was completed, each actor was assigned a unique _obstacle_ at the filming location, and then given instructions for performing the standard routine with respect to their obstacle as follows:
* Starting Frame: With few exceptions, the first frame represented a view of the actor completely visible.
* Hiding: All actors were instructed to hide behind their obstacle two times, with small variations in their hiding trajectory. This step allowed us to obtain varying degrees of occluded aerial views.
* Laying: To provide a variety of poses, actors were asked to lay down when completely visible and when partially occluded by their obstacle.
* Walking: Finally, actors performed a small walking trajectory at the end of their routine.
* General instructions: actors were informed of the dataset's focus on emergency response scenarios, and were therefore asked to position themselves as if they were hiding, trying to be rescued, or in need of help.
For the water routines, where the primary occlusion source was the water itself, small but important variations in instructions were given to simulate various drowning scenarios. All actors were asked to repeat their routine five times, with the sUAS positioned at five progressively distant locations of 10m, 30m, 50m, 70m, and 90m, with the distance measured, through the sUAS' feedback from the First Person View (FPV) screen, horizontally and vertically from the expected starting point of the actor. Figure 2 illustrates the sUAS position at a distance of 10m.
Additionally, a reference view of the actor was filmed, with the sUAS positioned a few meters in front of the actor, while the actor performed 360° rotations. The first rotation was performed with arms hanging down and the second with arms extended up, providing multiple views of the actor at ground level. Finally, true negatives were also filmed by asking the actor to move outside of the camera view; please note that in a few cases true negatives may still contain consented non-actor participants. At the conclusion of the session, actors were given a 20 USD prepaid card.
## 4 Data Annotation Process
### Data curation
Although efforts were made to avoid filming non-participants, during review of the footage, unexpected persons were observed in a couple of videos. For videos where the non-participant was only visible at the beginning, at the end, or at non-keyframes, trimming the video was a direct solution, with no impact on the quality of the data. Nevertheless, situations were found where trimming the portion of the video in which the non-participant appeared on screen would represent a loss of information about the actor's performance; these situations were resolved by blacking out the non-participant area of the frames.
### Metadata
Metadata provided in this dataset can be divided into Demographic and Environmental categories. It includes all descriptive information about the actor, including outfit descriptions that could aid in re-identification tasks. The full list of metadata can be found in Tab. S1 in the Supplemental Material. Insights into selecting metadata factors were obtained through a previous series of semi-structured interviews with emergency responders, under IRB protocol 19-04-5269, to determine search terms used for describing missing persons. Clothing descriptions may include up to five words for salient figures. Hair length uses the same metric for males and females, with _bald_ meaning absence of noticeable hair, _short_ meaning ear-length, _medium_ ranging from ear- to shoulder-length, and _long_ meaning longer than shoulder-length. The Location descriptor School (Nature) aggregates filming sessions where the researcher should expect the scenery to be dominated by nature despite being filmed on school premises. The reported weather information was obtained from the nearest weather station to the filming location. Finally, the Exposure Value (EV) was separated from the Video descriptor. While the Video descriptor is a constant for all actors, the EV parameter was found to be different from 0 on a couple of films, indicating a change in the illumination, which is a relevant parameter for computer vision tasks [75].
Figure 2: Filming process. Sample positioning of the sUAS at 10m horizontally and vertically from the actor’s starting location.
sion tasks [75]. Table S2 from the Supplemental Material displays the predefined lists of available colors for describing clothing and hair color. The ranges were selected to include the most common colors across the hue range on the HSV color space; colors for the Hair descriptor were selected based on emergency responders' classifiers.
### Keyframe selection
Manually labeling every frame from the films would have been infeasible, and therefore we selected 85 keyframes at each of the five distances for every actor. This resulted in 425 keyframes per actor, with the exception of Actor001, for whom 750 keyframes were selected. Keyframe selection was performed in accordance with the following guidelines: (1) the 85 selected frames tracked the actor across their entire routine, (2) each starting and ending frame of a trajectory was selected as a keyframe, (3) each change in direction of the actor's trajectory generated a new keyframe, (4) the set of keyframes included different poses (e.g., standing, sitting, lying down), and finally (5) all keyframes had at least a part of the actor visible.
Finally, when the actor interacts with an obstacle, a custom sampling is performed to obtain views with different levels of occlusion as the actor moves behind and away from the obstacle. Figure 3(a) illustrates a sample routine with 12 keyframes selected following these guidelines. The red-dashed arrows indicate where sampling for occluded views would be performed. Activity labels were added to each keyframe as one of: Walking, Laying, Hiding, Hiding (Laying), Swimming, Drowning. Figure 3(b) shows sample labels for the 12 keyframes illustrated in Fig. 3(a).
### Annotations
All 42,825 selected frames were sent to Labelbox [44], a labelling company that employed expert annotators to add bounding boxes and visibility labels to all images.
#### 4.4.1 Occlusion Label
Figure 4 displays how percentages were assigned to each body part of a person. The following is the procedure used to calculate the visibility label, that is, the amount of an actor that was visible at a particular instant: (1) Given an image, identify the body parts of a person that are visible. (2) Review the percentages of the identified body parts based on Fig. 4. (3) If less than half of a body part is visible, assign half of the percentage indicated in Fig. 4. (4) If more than half of the body part is visible, assign the full percentage as indicated in Fig. 4. (5) Add up the percentages obtained from each body part. (6) Assign the sum to one of the ten ranges of visibility, with upper bounds of 10 to 100. For example, selecting 10 means that the sum obtained from the percentages is greater than zero but less than or equal to 10, while selecting 20 means that the sum is greater than 10 and less than or equal to 20, and so on.
Please note that shadows were not considered to be part of the human and, under normal circumstances, the actor's own clothing is not treated as a source of occlusion. Also note that although we report a visibility metric, this is simply the complement of the occluded fraction of the actor's body.
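For illustration, the visibility-label procedure can be expressed as a short sketch. This is not the annotators' tooling; the body-part percentages and the function below are placeholder assumptions standing in for the values shown in Fig. 4.

```python
# Illustrative sketch of the visibility-label calculation (Sec. 4.4.1).
# BODY_PART_PERCENTAGES are placeholder values standing in for Fig. 4.
BODY_PART_PERCENTAGES = {"head": 10, "torso": 30, "left_arm": 10, "right_arm": 10,
                         "left_leg": 20, "right_leg": 20}

def visibility_label(visible_parts):
    """visible_parts maps each visible body part to True if more than half of
    that part is visible, or False if half or less of it is visible."""
    total = 0.0
    for part, more_than_half in visible_parts.items():
        pct = BODY_PART_PERCENTAGES[part]
        total += pct if more_than_half else pct / 2  # half credit for partial parts
    if total <= 0:
        return 0  # no visible body parts
    label = 10 * int(-(-total // 10))  # bucket into ranges with upper bounds 10..100
    return min(label, 100)

# Example: head fully visible, torso less than half visible -> 10 + 15 = 25 -> label 30.
print(visibility_label({"head": True, "torso": False}))
```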
## 5 Dataset Characteristics
NOMAD provides 500 videos of 100 different actors, with each actor performing routines at five different distances, set at 10m, 30m, 50m, 70m, and 90m horizontally and vertically from the actor's initial position. It also provides 500 true negative videos of a couple of seconds duration, with each true negative video corresponding to one routine video. Finally, one reference video per actor is provided. Video duration ranges from 30s to 180s according to the actor's pace. This resulted in 42,825 frames manually annotated with a bounding box and visibility label. All videos are 30fps, MP4-H265 coded, 5.4k video quality, with all frames being 5472 by 3078 pixels.
Figure 3: Keyframe selection process. (a) Sample routine with 12 keyframes selected. Sampling for occluded views is indicated by red-dashed arrows. (b) Sample Activity labels for the keyframes illustrated. Hiding (L) represents Hiding (Laying).
Figure 4: Visibility label calculation. Percentages assigned to each body part of a person.
### Natural
Figure 5 shows the distribution of the 100 actors with respect to their filming locations. The variety of locations provides coverage of natural and man-made environments, and is aimed at training CV models for effectively supporting a wide range of emergency response scenarios.
To further increase the robustness of our dataset in terms of environmental conditions, cross-seasonal imagery was collected, with temperatures ranging from 30°F to 90°F, wind speeds of 0 MPH to 20 MPH, and morning, afternoon, and evening sessions, capturing hot sunny summer days, colorful autumn scenes, and winter's snowy conditions. Finally, we made every effort to mitigate potential demographic bias to support fair and equitable emergency response. Figure 6 presents the distributions of gender, age and race. The gender distribution is a 50/50 male/female split, and although the age distribution shows that the majority of the population was younger than 30 years old, actors across the range of 30 to 78 years old are still present in significant percentages. We also compare our race distribution to the USA race distribution [13, 38], showing improved coverage intended to help CV models generalize and to mitigate potential biases [48, 71]. While the USA federal census does not consider Latino/Hispanic as a race, and instead distributes it as an ethnicity across races [13], we have incorporated it as a race in alignment with its recognition as a separate class by current computer vision models [68]. Lastly, although the non-rigid aspect of our routine creates uncertainty about the exact level of the visibility label, it allows the actors to behave more naturally, adding fidelity to their performances.
### Occluded
NOMAD provides the data needed to address the detection of occluded persons during high-pressure, life-or-death emergencies. It labels each bounding box with the degree of visibility on 10 levels, providing a representative number of frames at each level as shown in Fig. 7. The higher number of frames at lower visibilities reflects the manual selection process as well as the increasing annotation difficulty and additional sources of occlusion at greater distances.
### Multi-scale
Effective collaboration between sUAS and humans aims to exploit each of their individual strengths. sUAS have the ability to quickly scan large areas from greater altitudes, or to provide a focused close-up view of the target. NOMAD provides five different aerial distances, supporting both generalized models or models specialized for each distance. Table S3 from the Supplemental Material shows the expected maximum Ground Sampling Distance (GSD) for the five different distances, assuming that the actor is on the camera optical axis [5]. This is not always true as the actors moved to perform their routine and the camera gimbal position was often set to avoid potential areas of non-participants or areas outside the filming premises. This moved the actors away from the camera optical axis, increasing their GSD, and decreasing the number of pixels representing them, as well as creating a non-fixed pitch and adding real variance.
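As a rough illustration of how such GSD values arise, the sketch below uses the standard pinhole relation; the sensor width and focal length are assumed placeholder values, not the parameters behind Table S3, and only the 5472-pixel frame width is taken from the dataset description.

```python
# Rough Ground Sampling Distance (GSD) estimate for a target on the optical axis.
SENSOR_WIDTH_MM = 13.2   # assumed sensor width (placeholder)
FOCAL_LENGTH_MM = 8.8    # assumed focal length (placeholder)
IMAGE_WIDTH_PX = 5472    # frame width of the NOMAD videos

def gsd_cm_per_px(range_m):
    """GSD in cm/pixel at a given camera-to-target range in metres."""
    return (SENSOR_WIDTH_MM * range_m * 100.0) / (FOCAL_LENGTH_MM * IMAGE_WIDTH_PX)

# Slant range for equal horizontal and vertical offsets of 10, 30, 50, 70, 90 m.
for d in (10, 30, 50, 70, 90):
    slant = (2 * d ** 2) ** 0.5
    print(f"{d} m offset: slant {slant:.1f} m, GSD ~{gsd_cm_per_px(slant):.2f} cm/px")
```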
### Computer Vision Uses
The characteristics of NOMAD provide an environment to improve emergency response in four main areas of CV:
* Occlusion benchmark: NOMAD's ten levels of visibility aim to provide a new benchmark dataset for assessing the research community's improvements in person detection under occlusion, a previously under-explored factor in aerial datasets.
* Person detection: search and rescue scenarios in remote areas tend to search for _any_ person (i.e., blind search). The demographic and environmental diversity provided by NOMAD, as well as its multi-scale component, supported by the bounding boxes' annotations, can be leveraged to improve this general CV task.
* Person re-identification: in addition to _blind search_, descriptions of the searched person translate the detection task into a CV re-identification problem, especially in crowded scenarios. NOMAD provides rich metadata and a reference view of every actor to support re-identification tasks from aerial views.
* Person tracking: in many emergency response scenarios, the aim is to detect and then track. Due to the strategic selection of manually labelled keyframes, NOMAD allows the assessment of tracking techniques, following the actor's key movements and changes of direction throughout their full routine.
Figure 5: Distribution of the filming locations for the 100 actors.
## 6 Computer Vision Model Metrics
To demonstrate the use of NOMAD for benchmarking CV models at varying levels of occlusion, we compared the performance of three state-of-the-art CV models. Our first model was YOLOv8 from Ultralytics [39], representing the most recent upgrade to the YOLO family. YOLOv8 supports real-time detection with limited computational and memory resources, matching the requirements for sUAS-based aerial detection. Additionally, we selected a FasterRCNN and a RetinaNet model from the Detectron2 library [76]. The specific versions tested are YOLOv8l, FasterRCNN-R101-FPN, and RetinaNet-R101-FPN, with a reported mAP@0.5:0.95 of 52.9, 42.0 and 40.4 on the COCO benchmark, respectively. Both libraries provide a higher mAP model for YOLOv8 (YOLOv8x) and FasterRCNN (FasterRCNN-X101-FPN); nevertheless, the latency of these models increases substantially compared to the mAP gained.
Figure 6: Distribution of the demographic descriptors of (a) Gender, (b) Age, and (c) Race, for our 100 actors. Our race distribution is compared to the (d) USA race distribution, improving generalization by mitigating possible biases.
Figure 7: Distribution of the visibility label across the 42,825 manually annotated frames.
For evaluation purposes, 10 folds of 10 actors each were randomly created, with a constant seed for reproducibility across models. From each fold, 50 tests were performed, one for each distance-visibility combination (5 distances, 10 visibility levels), with the results of these 50 tests being averaged across the 10 folds. Figure 8 shows the averaged mAP@0.5:0.95 score and standard deviation of the CV models against different levels of occlusion and distances. We can observe that YOLOv8l performs better at the closest distance than the other models, a result supported by its higher reported mAP; nevertheless, all models suffer from critical degradation as the distance increases. This behaviour is expected as their training data is focused on ground views rather than aerial ones. Finally, although this degradation is expected to be mitigated by fine-tuning the models with aerial data, the results emphasize the degradation problem that occlusion represents: even though the models achieve decent scores at the closest distance with full visibility, the mAP values drop drastically as the occlusion increases. The usefulness of NOMAD to the research community can be justified from this baseline for three reasons: (1) NOMAD is built from imagery with real-world variance (Natural), making it a fair benchmark for emergency response scenarios; (2) occlusion in person detection can be assessed thanks to the granularity of our visibility label; (3) the multi-scale characteristic allows occlusion to be assessed across different distances, key to the improvement of aerial detection on sUAS. To exemplify the difficulty of person detection in emergency response scenarios, Fig. 9 shows imagery from our tests with increasing difficulty due to distance and occlusion.
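For reproducibility, the fold-based protocol described above can be sketched as follows; `evaluate_map` is a placeholder for running a detector on the selected frames and scoring it with a COCO-style mAP@0.5:0.95 evaluation, and is not part of the released tooling.

```python
# Sketch of the evaluation protocol: 10 folds of 10 actors, 50 tests per fold
# (5 distances x 10 visibility levels), averaged across the 10 folds.
import random
import statistics

random.seed(0)  # constant seed for reproducibility across models
actors = [f"Actor{i:03d}" for i in range(1, 101)]
random.shuffle(actors)
folds = [actors[i:i + 10] for i in range(0, 100, 10)]

DISTANCES = (10, 30, 50, 70, 90)
VISIBILITIES = tuple(range(10, 101, 10))

def evaluate_map(model, fold, distance, visibility):
    # Placeholder: run `model` on the keyframes of `fold` at this distance and
    # visibility level and return the COCO-style mAP@0.5:0.95 score.
    return 0.0

def benchmark(model):
    results = {}
    for d in DISTANCES:
        for v in VISIBILITIES:
            scores = [evaluate_map(model, fold, d, v) for fold in folds]
            results[(d, v)] = (statistics.mean(scores), statistics.stdev(scores))
    return results
```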
## 7 Future Work and Conclusions
The structure and characteristics of NOMAD offer many opportunities for improvements in aerial human detection, recognition and tracking, especially the following:
* Detection under occlusion: NOMAD allows us to explore and understand the limits of detection under occluded views, with future work focusing on improving CV models' performance by exploiting human psychophysical metrics and temporal information under these views.
* Person re-identification: Addressing emergency response scenarios, future work will focus on improving person re-identification through leveraging software architectures that support hybrid onboard/offboard solutions and integrate the human into the loop.
* Real-world deployment: We have found through our own experiences in deploying CV on sUAS that there are major additional challenges and some degradation in results. Using the models trained on NOMAD, we will deploy and evaluate occlusion-ready CV models on physical sUAS.
In conclusion, as indicated by the results of our baseline evaluation, occlusion represents a non-trivial challenge that remains to be tackled. NOMAD's characteristics of Natural, Occluded, Multi-scale aerial views, provide a new benchmark dataset for tackling this challenge, and can serve as the next step in improving the accuracy of aerial search-and-detection for emergency response.
Figure 8: Performance across different levels of occlusion of (a) YOLOv8l's, (b) FasterRCNN-R101's, (c) RetinaNet-R101's pretrained weights when tested on NOMAD, with the task of person detection. Occlusion increases as the level of visibility decreases; therefore, mAP scores fall drastically as distance and occlusion increase. The higher performance of the models at the closest distance is expected, as the data resembles the ground views from COCO training data; nevertheless, mAP scores fall significantly with increasing occlusion even at the closest distance, calling for improved robustness of models against occlusion in aerial views.
Figure 9: Test samples. (a) Easy sample at 10m and 100 visibility. (b) Medium level difficulty sample (50m, 100 visibility). (c) Hard sample due to heavy occlusion (10m, 30 visibility). (d) Hard sample due to high distance and light occlusion (90m, 90 visibility). |
2305.00602 | Feedback-driven anisotropy in the circumgalactic medium for quenching
galaxies in the SIMBA simulations | We use the SIMBA galaxy formation simulation suite to explore anisotropies in
the properties of circumgalactic gas that result from accretion and feedback
processes. We particularly focus on the impact of bipolar active galactic
nuclei (AGN) jet feedback as implemented in SIMBA, which quenches galaxies and
has a dramatic effect on large-scale gas properties. We show that jet feedback
at low redshifts is most common in the stellar mass range $(1-5)\times
10^{10}M_\odot$, so we focus on galaxies with active jets in this mass range.
In comparison to runs without jet feedback, jets cause lower densities and
higher temperatures along the galaxy minor axis (SIMBA jet direction) at radii
>=$0.5r_{200c}-4r_{200c}$ and beyond. This effect is less apparent at higher or
lower stellar masses, and is strongest within green valley galaxies. The
metallicity also shows strong anisotropy out to large scales, driven by star
formation feedback. We find substantially stronger anisotropy at
<=$0.5r_{200c}$, but this also exists in runs with no explicit feedback,
suggesting that it is due to anisotropic accretion. Finally, we explore
anisotropy in the bulk radial motion of the gas, finding that both star
formation and AGN wind feedback contribute to pushing the gas outwards along
the minor axis at <=1 Mpc, but AGN jet feedback further causes bulk outflow
along the minor axis out to several Mpc, which drives quenching via gas
starvation. These results provide observational signatures for the operation of
AGN feedback in galaxy quenching. | Tianyi Yang, Romeel Davé, Weiguang Cui, Yan-Chuan Cai, John A. Peacock, Daniele Sorini | 2023-04-30T23:46:59Z | http://arxiv.org/abs/2305.00602v2 | # Feedback-driven anisotropy in the circumgalactic medium for quenching galaxies in the Simba simulations
###### Abstract
We use the Simba galaxy formation simulation suite to explore anisotropies in the properties of circumgalactic gas that result from accretion and feedback processes. We particularly focus on the impact of bipolar active galactic nuclei (AGN) jet feedback as implemented in Simba, which quenches galaxies and has a dramatic effect on large-scale gas properties. We show that jet feedback at low redshifts is most common in the stellar mass range \((1-5)\times 10^{10}M_{\odot}\), so we focus on galaxies with active jets in this mass range. In comparison to runs without jet feedback, jets cause lower densities and higher temperatures along the galaxy minor axis (Simba jet direction) at radii \(\gtrsim 0.5r_{200c}-4r_{200c}\) and beyond. This effect is less apparent at higher or lower stellar masses, and is strongest within green valley galaxies. The metallicity also shows strong anisotropy out to large scales, driven by star formation feedback. We find substantially stronger anisotropy at \(\lesssim 0.5r_{200c}\), but this also exists in runs with no explicit feedback, suggesting that it is due to anisotropic accretion. Finally, we explore anisotropy in the bulk radial motion of the gas, finding that both star formation and AGN wind feedback contribute to pushing the gas outwards along the minor axis at \(\lesssim 1\) Mpc, but AGN jet feedback further causes bulk outflow along the minor axis out to several Mpc, which drives quenching via gas starvation. These results provide observational signatures for the operation of AGN feedback in galaxy quenching.
keywords: galaxies: evolution; galaxies: formation; galaxies: general; galaxies: jets; methods: numerical
## 1 Introduction
The circumgalactic medium (CGM) is the gaseous environment surrounding a galaxy, which can extend up to hundreds of kpc from the galactic centre. The CGM is closely related to the process of galaxy evolution because it is the site where galactic inflows and outflows interplay (see e.g. Tumlinson et al., 2017, and references therein). Cold gas in the CGM accretes onto the central galaxy, fuelling future star forming activity. Meanwhile, gas can be carried out into the CGM by galactic-scale outflows, which causes gas depletion and results in suppression of star formation in the central regions. These outflows can be driven by stellar feedback in low-mass galaxies, feedback from active galactic nuclei (AGN) in high-mass galaxies, or even a combination of the two mechanisms (see e.g. Somerville and Dave, 2015, and references therein).
AGN feedback effects, which are sourced by gas accretion onto supermassive black holes (SMBH), are known to be a major source of energy input to the CGM. AGN feedback is thought to be an important ingredient in regulating the growth of central black holes and suppressing the star-forming activity in galaxies (e.g. Guillard et al., 2015; Morganti, 2017; Harrison, 2017). According to observations, SMBH exist at the centres of most massive galaxies (see e.g. Kormendy and Ho, 2013, and references therein), and the AGN feedback mechanisms appear in two modes: 'quasar' and 'radio' mode (see e.g. Ho, 2008; Fabian, 2012; Heckman and Best, 2014). The 'quasar' mode is found in luminous AGN with high accretion rates. In this case, the feedback energy is released into the surroundings in the form of radiation from the central accretion disk, which can drive powerful winds. The 'radio' mode is usually found in galaxies hosting AGN with low accretion rates. These are radiatively inefficient but are capable of releasing large amount of feedback energy into the CGM by means of bubbles or radio jets. But despite clear observational evidence for AGN feedback (e.g. Maiolino et al., 2012), much remains to be understood about the
detailed operation of these processes, and in particular how they interact with the CGM.
Hydrodynamical simulations provide opportunities for studying how AGN feedback reshapes the properties and evolution of the CGM. To reproduce various observed galaxy properties, it is necessary to include the modelling of AGN feedback in simulations: this enables suppression of star formation in massive galaxies and prevents them undergoing excessive growth (Le Brun et al., 2014; Schaye et al., 2015; Sijacki et al., 2015; McCarthy et al., 2017; Weinberger et al., 2018; Dave et al., 2019). AGN feedback is also important in allowing simulations to reproduce other observed thermodynamic and chemical properties of the gas, such as the hot gas fraction in groups and CGM metal absorption line properties (McCarthy et al., 2010; Dave et al., 2019; Oppenheimer et al., 2021).
Although simulations have successfully reproduced a wide range of observed galaxy properties, the implementations of SMBH feedback in different codes are distinct in a number of ways. Regarding the form of feedback energy, there are two major ways of implementing this: either by heating up the surrounding gas isotropically with the expected amount of energy from the AGN (e.g. in cosmo-OWLS and EAGLE simulations: Le Brun et al., 2014; Schaye et al., 2015), or by ejecting gas particles with kinetic kicks along a random or bipolar direction (e.g. in IllustrisTNG and SIMBA simulations: Weinberger et al., 2018; Dave et al., 2019). Both mechanisms succeed in matching galaxy observations, but the way in which the released energy propagates into the CGM must be different. This may have a strong impact on the properties of the CGM and thus produce distinctive observable features in the resulting CGM gas distribution.
The interplay between galactic feedback outflows and the CGM has been widely studied in both observations and simulations, over a large range of redshifts and stellar masses. This includes, for example: the distribution of highly ionised gas such as oxygen (e.g. Nelson et al., 2018; Kakkad et al., 2020); the abundance of MgII-traced cold gas (e.g. Bordoloi et al., 2011; Kacprzak et al., 2012; Bouche et al., 2012; Nielsen et al., 2015; Nelson et al., 2021); emission lines such as 21 cm and H\(\alpha\) (e.g. Putman et al., 2012; Kakkad et al., 2023, e.g.); the warm diffuse gas via the thermal Sunyaev-Zeldovich (tSZ) effect (e.g. Lokken et al., 2022; Yang et al., 2022; Orlowski-Scherer et al., 2022); Ly\(\alpha\) and metal absorption lines (e.g. Turner et al., 2014; Meiksin et al., 2014, 2015, 2017; Turner et al., 2017; Sorini et al., 2018, 2020; Appleby et al., 2021, 2023); and the hot dense atmosphere probed by X-rays (e.g. Truong et al., 2020, 2021, 2021). In particular, some observations have shown that, for disk-dominated galaxies, galactic outflows emerging from the disk are preferentially ejected biconically into the CGM, where the strongest outflow features are captured along the minor axis of the disk (e.g. in Bordoloi et al., 2011; Bouche et al., 2012). Infalling gas is preferentially accreted in the galaxy plane (e.g. in Bouche et al., 2013; Nielsen et al., 2015). For red passive galaxies, however, their CGM distribution tends to be more isotropic (e.g. in Kacprzak et al., 2012; Nielsen et al., 2015). In simulations, although different feedback models are implemented, this type of angular dependence is also widely found (e.g. Peroux et al., 2020; Mitchell et al., 2020; Pillepich et al., 2021). These studies suggest that the CGM properties connect closely to the feedback activity inside galaxies. Specifically, outflow features in the CGM are generally more prominent along the minor axis of the disk (edge-on projection), and accretion is more easily observed along the galaxy disk (face-on projection).
However, the detailed outflow features and their angular dependence are quite sensitive to the adopted AGN feedback model, which in turn affects the predicted CGM distribution and the galaxy evolution processes. In particular, the outflow angular dependence and the predicted CGM properties can differ significantly in the EAGLE and TNG simulations (Nelson et al., 2019; Mitchell et al., 2020; Davies et al., 2020). Under the TNG framework, Terrazas et al. (2020) found that the resulting \(M_{\rm BH}-M_{\ast}-\)sSFR relation of galaxies depends strongly on the chosen parameters of the AGN feedback model in TNG. They further confirmed that kinetic wind feedback is required in order to reproduce a quiescent galaxy population that is consistent with observations. Furthermore, the CGM anisotropic features can be altered by the form of released energy, depending on whether the kinetic or the thermal feedback mode dominates around galaxies (Zinger et al., 2020; Ramesh et al., 2023). The sensitivity of CGM anisotropy to AGN models has been further discussed by comparing results predicted from simulations to observations, such as the X-ray hardness (Truong et al., 2021) and the satellite distribution around their centrals (Martin-Navarro et al., 2021). Therefore, by studying the feedback-driven anisotropy in the CGM, we aim to identify and predict the observational consequences of feedback. This provides possible ways to constrain AGN models with further observations, which is a crucial step towards understanding how galaxies evolve and undergo quenching.
In this paper, we focus on the anisotropic behaviour of the CGM and its relation to jet activity using Simba simulations, including in particular different runs with various feedback models turned on/off. We study the spatial CGM distribution around central galaxies by stacking a large sample of simulated galaxies. The CGM anisotropy is quantified by the quadrupole moments of various physical quantities, and we further examine the dependence of these signals on the implemented AGN models, central galaxy mass and star formation status. Finally, we explore the anisotropy in gas radial motion and the redshift evolution of the feedback-driven anisotropic features. These provide us insight into how different feedback mechanisms regulate galactic outflow and eventually drive galaxy quenching under the Simba framework.
The paper is organised as follows. We first give a brief summary of the Simba simulation suite, especially the implemented feedback models, as well as our methodology, in §2. Then in §3, we present our main anisotropy results regarding the CGM properties considered in this work: mass, density, metallicity and thermal pressure. Around Simba-100 central galaxies, we study how the angular dependence of the CGM distribution varies with their host properties, such as star formation status (§3.1) and mass (§3.2). Then in §3.3, we present the effect of feedback models on the resulting CGM properties, using the Simba-50 model variants. In §4, by exploring the bulk radial gas motion around Simba-50 galaxies (§4.1) and by tracing progenitors of \(z=0.0\) quenched samples (§4.2), we discuss the connection to the AGN feedback models in the Simba-50 variants and further see what drives galaxy quenching in the Simba model. We discuss and summarise our main findings in §5, §6 and §7.
## 2 Methodology
We begin with an overview of the Simba simulations in §2.1. Owing to its importance for this work, we summarise the modelling of black hole feedback adopted in Simba models in §2.2. We move on to the selection of galaxies in §2.3, and finally introduce our methodology used to characterise the anisotropy of CGM properties in §2.4 and §2.5.
### The Simba simulations
Simba (Dave et al., 2019) is a suite of hydrodynamic simulations using the Gizmo code (Hopkins, 2015). Dark matter and gas particles are evolved within a periodic cubical volume with a cosmology broadly concordant with _Planck_ 2015 (Planck Collaboration et al., 2016): \(\Omega_{m,0}=0.3,\Omega_{\Lambda,0}=0.7,\Omega_{b,0}=0.048,H_{0}=68\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1},\sigma_{8}=0.82\) and \(n_{s}=0.97\). The fiducial run (denoted Simba-100) has a box length of 100 comoving \(h^{-1}\) Mpc (hereafter \(h^{-1}\,\mathrm{cMpc}\)), evolving from \(z=249\) to \(z=0\) with \(1024^{3}\) dark matter particles and \(1024^{3}\) gas elements. To explore the variation of anisotropic features with input feedback models, there are several \(50\,h^{-1}\,\mathrm{cMpc}\) boxes (denoted Simba-50) using \(512^{3}\) dark matter particles and \(512^{3}\) gas elements. The mass resolution for both cases is \(1.82\times 10^{7}\,M_{\odot}\) for gas cells and \(9.58\times 10^{7}\,M_{\odot}\) for dark matter particles. The initial conditions for all Simba runs are identical for a given box size.
In Simba, star formation is modelled using an H\({}_{2}\)-based star formation rate. This is given by the H\({}_{2}\) density divided by the dynamical time with SFR \(=\epsilon_{\rm e}\rho_{\rm H_{2}}/t_{\rm dyn}\), where \(\epsilon_{\rm e}=0.02\)(Kennicutt, 1998). The H\({}_{2}\) fraction is calculated using the subgrid model of Krumholz and Gnedin (2011) based on the metallicity and local column density, with some minor modifications to account for the variations in numerical resolution (Dave et al., 2016). The chemical enrichment model tracks eleven elements in total (H, He, C, N, O, Ne, Mg, Si, S, Ca, Fe) from Type II supernovae (SNe), Type Ia SNe, and Asymptotic Giant Branch (AGB) stars. The star formation-driven galactic winds are modelled as decoupled two-phase metal-loaded winds, with 30% of ejected wind particles being hot and with a redshift-independent mass loading factor that scales with stellar mass (Angles-Alcazar et al., 2017). The SF wind velocity, modified from the scaling in Muratov et al. (2015), is computed via the following equation:
\[v_{w}=1.6\left(\frac{v_{c}}{200\,\mathrm{km}\,\mathrm{s}^{-1}}\right)^{0.12}v_{c}+\Delta v(0.25R_{\rm vir}), \tag{1}\]
where \(v_{c}\) is the galaxy's circular velocity at \(0.25R_{\rm vir}\), and \(\Delta v(0.25R_{\rm vir})\) is an extra velocity kick corresponding to the gravitational potential difference between the wind launch radius and \(0.25R_{\rm vir}\)(Lokas and Mamon, 2001).
Black holes are seeded and grown during the simulation, and the accretion energy drives feedback that causes star formation to become quenched. Black hole growth in Simba is modelled with a two-mode accretion model. For cold gas with T \(<10^{5}\)K, the gas inflow is implemented using the torque-limited accretion model (Angles-Alcazar et al., 2017). While hot gas (T \(>10^{5}\)K) is accreted onto black holes via Bondi accretion (Bondi, 1952). The major improvement of the black-hole growth model adopted by Simba is the torque-limited accretion for the cold gas, which does not require the black hole to self-regulate its own growth (Angles-Alcazar et al., 2015). This allows for the implementation of a more physical AGN feedback model, which will be discussed in SS2.2. Other input physical mechanisms such as radiative cooling and heating, the formation and evolution of dust are also included in Simba runs. Specifics of these models are available in Dave et al. (2019).
### Black hole feedback models in Simba
AGN feedback is responsible for quenching galaxies in Simba. It is implemented by a two-mode model, which is motivated by the observed dichotomy in black hole growth (e.g. in Heckman and Best, 2014). A 'radiative mode' is applied when a black hole is accreting at high Eddington ratios (\(f_{\rm Edd}=\dot{M}_{\rm BH}/\dot{M}_{\rm Edd}\)), which mimics the molecular and warm ionised gas outflow (Perna et al., 2017). AGN wind particles are ejected without modifications of the gas temperature, with a typical electron temperature of \(\sim 10^{4}\) K and with a velocity of \(\sim 1000\,\mathrm{km}\,\mathrm{s}^{-1}\)(Perna et al., 2017). In this case, the outflow velocity is related to the black hole mass via:
\[v_{w,\rm EL}=500+500(\log_{10}M_{\rm BH}-6)/3\,\mathrm{km}\,\mathrm{s}^{-1}. \tag{2}\]
At low Eddington ratios, a 'jet mode' is instead applied to drive high-velocity hot gas outflows (Fabian, 2012). The jet direction is set parallel/anti-parallel to the inner gas disk of the host galaxy. Its angular momentum vector is computed using the 256 closest gas particles around the central black hole (typically \(\sim 1\) kpc), with an upper limit of \(R_{\rm inner~{}disk}\leq 2\,h^{-1}\,\mathrm{ckpc}\). Gas particles are heated to the virial temperature of the halo before ejection, and the outflow velocity increases as \(f_{\rm Edd}\) drops:
\[v_{w,\rm jet}=v_{w,\rm EL}+7000\log_{10}(0.2/f_{\rm Edd})\,\,\,\,\mathrm{km}\, \mathrm{s}^{-1}. \tag{3}\]
The transition between radiative and jet mode happens when black holes have \(f_{\rm Edd}<0.2\) and \(M_{\rm BH}\geq 10^{7.5}M_{\odot}\). The velocity increase is capped at \(7000\,\mathrm{km}\,\mathrm{s}^{-1}\) above \(v_{w,\rm EL}\) when \(f_{\rm Edd}\leq 0.02\).
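As a simple illustration of how these scalings combine, the sketch below evaluates the outflow velocity implied by Eqs. (2)-(3); it is a simplified reading of the prescriptions above, not the actual Simba implementation.

```python
import numpy as np

def agn_wind_velocity(m_bh, f_edd):
    """Outflow velocity in km/s following Eqs. (2)-(3): radiative-mode winds for
    all AGN, plus a jet-mode boost when f_Edd < 0.2 and M_BH >= 10^7.5 Msun."""
    v_el = 500.0 + 500.0 * (np.log10(m_bh) - 6.0) / 3.0      # Eq. (2)
    if f_edd >= 0.2 or m_bh < 10 ** 7.5:
        return v_el                                          # no jet boost
    boost = 7000.0 * np.log10(0.2 / f_edd)                   # Eq. (3)
    return v_el + min(boost, 7000.0)                         # cap reached at f_Edd <= 0.02

print(agn_wind_velocity(1e8, 0.01))  # full-speed jet example: ~7833 km/s
```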
Finally, X-ray heating by the accretion disk is also included by Simba when its jet model is turned on and gas fractions within the black hole kernel are below 0.2, as motivated by Choi et al. (2012). This mimics the deposition of high-energy photons into the surrounding gas and is implemented in two modes: for non-ISM gas (with hydrogen number density of \(n_{\rm H}<0.13\,\mathrm{cm}^{-3}\)), gas temperature values are directly increased based on the local radiation flux. For ISM gas, half of the radiation energy is applied as a radial kick outwards to gas particles, while the remainder is added as heat. X-ray feedback causes only modest changes to the galaxy stellar mass function, but it is crucial in order to achieve full quenching of the star formation in massive galaxies (see SS4 in Dave et al., 2019).
There are several model variants implemented in the Simba-50 runs: 'allphys', which includes all the aforementioned physics, identical to the Simba-100 fiducial run; 'nojet', which turns off the bipolar jet and X-ray feedback; 'noagn', which only includes stellar feedback, with all AGN feedback turned off; 'nox', which only turns off the X-ray feedback; and 'nofb', where all explicit feedback is turned off. A brief description and summary of these models is given in Table 1. In §3.3, we show a comparison of our results for the different Simba-50 runs when discussing the sensitivity of CGM properties to the physics models. After convergence tests between Simba-50-'allphys' and the Simba-100 run (Appendix A), we thereafter include results from Simba-100 only, where more samples are available for further analysis.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Model & Stellar feedback & AGN radiative mode & AGN jet mode & X-ray heating \\ \hline 'allphys' & ✓ & ✓ & ✓ & ✓ \\ \hline 'nox' & ✓ & ✓ & ✓ & ✗ \\ \hline 'nojet' & ✓ & ✓ & ✗ & ✗ \\ \hline 'noagn' & ✓ & ✗ & ✗ & ✗ \\ \hline 'nofb' & ✗ & ✗ & ✗ & ✗ \\ \hline \end{tabular}
\end{table}
Table 1: Feedback descriptions for the different Simba-50 runs, with ✓/✗ indicating whether each feedback channel is on/off. Note that the Simba-100 fiducial run employs the same 'allphys' model as the Simba-50 run.
### Galaxy selection
The main sample considered in this study consists of the galaxies at \(z=0\) that are central, have black hole accretion rate \(>0\) and stellar mass \(>10^{10}M_{\odot}\). In this work, stellar mass (denoted by \(M_{*}\)) is defined as the total mass of stellar particles within 30 ckpc spherical apertures. The choice of this stellar mass cut is adopted owing to the AGN fraction\(-M_{*}\) plot from the simulations (Figure 1, which is discussed later) as well as from observations (e.g. Kauffmann et al., 2003), which suggest that the strong AGN fraction declines significantly below \(M_{*}=10^{10}M_{\odot}\). We also divide our main sample into the following sub-catalogues when necessary:
* _'Jet-active' galaxies_. For a better visualisation of the anisotropic features, we also analyse galaxies with low Eddington accretion ratio and high central black hole mass (\(0<f_{\rm Edd}<0.02\), \(M_{\rm BH}\geq 10^{7.5}M_{\odot}\)). As discussed in §2.2, these are the criteria adopted in the 'allphys' model when full jet speeds are achieved. Note that we will use the term 'jet-active galaxies' to refer exclusively to galaxies whose jets reach the full jet speed. This is only meaningful in the 'allphys' or 'nox' models, where a jet is implemented. For the other models, which have no jets but are run with the same initial conditions as the 'allphys' model, we can identify the galaxies that are the counterparts of those in the 'allphys' simulations. We will refer to these galaxies as 'jet-active counterparts'.
* _Galaxy type_. Galaxies are categorised into star forming (SF), green valley (GV) and quenched (Q) galaxies based on the observed star-forming main sequence (SFMS) in Belfiore et al. (2018). Their best fit line is given by: \[\log({\rm SFR}/M_{\odot}{\rm yr}^{-1})=0.73\,\log(M_{*}/M_{\odot})-7.33, \tag{4}\] with a scatter of 0.39 dex. According to this line, the lower boundary of SF galaxies is the upper dashed white line shown in Figure 1, which is \(1\sigma\) below the best-fit SFMS line. GV galaxies are defined as having SFR values down to 1 dex below this line. Quenched galaxies are therefore all samples below the GV region. To account for the redshift evolution of the main sequence, we empirically boost its normalisation by a factor of \((1+z)^{2}\) when considering \(z>0\) (a minimal classification sketch is given after this list).
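A minimal sketch of this classification, assuming the Eq. (4) fit, the 0.39 dex scatter and the \((1+z)^{2}\) boost described above (edge cases such as zero SFR are handled only illustratively):

```python
import numpy as np

def classify_galaxy(log_mstar, sfr, z=0.0):
    """Classify a central galaxy as 'SF', 'GV' or 'Q' relative to the
    Belfiore et al. (2018) main sequence (Eq. 4), boosted by (1+z)^2."""
    log_sfr_ms = 0.73 * log_mstar - 7.33 + 2.0 * np.log10(1.0 + z)
    log_sfr = np.log10(sfr) if sfr > 0 else -np.inf
    if log_sfr >= log_sfr_ms - 0.39:          # within 1 sigma (0.39 dex) of the SFMS
        return "SF"
    if log_sfr >= log_sfr_ms - 0.39 - 1.0:    # down to 1 dex below the SF boundary
        return "GV"
    return "Q"

print(classify_galaxy(10.5, 1.0))  # e.g. M* = 10^10.5 Msun, SFR = 1 Msun/yr -> 'SF'
```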
Figure 1 shows the SFR\(-M_{*}\) plots for all central galaxies in Simba-100 at three different redshifts (\(z=1.0\), 0.5 and 0.0), overplotted with the jet-active ratio and number contours of main sample galaxies. Jet-active ratio is defined as the ratio between the numbers of galaxies in the 'jet-active' sample and in the main sample, per SFR\(-M_{*}\) bin. Galaxies are demarcated as SF, GV and Q based on the boundary lines discussed above. Vertical dashed lines group the galaxies into three stellar mass bins that will be considered in this study. We will show in this study that AGN jet feedback causes more galaxies to enter into the GV region as time evolves, and these eventually become quenched. Furthermore, at all redshifts the majority of 'jet-active' galaxies reside in the GV region with \(10^{10}M_{\odot}<M_{*}<5\times 10^{10}M_{\odot}\), suggesting that the efficiency of jet production peaks for these galaxies. This is also broadly consistent with observations (e.g. in Nandra et al., 2007; Schawinski et al., 2007; Povic et al., 2012).
We additionally include an analysis of how the anisotropic features evolve with redshift; the corresponding sample selection is discussed below.
### Gas properties around galaxies via stacking
Since the results obtained from individual galaxies can be noisy, maps of gas properties are combined via stacking. Galaxies are stacked in either edge-on or face-on projections. Each galaxy, together with all gas particles within a cube of size \(4\times r_{200c}\), is rotated so that the minor axis of its inner gas disc (the assumed jet direction; see the definition in §2.2) is aligned. Here, \(r_{200c}\) is our adopted definition of the virial radius: the radius within which the mean density is 200 times the critical density. Therefore, the edge-on projection is when the vertical direction is aligned along the minor axes of the galaxies, while the face-on projection is when viewing the galaxies along the plane of their 'inner discs'. We deliberately chose to stack galaxies along this direction because this is the bipolar feedback direction as defined in the Simba 'allphys' model, where one should in principle 'observe' the most anisotropic features in the stacked map, if there are any. For comparison, we also repeat the same analysis by stacking galaxies along their stellar angular momentum directions, as discussed in Appendix B. This does not affect the anisotropic features of the CGM properties.
For the projection, particle column densities are derived by summing along the line of sight, while averaged quantities such as gas temperature and metallicity are computed in a mass-weighted manner along the line of sight. Gas metallicity is converted into solar units by dividing by the solar metal mass fraction, for which we use 0.0127. Only non-star-forming gas cells are selected when analysing CGM gas properties. To highlight the spatial anisotropy of the CGM properties, we further normalise the stacked map with respect to the azimuthal average. All maps have a resolution of \(500\times 500\) pixels. The coordinates are standardised such that all galaxies have the same size in units of the virial radius when stacking. The resolution in physical units corresponds to \(\sim 3.2\,\rm{ckpc}\) for the lowest stellar mass bin (with \(M_{*}=1\times 10^{10}-5\times 10^{10}M_{\odot}\)) and \(\sim 6.5\,\rm{ckpc}\) for the highest mass bin (with \(M_{*}>1\times 10^{11}M_{\odot}\)).
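A minimal sketch of this stacking step is given below, assuming particle positions are already centred on the galaxy; column-density maps would instead sum mass per pixel rather than take a mass-weighted average.

```python
import numpy as np

def edge_on_map(pos, mass, quantity, minor_axis, half_size, npix=500):
    """Mass-weighted, edge-on projected map of `quantity` (e.g. temperature).
    `pos` holds particle positions relative to the galaxy centre and
    `minor_axis` is the inner-disc angular momentum (jet) direction."""
    zhat = np.asarray(minor_axis, dtype=float)
    zhat /= np.linalg.norm(zhat)
    # Build an orthonormal basis with the minor axis as the image vertical axis.
    ref = np.array([1.0, 0.0, 0.0]) if abs(zhat[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    xhat = np.cross(ref, zhat)
    xhat /= np.linalg.norm(xhat)
    px = pos @ xhat                 # image horizontal coordinate (galaxy plane)
    py = pos @ zhat                 # image vertical coordinate (minor axis)
    bins = np.linspace(-half_size, half_size, npix + 1)
    wsum, _, _ = np.histogram2d(py, px, bins=[bins, bins], weights=mass * quantity)
    msum, _, _ = np.histogram2d(py, px, bins=[bins, bins], weights=mass)
    return np.where(msum > 0, wsum / msum, np.nan)   # mass-weighted average along LOS
```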
Figure 2 shows the edge-on maps of CGM properties around central 'jet-active' galaxies selected from Simba-50-'allphys' at \(z=0\), for the lowest stellar mass bin. We present the gas temperature \(T_{\rm mw}\), gas column density \(\Sigma_{\rm gas}\), expected SZ\(-y\) signal and dark matter column density \(\Sigma_{\rm DM}\) on the map. From these maps, it is obvious that the CGM distribution is bipolar, with the direction of the pole being aligned with the direction of the jet.
### Characterizing the anisotropy
In our study, the angular location of the CGM is defined with respect to the minor axis of the inner gas disc (§2.4). Therefore, an azimuthal angle of \(0^{\circ}\) denotes alignment with the minor axis, which for models with AGN feedback turned on is identical to the outflow direction. We first average the four quadrants of the stacked maps (e.g. Figure 2), assuming that no distinctive features are present in any particular quadrant. Then, to characterise the angular dependence of the CGM properties, we compute the quadrupole moment of each CGM property in the stacked map as follows:
\[\xi_{\ell}^{r}(r)=\int_{0}^{1}\xi(r,\mu)(1+2\ell)P_{\ell}(\mu)\,d\mu, \tag{5}\]
where \(\ell=2\); \(\mu=\cos\theta=y_{\rm corr}/r\); \(\mu=1\) corresponds to angular alignment with the minor axis; \(r\) is the 2D projected galactocentric distance; and \(P_{\ell}(\mu)\) are the Legendre polynomials, with \(P_{2}(\mu)=\frac{1}{2}\left(3\mu^{2}-1\right)\). If the CGM distribution \(\xi(r,\mu)\) is purely isotropic around galaxies, the above integration yields zero. Otherwise, it yields a positive value if the CGM tends to be distributed closer to the minor axis, or a negative value if the distribution is along the galaxy plane (a disk-like feature).
Figure 1: Star formation rate vs stellar mass relation for all central galaxies (the main sample) in the Simba-100 run at \(z=1.0\) (_top_), \(0.5\) (_middle_) and \(0.0\) (_bottom_). The colourbar shows the jet-active ratio at different redshifts (as defined in §2.3). Contours present the number distribution of main sample galaxies with \(M_{*}\geq 10^{9.9}\). This lower \(M_{*}\) cut is empirically chosen so that the contours overlap with the colourmap. Regions of star forming (SF), green valley (GV) and quenched (Q) galaxies are demarcated by the white dashed lines, which are the selection criteria adopted by Belfiore et al. (2018) (see text for details). Vertical dashed lines divide the galaxies into three stellar mass bins that will be considered in further analysis below. It is noticeable that at all redshifts, ‘jet-active’ galaxies are the most populous within the GV region between \(M_{*}=10^{10}M_{\odot}\) and \(5\times 10^{10}M_{\odot}\).
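As a concrete illustration, the quadrupole of Eq. (5) can be estimated from a normalised stacked map by averaging in \(\mu\) bins within each radial annulus and integrating numerically; this is a sketch of one possible estimator, not necessarily the exact procedure used here.

```python
import numpy as np

def quadrupole_profile(stacked_map, r200c_pix, n_rbins=40, n_mubins=20):
    """Estimate xi_2(r) (Eq. 5) from a normalised stacked map whose centre is
    the galaxy position and whose vertical axis is the minor (jet) axis."""
    ny, nx = stacked_map.shape
    yy, xx = np.indices((ny, nx), dtype=float)
    xx -= (nx - 1) / 2.0
    yy -= (ny - 1) / 2.0
    r = np.hypot(xx, yy)
    mu = np.abs(yy) / np.maximum(r, 1e-10)        # mu = |cos(theta)|, 1 along the minor axis

    r_edges = np.linspace(0.0, 4.0 * r200c_pix, n_rbins + 1)
    mu_edges = np.linspace(0.0, 1.0, n_mubins + 1)
    mu_mid = 0.5 * (mu_edges[:-1] + mu_edges[1:])
    p2_mid = 0.5 * (3.0 * mu_mid ** 2 - 1.0)      # Legendre polynomial P_2(mu)
    xi2 = np.full(n_rbins, np.nan)

    for i in range(n_rbins):
        in_r = (r >= r_edges[i]) & (r < r_edges[i + 1])
        xi_mu = np.full(n_mubins, np.nan)
        for j in range(n_mubins):
            sel = in_r & (mu >= mu_edges[j]) & (mu <= mu_edges[j + 1])
            if np.any(sel):
                xi_mu[j] = stacked_map[sel].mean()          # xi(r, mu) in this bin
        good = np.isfinite(xi_mu)
        if good.sum() > 1:
            xi2[i] = np.trapz(5.0 * xi_mu[good] * p2_mid[good], mu_mid[good])  # (1+2l)=5
    return 0.5 * (r_edges[:-1] + r_edges[1:]), xi2
```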
## 3 Anisotropic distribution of CGM in simulations
We expect the CGM properties to be regulated by AGN feedback processes, which are also connected to the host galaxies of the AGN, and possibly also to their host dark matter haloes. In this section, we investigate how the CGM anisotropy is connected to the host galaxy properties, i.e. star formation status (§3.1) and host halo mass (§3.2). In §3.3, we investigate how the distribution of CGM properties varies with the implemented Simba-50 feedback models. Note that our definition of CGM is broad - not restricted to the gas within the virial radius, but including all gas outside the virial radius that is connected to the physical activity in the central galaxies.
Figure 2: Anisotropy of the CGM properties for ‘jet-active’ central galaxies at \(z=0.0\) (edge-on view). Galaxies are selected from the Simba-50 ‘allphys’ model with stellar mass between \(1\times 10^{10}\)\(M_{\odot}\) and \(5\times 10^{10}\)\(M_{\odot}\). There are 377 galaxies in total and the averaged host halo mass is \(\langle\log_{10}(M_{200c})\rangle=12.04\). _Top row:_ mass-weighted temperature map (\(T_{\rm{mw}}\), _left_) and gas column density map (\(\Sigma_{\rm{gas}}\), _right_). _Bottom row:_ thermal SZ \(y\) map (SZ\(-y\), _left_) and dark matter column density map (\(\Sigma_{\rm{dm}}\), _right_). Maps are normalised with respect to the azimuthal average. Minor axes of galaxies are aligned along the inner disk momentum directions (jet directions in this case), as well as all particles within a cubic region of \(\pm 4r_{200c}\). Note that the topmost and leftmost coordinates are in units of cMpc for comparison. On each panel, the small window in the upper right corner zooms into the central square region with size \(r_{200c}\).
### Dependence on galaxy type
Generally, galaxies start out with active star formation and evolve towards a red quiescent state, passing through the GV (Rodriguez Montero et al., 2019) - a phase where strong central suppression of star formation and lowered gas fractions are both clearly apparent (see Figure 1; also Belfiore et al., 2018; Appleby et al., 2020). AGN feedback is believed to be a major source of energy for quenching galaxies. Recently, Cui et al. (2021) showed that jet feedback is the key mechanism that quenches galaxies in the Simba simulations. To connect with this picture, we first examine in this section how this evolution of star forming state and stellar mass imprints itself on the CGM anisotropy.
As discussed in §2.3, we subdivide the central galaxies into SF, GV and Q samples selected from the Simba-100 fiducial run, and Figure 1 indicates that strong AGN feedback mostly occurs within the stellar mass range \(10^{10}M_{\odot}\lesssim M_{*}\lesssim 10^{11}M_{\odot}\), while massive galaxies with \(M_{*}>10^{11}M_{\odot}\) show little feedback activity. Therefore, we only discuss results obtained from galaxies within a low-mass bin (with \(M_{*}=(1-5)\times 10^{10}M_{\odot}\)) and an intermediate-mass bin (with \(M_{*}=(5-10)\times 10^{10}M_{\odot}\)).
Figure 3 shows the resulting mass-weighted gas temperature (\(T_{\rm{mw}}\)), gas surface density (\(\Sigma_{\rm gas}\)), mass-weighted gas metallicity (\(Z_{\rm{mw}}\)), and thermal SZ-\(y\) (\(y\)) quadrupole measurements (computed using eq. 5) for Simba-100 'jet-active' central galaxies at \(z=0\) within low-mass and intermediate-mass bins. The shaded regions show the \(1\sigma\) uncertainties in the quadrupole measurement determined by bootstrap resampling, as follows. For all projected galaxy maps measured from a given model, we construct a bootstrap catalogue by resampling galaxy maps with replacement but keeping the sample size the same as the original. These resampled maps are stacked as before, and the quadrupoles of each CGM property are directly measured on the normalised map. We then repeat this process 1000 times and compute the average as well as the standard deviation across the bootstrap samples.
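A minimal sketch of this bootstrap, assuming the individual aligned galaxy maps are available in memory and reusing a quadrupole estimator such as the one sketched above:

```python
import numpy as np

def bootstrap_quadrupole(maps, r200c_pix, measure_quadrupole, normalise,
                         n_boot=1000, seed=0):
    """Bootstrap mean and 1-sigma error of the quadrupole profile.
    `maps` holds the individual aligned galaxy maps (n_gal, ny, nx);
    `normalise` divides a stacked map by its azimuthal average and
    `measure_quadrupole` is an estimator such as quadrupole_profile above."""
    rng = np.random.default_rng(seed)
    maps = np.asarray(maps, dtype=float)
    profiles = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(maps), len(maps))      # resample with replacement
        stacked = maps[idx].mean(axis=0)                 # stack the resampled maps
        r_mid, xi2 = measure_quadrupole(normalise(stacked), r200c_pix)
        profiles.append(xi2)
    profiles = np.array(profiles)
    return r_mid, np.nanmean(profiles, axis=0), np.nanstd(profiles, axis=0)
```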
By inspecting the \(T_{\rm{mw}}\) (left panel) and \(\Sigma_{\rm{gas}}\) (second to left) curves obtained from the low-mass bin, it is clear that for SF galaxies a gaseous disk feature is prominent in the central region, causing a negative quadrupole value at \(r\lesssim 0.5r_{\rm{200c}}\). This provides fuel to sustain star formation, although this lessens substantially going to GV and quenched systems. We will later show this quadrupolar enhancement is independent of feedback, so the anisotropy within \(r\lesssim 0.5r_{\rm{200c}}\) is not a test for feedback models.
At larger radii, however, we continue to see an enhancement of gas temperatures along the minor axis (jet direction) for SF and GV populations. We will later demonstrate that this is a signature of the large-scale impact of jets on the CGM. The cumulative effect of the jet remains powerful and significant up to the limit of this plot (\(\sim 4r_{\rm{200c}}\)). For the SF case, the anisotropy falls at large distances, but it is still non-zero, indicating some residual effect from star-forming winds. For the GV case, \(\xi_{T}\) shows a dip and then increases slightly again at \(r>r_{\rm{200c}}\).
As a test, we applied the same analysis to SF galaxies without including any selection on jet activity. The resulting curves (shown as solid cyan lines) are mostly identical, except for the SF case in the low-mass bin. For these SF galaxies, the signature of the quadrupole is substantially reduced at \(r>r_{\rm{200c}}\), especially for the \(\xi_{T}\) curve, lying instead on top of the red Q curve. This indicates that the large-scale quadrupole seen in the SF case arises from selecting only those SF galaxies with ongoing jet activity. The GV curve remains unchanged because low-mass GV galaxies already tend to be jet active in the majority of cases.
An interesting situation occurs for SF galaxies in the intermediate mass bin (dotted blue curve). Here, there is a strong dip in \(\xi_{T}\) between \((0.5-1)r_{\rm{200c}}\), even more prominent than for GV galaxies. This even persists when removing the active jet criterion, since at these higher masses even SF galaxies have active jets in most cases (see Figure 1). By visually inspecting the edge-on projected map for these systems, we notice that a cavity is present along the minor axis, and a more isotropic hot gas distribution is formed around the central region, which suggests that the signatures of inflow are being strongly curtailed relative to lower-mass SF systems while the jet activity creates significant disturbances in the CGM at larger scales.
We now consider the \(Z_{\rm{mw}}\) results (third panel in Figure 3). One can see that a significant metal enrichment at large distances is measured in the SF samples, for both stellar mass bins considered in this study. As we will demonstrate later, this is primarily the consequence of active star formation, with some enhancement from strong jet feedback pushing metal-enriched gas out to large distances. For the GV and quenched cases, metallicity anisotropy features are much weaker, since their star formation rates and hence SF-driven winds are curtailed.
We now turn to the thermal pressure anisotropy around galaxies, as would be measured by the SZ \(y\) decrement (\(\xi_{y}\), right panel). The level of anisotropy here is much weaker than the anisotropy seen in other thermodynamic properties. This occurs because the thermal pressure is the product of gas temperature and density, and the latter two quantities have opposite angular dependences. This further suggests that the CGM of these galaxies rapidly equilibrate any pressure anisotropies introduced by non-spherical feedback. Unfortunately, it also suggests that SZ measures are not ideal probes of feedback-induced CGM anisotropy, unless combined with other measures that can disentangle the density and temperature. That said, in the star forming case the opposing trends of both curves do not completely cancel out, producing a slight but significant anisotropy in the thermal pressure at \(r\gtrsim 0.5r_{\rm{200c}}\).
In summary, our quadrupole analysis by galaxy SF activity indicates that at \(r<0.5r_{\rm{200c}}\) there are large quadrupoles in temperature and surface density that correlate with SF activity. Beyond this, the quadrupole persists only when selecting jet-active galaxies, indicating that one must probe the small quadrupole at \(r\gtrsim r_{\rm{200c}}\) in order to test AGN feedback via CGM anisotropy. Despite these features, the \(y\)-decrement does not show strong anisotropy, indicating that the CGM rapidly isotropises any feedback-induced pressure variations. The metallicity anisotropy is strongest for SF galaxies, suggesting a connection to SF-driven outflows. In §3.3 we will flesh out these interpretations by examining how the quadrupoles depend on the physics that is included in the models. But next, we focus on exploring the mass dependence as a function of orientation angle.
### Dependence on halo mass
In the previous section we looked at the gas anisotropy for the two stellar mass bins where jet activity is seen, and focused on their connections to star forming activity. Here, we explore the trends as a function of halo mass.
The top row in Figure 4 shows the \(T_{\rm{mw}}\), \(\Sigma_{\rm{ggs}}\) and \(Z_{\rm{mw}}\) quadrupole radial profiles for the Simba-100 'jet-active' central
galaxies within different stellar mass bins. We leave out the thermal SZ\(-y\) feature due to its insignificant anisotropic levels compared to other CGM properties. Here we include all samples with stellar mass \(M_{*}\geq 10^{10}M_{\odot}\) and then further divide them into four bins.
The \(T_{\rm mw}\) curves (upper left) show a significant change for \(M_{*}>10^{11}M_{\odot}\), below which the anisotropy shows a weak dependence on halo mass, while above this mass the quadrupole essentially disappears. More subtly, the dip in \(\xi_{T}\) at \(\sim(0.5-1)r_{200c}\) is most prominent at \((5-10)\times 10^{10}M_{\odot}\), suggesting that the inflow signature is attenuating while the anisotropy induced by the cumulative effect of jets is increasing.
A similarly significant change is seen in \(\xi_{\Sigma}\) at \(M_{*}>10^{11}M_{\odot}\), above which the CGM gas suddenly becomes highly isotropic. The metallicity anisotropy shows a more complex behaviour, but again becomes quite low and even negative (i.e. aligned along the major axis) at high \(M_{*}\).
As an alternative view of these trends, the bottom row of Figure 4 presents the quadrupole values evaluated at three different galactocentric distances (shown in different colours) as a function of halo mass (with the approximately equivalent stellar mass shown along the top axis), for edge-on views. The radii are chosen to lie beyond the region where anisotropic accretion dominates (\(r\geq r_{200c}\)), in order to focus on the impact of AGN jet feedback.
For the edge-on case, the anisotropies of all three quantities are maximised at similar stellar mass ranges, with \(\log_{10}(M_{*}/M_{\odot})\sim 10.5-11.0\) or \(\log_{10}(M_{200c}/M_{\odot})\sim 12.2-12.5\). Combined with the strong amount of jet activity at lower masses (\(M_{*}<5\times 10^{10}M_{\odot}\)), this suggests that the net anisotropy at these larger radii is the result of the cumulative injection of bipolar feedback during the quenching process; subsequent to quenching being complete, this then rapidly becomes isotropised to leave no quadrupolar signature at the highest masses. We also performed an identical analysis for the face-on case, where the quadrupole values of three quantities fluctuate around zero.
Truong et al. (2021) considered the same issue using IllustrisTNG central galaxies at \(z=0\). They found the mass-weighted temperature and gas density anisotropy to be peaked at \(M_{*}\sim 10^{10.5-11}M_{\odot}\), which is consistent with our findings. However, the distance dependence of our measured CGM anisotropy is much weaker than in the TNG results: they found that, for low masses, the level of anisotropy depends significantly on the radius at which it is evaluated. Also, their metallicity anisotropy falls monotonically with galaxy mass, whereas our results suggest a turnover at a transitional mass range. These differences might arise from different methods for estimating the anisotropy, but are more likely to reflect the different subgrid physics adopted by the two simulations.
Figure 3: Comparison of mass-weighted gas temperature (\(T_{\rm mw}\)), gas column density (\(\Sigma_{\rm gas}\)), mass-weighted gas metallicity (\(Z_{\rm mw}\)) and thermal SZ\(-y\) (\(y\)) quadrupole curves for different galaxy types (SF: blue, GV: green, Q: red), as a function of projected galactocentric distance. This aims to characterise the angular dependence of the CGM properties. Results are obtained for Simba-100 ‘jet-active’ central galaxies at \(z=0.0\) within a low stellar mass bin (\(M_{*}=(1-5)\times 10^{10}M_{\odot}\), solid lines) and intermediate mass bin (\(M_{*}=(5-10)\times 10^{10}M_{\odot}\), dotted lines). For comparison, results for all SF central galaxies with accretion rate > 0.0 within the lowest stellar mass bin are shown as cyan solid lines. Shaded regions show the bootstrap errors. For clarity, we omit the error regions around the curves obtained from the intermediate mass bin. In the top row, an inset within each panel zooms into the central region with \(r\lesssim 0.5r_{200c}\).
Overall, according to the quadrupole measurements, the anisotropies of \(T_{\rm mw}\), \(\Sigma_{\rm gas}\) and \(Z_{\rm mw}\) are all maximised around the same stellar mass range, with \(\log_{10}(M_{*}/M_{\odot})\sim 10.5-11.0\) or \(\log_{10}(M_{200c}/M_{\odot})\sim 12.2-12.5\); the anisotropy is weaker at lower masses and completely absent at higher masses. This result does not depend on the galactocentric radius. The peak in the level of anisotropy at this transitional mass probably reflects the cumulative action of the bipolar jets that cause galaxy quenching, followed by rapid isotropisation together with a shutting off of inflow once quenching is complete.
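To make the quadrupole evaluation at a fixed radius concrete, the following is a minimal NumPy sketch of extracting such a value from a stacked, azimuthally normalised map. The \(m=2\) cosine-moment definition and the sign convention (positive for an enhancement along the minor axis) are assumptions for illustration; the exact normalisation follows the quadrupole definition introduced earlier in the paper.

```python
import numpy as np

def quadrupole_at_radius(stacked_map, r200c_pix, r_frac, dr_frac=0.1):
    """m=2 azimuthal moment of a stacked CGM map in a thin annulus at
    r_frac * r200c. Assumes the galaxy sits at the map centre and the
    minor (jet) axis is vertical; phi is measured from the major axis."""
    ny, nx = stacked_map.shape
    y, x = np.indices(stacked_map.shape)
    dx, dy = x - (nx - 1) / 2.0, y - (ny - 1) / 2.0
    r, phi = np.hypot(dx, dy), np.arctan2(dy, dx)
    sel = (r >= (r_frac - dr_frac) * r200c_pix) & \
          (r <  (r_frac + dr_frac) * r200c_pix) & np.isfinite(stacked_map)
    # residual relative to the azimuthal mean within the annulus
    delta = stacked_map[sel] / stacked_map[sel].mean() - 1.0
    # minus sign: an enhancement along the minor axis (phi = 90 deg) -> positive
    return -2.0 * np.mean(delta * np.cos(2.0 * phi[sel]))
```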
### Dependence on Simba feedback models
We can most directly diagnose which feedback process is responsible for anisotropy at various scales and masses by looking at the Simba-50 variants, in which feedback mechanisms are shut off individually. We begin by visually inspecting the CGM distribution maps around Simba-50 galaxies evolved under different feedback models, and then quantify the differences using our quadrupole measure.
In Figure 2, we have already shown the edge-on stacked maps of CGM properties around central 'jet-active' galaxies selected from Simba-50 'allphys' at \(z=0\). For comparison, we conduct the same exercise for central galaxies selected from the 'nojet' and 'nofb' runs, where the former has bipolar kinetic jet feedback plus X-ray heating turned off (top row) and the latter has all feedback turned off (bottom row). Host haloes across models are matched by using the dark matter particle IDs. The resulting stacked maps are shown in Figure 5. In this case, we only show the stacked results for \(T_{\rm mw}\) and \(\Sigma_{\rm gas}\), because the anisotropic feature is not clearly seen in the \(y\) and \(\Sigma_{\rm DM}\) 'allphys' maps.
By inspecting the differences presented in Figure 2 and 5, it is clear that around 'allphys' galaxies, owing to the jet activity, the gas temperature is significantly higher along the minor axis (jet direction), while conversely the gas density is extended along the major
Figure 4: _Upper row:_ mass-weighted temperature (\(T_{\rm mw}\)), gas column density (\(\Sigma_{\rm gas}\)), and metallicity (\(Z_{\rm mw}\)) quadrupole curves as a function of galactocentric distance for the Simba-100 'jet-active' galaxies at \(z=0.0\). This compares the results obtained from the edge-on projected maps within four stellar mass bins. Shaded regions show the bootstrap errors. In each panel, an inset zooms into the central region with \(r\leq 0.5r_{200c}\). _Lower row:_ quadrupole values for different CGM properties evaluated at various galactocentric distances. The symbols represent the galaxy-population stacked average within a given mass bin and the shaded regions show the bootstrap errors. Only the quadrupoles measured from the edge-on view are shown here for illustration. Halo masses and stellar masses are both labelled on the same plot for comparison. For Simba galaxies, the CGM presents the most significant anisotropic features at \(\log_{10}(M_{*}/M_{\odot})\sim 10.5-11.0\) or \(\log_{10}(M_{200c}/M_{\odot})\sim 12.2-12.5\).
axis. After turning off the jet-mode feedback (top row of Figure 5), the strong large-scale bipolar anisotropy in the \(T_{\rm mw}\) and \(\Sigma_{\rm gas}\) maps disappears. Furthermore, because radiative AGN winds and star-forming winds are still present in the 'nojet' run, the gas around 'nojet' galaxies is noticeably hotter and more extended than around those in 'nofb' (bottom row). In contrast, the small-scale anisotropy (\(\lesssim 0.5r_{200}\)) in \(T_{\rm mw}\) and \(\Sigma_{\rm gas}\) remains present in all runs, suggesting that it is a signature of cosmological gas accretion that is independent of feedback.
Figure 6 presents the quadrupole results of different CGM properties as a function of projected galactocentric distance, measured from different AGN models (different coloured lines), which quantifies the trends seen in the projection maps. For reference, we also compute the quadrupole using a face-on projection (dashed lines), which as expected shows no anisotropy.
The quadrupole shows similar radial trends in the inner region (\(r\lesssim 0.5r_{200}\)) among all models. Hence the strong anisotropy within the inner halo is not a signature of feedback. To a certain degree, the inclusion of jet feedback suppresses accretion onto the central black hole, so that models without jets show a more elongated gas distribution along the
Figure 5: The same as Figure 2 but here showing Simba-50 'nojet' (_top row_) and Simba-50 'nofb' (_bottom row_) model results for comparison. Host haloes across models are matched by their halo masses. The averaged halo mass is \(\langle\log_{10}(M_{200c})\rangle=11.95\) for the Simba-50 'nojet' model and 11.90 for the Simba-50 'nofb' model.
major axis. But given the strength of the intrinsic cosmological accretion quadrupole, it will probably be difficult to test any feedback-induced anisotropy within this regime.
At larger radii, it can be seen that our quadrupole calculation is more sensitive to the different feedback models for the temperature and surface density. In particular, jet feedback generates a higher temperature as well as a lower density region along the minor axis, and the cumulative jet effect remains evident out to distances \(\geq 4r_{200c}\). These features disappear when jets are off, as shown in the 'nojet' and 'nofb' cases. This provides a 'sweet spot' for testing the AGN models in observations using the CGM anisotropy.
The bottom row of Figure 6 presents the quadrupole measurement for the thermal SZ\(-y\) and \(\Sigma_{\rm DM}\) from the 'allphys' and 'nojet' models. Unlike the strongly anisotropic features visible in \(T_{\rm mw}\) and \(\Sigma_{\rm gas}\), here the amplitudes of both quantities fluctuate around zero. As discussed previously, the opposing trends present in the gas temperature and density cancel out the anisotropic features in the thermal pressure (\(y\)). The relatively isotropic dark matter distribution (bottom right panel) on large scales indicates that the anisotropy of gas properties is not due to large-scale inflows/outflows of dynamical mass.
We now turn to the model dependence of the \(Z_{\rm mw}\) anisotropies. Figure 7 shows the edge-on stacked \(Z_{\rm mw}\) maps across Simba-50 models1, and Figure 8 shows the corresponding quadrupole measurements. We present the results obtained from all model variants because visual inspection showed that the CGM metallicity is the property most sensitive to the feedback mechanisms. This might be expected because the resulting metallicity distribution in the CGM depends strongly on both the star formation status and the gas outflow models.
Footnote 1: Here we omit the presentation of the ‘nox’ map because \(Z_{\rm mw}\) anisotropies show similar trends in the ‘nox’ and the ‘allphys’ model.
In the 'allphys' model, we do not see any significant anisotropic feature around the galaxies: the measured quadrupole curve is fairly flat at all distances and only slightly above zero. This is primarily due to the curtailment of star formation and SF-driven winds in central galaxies, causing only weak metal enrichment of the CGM. As can be seen from Figure 3, the metallicity anisotropy is much more
Figure 6: Quadrupole of mass-weighted temperature (\(T_{\rm mw}\), _top left_), gas column density map (\(\Sigma_{\rm gas}\), _top right_), the thermal SZ map (\(y\), _bottom left_) and the dark matter column density map (\(\Sigma_{\rm DM}\), _bottom right_), measured across Simba-50 model variants as a function of projected galactocentric distance. Results measured from both edge-on (solid lines) and face-on (dotted lines) projections are presented here. Shaded regions show the bootstrap errors. In the top row, an inset within each panel zooms into the central region with \(r\leq 0.5r_{200c}\).
prominent if a sample of galaxies with high star formation rate is selected, even with active jets turned on in their hosts. The 'nox' model shows similar results at larger radii, suggesting that the X-ray feedback is not the dominant factor for CGM metal anisotropy in this lowest stellar mass bin.
Interestingly, the 'nojet' model shows the most significant enhancement of metals along the minor axis. Compared to the 'allphys' case, the majority of the 'nojet' population have strong ongoing star-forming activity due to the lack of jets. Furthermore, as we shall see below (§4.1, Figure 9), the SF wind and radiative galactic outflow can carry gas out of the central star-forming regions up to \(\sim 2\) cMpc. This can effectively carry metal-rich gas out to \(\sim 2r_{200c}\). The 'noagn' model shows weaker anisotropy because stellar feedback is the only remaining source that carries metal-enriched gas out, and it is weak compared to AGN wind feedback. When all available feedback effects are turned off, metals can only be retained within galactic disks, as seen in the 'nofb' model. Therefore, although the star formation status of galaxies is the primary determining factor for
Figure 8: \(Z_{\rm mw}\) quadrupole measurement as a function of galactocentric distance in different Simba models. For clarity, only results from the edge-on projection are shown. Shaded regions show the bootstrap errors. An inset panel zooms into the central region with \(r\lesssim 0.5r_{200c}\).
Figure 7: Anisotropy of the mass-weighted metallicity (\(Z_{\rm mw}\)) for 'jet-active' central galaxies at \(z=0.0\) in different Simba-50 models (\(M_{*}=(1-5)\times 10^{10}\ M_{\odot}\)). Only the edge-on projection is shown. Maps are normalised with respect to the azimuthal average. Galaxies as well as their surrounding particle fields are rotated and stacked with the same approach as in Figure 2. On each panel, the small window in the upper right corner zooms in to the central square region with a size of \(r_{200c}\).
\(Z_{\rm mw}\) anisotropy, how far the CGM can be enriched also depends on the power of the AGN winds. In §4.1, we will investigate the wind power by exploring how the bulk radial gas motion depends on the feedback models.
Because different studies used various tools to characterise the CGM anisotropy, it can be hard to make direct comparisons between them. Qualitatively, in agreement with the findings in the IllustrisTNG simulation (e.g. Nelson et al., 2019; Terrazas et al., 2020; Truong et al., 2021), our quadrupole results suggest that the kinetic jet in Simba is powerful in reshaping the CGM properties and therefore in regulating the star-forming activity of galaxies. As a consequence of jet feedback, gas accretion along the major axis can be prevented in the inner region. Meanwhile, gas particles can be heated and expelled out to large galactocentric distances along the minor axis. However, the anisotropic features in \(Z_{\rm mw}\) seen in our 'allphys' model are visually less prominent compared to the IllustrisTNG results (e.g. Peroux et al., 2020; Truong et al., 2021), suggesting that some CGM properties are sensitive to the subgrid models adopted by different simulations. This is also apparent across the Simba model variants (e.g. in Figure 6).
To conclude, the angular dependence of the CGM is sensitive to the feedback models. The jet activity in the centre of a galaxy regulates its CGM on large scales, at \(\sim 0.5r_{200c}-4r_{200c}\) and beyond. The cumulative jet effect causes higher temperatures and lower densities along the galaxy minor axis (jet direction), in contrast to the isotropic distribution around samples where there is no jet. Due to anisotropic accretion, the quadrupole moments show the strongest enhancement in the inner region (\(r\lesssim 0.5r_{200c}\)), but this feature is independent of the feedback variants and therefore cannot be used for testing the feedback models. The CGM metallicity enrichment reflects a complicated interplay between star formation activity and the effectiveness of feedback-driven winds, which will be examined in the next section.
## 4 Feedback Regulation of Galaxy Quenching
The anisotropic distribution of the CGM presented in the previous section is a snapshot in time, but it should bear the cumulative imprint of feedback processes, and is connected to the properties of galaxies throughout cosmic time. In a way, it is a consequence of the interplay between the CGM and galaxies. To understand how the action of feedback takes place, and how the anisotropy emerges throughout the evolution of the galaxies, we investigate the velocity field of the CGM in §4.1, and trace the properties of progenitor galaxies in §4.2.
### Jet activity and radial outflows
To see the feedback process in action, we compare the radial velocity profiles of gas along the minor and major axes. To make an approximate separation between the jet direction and the disk direction, we consider gas particles around each galaxy within a three-dimensional cone with an opening angle of \(\pm 45^{\circ}\), and measure the average radial velocity profile within the cones using the following equation:
\[\overline{v}_{par,\ \rm rad\ total}(r_{\rm 3D})=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\mathbf{v_{i}}\cdot\mathbf{r_{i}}}{|\mathbf{r_{i}}|}+H_{0}|\mathbf{r_{i}}|\right), \tag{6}\]
where \(\mathbf{v_{i}}\) is the relative peculiar velocity of a particle, and \(\mathbf{r_{i}}\) is its position vector relative to the black hole position; \(N\) is the number of particles in each shell. The first term is the local peculiar velocity term and the second term accounts for the Hubble flow with \(H_{0}=67.74\ \rm km\,s^{-1}\ \rm Mpc^{-1}\). '\(par\)' stands for either non star-forming gas or dark matter particles. The results are shown in Figure 9.
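As a concrete illustration of Equation (6), the following is a minimal NumPy sketch of the cone-averaged total radial velocity profile. The array names and units (comoving Mpc for positions, km s\(^{-1}\) for peculiar velocities) are placeholders, not the actual analysis pipeline used here.

```python
import numpy as np

H0 = 67.74  # km/s/Mpc

def mean_radial_velocity(pos, vel, axis, r_edges, half_angle_deg=45.0):
    """Average total radial velocity (peculiar + Hubble flow) of particles
    inside the double cone of half-opening angle `half_angle_deg` about `axis`.
    pos: (N, 3) positions relative to the black hole [cMpc];
    vel: (N, 3) peculiar velocities relative to the galaxy [km/s]."""
    axis = axis / np.linalg.norm(axis)
    r = np.linalg.norm(pos, axis=1)
    rhat = pos / r[:, None]
    in_cone = np.abs(rhat @ axis) >= np.cos(np.radians(half_angle_deg))
    v_rad = np.einsum('ij,ij->i', vel, rhat) + H0 * r   # the two terms of Eq. (6)
    profile = np.full(len(r_edges) - 1, np.nan)
    for k in range(len(profile)):
        shell = in_cone & (r >= r_edges[k]) & (r < r_edges[k + 1])
        if shell.any():
            profile[k] = v_rad[shell].mean()
    return profile
```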
We can see that the CGM radial velocities are mostly positive (left panel), tending towards the Hubble flow at large scales and reducing to nearly zero at small radii. This suggests that, on average, there is no gas accretion for these galaxies. Their host haloes have also ceased to accrete dark matter, as seen in the middle panel. We do not expect dark matter to be directly influenced by feedback processes, and thus the phase-space curves of dark matter from different models are consistent with each other. They provide references for the CGM velocities, whose ratios to the dark matter counterparts are shown in the right-hand panel. It is clear that the CGM in the 'allphys' and 'nojet' models shows stronger outflow than the dark matter. For the 'jet-active' central galaxies (green curves in the left panel), the CGM outflow is much stronger than in all other models. The outflow is even stronger along the minor axis (jet direction). This is evidence that jets are responsible for these strong outflows, which induce the large-scale CGM anisotropy we have seen in the previous section.
It is worth noting that for the 'nojet' model, there is also a sign of enhanced outflow within \(\sim 1.5\) cMpc along the minor axis (blue-dashed curve on the left), but this is much less effective than jet-induced AGN feedback at carrying gas out to larger distances.
These results support the physical picture that, along the minor axis, kinetic jet feedback is the most powerful driver in expelling hot gas out to large scales (\(\sim 3-4\) cMpc). Additionally, radiative AGN winds are also capable of carrying gas out to \(\sim 1-2\) cMpc, but the radiative mode alone is not able to suppress the star formation activity and the chemical enrichment due to stellar feedback. This explains the strong large-scale metallicity enrichment seen for the 'nojet' case in Figure 7. Stellar feedback may have a minor impact within \(r_{\rm 3D}\lesssim 1\) cMpc, but it is much weaker compared to the AGN winds. In general, bipolar kinetic jet feedback drives gas ejection and heating out to large distances and prevents gas cooling in the central star-forming region. We conclude that the combination of both these 'ejective' and 'preventative' modes is essential for the effective quenching of Simba galaxies.
When splitting the sample into Star-forming, Green Valley and Quenched populations, we find that the SF galaxies have the strongest outflows along the minor axis at small radii, followed by GV and Q galaxies (Figure 10). Moreover, the difference in the strength of the outflow between the minor (dashed lines) and major axes (solid lines) decreases across the three types of galaxies. This again suggests a strong correlation between jet activity and galaxy type. In addition, there may be signs of thermalisation of the energy carried out by jets. A possible scenario is that jets in SF galaxies initially carry kinetic energy out along the minor axis and the energy thermalises the CGM as they travel, increasing the CGM temperature and boosting the outflow along the minor axis. These processes gradually quench star formation, causing the galaxies to evolve from SF into GV, and eventually Q. A supporting argument for this picture is that on the left panel the blue-dashed curve is the highest at small radii (\(r_{\rm 3D}<1.2\ \rm\,cMpc\)), but the green- and red-dashed curves take over at larger radii. It is possible that we are seeing the jet energy propagating outwards from the galactic centres, gradually increasing the outflow velocity at large radii while causing the galaxies to evolve to be GV and Q.
### Redshift evolution of feedback activity
The possible scenario of galaxy evolution from SF to GV and to Q regulated by feedback processes is best examined by tracing the evolution of galaxies directly in our simulations. To do this, we identify the progenitors of a sample of quenched galaxies with \(M_{*}=(1-5)\times 10^{10}M_{\odot}\) at \(z=0\).
We identify progenitors by finding galaxies at higher redshift that have the most stellar particles in common with those at \(z=0\). We make extra cuts in accretion rate and stellar mass such that galaxies at higher redshifts have accretion rate \(>0\) and stellar mass within the same range as at \(z=0\). We do not require the progenitors to be 'jet-active', in order to include the cumulative effect of all possible feedback mechanisms at higher redshifts. Results at \(z=1\), 0.5 and 0 are presented in Table 2 and Figure 11.
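A minimal sketch of this particle-ID matching, assuming each galaxy's stellar particle IDs, stellar masses and black-hole accretion rates are available as plain arrays (the names below are placeholders; in practice the Simba catalogues provide these quantities):

```python
import numpy as np

def match_progenitors(desc_pids, prog_pids, prog_mstar, prog_acc,
                      mstar_range=(1e10, 5e10)):
    """For each z=0 descendant, return the index of the higher-z galaxy that
    shares the most stellar particle IDs, subject to cuts on stellar mass
    and accretion rate; returns None where no eligible overlap exists."""
    prog_sets = [set(p) for p in prog_pids]
    eligible = [(mstar_range[0] <= m < mstar_range[1]) and (acc > 0)
                for m, acc in zip(prog_mstar, prog_acc)]
    matches = []
    for pids in desc_pids:
        d = set(pids)
        overlap = [len(d & s) if ok else -1
                   for s, ok in zip(prog_sets, eligible)]
        best = int(np.argmax(overlap))
        matches.append(best if overlap[best] > 0 else None)
    return matches
```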
By default, galaxies selected at \(z=0.0\) are all 'jet-active' quenched members. As shown in this table, their progenitors at \(z=1\) are mainly star-forming galaxies, with a fraction of them being jet-active, i.e. having achieved the full jet speed. Note that the majority of the remaining galaxies at \(z=1\) still have jet activity, but their jets have not reached full speed. Most of these galaxies evolve into GV by \(z=0.5\), and become jet-active. By \(z=0\), all of them are quenched, and remain jet-active. This is also illustrated in the top-left panel of Figure 11, where the continued decline of the star-forming population with redshift is evident.
The \(T_{\rm mw}\) and \(\Sigma_{\rm gas}\) quadrupoles, measured at different redshifts, are shown in the middle and right-hand top panels. For reference, the edge-on stacked \(T_{\rm mw}\) maps at three redshifts are presented in the bottom row. Based on the \(T_{\rm mw}\) quadrupole, an 'ejective' effect from feedback at high redshift is clearly seen in the outer region. Combined with the \(T_{\rm mw}\) maps in the bottom row, one can see that these high\(-z\) progenitors were undergoing strong bipolar AGN feedback at earlier times. Owing to this, gas particles were heated and ejected from the central star-forming region. Hot gas then accumulated in the CGM and eventually thermalised in the outer regions, until the hot gas became isotropically distributed around the central galaxies and the fuel for further star formation was depleted. Then jets
Figure 10: Gas total radial velocity as a function of galactocentric distance in 3D, using different types of ‘jet-active’ central galaxies at \(z=0.0\) within \(M_{*}=(1-5)\times 10^{10}M_{\odot}\) (_left_) and \(M_{*}=5-10\times 10^{10}M_{\odot}\) (_right_). Measurements are performed within the same cone regions as those for Figure 9. Shaded regions represent the bootstrap error. Galaxy types are distinguished by colours, using the same colour scheme as in Figure 3.
Figure 9: Total radial velocity for the gas component (_left_) and dark matter component (_middle_) as a function of galactocentric distance in 3D, using 'jet-active' central galaxies with \(M_{*}=(1-5)\times 10^{10}M_{\odot}\) at \(z=0.0\). Measurements are performed within a 3D cone with an opening angle of \(\pm 45^{\circ}\) around the axis. Dashed lines show the results from the cone aligned with the minor axis, while the solid lines show the average curves from cones along the \(X\)-axis and the \(Y\)-axis in the galaxy major plane. Shaded regions represent the bootstrap errors. Models are distinguished by colours with the same colour scheme as in Figure 7. Black dotted line: Hubble flow for comparison. To highlight the effective region of each AGN variant, the ratios between the gas and dark matter curves of each model are given in the _rightmost_ panel for comparison.
are responsible for maintaining the galaxies in a quenched state at \(z=0.0\).
At face value, there is a correlation between the star-formation rate and the level of large-scale CGM anisotropy across different redshifts, i.e. as the population evolves from being star-forming at high redshift to being quenched at low redshift (Table 2), the level of CGM temperature anisotropy also decreases (middle panel of Figure 11). It is not immediately clear whether it is star formation or AGN jets that are responsible for the anisotropy. To test this, we repeat our stacking analysis with a sample of galaxies at \(z=1\) that have no jet activity at all, but still have star formation and radiative feedback. Results are presented in Figure 12. It is clear that the CGM for this sample is anisotropic, but the feature is present at relatively small scales, confined within \(1-2r_{200}\). Based on this, we conclude that while other feedback processes can contribute to the CGM anisotropy on small scales, jet activity is responsible for it on large scales. This is a distinct feature of the jet model that may provide observational signatures for detecting the direct impact of the jets on the CGM, and for telling the jet model apart from other feedback mechanisms.
In general, this supports the picture that strong AGN feedback plays a crucial role in effectively suppressing the star formation and maintaining the galaxies in the 'quenched' state at \(z\lesssim 1\). This once again demonstrates the strength of the Simba AGN feedback model: the feedback winds are capable of ejecting and heating up the gas to a large distance ('ejective' effect). As shown by Sorini et al. (2022), the Simba AGN-jet feedback mode can displace baryons out to \(\sim 4r_{200c}\) at \(z<1\) and even \(\sim 10r_{200c}\) at \(z=0\), and increases the amount of hot gas in the outskirts of the halo. On relatively small scales, this process is further helped by star-formation and radiative feedback. Hot gas accumulates in the CGM and then thermalises around central galaxies, which prevents the gas from cooling and falling back onto the inner star-forming region ('preventative' effect). This gas starvation drives galaxy quenching and the jet further maintains the quenched state at low redshift.
Figure 11: Redshift evolution of feedback activity. Progenitors are traced back to \(z=1.0\) for \(M_{*}=(1-5)\times 10^{10}M_{\odot}\) quenched galaxies at \(z=0.0\), where progenitors are defined as having the most stellar particles in common. A one-to-one matching is performed within the same stellar mass range with accretion rate > 0 across snapshots. _Top_: the histogram of star formation rate evolved with redshift and the redshift evolution of the \(T_{\rm mw}\) and \(\Sigma_{\rm gas}\) quadrupole curves as a function of galactocentric distance. Only results from \(z=0.0\) (red solid), 0.5 (green solid) and 1.0 (blue solid) are shown here for illustration. An inset within each panel zooms into the central region with \(r\lesssim 0.5r_{200c}\). _Bottom_: The edge-on \(T_{\rm mw}\) stacked maps at different redshifts for visual inspection. It is clear that the progenitors of quenched galaxies undergo stronger AGN feedback at earlier times.
## 5 Comparisons to other simulations
The anisotropic distribution of the CGM is not unique to the 'allphys' model of Simba. In fact, other models with kinetic AGN feedback, such as the one adopted by the TNG100 simulation, may also predict CGM anisotropy, but existing studies have not covered a range of scales as large as the one explored here (Truong et al., 2021) (see also Figure 12).
Truong et al. (2021) presented a qualitatively similar analysis of the TNG100 simulation; instead of using the quadrupole, the level of CGM anisotropy in that work was characterised by the minor-to-major axis ratio. The angular dependence displayed in their median stacked CGM maps, around central galaxies with \(M_{*}=10^{11\pm 0.1}M_{\odot}\) from TNG100 at \(z=0.0\), is qualitatively similar to our results: an under-dense region of CGM gas along the minor axis with an enhancement of temperature and metallicity, even though the direction of the jets is randomised at each timestep in the TNG model. In addition, the levels of anisotropy in \(T_{\rm mw}\) and \(\Sigma_{\rm gas}\) are also maximised at a transitional mass range with \(M_{*}\sim 10^{10.5-11}M_{\odot}\) (\(M_{200c}\sim 10^{12.1-12.5}M_{\odot}\)). These are consistent with our findings shown in Figure 4.
However, there are some noticeable differences between the TNG study and our results from the 'allphys' model, including: (i) a more prominent metallicity (\(Z_{\rm mw}\)) anisotropy along the minor axis in TNG, with features that monotonically decrease with galaxy mass; (ii) a stronger dependence on galactocentric distance in TNG when evaluating the level of CGM anisotropy, especially for galaxies with \(M_{*}\lesssim 10^{11}M_{\odot}\) (\(M_{200c}\lesssim 10^{12.5}M_{\odot}\)); (iii) stronger anisotropic features in \(T_{\rm mw}\) and \(\Sigma_{\rm gas}\) for massive galaxies in TNG that are quenched and non-disky compared to their star-forming counterparts.
These differences in CGM properties are not surprising, given the different feedback models implemented in the TNG and Simba simulations. As discussed in Truong et al. (2021), the resulting CGM anisotropy around TNG, Illustris and EAGLE galaxies can be significantly affected by the detailed stellar and AGN feedback mechanisms. Also, previous tSZ-\(y\) studies (e.g. Yang et al., 2022) have shown that, compared to the TNG model, the adopted Simba feedback model is more energetic at heating and expelling gas into the CGM. Therefore, gas in the CGM can be thermalised more efficiently around galaxies, producing a more isotropic hot gas distribution around massive quenched galaxies compared to those in the TNG simulation. This could explain why the anisotropy levels around Simba quenched galaxies are lower than those seen in TNG. Meanwhile, the strong suppression of star formation owing to the powerful feedback causes the depletion of metal-enriched gas in the central region. As seen in Figure 3, the level of \(Z_{\rm mw}\) anisotropy around SF galaxies can be much more prominent compared to their GV and Q counterparts.
## 6 Implications for observations
We have used the Simba model to show that jet activity from AGN can provide powerful energy feedback, which regulates the star-forming activity of the galaxy and leaves imprints on the properties of the CGM. This opens possibilities for constraining feedback models with observational signatures of the CGM. For this, one key step is to identify the direction of the jet for the stacking of the CGM, which can be challenging in practice. However, one can use other observable means to infer the jet directions.
The stellar disk is an observable that may correlate with the jet direction, owing to the expected alignment of the angular momentum of the central black hole of a disk galaxy with that of the stellar disk, or even with the HI gas. We have explicitly compared the angular momentum directions of the BH and the stellar disk in the Simba simulations, and confirmed that there is a strong, though not perfect, correlation between them. This is further evident from the results presented in Figure 11, where we repeat our analyses of the anisotropy of the CGM but, instead of using the jet directions, we use the stellar angular momentum vectors as proxies. We see that the results remain similar. Therefore, at least in the Simba simulations, the directions of stellar angular momentum are good approximations for the jet direction. This provides one possible guide for future observational analyses of this kind.
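For reference, this comparison amounts to a simple angle between two axial vectors; a minimal sketch, assuming member-particle positions and velocities relative to the galaxy centre and a unit jet vector are available (all names are placeholders):

```python
import numpy as np

def spin_axis(pos, vel, mass):
    """Unit vector of the total angular momentum of the member particles."""
    L = np.sum(mass[:, None] * np.cross(pos, vel), axis=0)
    return L / np.linalg.norm(L)

def misalignment_deg(spin, jet_dir):
    """Angle between the stellar spin axis and the jet axis, folded to [0, 90]
    degrees since both directions are axes rather than signed vectors."""
    c = np.dot(spin, jet_dir / np.linalg.norm(jet_dir))
    return np.degrees(np.arccos(np.clip(abs(c), 0.0, 1.0)))
```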
From both observations and simulations, the satellite quenching fraction is found to be anisotropically distributed around the central hosts, where a larger quenching fraction is found along the minor axis compared to that within the major plane (e.g. in Zaritsky et al., 1997; Martin-Navarro et al., 2021). Also, previous studies have shown that the spin vectors of high-mass haloes and galaxies (with \(M_{\rm halo}\gtrsim 10^{12}h^{-1}M_{\odot}\)) are on average perpendicular to the filament axes (e.g. in Ganeshaiah Veena et al., 2018, 2019). This provides us with some possible indirect means of linking the direction of galactic outflow with large-scale accretion flows.
Once the jet direction is found, the large-scale CGM anisotropy in temperature and gas density is a clear signature that can be sought in observational data. It is also clearly distinct from the relatively small-scale features induced by other feedback processes (Truong et al., 2021). We have shown that the thermal SZ effect, which is sensitive to the gas pressure, is probably not a good observable for this because the anisotropy shown in the SZ-\(y\) maps is weak due
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Redshift & star-forming & green valley & quenched & total \\ \hline \(z=1.0\) & 218 (33) & 130 (124) & 25 (24) & 373 (181) \\ \hline \(z=0.5\) & 34 (27) & 249 (247) & 90 (90) & 373 (364) \\ \hline \(z=0.0\) & 0 (0) & 0 (0) & 373 (373) & 373 (373) \\ \hline \end{tabular}
\end{table}
Table 2: Redshift evolution of the galaxy types for progenitor-descendant galaxy pairs. The number of 'jet-active' galaxies within each galaxy type is shown in brackets.
Figure 12: The edge-on \(T_{\rm mw}\) stacked maps for \(z=1.0\) progenitors without the kinetic-jet mode. It is clear that star formation and radiative feedback alone are not strong enough to expel gas out to large distances.
to the cancelling effect between an increased gas temperature and a decreased gas density. However, the X-ray emission is sensitive to the square of the gas density and only weakly to the gas temperature. This is different from the tSZ signature. The combination of X-ray data with tSZ measurements may allow us to measure the temperature map of the CGM (e.g. Adam et al., 2017).
Furthermore, the most anisotropic direction or jet direction can be directly determined using observational data, such as from X-ray and radio surveys. The current state-of-the-art X-ray missions, _Chandra_ and _XMM-Newton_(Giles et al., 2016; Luo et al., 2017), have already revealed giant cavities and hot gas shock fronts along the minor axes of massive galaxies (e.g. Hlavacek-Larrondo et al., 2015; Liu et al., 2019). Specifically, new generation X-ray surveys such as _eROSITA_ are capable of sampling millions of AGN by scanning the whole sky within a much wider range of energies (0.2-10 keV: Predehl et al., 2021). Truong et al. (2021) have demonstrated that the predicted X-ray hardness from an _eROSITA_-like survey can be helpful and promising for capturing bipolar CGM features, and this property is quite sensitive to the adopted feedback models in simulations. However, one needs to stack a large number of samples (\(\gtrsim 10^{4}\)) to reach a signal-to-noise ratio greater than 3 around halo mass \(M_{200c}\sim 10^{12}M_{\odot}\). The all-sky coverage of _eROSITA_ allows the stacking of large samples and allows us to conduct anisotropy analyses of the thermodynamic gas properties in the CGM. For AGN possessing collimated radio jets, surveys such as _VLA_ and _LOFAR_ have good sensitivity to extended sources (Baldi et al., 2018; Croston et al., 2019). As suggested by our results, the accumulated CGM anisotropy features from powerful jet feedback should still be significant up to several Mpc. The advent of these surveys will enable the investigation of feedback-induced CGM anisotropy out to larger radii.
## 7 Conclusions and Discussion
We have used Simba simulation model variants to explore how the properties of galaxies and their circumgalactic medium (CGM) are regulated by feedback processes, with a focus on the jet activity of their central black holes. We find that at redshift \(z<1\), central galaxies with active AGN jet feedback are most commonly found in the stellar mass range \((1-5)\times 10^{10}M_{\odot}\), and most commonly within the green valley (Figure 1). Driven by the powerful bipolar jet activity, the CGM becomes anisotropic in temperature, gas density and metallicity out to galactocentric distances of \(4r_{200}\) and beyond. This is supported by direct evidence from the gas outflow extending to several cMpc, with the outflow being stronger along the jet direction. We have also shown evidence that jet activity is responsible for driving a galaxy to evolve from being star-forming, to green valley, and eventually into a quenched state.
On large scales (\(r>0.5r_{200}\)), the jets increase the gas temperature around the minor axis of the galaxy while suppressing the density there. These features are similar to those produced by star-formation and radiative feedback processes, but the latter act on relatively small scales (\(r<2r_{200}\)). The difference in the scale of impact on the CGM is potentially a unique signature for distinguishing the jet model from other types of feedback process. Due to the cancellation effect, the resulting gas pressure remains relatively isotropic. This makes it challenging to use the thermal SZ effect alone to detect the effect of the jets on the CGM, but it may be possible with the combination of SZ and X-ray observables.
The CGM metallicity is strongly enhanced along the minor axis around SF galaxies, peaking at around \(r_{200}\). This disappears for GV galaxies, and Q galaxies even show a slight metallicity enhancement along the major axis. Given that star formation is the expected source of metal enrichment, it is likely that star formation provides the metals, which are then carried out by the jets to larger scales. We do not find any obvious anisotropy for the dark matter, as expected.
On small scales (\(r<0.5r_{200}\)) we also find strong anisotropy for the CGM, but this is common to different feedback models, even when all explicit feedback is off, indicating that it is associated with anisotropic cosmological accretion rather than any feedback process.
Understanding how feedback activity regulates the baryonic cycle is a crucial step in understanding galaxy formation and evolution. From the above analysis, it is apparent that the anisotropic features of the CGM depend strongly on the feedback models, and they are a consequence of both 'ejective' and 'preventative' effects that leave an imprint on the resulting CGM properties. The Simba 'allphys' simulation and its model variants provide us with an ideal opportunity to compare and predict what may happen around galaxies selected from different environments. By observing the CGM distribution, one can further infer the host galaxy properties, such as the star formation status and the impact of feedback.
While this study has focused on theoretical aspects, we hope to provide inspiration and observational directions for future CGM studies, such as from which direction one expects to observe the strongest CGM anisotropy signal. Apart from the central region, the accumulated effect of feedback activity around low\(-z\) galaxies can be 'observed' out to several Mpc, so there are good prospects of making the necessary measurements in practice. With the advent of new surveys and improved simulation models, these probes of anisotropic CGM properties should enable us to learn ever more about the detailed operation of feedback in galaxy formation.
## Acknowledgements
We are grateful for the publicly available simulations from the Simba project. During this work, DS and JAP were supported by the STFC consolidated grant no. RA5496. DS was further supported by the Swiss National Science Foundation (SNSF) Professorship grant no. 202671. WC is supported by the STFC AGP Grant ST/V000594/1. He further acknowledges the science research grants from the China Manned Space Project with NO. CMS-CSST-2021-A01 and CMS-CSST-2021-B01. RD acknowledges support from the Wolfson Research Merit Award program of the U.K. Royal Society. YC acknowledges the support of the Royal Society through a University Research Fellowship and an Enhancement Award. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility. The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.
## Data Availability
The raw Simba simulation data and halo catalogues used in this paper are available at [https://simba.roe.ac.uk](https://simba.roe.ac.uk). The remaining data will be made available on request to the lead author. |
2309.12879 | Model for transitional turbulence in a planar shear flow | A central obstacle to understanding the route to turbulence in wall-bounded
flows is that the flows are composed of complex, highly fluctuating, and
strongly nonlinear states. We address this challenge by deriving from the
Navier-Stokes equations a simplified model that describes transitional
turbulence in a planar setting. The Reynolds-averaged and
turbulent-kinetic-energy equations are projected onto a minimal set of
wall-normal modes and justified model closures are used for the Reynolds
stresses and turbulent dissipation and transport. The model reproduces the
oblique turbulent-laminar patterns ubiquitous in wall-bounded shear flows. It
also captures the pattern wavelengths and angles, and the large-scale flow
associated with both stationary patterns and growing turbulent spots. Patterns
are shown to arise with decreasing Reynolds number via a linear instability of
uniform turbulence. Linear analysis reveals implications for the critical angle
at onset. | Santiago J. Benavides, Dwight Barkley | 2023-09-22T14:07:46Z | http://arxiv.org/abs/2309.12879v1 | # Model for transitional turbulence in a planar shear flow
###### Abstract
A central obstacle to understanding the route to turbulence in wall-bounded flows is that the flows are composed of complex, highly fluctuating, and strongly nonlinear states. We address this challenge by deriving from the Navier-Stokes equations a simplified model that describes transitional turbulence in a planar setting. The Reynolds-averaged and turbulent-kinetic-energy equations are projected onto a minimal set of wall-normal modes and justified model closures are used for the Reynolds stresses and turbulent dissipation and transport. The model reproduces the oblique turbulent-laminar patterns ubiquitous in wall-bounded shear flows. It also captures the pattern wavelengths and angles, and the large-scale flow associated with both stationary patterns and growing turbulent spots. Patterns are shown to arise with decreasing Reynolds number via a linear instability of uniform turbulence. Linear analysis reveals implications for the critical angle at onset.
The route to turbulence in many wall-bounded shear flows is mediated by a fascinating regime in which turbulence cannot be sustained throughout the system; rather it occurs intermittently within laminar flow [1; 2; 3; 4; 5]. While the laminar state is stable to small perturbations, strongly nonlinear patches of turbulence may be sustained via interactions with the neighboring quiescent laminar flow [6; 7; 8; 9]. One of the more intriguing and pervasive manifestations of intermittent turbulence is the alternation of regions of turbulent and laminar flow oriented obliquely to the streamwise direction [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. Figure 1(a) illustrates a large-scale pattern formed by such alternation in the planar shear flow described fully below. The scale of these patterns is more than an order of magnitude larger than the constrained direction of mean shear - the scale at which the turbulence is generated [10; 11; 12; 15; 18]. Observations of oblique turbulent structures (often now called turbulent bands or stripes) trace back to the observations of spiral turbulence between counter rotating cylinders [25], and have been the subject of much study ever since [3; 4].
The boundaries of the intermittent turbulence regime are of particular interest. With decreasing flow rate, or Reynolds number (\(Re\)), turbulent bands take a localized and fragmented form, and eventually a critical point is reached below which turbulence is not sustained. Universal scaling laws associated with directed percolation (DP) have been established for such critical points in some flows [26; 27; 28; 22]. With increasing \(Re\), laminar regions disappear and bands give way to a uniform turbulent state whose mean recovers the full symmetry of the flow geometry. Equivalently, a symmetry breaking of the uniform turbulent state occurs with decreasing \(Re\). A substantial body of work has analysed and characterized the transition from uniform turbulence [29; 30; 31; 9; 10; 17; 24]. Most recently, Kashyap _et al._ provide strong evidence for a linear instability of the uniform turbulent state by considering the ensemble-averaged relaxation rates to perturbations [24], while Gome _et al._[31; 9] analyze the energy balances and fluctuations associated with this transition.
Models provide a powerful means to investigate the complex transition scenarios exhibited by wall-bounded turbulence and several distinct approaches have been proposed [32; 33; 34; 35; 36; 37; 38; 39; 40]. The most relevant is the modeling of pipe flow [34; 35; 36] by two scalar fields: the amplitudes of large-scale (coarse-grained) turbulence and mean flow.
Figure 1: (a) Illustration of oblique turbulent-laminar patterns in a planar shear flow confined by stress-free walls and driven by a body-force. These states are observed in the intermittent regime where turbulence can exist, but is not uniform (space filling). Visualized in color is the time- and vertically-averaged turbulent kinetic energy (TKE) from a direct numerical simulation at \(Re=140\) in a streamwise-spanwise periodic domain. The arrows correspond to the time- and vertically-averaged velocity field. The averages were taken over a window of 865 advective time units. (b) Flow geometry, with illustrations of the five wall-normal (vertical) modes in the model used to represent the large-scale flow. One of these modes is the same laminar flow: \(\sin(\pi y/2)\mathbf{e}_{x}\).
The amplitudes depend only on the axial coordinate and are governed by advection-reaction-diffusion equations modeling the interaction between the mean flow and turbulence. This modeling does not readily extend to the planar case, however, primarily because the large-scale flow in the planar case must be described by a vector field. Fluid advection plays a significant role in planar flows [7; 9; 18] and the vector nature of the mean flow must be accounted for in modeling. In this Letter we derive a quantitative model of planar shear turbulence directly from the Navier-Stokes equations through specified truncations and model closures.
We start our derivation from the most mathematically tractable system exhibiting turbulent-laminar bands: a shear layer confined between stress-free boundaries, driven by a sinusoidal body force. In the transition literature, the system is called Waleffe flow (Wf) [41]. It reproduces the same transitional phenomena as plane Couette flow (pcf) [4; 19], and has been used to access very large system sizes [27]. Planar shear flows with nonzero net flow, such as plane Poiseuille flow, possess different symmetries, and while they share many transitional phenomena, they also differ from Wf and pcf [4; 42; 43; 20].
We describe the dynamics of the large-scale (coarse-grained) variables. We apply Reynolds averaging to the Navier-Stokes equations and denote the mean (or 'large-scale') velocity by \(\mathbf{u}\) and the fluctuations by \(\mathbf{u}^{\prime}\), where \(\langle\mathbf{u}^{\prime}\rangle=0\) with \(\langle\cdot\rangle\) a suitable averaging procedure. We define the turbulent kinetic energy (TKE) as \(q\equiv\langle\mathbf{u}^{\prime}\cdot\mathbf{u}^{\prime}\rangle/2\). In practice, one averages over intermediate spatial-temporal scales, long on the scale of the turbulence but short on the scale of the patterns. Averaging has the effect of removing small turbulent scales, and while \(\mathbf{u}\) is formally defined as a mean, we view it as a large-scale flow. Our coordinates \((x,y,z)\) correspond to the streamwise, wall-normal, and spanwise directions, respectively. The components of the large-scale velocity \(\mathbf{u}\) are denoted by \((u,v,w)\). We non-dimensionalize using the vertical half gap \(h\) and the maximum laminar velocity \(U\), resulting in a domain of dimensions \(L_{x}\times 2\times L_{z}\), with \(y\in(-1,1)\). We use \(\beta\equiv\pi/2\) for the first vertical wavenumber.
After Reynolds averaging one obtains [44],
\[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla \mathbf{u}=-\nabla p+\frac{1}{Re}\nabla^{2}\mathbf{u}-\alpha\mathbf{u}_{H}+ \mathbf{f}+\nabla\cdot\mathcal{R}, \tag{1a}\] \[\frac{\partial q}{\partial t}+\mathbf{u}\cdot\nabla q+\nabla \cdot\mathbf{T}=\mathcal{P}-\varepsilon+\frac{1}{Re}\nabla^{2}q-2\alpha q, \tag{1b}\]
where \(p\) is the pressure, \(Re\equiv Uh/\nu\) is the Reynolds number, \(\nu\) is the kinematic viscosity, \(\mathbf{f}\equiv(\alpha+\beta^{2}/Re)\sin{(\beta y)}\,\mathbf{e}_{x}\) is the body force which supports a linearly stable laminar flow \(\mathbf{u}_{lam}=\sin(\beta y)\mathbf{e}_{x}\), and \(\alpha\) is the drag coefficient. The drag term only acts on the horizontal velocity components \(\mathbf{u}_{H}\equiv u\,\mathbf{e}_{x}+w\,\mathbf{e}_{z}\) to mimic the effect of no-slip boundaries, and is only relevant for \(y\)-independent modes and at large horizontal scales [4; 45]. The Reynolds-stress tensor \(\mathcal{R}\), the turbulent production \(\mathcal{P}\), the pseudo-dissipation rate \(\varepsilon\), and the transport terms \(\mathbf{T}\), are defined by standard expressions [44]. Each involves fluctuation correlations and closure comes from modeling these quantities. Equations (1) are accompanied by an incompressibility constraint \(\nabla\cdot\mathbf{u}=0\), periodic boundary conditions in the \((x,z)\) directions, and stress-free boundary conditions \(\partial_{y}u=v=\partial_{y}w=\partial_{y}q=0\) at \(y=\pm 1\).
We turn to modeling and continue to use variables \(\mathbf{u}\) and \(q\). Our first step is to represent \(\mathbf{u}\) and \(q\) with a minimal set of wall-normal (vertical) modes given by
\[u(x,y,z,t) = u_{0}(x,z,t)+u_{1}(x,z,t)\sin(\beta y), \tag{2a}\] \[v(x,y,z,t) = v_{1}(x,z,t)\cos(\beta y),\] (2b) \[w(x,y,z,t) = w_{0}(x,z,t)+w_{1}(x,z,t)\sin(\beta y),\] (2c) \[q(x,y,z,t) = q_{0}(x,z,t)+q_{1}(x,z,t)\sin(\beta y), \tag{2d}\]
and pressure \(p(x,y,z,t)=p_{0}(x,z,t)+p_{1}(x,z,t)\sin(\beta y)\). The retained modes are the minimal set describing a three-dimensional vector field. The field \(q_{1}\) will be obtained quasi-statically from the other fields.
To complete the model, we specify closures for \(\mathcal{R}\), \(\mathcal{P}\), \(\varepsilon\), and \(\mathbf{T}\). We express these as functions of \(q\) and \(Re\) that are as simple as possible and consistent with data from direct numerical simulations (DNS) of the Navier-Stokes equations. For the Reynolds stress \(\mathcal{R}\), we let \(\mathcal{R}_{12}=\mathcal{R}_{21}=-\langle u^{\prime}v^{\prime}\rangle=A(q_{0})\cos(\beta y)\), and neglect all other tensor components. We use \(A(q_{0})=a\,((q_{0}^{2}+\eta^{2})^{1/2}-\eta)\), where \(a\) and \(\eta\) are parameters with \(\eta\) small. This gives \(A\) approximately linear in \(q_{0}\), except near \(q_{0}=0\) where \(A\) becomes approximately quadratic in \(q_{0}\). This cut-off in production at small \(q_{0}\) is responsible for maintaining the stability of the model's laminar fixed point for all \(Re\). The turbulent production \(\mathcal{P}\) is specified from \(\mathcal{R}\) and \(\nabla\mathbf{u}\), and requires no additional modeling. For the transport term we invoke the gradient-diffusion hypothesis [44], which states \(\mathbf{T}=-\nu_{T}(q_{0};Re)\nabla q_{0}\). DNS suggest that \(\varepsilon\) and \(\nu_{T}\) can be sufficiently well modeled as linear functions of \(q_{0}\) and we use: \(\varepsilon(q_{0};Re)=c\,q_{0}/Re\) and \(\nu_{T}(q_{0};Re)=d\,Re\,q_{0}\), with parameters \(c\) and \(d\). Modelling the \(q_{1}\) dynamics as an instantaneous balance between nonlinear advection and dissipation, \(q_{1}\) is determined instantaneously from \(u_{1}\), \(w_{1}\), and \(q_{0}\).
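The closures above are simple algebraic functions of \(q_{0}\) and \(Re\); written out as code they read as follows. The parameter values are placeholders; the calibrated values are given in the Supplemental Material and may differ.

```python
import numpy as np

# Placeholder closure parameters; the calibrated values differ.
a, c, d, eta = 1.0, 1.0, 1.0, 1e-3

def A(q0):
    """Reynolds-stress amplitude: ~linear in q0, ~quadratic near q0 = 0,
    which keeps the laminar fixed point stable at all Re."""
    return a * (np.sqrt(q0**2 + eta**2) - eta)

def dissipation(q0, Re):
    """Modeled pseudo-dissipation rate epsilon(q0; Re)."""
    return c * q0 / Re

def eddy_viscosity(q0, Re):
    """Eddy viscosity nu_T(q0; Re) for the gradient-diffusion transport closure."""
    return d * Re * q0
```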
In summary, the model is represented by six fields, \(u_{0},u_{1},v_{1},w_{0},w_{1},q_{0}\), corresponding to the mode amplitudes of large-scale flow and TKE. The evolution equations of these dynamical fields in \((x,z,t)\) are obtained by substituting expansion (2) into equations (1), using model closures for \(\mathcal{R}\), \(\mathcal{P}\), \(\varepsilon\), and \(\mathbf{T}\), using a quasi-static approximation for \(q_{1}\), imposing incompressibility, and applying a Galerkin projection. The six evolution equations are stated in full in the Supplemental Material, together with details on the model derivation and closures [46].
The model possesses spatially uniform steady states of the form \((u_{0},u_{1},v_{1},w_{0},w_{1},q_{0})=(0,u_{ss},0,0,0,q_{ss})\), where \(u_{ss}\) and \(q_{ss}\) satisfy the equations for \(u_{1}\) and \(q_{0}\) after dropping \(x,z\), and \(t\) derivatives [46]. We denote these steady states by \((u_{ss},q_{ss})\). Laminar flow is the steady
state \((1,0)\), existing for all \(Re\). Above \(Re=72.4\), the model also possesses a pair of non-trivial steady states with \(u_{ss}<1\) and \(q_{ss}>0\), corresponding to uniform turbulence. One of these states (the 'upper state') is stable to spatially uniform perturbations. Hence, the spatially uniform dynamics is bistable for \(Re\geq 72.4\).
We simulate the six model equations on a doubly-periodic domain of size \(L_{x}\times L_{z}\) using the open source pseudo-spectral code Dedalus [47]. Second-order Runge-Kutta time stepping is used with a time step \(\Delta t=0.04\) (approximately ten times larger than needed in an equivalent DNS). A Fourier-spectral method with 3/2 dealiasing is used with a resolution of one grid point per space unit (approximately ten times fewer than for a DNS). Two initial conditions (ICs) are used. A localized IC comprises a finite region of turbulence of length 25 and width 14 tilted at \(24^{\circ}\) to the \(x\) direction. The second IC is uniform turbulence \((u_{0},u_{1},v_{1},w_{0},w_{1},q_{0})(x,z,0)=(0,u_{ss},0,0,0,q_{ss})\). In all cases, the initial large-scale flow fields are additionally seeded with small-scale noise. The code and analysis scripts can be found in the GitHub repository of the first author [48].
The model results that follow reproduce many published features of bands seen in experiments and DNS. Figure 2(a) shows a simulation at \(Re=75\) starting from the localized IC. During the initial evolution, a large-scale, quadrupolar velocity field is established. Such fields are well documented [18; 19; 23; 50; 51; 52; 53; 54], and reproducing them is an important validation of the model. The turbulent patch elongates through the growth of its two tips. They eventually join and the system settles into a single steady, straight turbulent band tilted at \(\theta=24^{\circ}\) to the streamwise direction (Movies S1-S2[46]). The large-scale flow is typical of that observed in experiments and DNS [7; 18; 54]. While the final angle is set in part by the domain, during band elongation (Fig. 2(a), \(t=750\)), the tilt angle is not far from \(\theta=24^{\circ}\).
Figure 2(b) shows a simulation at \(Re=78\). At this slightly higher value of \(Re\) we observe more growth at the tips and also lateral splitting of the band, eventually resulting in two identical steady bands (Movies S3-S4[46]). Figure 2(c) shows the \(Re=78\) case, but starting from the uniform IC. A competition between symmetrically related orientations takes place until eventually a single pair of steady bands emerges (Movie S5[46]). Both lateral splitting [54; 55; 56; 12] and band competition [24; 10; 57; 15; 11] are well-documented phenomena.
Figure 2(d) shows a vertical slice of the reconstructed turbulence field \(q\) (top panel) and large-scale velocities (bottom two panels), taken across the final steady band at \(Re=75\) (cyan line in Fig. 2(a), t = 25350). The plots are strikingly similar to DNS [19; 4; 7; 9]. The top panel highlights the so-called overhang turbulent regions and demonstrates the role of \(q_{1}\) in the model. The middle panel visualizes the streamlines of the large-scale flow in the slice plane, while the bottom panel shows contours of the well-documented along-band flow (through the slice plane). The wall-normal structure seen in Fig. 2(d) highlights that, while model fields are functions of \((x,z)\), they describe nontrivial, three-dimensional flows of the type seen in DNS. The ability to represent such flows is key to the model success.
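The slice in Fig. 2(d) is obtained by reassembling the wall-normal structure from the six model fields using expansion (2). A minimal sketch, assuming the fields are available as 2D arrays on the \((x,z)\) grid and that \(q_{1}\) has already been diagnosed from \(u_{1}\), \(w_{1}\) and \(q_{0}\):

```python
import numpy as np

beta = np.pi / 2  # first vertical wavenumber

def reconstruct(u0, u1, v1, w0, w1, q0, q1, y):
    """Rebuild u, v, w, q on the (x, y, z) grid from the six (x, z) model
    fields, following the wall-normal expansion (2)."""
    s = np.sin(beta * y)[None, :, None]   # broadcast over (x, y, z)
    c = np.cos(beta * y)[None, :, None]
    u = u0[:, None, :] + u1[:, None, :] * s
    v = v1[:, None, :] * c
    w = w0[:, None, :] + w1[:, None, :] * s
    q = q0[:, None, :] + q1[:, None, :] * s
    return u, v, w, q
```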
A survey of the model dynamics reveals the following. The single-band solution can be continued to as low as \(Re=66\), below the onset of bistability in the uniform dynamics. Going upward in \(Re\), beyond \(Re=85\), no bands are observed, and starting from the localized IC, turbu
Figure 2: Temporal evolution of the model for increasing \(Re\), starting from localized (a)-(b), and uniformly turbulent (c) initial conditions. Visualized is the model field \(q_{0}\), representing the vertically-averaged turbulent kinetic energy. Vectors and streamlines show the vertically-averaged large scale flow. Panel (d) shows a vertical slice of the reconstructed turbulence field \(q\) (top panel) and large scale flow velocities (bottom two panels), taken across the final steady band in (a) (see cyan line). The middle panel in (d) shows the two-dimensional in-plane flow, or equivalently contours of the streamfunction \(\psi_{\parallel}\), whereas the bottom panel shows the along-band flow visualized as a deviation from the laminar flow, \(\Delta u_{\parallel}\equiv u_{\parallel}-u_{lam,\parallel}\), where \(\parallel\) is the direction parallel to the band. Panel (e) shows an instantaneous snapshot of a run at \(Re=80\) performed in a larger domain than those in panels (a)-(c). Note the two competing band orientation domains.
lence spreads until a uniformly turbulent steady state (\(u_{ss},q_{ss}\)) is attained. The model dynamics in the range \(78<Re<85\) is rich. Not all combinations of ICs, \(Re\), and domain size result in simple steady bands. We observe instances of criss-cross-like steady states as well as steady states with both orientations separated by domain boundaries [10; 11]. In Fig. 2(e) we show a representative snapshot from a simulation that eventually reaches a steady state with two band orientations. Some runs never reach a steady state; bands nearly form, become unstable, break up, and again nearly form to repeat the cycle. (Movies S6-S11 [46].)
With the model, we can directly address the transition from uniform turbulence to patterns via linear stability analysis. Straightforward linearization of the model equations about \((u_{ss},q_{ss})\) gives an eigenvalue problem for the temporal growth rates of linear modes of the form \(\hat{\mathbf{u}}\exp(ik_{x}x+ik_{z}z)\)[46]. With decreasing \(Re\), a positive real eigenvalue first appears at a critical Reynolds number \(Re_{c}=85.1\), with a wavelength \(\lambda_{c}=23.7\) and tilt angle \(\theta_{c}=22.5^{\circ}\). Figure 3 shows the maximum growth rates in the \((k_{x},k_{z})\) plane for \(Re\) just below \(Re_{c}\) (Fig. 3(b)) and just above the onset of bistability (Fig. 3(a)). With decreasing \(Re\) the unstable region enlarges and the fastest growing mode shifts to larger wavelengths and a slightly lower tilt angle (\(\lambda\approx 32\) and \(\theta\approx 21.5^{\circ}\) at \(Re=73\)). These results align very well with Kashyap _et al._[24] who extracted evidence of a linear instability of uniform turbulence in plane Poiseuille flow (pPf) and obtained a critical angle of \(\theta_{c}\approx 23^{\circ}\). Nonlinear model simulations indicate that the bifurcation to bands at \(Re_{c}\) is supercritical.
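Growth-rate maps such as those in Fig. 3 can be produced by sweeping the wavenumber plane and recording the largest real part of the spectrum of the linearized operator. A schematic sketch is given below; `linear_operator(kx, kz, Re)` is a placeholder for the \(6\times 6\) matrix obtained by linearizing the model equations (stated in the Supplemental Material) about \((u_{ss},q_{ss})\).

```python
import numpy as np

def growth_rate_map(linear_operator, Re, kx_vals, kz_vals):
    """Maximum temporal growth rate over a grid of perturbation wavenumbers."""
    sigma = np.empty((len(kx_vals), len(kz_vals)))
    for i, kx in enumerate(kx_vals):
        for j, kz in enumerate(kz_vals):
            L = linear_operator(kx, kz, Re)           # 6x6 complex matrix
            sigma[i, j] = np.linalg.eigvals(L).real.max()
    return sigma

# e.g. sigma = growth_rate_map(linear_operator, Re=80,
#                              kx_vals=np.linspace(0, 0.6, 61),
#                              kz_vals=np.linspace(0, 0.6, 61))
```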
The model provides a powerful means to explore the mechanisms at work in the formation of bands and in the dynamics of the system. Here we report on one such result: a bound on the selected band angle at onset. (See Supplemental Material [42] for the calculations supporting the following statements.) Because pattern wavelengths are much larger than the half-gap, i.e. \(k\ll\beta\), some terms in the stability equations are small and can be dropped without significantly altering eigenvalues (Fig. 3(c)). The approximate system's critical values are \(\tilde{Re}_{c}=87.3\), \(\tilde{\lambda}_{c}=22.4\) and \(\tilde{\theta}_{c}=22.0^{\circ}\). The conditions for a critical zero eigenvalue can be written down and, while they are too complex to fully solve analytically, they can be used to show that the band tilt angle at onset must obey \(\tilde{\theta}_{c}<45^{\circ}\) (as highlighted in Fig. 3). Remarkably, this result only requires the stability of the uniform turbulent state to spatially uniform perturbations and is independent of the form of the closures. It is possible to improve the bound considerably by exploiting the specific closures used in the model.
In conclusion, we have obtained a quantitatively accurate model of planar shear turbulence from the Navier-Stokes equations, we have shown how it reproduces many known features of transitional turbulence, and we have provided new insights into the origins of oblique turbulent bands. Key to the model's success is its ability to capture the large-scale flows associated with turbulent structures. The model notably retains the non-locality of the Navier-Stokes equations via incompressibility of the large-scale flow. The turbulence closures are natural and simple.
The model opens several new avenues of research. As has been done for pipe flow [58; 59; 36], a detailed understanding of the transition scenario can be obtained using dynamical systems and bifurcation theory. This could bring new insights into how turbulence is triggered. The small set of physical mechanisms in the model can be exploited to understand how energy balances adjust and ultimately break down with decreasing \(Re\), and this could lead to physical insights into the selection of particular states [31; 9]. One could add new mechanisms to the model or modify the model to describe other flow configurations, in particular pressure-driven pipes and channels. Finally, the effect of turbulent fluctuations can be included via a noise term, and the model can be used to investigate rare events [60; 61; 62] and percolation transitions [27; 28; 22] in the planar setting.
###### Acknowledgements.
We wish to thank Sebastien Gome and Laurette Tuckerman for valuable discussions. This work was supported by a grant from the Simons Foundation (grant no. 662985).
Figure 3: Maximum growth rate for perturbations of the model equations linearized around the uniformly turbulent state (\(u_{ss},q_{ss}\)) (see text) at a Reynolds number (a) just above the onset of bistability, and (b) just below the critical Reynolds number where a positive growth rate appears. The solid black line represents the neutral stability curve. Panel (c) shows maximum growth rates of the simplified linearized equations under the long-wavelength assumption (\(k\ll\beta\)). |
2304.13543 | Robust decentralised proof-of-position algorithms for smart city
applications | We present a decentralised class of algorithms called Tree-Proof-of-Position
(T-PoP). T-PoP algorithms rely on the web of interconnected devices in a smart
city to establish how likely it is that an agent is in the position they claim
to be. T-PoP operates under adversarial assumptions, by which some agents are
incentivised to be dishonest. We present a theoretical formulation for T-PoP
and its security properties, and we validate this model through a large number
of Monte-Carlo simulations. We specifically focus on two instances of T-PoP and
analyse their security and reliability properties under a range of adversarial
conditions. Use-cases and applications are discussed towards the end of this
paper. | Aida Manzano Kharman, Pietro Ferraro, Anthony Quinn, Robert Shorten | 2023-03-31T21:28:47Z | http://arxiv.org/abs/2304.13543v1 | # Robust decentralised proof-of-position algorithms for smart city applications
###### Abstract
We present a decentralised class of algorithms called Tree-Proof-of-Position (T-PoP). T-PoP algorithms rely on the web of interconnected devices in a smart city to establish how likely it is that an agent is in the position they claim to be. T-PoP operates under adversarial assumptions, by which some agents are incentivised to be dishonest. We present a theoretical formulation for T-PoP and its security properties, and we validate this model through a large number of Monte-Carlo simulations. We specifically focus on two instances of T-PoP and analyse their security and reliability properties under a range of adversarial conditions. Use-cases and applications are discussed towards the end of this paper.
## I Introduction
A basic problem across a range of application areas is the need for decentralised agents to be able to certify their position in a trustworthy and verifiable manner. For example, in crowd-sourcing applications arising in the context of smart cities, the need for agents to certify their position in a trustworthy manner is essential; one such use-case arises when vehicle cameras are used to identify parking spot locations or vacant or available electric charge points [1]. Other examples of this nature are emerging in the context of Smart Mobility applications when vehicles need to prove their location to avail of certain services; for example, in the case of hybrid vehicles using their electric engine mode in a city to avoid an environmental charge (as in London); when making use of a fast or slow lane on a highway and paying the associated charge; or when infotainment services are offered to vehicles adopting certain positions.
Our objective in this paper is to propose a suite of algorithms whereby agents may certify their position collaboratively, but in a decentralised manner. Our algorithms are designed to be robust in the sense that they do not require the use of centralised infrastructure, and in the sense that they are designed to operate successfully in an adversarial environment (in the presence of agents that are interested in coercing the system for their own personal objectives). The need to be independent of a centralised authority is fundamental to our work, as such authorities may be compromised or subject to data and privacy leaks [2]. While our original motivation arises from automotive applications, the work presented here may find application in other disciplines, and may also help to encode basic elements of fairness, social justice and civil rights. More specifically, in an era characterised by fake news and deep fake technology, the ability to associate sensing information with a verifiable geographic position is not only essential in establishing the veracity of sensed information, but also in developing robust decision making analytics based on this data. Currently, across many such applications, sensed information is assumed more trustworthy if a number of people agree on it. In scenarios where we cannot verify ourselves what happened, we search for 'truth' by listening to our peers and believing what a majority claims [3]. So our research question becomes: how can we provide agents with the ability to claim that they are at a given place in time, without hinging the security of our protocol on the honesty of a centralised power? While we are not the first to attempt to address the aforementioned research question, upon exploring existing solutions, we found that none addressed the requirements of applications in smart city contexts. Namely, the solution must be truly decentralised, and it must be robust to attacks whilst preserving user privacy.
Our work is motivated by recent developments in distributed ledger technologies (DLT); in particular, in the design of distributed acyclic graph based distributed ledgers. However, while the design of such ledgers is concerned with architectures that can provide peer-to-peer trustworthy record keeping, we are interested in realising DAG-based algorithms that encode reliable position information.
### _Related Work_
Several papers have been published on the topic of proof-of-position; see for example [4, 5, 6, 7, 8]. Most of these papers are unsuitable for the type of applications that we are interested in due to unrealistic trust assumptions and _de facto_ centralisation in the systems that they propose. In the remainder of this section we give a snapshot of some of this prior work.
An early example of a decentralised proof-of-location scheme, termed APPLAUS, was presented in [9]. The APPLAUS scheme makes a number of valuable contributions; namely it looks to address collusion attacks using graph clustering and computing a 'betweenness' metric. In [10], nodes in the graph that are weakly connected are considered less trustworthy. They also present a weight function that decays with time, and compute the trustworthiness of a node by calculating a node's ratio of approvals to neighbours. These contributions serve as a starting point for the work presented here. However, in their work, users must register their public/private keys with a trusted Certificate Authority, thereby breaking an assumption of being truly decentralised.
A focal point of our work is that we do not assume a trusted centralised authority, and indeed we argue that introducing this assumption makes a system de-facto centralised and poses security and privacy risks. Another algorithm known as SHARP is introduced in [11]. Here, the authors present a private proximity test that does not require a user to reveal their actual location to a server, and furthermore, they present a secure handshake method wherein users do not need to have a pre-shared secret. A notable contribution in this work is that a witness1 may only extract the session key if they are indeed in the vicinity of the prover 2. The security metric in this work is to ensure that the location tags are unforgeable, thus implying that the protocol is robust towards location cheating. A weakness of the protocol is that a user in a given location can generate a valid proof and they could then relay this valid proof to a malicious agent that is not in the same location as them. Another algorithm known as Vouch+ is presented in [12]. This is another decentralised approach to prove location, with a focus on addressing high speed and platooning scenarios. The major disadvantage of the work presented is that its security relies on selecting a proof provider that is honest. This assumption, in our opinion, is too strong. We aim to develop a protocol wherein the prover could lie, and the system would still have a probabilistic guarantee of detecting this. As another example, SPARSE, the protocol presented in [4], does not allow the prover to pick their own witnesses, making collusion significantly harder. Furthermore, SPARSE does address necessary security concerns, and achieves integrity, unforgeability and, very importantly, non-transferability. However, similarly to [12], the prover is assumed to be a trusted entity which supposedly does not publish users' identity and data.
Footnote 1: An agent that will verify to seeing another agent wishing to prover their position.
Footnote 2: An agent that wishes to prove their position.
### _Contributions_
We present a generalised model for a class of decentralised, proof-of-position algorithms. We also present a mathematical model to describe the operation of this class of algorithm and to facilitate further analysis. Simulations are also presented that validate our mathematical model, and we present a framework for users to tailor the operating conditions of the algorithm to satisfy their security and reliability requirements. We also provide probabilistic guarantees of detecting dishonest provers and collusion attacks.
**Comment:** The algorithm can also be implemented in a privacy preserving manner given that T-PoP does not require the agent running the algorithm to actually reveal their true position, but rather a cryptographic commitment [13] to one's position suffices. Depending on the security requirements of the application, T-PoP users can pick a commitment scheme with varying binding and hiding, as long as the commitment scheme supports the computation of Euclidean distance between two points.
Finally, we do not constrain the freedom of adversarial agents to misbehave. We consider not only the possibility of them being dishonest about their own position, but also colluding to lie about other agents' position.
### _Structure of the paper_
Our paper is structured as follows: first we introduce the T-PoP protocol and explain its functioning in section II, next we present a theoretical model for the T-PoP class of algorithms in section III and finally we simulate T-PoP in a more realistic scenario in section IV, thus validating our theoretical model too.
## II Tree - Proof of Position protocol
We begin by providing a high level explanation of how the protocol operates. Subsequently, we will provide the necessary definitions for each stage of and explain them in a detailed manner. We assume that agents willing to participate in the protocol are situated in a two dimensional area \(T\subseteq\mathbb{R}^{2}\) (the protocol can be seamlessly extended to a three-dimensional space). Each agent \(a_{i}\) is characterised by their _true_ position \(s_{i}=(x_{i},y_{i})\in T\) and by their _claimed_ position \(\hat{s}_{i}=(\hat{x}_{i},\hat{y}_{i})\in T\) while the set of all agents is denoted by \(A\). Notice that it is possible that \(\hat{s}_{i}\neq s_{i}\) (in the event an agent is lying). An agent, \(a_{j}\), is (allegedly) \(a_{i}\)'s neighbour if \(||\hat{s}_{i}-\hat{s}_{j}||<r_{i}\), where \(r_{i}>0\) is each agent's range-of-sight. T-PoP is performed in three steps, as depicted in Figure 1:
* **Commit**: At the beginning of T-PoP, each agent, \(a_{i}\in A\), commits to their claimed position, \(\hat{s}_{i}\), and publishes \(\hat{s}_{i}\) on a distributed ledger (DL). This ensures that the agent's commitment3 cannot be changed later. Footnote 3: The only necessary requirement for our protocol is that the commitment is binding [13]. To ensure user privacy, we favour schemes that allow for the computation of the Euclidean distance between two points, which can be achieved by leveraging encryption schemes that are fully homomorphic. It is also necessary to achieve non-repudiation, which can be done through the use of digital signatures. Frequently used examples include [14] and [15]. This ensures an agent cannot later deny having claimed to be in a given position [16]. Finally, non-transferability is needed to ensure that if an honest prover generated a valid location proof through T-PoP, they cannot then transfer their honest proof to a malicious actor. A user's identity is unique upon being issued, and should this be in the form of a private key, we introduce the assumption that users do not share it.
* **Tree Construction**: Each agent, \(a_{i}\), then constructs a tree of depth \(d\in\mathbb{N}^{+}\), incorporating the committed positions of agents, called _witnesses_, at levels \(l\in\{0,\ldots,d\}\). A specific \(a_{i}\)--which we denote as \(g\)--is the root of the resulting tree. These \(g\in A\)-indexed trees are also committed to the DL as they are part of the proof-of-position protocol. For every _prover_, \(g\), the tree is constructed as follows:
* \(g\) is the root node at level 0.
* For each \(l\in\{1,...,d\}\), each node at level \(l-1\) will name \(w_{l}\)_witnesses_. A witness at level \(l\) is an agent, \(a_{j}\), that is a neighbour (see above) of a witness, \(a_{i}\in W_{l-1}\), at level \(l-1\) (note that if \(\hat{s}_{i}\neq s_{i}\) and \(a_{i}\) is lying about their position it is possible that \(a_{i}\) and \(a_{j}\) might not actually be _true_ neighbours). \(a_{i}\) is called the \(parent\) of witness \(a_{j}\). The set of all witnesses at level \(l\) is called \(W_{l}\), with \(|W_{l}|\equiv n_{l}\).
Note that once an agent has been named as a witness in the tree, it should not be named again by another agent. If this happens, the prover will be considered dishonest. In practice, the root node \(g\) names \(w_{1}\) witnesses who in turn would each name \(w_{2}\) witnesses and so on, until we reach depth \(d\). The number of witnesses per level, \(n_{l}\), can therefore be computed recursively: \[n_{l}=w_{l}n_{l-1},\;l=1,\ldots,d,\] (1) with \(n_{0}\equiv 1\). Figure 2 depicts the operation of this process.
* **Verification**: The agent wishing to prove their position runs the verification stage with the tree as an input, initialized with \(l=d\). 1. Each witness at level \(l\) states whether their parent at level \(l-1\) is their neighbour or not. If the answer is yes, and the witness has not yet been named in the tree, this witness becomes a confirmed level \(l\) witness. The total number of confirmed level \(l\) witnesses is denoted as \(M_{l}\leq n_{l}\), and the total number of witnesses that confirm parent \(b\) at any level, \(l\), is denoted by \(K_{b}\leq w_{l}\). It follows that \[M_{l}=\sum_{b\in W_{l}}K_{b}\leq n_{l}\] (2) 2. If \(K_{b}<t\cdot w_{l}\), \(t\in(0,1]\), parent \(b\) is eliminated from the tree. Here, \(t\) is a parameter of T-PoP, called the _threshold_, which is used to regulate the Security and Reliability properties of the algorithm, defined in Section III. 3. If \(M_{l}<t\cdot n_{l}\) then the algorithm interrupts and outputs that root \(g\) is lying about their position. Otherwise we move on to level \(l-1\) and we repeat this process. Note that any parent removed by the previous step will not be included in this next iteration of T-PoP.
T-PoP is therefore an algorithm depending on a set of parameters, \(\theta\equiv\{t,d,w_{1},...,w_{d}\}\). The influence of these parameters on the performance of the algorithm will be explored in Section IV, via two examples. The pseudo-code for the _Tree Construction_ and _Verification_ stages of the protocol can be found in Algorithms 1 and 2 respectively.
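Algorithms 1 and 2 themselves are not reproduced in this excerpt; the following Python sketch gives one possible reading of the two stages for a prover \(g\). Method names such as `claimed_neighbours` and `confirms`, and the handling of duplicate witnesses at construction time, are our own simplifications rather than the authors' implementation.

```python
# Illustrative sketch of Tree Construction and Verification (cf. Algorithms 1-2).
# Agent objects are assumed to expose `claimed_neighbours()` and `confirms(parent)`;
# these names and the duplicate handling are ours, not the authors' simulator.
def build_tree(g, witnesses):                    # witnesses = [w_1, ..., w_d]
    levels, named = [[(None, g)]], {g}
    for w_l in witnesses:
        level = []
        for _, parent in levels[-1]:
            picks = [a for a in parent.claimed_neighbours() if a not in named][:w_l]
            named.update(picks)
            level.extend((parent, a) for a in picks)
        levels.append(level)
    return levels                                # level l holds (parent, witness) pairs

def verify(levels, witnesses, t):
    g = levels[0][0][1]                          # the prover (root of the tree)
    removed = set()
    for l in range(len(levels) - 1, 0, -1):      # from depth d up to level 1
        K, n_l = {}, 0
        for parent, wit in levels[l]:
            if wit in removed:                   # eliminated nodes are skipped
                continue
            n_l += 1
            K.setdefault(parent, 0)
            if wit.confirms(parent):
                K[parent] += 1                   # K_b: confirmed witnesses of parent b
        if sum(K.values()) < t * n_l:            # M_l < t * n_l: prover judged dishonest
            return False
        w_l = witnesses[l - 1]
        removed.update(b for b, k in K.items() if k < t * w_l)  # eliminate parent b
    return g not in removed                      # otherwise the claim is accepted
```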
**Example:** Consider the T-PoP example in Figure 3, in which \(\theta=\{t=0.5,d=2,w_{1}=2,w_{2}=2\}\), and so \(n_{1}=2\) and \(n_{2}=4\) (1). Solid arrows mean that a witness approves their parent and dotted lines mean that a witness does not approve their parent. Agents \(a_{5}\) and \(a_{6}\) are dishonest agents, so that their committed positions, \(\hat{s}_{5}\) and \(\hat{s}_{6}\), are different from their true positions. However, agent \(a_{2}\) does not know this: it saw those cars next to it and picked \(a_{5}\) and \(a_{6}\) as witnesses. So, \(a_{5}\) and \(a_{6}\) do not confirm that \(a_{2}\) is a neighbour of theirs, whereas \(a_{3}\) and \(a_{4}\) confirm that \(a_{1}\) is a neighbour of theirs. In line with point 2 of _Verification_ (above), agent \(a_{1}\) has enough confirmed witnesses (\(K_{a_{1}}=2\geq t\times w_{2}=0.5\times 2\)) and stays in the tree, while agent \(a_{2}\) does not have enough confirmed witnesses (\(K_{a_{2}}=0<0.5\times 2\)), and so \(a_{2}\) is removed from the tree. However, since the total number of confirmed witnesses at level 2 is \(M_{2}=2\geq t\times n_{2}=0.5\times 4\), T-PoP does not stop for \(g\) (_Verification_, point 3), and we move to level 1. At level 1, \(a_{2}\) has been removed but \(a_{1}\) confirms that \(g\) is its neighbour. As per points 2 and 3 of _Verification_, the final output of T-PoP is that \(g\) is _truthful_ about their position. The threshold \(t\) is critical in determining the output of T-PoP. For instance, if \(t=1\), then \(M_{2}=2<t\times n_{2}=1\times 4=4\), causing T-PoP to stop at point 3 of _Verification_, and returning an output of _untruthful_ for \(g\).
### _Possible Adversarial Behaviours_
In order to analyse the properties of T-PoP, we introduce two qualities that each agent, \(a_{i}\in A\), will exhibit:
**Definition 1** (Honest and Dishonest agents).: _Every \(a_{i}\in A\) is either honest or dishonest. The set of honest agents is denoted by \(H\subseteq A\), and the set of dishonest agents is denoted by \(\overline{H}\). A dishonest agent will always commit a position \(\hat{s}_{i}\neq s_{i}\). An honest agent, on the other hand, will always commit a position \(\hat{s}_{i}=s_{i}\)._
**Definition 2** (Coerced and Non-Coerced Agents).: _Every \(a_{i}\in A\) is either coerced or non-coerced. The set of coerced agents is denoted by \(C\subseteq A\), and the set of non-coerced agents by \(\overline{C}\). A coerced agent will claim to see agents that are not actually in its vicinity, if the latter are dishonest._
Fig. 1: High-level Overview of the T-PoP protocol
Fig. 2: Tree building examples. Agent \(a_{i}\) commits their alleged position \(\hat{s}_{i}\) to a distributed ledger. The panel on the top right shows the construction of a tree for \(d=1\) and \(w_{1}=4\), while the panel on the bottom right shows the construction of a tree for \(d=2,w_{1}=2,w_{2}=2\).
\(a_{i}\) will interact with its neighbours in different ways--as defined next--depending on which of the four possible states it falls into with respect to the two 2-state classes above.
**Definition 3** (Neighbour-adding logic).: _Every agent, \(a_{i}\in A\), adds neighbours, \(a_{j}\), according to the following logic:_
* _If_ \(a_{i}\in\overline{H}\)_, it can add_ \(a_{j}\) _as a neighbour if_ \(a_{j}\)_'s position, is within the range-of-sight_ \(r_{i}\)_, of_ \(a_{i}\)_'s fake position,_ \(\hat{s}_{i}\neq s_{i}\)_. This implies that_ \(a_{i}\) _checks who is in the_ \(r_{i}\)_-neighbour of the fake position that they committed._
* _If_ \(a_{i}\in H\)_, it can add_ \(a_{j}\) _as a neighbour if_ \(a_{j}\)_'s committed position is within the range-of-sight,_ \(r_{i}\)_, of_ \(a_{i}\)_'s true position,_ \(s_{i}\)_._
* _If_ \(a_{i}\in\overline{C}\)_, it can only add_ \(a_{j}\)_'s true position,_ \(s_{j}\)_, if this is within_ \(a_{i}\)_'s range-of-sight,_ \(r_{i}\)_._
* _If_ \(a_{i}\in C\)_, it can add_ \(a_{j}\)_'s true position,_ \(s_{j}\)_, if_ \(a_{j}\) _is honest, and its fake position,_ \(\hat{s}_{j}\)_, if_ \(a_{j}\) _is dishonest._
## III Theoretical Analysis
The stochastic nature of T-PoP is modelled via the probabilistic graphical model in Figure 4, for the case where \(d=2,w_{1}=2,w_{2}=2\). We assume that the Honesty and Coercion states of each agent are independently and identically distributed (iid) Bernoulli trials. Formally, for each agent, we define two independent random variables, \(h\sim\mathcal{B}(p_{h})\) and \(c\sim\mathcal{B}(p_{c})\), where \(p_{h}\in[0,1]\) and \(p_{c}\in[0,1]\) are the probabilities of any agent being honest and coerced, respectively (and it follows that \(1-p_{h}\) and \(1-p_{c}\) are the probabilities of an agent being respectively dishonest and non-coerced). Depending on the outcome of these trials for a witness at level \(l\), it will then deterministically confirm that the witness at level \(l-1\), which named them, is its neighbour or not (note that agents might be lying about whether another agent is their _true_ neighbour or not). The outcome of this interaction has been described in definition 3, and is summarized in the truth table (Table I). If agent, \(a_{i}\), verifies agent \(a_{j}\)'s position, the outcome is 1, and 0 otherwise. In this model, we assume that the density of agents in \(T\) is very high. This means that while provers construct their tree following Algorithm 1, they are always able to find \(w_{l}\) witnesses at each level and that each witness is always unique. While this assumption might sound unrealistic, as in many cases agents might be alone and not have enough witnesses around them, we believe that studying the outcome of the model in this high-density scenario provides a good assessment of the qualities of T-PoP. Indeed, we argue that if an agent is honest but does not have sufficient witnesses, it is fair to consider them less trustworthy. Once the tree has
Fig. 3: Example of T-PoP algorithm with \(d=2,w_{1}=2,w_{2}=2\).
been created, the _Verification_ step can be used to provide the outcome of the algorithm, which can be either 0 (if the algorithm deems the prover dishonest) or 1 (if the algorithm deems the prover honest). Given a prover, \(g\) (the root of the tree), we define a random variable, \(C(g)\in\{0,1\}\), whose outcome depends on the ensemble of iid random variables, \(h,c\), in its constructed tree, and on T-PoP parameters, \(\theta\equiv\{t,d,w_{1},...,w_{d}\}\). In order to analyse T-PoP's performance, we consider two metrics: reliability and security.
**Definition 4**.: Security, \(S,\;\) _is a conditional probability quantifying the ability of the algorithm to detect malicious agents. Specifically, it is the true-negative conditional probability, which, under stationarity assumptions, is independent of \(i\in\{1,\ldots,|A|\}\):_
\[S\equiv\Pr[C(g)=0|a_{i}\in\overline{H}]\]
**Definition 5**.: Reliability, \(R,\) _is a conditional probability quantifying the ability for the algorithm to detect honest agents. Specifically, it is the true-positive conditional probability. Once again, under stationarity assumptions:_
\[R\equiv\Pr[C(g)=1|a_{i}\in H]\]
In Figure 5, we display empirically evaluated \(R\) and \(S\) for two sets of parameters, respectively \(\theta_{1}=\{t=1,d=1,w_{1}=6\}\) and \(\theta_{2}=\{t=1,d=2,w_{1}=2,w_{2}=2\}\), varying \(p_{h}\) and \(p_{c}\) in their ranges, \([0,1]\), with steps of 0.02. To emphasize the functional dependence of these probabilistic performance metrics on the honesty and coercion probabilities of the iid agents, we denote these metrics by \(R(p_{h},p_{c})\) and \(S(p_{h},p_{c})\). The values for \(R(p_{h},p_{c})\) and \(S(p_{h},p_{c})\) are obtained through empirical evaluation via extensive Monte Carlo simulations (we simulated 5000 trees for each choice of parameters) of the graphical model.
## IV Simulations
In this section we present an agent-based simulator, coded in Python, to replicate a more realistic scenario for T-PoP and to validate the graphical theoretical model that we presented in the previous Section. Each agent has a number of varying attributes such as their range-of-sight, position, velocity, unique identifier and whether they are honest or dishonest and coerced or not. Depending on the latter variables, each agent will commit to their true position or a fake one, and will add agents to their set of neighbours as outlined in definition 3. We then create an environment with a fixed density of agents in it, and place these randomly and uniformly across the environment. We allow them to move according to their velocity vector, within the bounds of the environment. Each time the agents move, all agents construct a new set of neighbours and discard the previous one. Next, each agent wishing to claim their position runs T-PoP; namely, they run the _Tree Construction_ and the _Verification_ algorithms. Our simulator can be found in this GitHub Repository. Preliminary simulations showed that the density of agents in the environment vastly affected the performance of T-PoP. This was especially noticeable when the average number of agents per range-of-sight in the environment was lower than the total number of nodes of the tree being constructed, which greatly increased the number of False Negatives, thus making T-PoP unsuitable for low density environments. Other key variables are the threshold, depth and number of witnesses used. A greater threshold increases security, but also reduces reliability. Increasing the number of witnesses increased both security and reliability; however, this may not be a suitable measure for sparser scenarios, or cases where agents are moving at high speed, and may cause a communication overhead. We advocate for the users to select the appropriate threshold, depth and number of witnesses based on the individual needs of their own application. Lowering the threshold can lower security, but provides more flexibility in the system. The user can then select an appropriate number of witnesses based on the expected density of their network, and use the depth parameter to find an appropriate trade-off between security and reliability, and communication overhead and flexibility.
### _Preliminary results_
Our objective in this section is twofold. On the one hand, we want to show some preliminary results on the performance of T-PoP for a given choice of operating conditions. On the other hand, we are interested in validating
Fig. 4: Probability Model of T-PoP with parameters \(d=2,w_{1}=2,w_{2}=2\). The red lines indicate that those variables influence the output of a specific node.
| \(a_{i}\backslash a_{j}\) | \(h\) and \(\overline{c}\) | \(h\) and \(c\) | \(\overline{h}\) and \(c\) | \(\overline{h}\) and \(\overline{c}\) |
| --- | --- | --- | --- | --- |
| \(h\) and \(\overline{c}\) | 1 | 1 | 1 | 1 |
| \(h\) and \(c\) | 1 | 1 | 0 | 0 |
| \(\overline{h}\) and \(c\) | 1 | 0 | 1 | 0 |
| \(\overline{h}\) and \(\overline{c}\) | 1 | 0 | 0 | 1 |

TABLE I: A truth table showing confirmation (1) or rejection (0) of a parent’s (\(a_{i}\)) position by a witness (\(a_{j}\)), depending on the honesty (\(h\)) and coercion (\(c\)) states of each agent. Notice that the relationship between \(a_{i}\) and \(a_{j}\) is symmetrical.
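Table I can be condensed into a single predicate: a confirmation occurs exactly when at least one of the two agents is honest and non-coerced, or when both agents are in the same state. A minimal sketch (the boolean encoding is ours, not from the paper):

```python
# Minimal encoding of Table I: does witness a_j confirm parent a_i?
# Each state is a pair (honest, coerced); the boolean encoding is ours.
def confirms(state_i, state_j):
    honest_unc = lambda s: s[0] and not s[1]          # honest and non-coerced
    return honest_unc(state_i) or honest_unc(state_j) or state_i == state_j

states = [(True, False), (True, True), (False, True), (False, False)]
table = [[int(confirms(si, sj)) for sj in states] for si in states]
assert table == [[1, 1, 1, 1], [1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1]]  # Table I
```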
Figure 5: T-PoP performance for the **graphical probability model** (Figure 4). The panels in the left column show reliability, \(R\), while the panels in the right column show security, \(S\). The first row is associated with model parameters, \(\theta_{1}\), while the second row is associated with model parameters, \(\theta_{2}\).
Figure 6: T-PoP performance for **agent-based model**. The panels on the left show reliability \(R\), while the panels on the right show security, \(S\). The first row is associated with model parameters \(\theta_{1}\), the second row is associated with model parameters \(\theta_{2}\). Notice the close similarity to Figure 5.
the results from the probabilistic graphical model presented in the previous section, with a view to creating an analytical framework for analysis of the T-PoP class of algorithms. This gives us confidence that the results obtained for simple model parameter settings (e.g. \(d\) small) still hold in more realistic scenarios.
The simulations have been set up as follows: we considered each possible combination of \(p_{h}\) and \(p_{c}\) in the ranges \([0,1]\), with steps of 0.02. For each combination we ran 50 Monte Carlo simulations and we computed empirical estimates of the values of \(R(p_{h},p_{c})\) and \(S(p_{h},p_{c})\). Simulations are set up in such a way that on average each agent has 50 neighbours in their range of sight \(r_{i}\). While this number might appear very high, we wanted to make sure that the results obtained were comparable to the ones obtained with the probabilistic graph model. Moreover, real-life situations with high density of pedestrians (e.g., the underground during peak hours) would map well into this scenario. We ran these simulations for the choice of parameters \(\theta_{1}\) and \(\theta_{2}\).
The results are shown in Figure 6. While T-PoP with \(\theta_{1}\) yields better performance overall (as both \(R(p_{h},p_{c})\) and \(S(p_{h},p_{c})\) are higher for each choice of \(p_{h}\) and \(p_{c}\)) the second set of simulations shows that decreasing the number of witnesses by a third and increasing the depth level by 1 allows us to achieve similar results. This is useful because--while the total number of nodes in each prover's tree is the same for both scenarios--a tree of depth 2 with 2 witnesses per parent places a smaller communication overhead on the prover, because it only needs to name 2 witnesses, as opposed to 6. In this way, the load is shared among the prover and the witnesses.
Overall, in high density scenarios, the results of both simulations show that--if \(p_{h}>0.9\) and \(p_{c}<0.2\)--T-PoP is capable of achieving \(S>0.85\) and \(R>0.9\) for \(\theta_{1}\), and \(S>0.7\) and \(R>0.9\) for \(\theta_{2}\).
For lower proportions of honest agents and higher proportions of coerced agents (i.e. in the presence of many colluding, dishonest and coerced agents), the performance of T-PoP degrades. This is to be expected in a decentralised system such as T-PoP, since it is virtually impossible to distinguish between a group of honest agents verifying each other and a group of dishonest and coerced agents collaborating to verify each other in a fraudulent manner. Accordingly, we can observe across all figures that--even when the percentage of honest agents is low--the security remains high at the expense of reliability. We observe that--whilst, indeed, T-PoP can detect true negatives (i.e. be secure) in highly (and perhaps even unrealistically highly) adversarial environments--the drawback is that it penalises honest agents too harshly (i.e. is unreliable). This is a consequence of the collaborative nature of the algorithm. When the number of honest agents in the system is low (i.e. \(p_{h}\downarrow 0\)), they will--with high probability (w.h.p.)--be misclassified as dishonest because they will select dishonest witnesses w.h.p.
### _Validation of the graphical model (Figure 4)_
For validation of the graphical probability model, we make use of the Jensen-Shannon Divergence (JSD) [17] to
Fig. 7: Jensen-Shannon divergence (JSD) between \(R_{s}\) and \(R_{m}\) (left column) and between \(S_{s}\) and \(S_{m}\) (right column) for \(\theta_{1}\) (top row) and \(\theta_{2}\) (bottom row).
quantify the distance between the probability distributions obtained through the agent-based model (i.e. the T-PoP implementation) and the graphical model. In what follows, we refer to the values of \(R\) and \(S\) obtained from the simulated agent-based model as \(R_{s}\) and \(S_{s}\), and the ones obtained from the graphical model as \(R_{m}\) and \(S_{m}\). We compute two JSD-based metrics: (i) the \((p_{h},p_{c})\)-indexed (i.e. pointwise) JSD map between \(R_{m}\) and \(R_{s}\), and between \(S_{m}\) and \(S_{s}\), respectively; and (ii) the global JSD between the normalized \(R_{m}\) and \(R_{s}\) maps, and the normalized \(S_{m}\) and \(S_{s}\) maps, respectively. By "normalized", we mean that each of these positive maps is divided by its element sum, yielding a probability mass function (pmf). In case (ii), we can therefore condense into a single number the difference between the performance figures (\(R\) and \(S\), respectively) for the simulated T-PoP system and its graphical model (Figure 4).
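A sketch of the two comparisons follows; the pointwise convention of comparing the two-point distributions \((R_{s},1-R_{s})\) and \((R_{m},1-R_{m})\) at each \((p_{h},p_{c})\) is our assumption, while the global comparison of the normalised maps follows the text.

```python
# Sketch of the two JSD-based metrics; the pointwise convention (comparing the
# Bernoulli pmfs (R, 1-R)) is our assumption, the global one follows the text.
import numpy as np

def jsd(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps; p /= p.sum()
    q = np.asarray(q, float) + eps; q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def pointwise_jsd(A_s, A_m):                      # (p_h, p_c)-indexed map
    return np.array([[jsd([s, 1 - s], [m, 1 - m]) for s, m in zip(rs, rm)]
                     for rs, rm in zip(A_s, A_m)])

def global_jsd(A_s, A_m):                         # single number per metric
    return jsd(np.ravel(A_s), np.ravel(A_m))      # maps normalised to pmfs inside jsd
```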
The results for the point-wise evaluation ((i) above) of the JSD are shown in Figure 7, while the global evaluation ((ii) above) is summarised in Table II. Note that \(0\leq\) JSD \(\leq\) 1, with lower values achieved when probabilities are close in value (i.e. in cases of good agreement between the behaviour of the simulated system and the graphical model). It is clear that--at least for high density scenarios--the behaviour of the graphical model closely mirrors that of the implemented T-PoP system. Nevertheless, the pointwise JSD results reveal significant discrepancies in the security (\(S\)) metric when \(p_{h}\uparrow 1\) (i.e. for high proportions of honest users).
## V Conclusion
We have presented a proof-of-position class of algorithms that are fully decentralised. They can be run by any agent participating in the network and they do not assume trust in a central authority, nor do they rely on physical infrastructure. We also considered a range of attack vectors by allowing agents not only to lie about their own position, but also about others' positions. Our algorithm can also be computed in a privacy-preserving manner, as there is no need for the true location of an agent to be revealed to the network. We also developed a theoretical graphical model for this class of proof-of-position algorithms, and statistically validated the model via comparative analysis of their respective performances. In future work, we will use the theoretical model to predict the performance of T-PoP as a function of its operating conditions, \(\theta\). Specifically, we will be interested in characterising the effect of the depth (\(d\)), threshold (\(t\)) and number of witnesses (\(w_{l}\)) on the security and reliability of the T-PoP class of algorithms. Developing such a framework can allow users to select the optimal operating conditions of the algorithm to meet their needs, based on their expected density, fault tolerance and proportion of honest and non-coerced agents in their system. The theoretical model will also allow performance guarantees to be deduced for T-PoP. Finally, we intend to explore the suitability of T-PoP for specific use-cases in the presence of more complex adversarial scenarios.
**Acknowledgments:** The authors would like to thank the IOTA Foundation for funding this research.
|
2309.12851 | A search for Elves in Mini-EUSO data using CNN-based one-class
classifier | Mini-EUSO is a small, near-UV telescope observing the Earth and its
atmosphere from the International Space Station. The time resolution of 2.5
microseconds and the instantaneous ground coverage of about $320\times 320$
km$^2$ allows it to detect some Transient Luminous Events, including Elves.
Elves, with their almost circular shape and a radius expanding in time form
cone-like structures in space-time, which are usually easy to be recognised by
the eye, but not simple to filter out from the myriad of other events, many of
them not yet categorised. In this work, we present a fast and efficient
approach for detecting Elves in the data using a 3D CNN-based one-class
classifier. | Lech Wiktor Piotrowski | 2023-09-22T13:23:39Z | http://arxiv.org/abs/2309.12851v1 | # A search for Elves in Mini-EUSO data using CNN-based one-class classifier
###### Abstract:
Mini-EUSO is a small, near-UV telescope observing the Earth and its atmosphere from the International Space Station. The time resolution of 2.5 microseconds and the instantaneous ground coverage of about \(320\times 320\) km\({}^{2}\) allow it to detect some Transient Luminous Events, including Elves. Elves, with their almost circular shape and a radius expanding in time, form cone-like structures in space-time, which are usually easy to recognise by eye, but not simple to filter out from the myriad of other events, many of them not yet categorised. In this work, we present a fast and efficient approach for detecting Elves in the data using a 3D CNN-based one-class classifier.
## 1 Introduction
Mini-EUSO [1] is a small orbital telescope, designed within the JEM-EUSO programme [2], observing the night-time Earth from the International Space Station (ISS) through a UV-transparent window inside the Zvezda module. It is composed of two 25 cm diameter Fresnel lenses focusing light on a Photon Detection Module (PDM) consisting of 36 multi-anode photomultipliers (MAPMTs), encompassing 2304 pixels. The field of view (FoV) is \(44^{\circ}\times 44^{\circ}\), with a single-pixel side covering roughly 6 km on the ground, and the whole PDM more than 300 km. The spectral acceptance spans between 290 and 430 nm, making Mini-EUSO mostly a UV telescope. The PDM data is gathered in 3 time resolutions. D1 data consists of packets of 128 frames, each with 2.5 \(\mu\)s exposure, stored upon receiving a fast-events trigger from the FPGA. D2 data packet consists of 128 frames, each being an average of 128 D1 frames, forming a 320 \(\mu\)s block. It is collected after receiving a separate, slow-events trigger. D3 data are untriggered, forming a continuous "movie" with a single frame being an average of \(128\times 128\) D1 frames, spanning 40.96 ms.
The PDM is a very sensitive instrument and thus can be damaged by excessive light. Thus, two main levels of protection were introduced. The first one switches a part (an "EC-unit" composed of 4 MAPMTs) of the detector to lower gain if a few very bright pixels are detected in it. This happens quite often when going over the cities, etc. The second one is an analogue over-current protection, sensitive to the summed signal in all the EC-unit pixels. The telescope is also equipped with a small near infra-red camera and a visible light camera set to take photos with 5 s exposure time, photodiodes for detecting the night/day transitions, and a small silicon photomultiplier.
Three time resolutions of Mini-EUSO allow it to observe a wide range of phenomena. It can create a near-UV map of Earth, observe vast numbers of meteors, and detect very fast atmospheric events, including Transient Luminous Events such as ELVES.
## 2 ELVESs
Transient Luminous Events (TLEs) are electrical discharges in the upper atmosphere, usually associated with thunderstorms. Their existence was predicted in the 1920s by C. T. R. Wilson [3], and the first observation was performed by R. C. Franz in 1989 [4]. One of the types of TLEs is the ELVES (Emission of Light and Very low frequency perturbations due to Electromagnetic pulse Sources), discovered by the Mesoscale Lightning Experiment in 1990. They appear as thick rings of light - a horizontal doughnut cross-section - propagating through the ionosphere at an altitude of about 100 km, and are caused by an electromagnetic pulse from an underlying thunderstorm. They can be multi-ringed, mainly due to reflection of the pulse from the ground, and the diameter reaches a few hundred kilometres. The ring expansion speeds lie around the speed of light, so the whole phenomenon typically lasts about one millisecond.
Mini-EUSO was not designed for ELVES observations; however, its \(\sim 200\times 200\) km FoV at the altitude of the ionosphere and the trigger at the D1 2.5 \(\mu\)s frame length allow it to register these phenomena in a slightly different way than dedicated experiments. The telescope usually sees only part of the ring at the later stages of expansion, because at the beginning either the ring itself or the accompanying light emission from the centre are too bright for our telescope. The observed ring thickness is of the order of a few pixels, changing during the propagation, along with the changing
intensity. In 3D space consisting of 2 spatial x, y dimensions and 1 temporal t dimension, an ELVES appears as a part of an approximate, thick cone surface, or a multi-cone surface with a common top in the case of a multi-ringed ELVES. However, the propagation of the rings is often followed by a significant brightening of the central source, which causes the light protection to be switched on for the affected parts of the detector, and thus the rings are followed by, or propagate through, areas of lower sensitivity, making the observational picture more complicated.
There are a few common phenomena that may resemble ELVESs in our telescope, due to its design. First, any significantly brightening, non-diffuse source may look like an expanding doughnut in Mini-EUSO. This is due to saturation - an extendable dead time that causes a reduction and then a complete stop of photon counting when photons arrive too close together in time. The dependence of the number of counts on light intensity first grows, reaching its maximum between 100 and 200 counts, then drops, reaching 0. Thus, any sufficiently bright source is visible as a doughnut, with a 0-counts centre, then counts sharply growing and slowly dropping with the distance from the centre, due to first exiting from saturation and then going through the arms of the point spread function (PSF). If the intensity of the central source increases, the "dead" central area and the visible "doughnut" radius grow in time, similarly to an ELVES. The difference lies in the profile of the doughnut. The external slope of an ELVES is steeper, as it is caused primarily by the physical boundaries of the light emission, not the PSF. The internal slope of the ELVES is not as sharp, as the drop in light intensity is not caused by saturation. At least part of the ELVES interior should be at roughly a background level. Still, as mentioned before, an ELVES is often followed by the brightening of the central source and saturation, making the distinction more complicated.
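The non-monotonic counts-versus-intensity curve can be illustrated with the standard paralysable (extendable) dead-time relation \(m=n\,e^{-n\tau}\); the dead time below is a made-up value chosen only so that the peak falls in the 100-200 counts-per-frame range quoted above, not a Mini-EUSO calibration figure.

```python
# Illustration of the saturation effect: with an extendable (paralysable) dead
# time tau, the registered rate m = n * exp(-n * tau) rises with the true photon
# rate n, peaks at n = 1/tau, then falls towards 0, hence the doughnut shape.
# tau is a made-up value giving a peak of ~150 counts per 2.5 us D1 frame.
import numpy as np

tau, frame = 6e-9, 2.5e-6                    # [s]; illustrative, not calibration
n = np.logspace(6, 10, 400)                  # true photon rate [1/s]
counts = n * np.exp(-n * tau) * frame        # registered counts per D1 frame
print(f"peak ~ {counts.max():.0f} counts/frame at n ~ {n[counts.argmax()]:.1e} 1/s")
```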
The nature of the optics of the experiment results in ring-like structures in the PSF far from its centre. For the observable central source intensities, these are too dim to be observed, but for a brightening source outside the FoV they are visible as brightening, thick circles, often causing an illusion of movement. Also, the light protection switching in Mini-EUSO may result in some artificial structures slightly resembling an ELVES. It is important to note that these events are caused by excessive light, which is common during thunderstorms - conditions required to register an ELVES.
## 3 The machine-learning based ELVESs searching algorithm
The simple pixel-over-threshold search method is not sufficient for ELVES identification, due to the diffuse nature of the rings and the fact that the hardware may trigger on something other than the rings themselves. In the ideal case, searching for a cone-like structure in x, y, t data packets is simple and gives much better results. Initially, we attempted identification with a series of negative cuts and a fit of a conical surface to the remaining cases. This method was, however, mainly troubled by outliers - difficulties in selecting pixels belonging to the ELVES, especially at the late stages or for weak ELVESs. More stable results were achieved when we analysed each frame separately in circular Hough space and then estimated ring centres and radii in linear Hough space. However, deviations from conicality, especially for very weak events, reduce identification efficiency. The efficiency-to-background ratio is further decreased by the existence of ELVES-resembling background that may be seen independently but may also accompany an ELVES. Still, these factors do not have a significant influence on manual identification. This led us
to the creation of a Machine Learning algorithm, in the hope that it can spot visual cues that are easy for humans to recognise, but difficult to capture in a conventional data analysis method.
### The neural network architecture
The problem of ELVESs identification in the Mini-EUSO data is a problem of so-called one-class classification. A packet of the data needs to be classified as containing an ELVES or not. While formally this would belong to the family of binary classifications, it is not in the sense most commonly used in Machine Learning based on pattern recognition. This problem differs significantly from categorising events into several (in this case two) defined classes, such as tutorial examples of recognising which of the ten digits a handwritten character represents, or whether the analysed photo is a photo of a cat or a dog. In multi-class categorisation the neural network usually learns the shared characteristics of each class. Then, oversimplifying, the trained network estimates how well each class's characteristics describe the given sample. In the ELVESs identification case, only one class is well-defined in the sense that it shares a common characteristic - the class representing the ELVESs. The events belonging to the second class - "not-ELVESs" - do not have to have anything in common apart from not being an ELVES. Thus, in a one-class classifier we need to force the network to learn the common characteristic of one class, but prevent it from attempting to find common features of the samples of the second class.
The implemented idea is based on [5]. We use a simple neural network based on 6 3D convolutional layers coded in PyTorch [6], with details shown in Fig. 1. The network accepts a standard D1 Mini-EUSO packet of \(48\times 48\times 128\) values, where the first two dimensions denote x and y pixels, and the third the number of frames. The network transforms the packet to 16 values, which can be interpreted as coordinates of the packet in a 16-dimensional space. In our case, if the Euclidean distance \(D\) of the 16-dimensional point from 0 is lower than \(m=2\), the packet is classified as an ELVES, and if higher it is classified as non-ELVES.
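The exact layer parameters are given in Fig. 1 and are not reproduced here; the sketch below only mirrors the overall structure (six 3D convolutional layers mapping a \(48\times 48\times 128\) packet to a 16-dimensional point), with channel widths, kernel sizes and pooling chosen as placeholders.

```python
# Structural sketch of the encoder: six Conv3d layers followed by a projection to
# a 16-dimensional embedding. Channel widths, kernels and pooling are placeholders;
# the real hyperparameters are those of Fig. 1.
import torch
import torch.nn as nn

class ElvesEncoder(nn.Module):
    def __init__(self, out_dim=16):
        super().__init__()
        chans = [1, 8, 16, 32, 32, 64, 64]                    # hypothetical widths
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv3d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU()]
        self.conv = nn.Sequential(*blocks)                     # six 3D conv layers
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.head = nn.Linear(chans[-1], out_dim)              # 16-dim output

    def forward(self, x):                                      # x: (B, 1, 48, 48, 128)
        return self.head(self.pool(self.conv(x)).flatten(1))   # point in R^16

# classification rule from the text: an ELVES iff the distance D = ||z|| < m = 2
is_elves = lambda z, m=2.0: z.norm(dim=1) < m
```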
The loss function used for training:
\[\frac{1}{2}(Y\cdot D^{2}+(1-Y)\cdot(\max(0,m-D))^{2})\]
Figure 1: The architecture of the network presented in this paper.
where \(Y=1\) for ELVESs and \(Y=0\) for non-ELVESs, approaches 0 as the position of an ELVES approaches 0 in the 16-dimensional space, while for non-ELVESs it is 0 if their distance from 0 is larger than 2, and grows as a non-ELVES approaches 0 within the sphere of radius 2.
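The loss translates directly into PyTorch (the batch mean is our choice; the margin \(m=2\) follows the text):

```python
# Direct transcription of the training loss above: Y = 1 for ELVESs, Y = 0 for
# non-ELVESs, D = ||z|| is the distance of the 16-dim output from the origin.
import torch

def one_class_loss(z, y, m=2.0):
    d = z.norm(dim=1)                                          # D for each sample
    per_sample = 0.5 * (y * d**2 + (1 - y) * torch.clamp(m - d, min=0)**2)
    return per_sample.mean()                                   # batch mean (our choice)
```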
### The training and validation set
The number of ELVESs in Mini-EUSO data is very limited. Initially, we were operating on a set of 19 events detected with other methods, where 11 were used in the training process and 8 in external validation. To increase the number of events, the set was augmented by spatial flips and 90\({}^{\circ}\) rotations, and finally by introducing random fluctuations of pixel values to the events. The false-events set was generated from packets close in time, but not belonging to an ELVES, as these packets were well inspected. Later, packets with specific types of events frequently misclassified as ELVESs were added to the set. The process resulted in roughly 1200 events of both classes, where 70% was used for training and 30% for validation. These are not big numbers in the world of Machine Learning, but one has to keep in mind that they were employed for a one-class classifier recognising a rather clear pattern. The remaining 8 ELVESs were augmented in the same way and accompanied by false events not included in the internal training/validation set, and used for external validation to assess the neural network model's generalisation capabilities.
All the data were flat-fielded1, then "Gaussianised" with Anscombe transformation. Finally, extreme values were clipped, and each pixel had its mean value subtracted and was divided by its standard deviation.
Footnote 1: Flat-fielding is a process of uniformising the detector’s response by dividing the data by calibration data obtained with a uniformly illuminated instrument.
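A sketch of the preprocessing chain just described; the flat-field map, the clipping limits and the scope of the per-pixel statistics are not specified in the text, so they appear as placeholders.

```python
# Sketch of the preprocessing chain: flat-fielding, Anscombe transform
# ("Gaussianisation"), clipping of extreme values, per-pixel standardisation.
# Clip limits and the scope of the per-pixel statistics are placeholders.
import numpy as np

def preprocess(packets, flat_field, clip=(0.0, 30.0)):
    """packets: (N, 48, 48, 128) raw counts; flat_field: (48, 48) calibration map."""
    x = packets / flat_field[None, :, :, None]        # flat-fielding
    x = 2.0 * np.sqrt(x + 3.0 / 8.0)                  # Anscombe transform
    x = np.clip(x, *clip)                             # clip extreme values
    mean = x.mean(axis=(0, 3), keepdims=True)         # per-pixel statistics
    std = x.std(axis=(0, 3), keepdims=True) + 1e-8
    return (x - mean) / std                           # per-pixel standardisation
```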
## 4 Results
The training during the development of the network architecture was tested with three batch sizes: 8, 16 and 32 packets, the last one being close to the maximum that was supported on the used NVIDIA GeForce RTX 2060 GPU with 6 GB of RAM. However, it was quickly discovered that the
Figure 2: _Left:_ The loss of the training set (red) and internal validation set (black). _Right:_ The accuracy of the internal validation set.
best results are almost always obtained with the batch size 8. The drawback is high instability of the loss and accuracy, as can be seen in Fig. 2, depicting training progress in our final design. The network usually reached close to 100% accuracy within the first 15 epochs of training, while the loss reduction was most significant within the first 50 epochs. The shown loss and accuracy curves for training and internal validation sets do not show obvious overfitting.
However, such a small initial data set cannot be trusted even when augmented. Therefore, after the training, we checked the efficiency of snapshots of the network for chosen epochs on data files of all the ELVESs found so far. With the further availability of data, this set has grown to 29 ELVESs spanning 35 packets. Depending on the network hyperparameters and the epoch of training, the best efficiencies varied between 80% and 95%, and the fraction of misidentified background packets between 0.5% and 2% out of 1554 real packets analysed in the external validation. While these results may seem mediocre, they are already better than our conventional algorithms in terms of efficiency and far better in terms of non-ELVESs classification, and provide some knowledge about the network's generalisation capability.
The knowledge that the model generalises at least to some degree was crucial for the final step, which was training the best-performing model on the full set of detected ELVESs. Given a very small initial data set, this has the potential to increase the network's efficiency in recognising ELVESs, but at the same time it prevents external validation, and we run a risk of unnoticed overfitting. Therefore, the training was performed for just one epoch. The final model was run on all the available Mini-EUSO data. It was able to properly identify all the ELVESs-containing packets and found 8 new ELVESs. At the same time, it misidentified 308 packets as ELVESs in the whole data set containing hundreds of thousands of packets.
## 5 Summary
The main purpose of this work was to create a machine learning based algorithm that would be more efficient in identifying ELVESs in the data than conventional algorithms prepared earlier. This task was completed successfully, with the model being 100% efficient on the pre-detected ELVESs, misidentifying less than 0.1% of packets and capable of detecting new ELVESs. Assuming that the final model's efficiency is not worse than the efficiency of a model trained on the limited data set, we expect the full efficiency to be at least 80% and very likely much better than that. Unfortunately this can remain only an educated guess, because a set of ELVESs detected with an alternative, similarly efficient algorithm does not exist, nor does a simulated data set.
Until we find that the algorithm is not able to identify some ELVESs identified with other methods, improving it is technically difficult. We could try to increase the spatial separation of the classes in their resulting 16 dimensions, but that most likely requires creating an alternative loss function. The understanding of the model's limits could be improved by analysing the efficiency versus some effective signal-to-noise ratio. However, even the standard signal-to-noise ratio is not trivial to estimate for ELVESs, which are diffuse rings of increasing radius, and even more difficult to modify, and it would be just a part of an effective signal-to-noise ratio, which must also include the light background conditions, the light-protection response of the detector, etc. Still, this kind of estimation would be useful also for other purposes and could help improve the model design and parameters, and we intend to prepare it in the future.
2309.13597 | On the Kolmogorov equation associated with Volterra equations and
Fractional Brownian Motion | We consider a Volterra convolution equation in $\mathbb{R}^d$ perturbed with
an additive fractional Brownian motion of Riemann-Liouville type with Hurst
parameter $H\in (0,1)$. We show that its solution solves a stochastic partial
differential equation (SPDE) in the Hilbert space of square-integrable
functions. Such an equation motivates our study of an unconventional class of
SPDEs requiring an original extension of the drift operator and its Fr\'echet
differentials. We prove that these SPDEs generate a Markov stochastic flow
which is twice Fr\'echet differentiable with respect to the initial data. This
stochastic flow is then employed to solve, in the classical sense of infinite
dimensional calculus, the path-dependent Kolmogorov equation corresponding to
the SPDEs. In particular, we associate a time-dependent infinitesimal generator
with the fractional Brownian motion. In the final section, we show some
obstructions in the analysis of the mild formulation of the Kolmogorov equation
for SPDEs driven by the same infinite dimensional noise. This problem, which is
relevant to the theory of regularization-by-noise, remains open for future
research. | Alessandro Bondi, Franco Flandoli | 2023-09-24T09:59:24Z | http://arxiv.org/abs/2309.13597v1 | # On the Kolmogorov equation associated with Volterra equations and Fractional Brownian Motion+
###### Abstract
We consider a Volterra convolution equation in \(\mathbb{R}^{d}\) perturbed with an additive fractional Brownian motion of Riemann-Liouville type with Hurst parameter \(H\in(0,1)\). We show that its solution solves a stochastic partial differential equation (SPDE) in the Hilbert space of square-integrable functions. Such an equation motivates our study of an unconventional class of SPDEs requiring an original extension of the drift operator and its Frechet differentials. We prove that these SPDEs generate a Markov stochastic flow which is twice Frechet differentiable with respect to the initial data. This stochastic flow is then employed to solve, in the classical sense of infinite dimensional calculus, the path-dependent Kolmogorov equation corresponding to the SPDEs. In particular, we associate a time-dependent infinitesimal generator with the fractional Brownian motion. In the final section, we show some obstructions in the analysis of the mild formulation of the Kolmogorov equation for SPDEs driven by the same infinite dimensional noise. This problem, which is relevant to the theory of regularization-by-noise, remains open for future research.
**Keywords:** path-dependent Kolmogorov equations; stochastic Volterra equations; stochastic partial differential equations; fractional Brownian motion
**MSC2020:** 35R15, 45D05, 60G22, 60H15
## 1 Introduction
Consider the stochastic differential equation (SDE) in \(\mathbb{R}^{d}\) with additive noise
\[X_{t}=x_{0}+\int_{0}^{t}k_{1}\left(t-s\right)b\left(s,X_{s}\right)\mathrm{d}s +\frac{1}{\Gamma\left(\alpha\right)}\int_{0}^{t}\left(t-s\right)^{\alpha-1} \mathrm{d}W_{s}, \tag{1}\]
where \(x_{0}\in\mathbb{R}^{d}\), \(\alpha\in(\frac{1}{2},1)\), \(W=\left(W_{t}\right)_{t\geq 0}\) is a standard Brownian motion in \(\mathbb{R}^{d}\), \(b:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) is a measurable vector field and \(k_{1}\) is a locally square-integrable, \(\mathbb{R}-\)valued kernel that is continuous in \((0,\infty)\). This equation belongs to the class of stochastic Volterra equations (of convolution type), which is characterized by a wide and continuously expanding body of literature, see for instance [1, 2, 5, 19, 20, 24]. The additive noise driving the SDE (1) is a fractional Brownian motion (henceforth, fBm) of Riemann-Liouville type, with Hurst parameter \(H=\alpha-\frac{1}{2}\in(0,\frac{1}{2})\). Our motivation for studying this random perturbation stems from its relevance in mathematical finance, particularly in the field of rough volatility models, see [6, 11, 17]. However, the theory that we develop in this paper encompasses also the case \(\alpha\in[1,\frac{3}{2})\), corresponding to a fBM with Hurst parameter \(H\in[\frac{1}{2},1)\), exhibiting smoother trajectories and longer memory.
Inspired by [14], our aim is to insert the Volterra SDE (1) in a class of infinite dimensional SDEs in a separable Hilbert space \((H,\langle\cdot,\cdot\rangle_{H})\) of the form
\[w_{t}=\phi+\int_{0}^{t}B\left(s,w_{s}\right)\mathrm{d}s+\int_{0}^{t}\sigma \left(s\right)\,\mathrm{d}W_{s},\quad\phi\in H, \tag{2}\]
where \(\sigma\colon[0,T]\to\mathcal{L}(\mathbb{R}^{d};H)\) and \(B\colon[0,T]\times H\to H\). In order to achieve this objective, we need to consider a drift \(B\) with an unconventional structure. This motivates the study, carried out in this paper, of a novel class of stochastic partial differential equations (SPDEs) and of the regularity of the associated stochastic flow. Notably, these SPDEs require an extension of the drift operator and of its Frechet differentials.
Given \(\Phi\colon H\to\mathbb{R}\), we then study the following backward Kolmogorov equation associated with (2):
\[\begin{cases}\partial_{t}u\left(t,\phi\right)+\mathcal{A}_{t}u\left(t,\phi \right)=0,\quad t\in[0,T]\,,\,\phi\in H,\\ u\left(T,\phi\right)=\Phi\left(\phi\right),\end{cases}\]
which will be interpreted in integral form, see (70). Here \(\mathcal{A}_{t}\), the time-dependent infinitesimal generator, is given by
\[\mathcal{A}_{t}u\left(t,\phi\right)=\frac{1}{2}\mathrm{Tr}\left(D^{2}u\left(t,\phi\right)\sigma(t)\sigma(t)^{*}\right)+\left\langle B\left(t,\phi\right), \nabla u\left(t,\phi\right)\right\rangle_{H}.\]
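As a toy illustration of how \(\mathcal{A}_{t}\) acts, take \(u\left(t,\phi\right)=\left\langle\phi,h(t)\right\rangle_{H}\) for some continuously differentiable \(h\colon[0,T]\to H\). Then \(\nabla u\left(t,\phi\right)=h(t)\) and \(D^{2}u\left(t,\phi\right)=0\), so the trace term vanishes and
\[\partial_{t}u\left(t,\phi\right)+\mathcal{A}_{t}u\left(t,\phi\right)=\left\langle\phi,h^{\prime}(t)\right\rangle_{H}+\left\langle B\left(t,\phi\right),h(t)\right\rangle_{H},\]
so that, on such linear test functions, the backward Kolmogorov equation simply requires this quantity to vanish for every \(\phi\in H\) and \(t\in[0,T]\).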
As in [10, Chapter 9], the approach that we adopt for the existence of classical solutions of the Kolmogorov equation is based on a careful analysis of (2) and on the formula
\[u\left(t,\phi\right)=\mathbb{E}\left[\Phi\left(w_{T}^{t,\phi}\right)\right], \tag{3}\]
where \(w_{t}^{t_{0},\phi}\), \(t\in[t_{0},T]\), is the solution of an analogue of (2) starting at time \(t_{0}\) instead of \(0\).
It is worth noting that we use classical tools of infinite dimensional calculus, such as the Frechet derivative, when analyzing the Kolmogorov equation. This is a novelty compared to other studies addressing path-dependent PDEs related to Volterra SDEs, particularly [28] (see also [3] for a similar subject). In a sense, then, we bring the study of stochastic Volterra equations and of fBm of Riemann-Liouville type closer to that of other infinite dimensional systems. However, the assumptions imposed on \(B\) are not entirely classical, resulting in an innovative abstract formulation of the problem. Consequently, the analysis developed here is only analogous to the classical one, not included in it.
A more direct approach to the Kolmogorov equation would also be of great interest, for two reasons. Firstly, it would help complete the comparison with the classical theory developed for other classes of problems, see [10]. Secondly, it could be used to study regularization-by-noise phenomena for SDEs driven by fractional Brownian motion, which are investigated in the literature with different techniques, see, e.g., [15, 16, 19, 20, 21]. In fact, studying the Kolmogorov equation in mild form might make it possible to prove weak uniqueness of solutions of the underlying SDE when the drift is not smooth, see [7, 8, 12, 25, 27]. In an attempt to develop such a direct approach, we have identified obstructions that we report in Section 5, so this problem remains open.
The paper is structured as follows. In Section 2 we show the connection between the Volterra SDE (1) and the SDE (2), specifying the Hilbert space \(H\) considered in our study. Moreover, due to the particular structure of the drift \(B\), we introduce another infinite dimensional reformulation for (1) (see (12) in Proposition 2), which is at the core of our analysis. In Section 3 we study the reformulation given by (12) in an abstract setting (see (14)), focusing also on the regularity of its solution with respect to the initial data, see Subsections 3.1-3.2. The related backward Kolmogorov equation in integral form is then investigated in Section 4. In Section 5 we discuss the mild formulation of the Kolmogorov equation and its importance for the theory of regularization by noise, see Subsection 5.2. The challenges that we previously mentioned regarding the analysis of such a mild formulation are explained in Subsection 5.1. Finally, in Appendix A we study the regularity of the solution of the Kolmogorov equation constructed as in (3).
## 2 Infinite dimensional reformulations for the stochastic Volterra equation
Let \(\left(\Omega,\mathcal{F},\mathbb{P},\mathbb{F}\right)\) be a complete filtered probability space, with expectation denoted by \(\mathbb{E}\), where the filtration \(\mathbb{F}\)\(=\left(\mathcal{F}_{t}\right)_{t\in[0,T]}\) satisfies the usual conditions. Fix \(d\in\mathbb{N}\) and consider an \(\mathbb{R}^{d}-\)valued standard Brownian motion \(W=\left(W_{t}\right)_{t\geq 0}\) defined on \(\left(\Omega,\mathcal{F},\mathbb{P},\mathbb{F}\right)\). In what follows, we denote by \(k_{2}\colon\left(0,\infty\right)\to\left(0,\infty\right)\) the fractional kernel which controls the noise in the Volterra SDE (1), namely
\[k_{2}(t)=\frac{1}{\Gamma(\alpha)}t^{\alpha-1},\quad t>0,\text{ for some }\alpha\in\left(\frac{1}{2},1\right). \tag{4}\]
As already mentioned in the Introduction, we note that the arguments and results of this paper continue to hold even when \(\alpha\in\left[1,\frac{3}{2}\right)\), i.e., when the fBm governing (1) has Hurst parameter in \(\left[1/2,1\right)\), see also Remark 4.
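For later reference, note that \(k_{2}\in L^{2}\left(0,T;\mathbb{R}\right)\) precisely because \(\alpha>\frac{1}{2}\); indeed, its squared norm in \(L^{2}\left(0,T;\mathbb{R}\right)\) is
\[\frac{1}{\Gamma(\alpha)^{2}}\int_{0}^{T}t^{2\alpha-2}\,\mathrm{d}t=\frac{T^{2\alpha-1}}{(2\alpha-1)\,\Gamma(\alpha)^{2}}<\infty,\]
since \(2\alpha-2>-1\). This elementary computation is implicitly behind the Hilbert-Schmidt bound on \(\sigma\) obtained below.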
Fix \(T>0\). Suppose that the measurable vector field \(b:\left[0,T\right]\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) satisfies, for some \(L>0\),
\[\left|b\left(t,x\right)\right|\leq L\left(1+\left|x\right|\right),\qquad\left|b \left(t,x\right)-b\left(t,y\right)\right|\leq L\left|x-y\right|,\]
for every \(t\in\left[0,T\right]\) and \(x,y\in\mathbb{R}^{d}\). By (strong) solution of (1) we mean a continuous adapted process satisfying the identity in (1) for every \(t\in\left[0,T\right],\,\mathbb{P}-\)a.s. Existence and pathwise uniqueness of strong solutions of (1) have been studied in the literature under additional requirements on \(k_{1}\), see, e.g., Equation (2.5) and Theorem 3.3 in [2].
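For illustration only, the following minimal Python sketch discretizes the two convolution integrals in (1) with a crude left-point rule on a uniform grid; the kernel \(k_{1}\), the drift \(b\) and all numerical parameters below are hypothetical choices, and the scheme is not claimed to be a convergent method for the singular kernel \(k_{2}\).

```python
import numpy as np
from math import gamma

# Hypothetical choices: nothing below is prescribed by the model.
d = 1                      # state dimension
T, n = 1.0, 500            # time horizon and number of grid steps
alpha = 0.7                # Hurst parameter H = alpha - 1/2 = 0.2
x0 = np.zeros(d)
dt = T / n
t = np.linspace(0.0, T, n + 1)

def k1(u):                 # hypothetical locally square-integrable kernel
    return np.exp(-u)

def k2(u):                 # fractional (Riemann-Liouville) kernel of (4)
    return u ** (alpha - 1.0) / gamma(alpha)

def b(s, x):               # hypothetical Lipschitz drift with linear growth
    return -x

rng = np.random.default_rng(0)
dW = rng.normal(scale=np.sqrt(dt), size=(n, d))   # Brownian increments

# Left-point rule:
#   X_i ~ x0 + sum_{j<i} k1(t_i - t_j) b(t_j, X_j) dt + sum_{j<i} k2(t_i - t_j) dW_j
X = np.zeros((n + 1, d))
X[0] = x0
for i in range(1, n + 1):
    drift = dt * sum(k1(t[i] - t[j]) * b(t[j], X[j]) for j in range(i))
    noise = sum(k2(t[i] - t[j]) * dW[j] for j in range(i))
    X[i] = x0 + drift + noise
```

In practice, the singular factor \((t-s)^{\alpha-1}\) should be handled by a quadrature adapted to the singularity; the left-point rule above is kept only for readability.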
Let \(H\) be the Hilbert space \(L^{2}\left(0,T;\mathbb{R}^{d}\right)\) and denote by \(\langle\cdot,\cdot\rangle_{H}\) the usual inner product. Denoting by \(\mathcal{L}(\mathbb{R}^{d};H)\) the space of linear and bounded operators from \(\mathbb{R}^{d}\) to \(H\), define \(\sigma\colon\left[0,T\right]\rightarrow\mathcal{L}(\mathbb{R}^{d};H)\) by
\[\left[\sigma\left(t\right)x\right]\left(\xi\right)=k_{2}\left(\xi-t\right)1_{ \left\{t<\xi\right\}}x,\quad x\in\mathbb{R}^{d},\,t,\xi\in\left[0,T\right]. \tag{5}\]
For every \(q\geq 2\), we denote by \(\mathcal{H}^{q}\) the space \(L^{q}\big{(}\Omega;H\big{)}\), endowed with the usual norm \(\left\|\cdot\right\|_{\mathcal{H}^{q}},\) and by \(\mathcal{H}^{q}_{t}\subset\mathcal{H}^{q}\) the subspace of \(\mathcal{F}_{t}-\)measurable functions, \(t\in\left[0,T\right]\). Notice that
\[\left\|\sigma(t)\right\|_{\mathrm{HS}}^{2}\leq d\left\|k_{2}\right\|_{2}^{2}, \quad t\in\left[0,T\right],\]
where \(\left\|\cdot\right\|_{\mathrm{HS}}\) represents the Hilbert-Schmidt norm and \(\left\|\cdot\right\|_{2}\) the norm in \(L^{2}(0,T;\mathbb{R})\). As a consequence, since \(\int_{0}^{T}\left\|\sigma(s)\right\|_{\mathrm{HS}}^{2}\mathrm{d}s<\infty\), we can construct the stochastic integral
\[\Sigma_{s,t}=\int_{s}^{t}\sigma\left(r\right)\mathrm{d}W_{r}\in\mathcal{H}^{q }_{t},\quad 0\leq s\leq t\leq T. \tag{6}\]
By [10, Theorem 4.36], there exists a constant \(C_{d,q}>0\) such that
\[\left\|\Sigma_{s,t}\right\|_{\mathcal{H}^{q}}\leq C_{d,q}\left\|k_{2}\right\|_ {2}\sqrt{t-s},\quad 0\leq s\leq t\leq T. \tag{7}\]
Let \(\Lambda\) be the space \(C\left(\left[0,T\right];\mathbb{R}^{d}\right)\), and define \(B:\left[0,T\right]\times\Lambda\to H\) by
\[\left[B\left(t,w\right)\right]\left(\xi\right)=k_{1}\left(\xi-t\right)1_{\left\{ t<\xi\right\}}b\left(t,w\left(t\right)\right),\quad t,\xi\in\left[0,T\right]. \tag{8}\]
In the sequel, a stochastic process taking values in \(H\) will be denoted by, e.g., \(\left(w_{t}\right)_{t\in\left[0,T\right]}\), namely with the time variable as a subscript. Then, for a fixed \(t_{0}\in\left[0,T\right]\), \(w_{t_{0}}\) is a random function, denoted by \(w_{t_{0}}\left(\xi\right)\), \(\xi\in\left[0,T\right]\).
In the following proposition, we show that it is possible to construct a solution to (2), i.e., an \(\mathbb{F}-\)adapted process with values in \(H\) satisfying (2) \(\mathbb{P}-\)a.s., for every \(t\in\left[0,T\right]\), using a solution of (1).
**Proposition 1**.: _Let \(X=\left(X_{t}\right)_{t\in\left[0,T\right]}\) be a solution of (1). For every \(t\in\left[0,T\right]\), define the \(\mathbb{R}^{d}-\)valued stochastic process \(\theta_{t}=\left(\theta_{t}(\xi)\right)_{\xi\in\left[t,T\right]}\) by_
\[\theta_{t}\left(\xi\right)=x_{0}+\int_{0}^{t}k_{1}\left(\xi-s\right)b\left(s, X_{s}\right)\mathrm{d}s+\int_{0}^{t}k_{2}\left(\xi-s\right)\mathrm{d}W_{s}, \quad\xi\in\left[t,T\right].\]
_Define the \(H-\)valued stochastic process \(\left(w_{t}\right)_{t\in\left[0,T\right]}\) by setting, for each \(t\in\left[0,T\right]\),_
\[w_{t}\left(\xi\right)=\begin{cases}X_{\xi},&\xi\leq t,\\ \theta_{t}\left(\xi\right),&\xi>t.\end{cases} \tag{9}\]
_Then \(\left(w_{t}\right)_{t\in\left[0,T\right]}\) is a solution of (2) with \(\phi\in H\) being the function identically equal to \(x_{0}\)._
Proof.: Fix \(t\in\left[0,T\right]\). Note that, by the Kolmogorov-Chentsov continuity criterion, there exists a continuous version of the stochastic process \(\left(\int_{0}^{t}k_{2}(\xi-s)\,\mathrm{d}W_{s}\right)_{\xi\in\left[t,T\right]}\). Hence, also employing the dominated convergence theorem, we deduce that the process \(\theta_{t}\) has continuous trajectories \(\theta_{t}(\cdot)\) in \(\left[t,T\right]\). It follows that \(w_{t}\) defined in (9) takes values in \(H\).
In addition, by [10, Proposition 3.18], we observe that \(w_{t}\) is an \(\mathcal{F}_{t}-\)measurable random variable, because \(X\) is continuous and \(\mathbb{F}-\)adapted, \(\theta_{t}(\cdot)\) is continuous and \(\theta_{t}(\xi)\) is \(\mathcal{F}_{t}-\)measurable for every \(\xi\in[t,T]\). Thus, the \(H-\)valued stochastic process \((w_{t})_{t\in[0,T]}\) is \(\mathbb{F}-\)adapted.
We now want to prove that \(w_{t}\) satisfies (2). By (1) and the definition of \(\theta_{t}\), we have, \(\mathbb{P}-\)a.s.,
\[w_{t}\left(\xi\right) =X_{\xi}1_{\left\{\xi\leq t\right\}}+\theta_{t}(\xi)1_{\left\{ \xi>t\right\}}=x_{0}+\int_{0}^{t\wedge\xi}k_{1}\left(\xi-s\right)b\left(s,X_{ s}\right)\mathrm{d}s+\int_{0}^{t\wedge\xi}k_{2}\left(\xi-s\right)\mathrm{d}W_{s}\] \[=x_{0}+\int_{0}^{t}k_{1}\left(\xi-s\right)1_{\left\{\xi>s\right\} }b\left(s,X_{s}\right)\mathrm{d}s+\int_{0}^{t}k_{2}\left(\xi-s\right)1_{ \left\{\xi>s\right\}}\mathrm{d}W_{s},\quad\xi\in[0,T]. \tag{10}\]
We focus on the integral in \(\mathrm{d}W\), with the aim of understanding its relation with \(\Sigma_{0,t}=\int_{0}^{t}\sigma(s)\mathrm{d}W_{s}\), see (6). By (5) and [10, Proposition 4.30],
\[\left\langle\int_{0}^{t}\sigma\left(s\right)\mathrm{d}W_{s},h\right\rangle_{H }=\int_{0}^{t}\left(\int_{0}^{T}k_{2}(\xi-s)1_{\left\{\xi>s\right\}}h(\xi) \,\mathrm{d}\xi\right)^{\top}\mathrm{d}W_{s},\quad\mathbb{P}-\text{a.s., for every }h\in H.\]
Moreover, an application of the stochastic Fubini's theorem yields
\[\left\langle\int_{0}^{t}k_{2}(\cdot-s)1_{\left\{>s\right\}} \mathrm{d}W_{s},\,h\right\rangle_{H} =\int_{0}^{T}\left(\int_{0}^{t}k_{2}(\xi-s)1_{\left\{\xi>s\right\} }\,\mathrm{d}W_{s}\right)^{\top}h(\xi)\,\mathrm{d}\xi\] \[=\int_{0}^{t}\left(\int_{0}^{T}k_{2}(\xi-s)1_{\left\{\xi>s\right\} }h(\xi)\,\mathrm{d}\xi\right)^{\top}\mathrm{d}W_{s},\quad\mathbb{P}-\text{a.s., for every }h\in H.\]
Considering that \(H\) is separable, combining the two previous equations we deduce that
\[\left\langle\int_{0}^{t}\sigma\left(s\right)\mathrm{d}W_{s},h\right\rangle_{H }=\left\langle\int_{0}^{t}k_{2}(\cdot-s)1_{\left\{>s\right\}}\mathrm{d}W_{s}, \,h\right\rangle_{H},\quad h\in H,\,\mathbb{P}-\text{a.s.,}\]
which in turn implies that
\[\left(\int_{0}^{t}\sigma\left(s\right)\mathrm{d}W_{s}\right)(\xi)=\int_{0}^{t }k_{2}(\xi-s)1_{\left\{\xi>s\right\}}\mathrm{d}W_{s},\quad\text{for a.e. }\xi\in[0,T],\,\mathbb{P}-\text{a.s.} \tag{11}\]
Going back to (10), recalling the definition of \(B\) in (8) and denoting by \(\phi\in H\) the function identically equal to \(x_{0}\), by the standard properties of Bochner's integral we conclude that
\[w_{t}=\phi+\int_{0}^{t}B\left(s,w_{s}\right)\mathrm{d}s+\int_{0}^{t}\sigma(s) \,\mathrm{d}W_{s},\quad\mathbb{P}-\text{a.s.}\]
Therefore \((w_{t})_{t\in[0,T]}\) satisfies (2), completing the proof.
The previous proposition gives us the classical infinite dimensional reformulation of the Volterra SDE (1), namely Equation (2) stated in the Introduction. However, for the procedure carried out in Section 3, it turns out that a second reformulation is more convenient.
**Proposition 2**.: _Let \(\left(X_{t}\right)_{t\in[0,T]}\) be a solution of (1) and \(\phi\in H\) be the function identically equal to \(x_{0}\). Let \(\theta_{t}\left(\xi\right)\) and \(w_{t}\left(\xi\right)\) be defined as in Proposition 1. Then, for every \(t\in[0,T]\), the following identity holds:_
\[w_{t}=\phi+\int_{0}^{t}B\left(s,w_{t}\right)\mathrm{d}s+\int_{0}^{t}\sigma(s) \,\mathrm{d}W_{s},\quad\mathbb{P}-\text{a.s.} \tag{12}\]
Proof.: Observing that, for a.e. \(\xi\in[0,T]\),
\[\int_{0}^{t}k_{1}\left(\xi-s\right)1_{\left\{\xi>s\right\}}b\left(s,X_{s} \right)\mathrm{d}s=\int_{0}^{t}B\left(s,w_{t}\right)\left(\xi\right)\mathrm{d}s =\left[\int_{0}^{t}B\left(s,w_{t}\right)\mathrm{d}s\right]\left(\xi\right),\]
the proof is the same as that of Proposition 1.
Motivated by the infinite dimensional reformulation of Proposition 2, in Section 3 we focus on studying Equation (12). Our aim is to investigate the properties of its solutions and the associated Kolmogorov equation, which is the subject of Section 4. However, the implementation of this plan is challenging, due to the particular structure of the drift function \(B\colon[0,T]\times\Lambda\to H\). More precisely, the issue with the expression of \(B\) in (8) is that it is meaningful only for continuous functions, as it involves a pointwise evaluation. Consequently, unlike the classical case, the functional space \(\Lambda\) in the domain of \(B\) differs from the arrival Hilbert space \(H\). This requires an abstract formulation of the problem that, to the best of our knowledge, is not covered by the existing literature.
## 3 Abstract formulation and differentiability of the stochastic flow
In this section, we introduce and study an abstract formulation for the equation (12), with a particular attention devoted to the differentiability of its solution with respect to the initial data, see Subsections 3.1-3.2. In our reasoning, we introduce an extension of the drift operator \(B\), denoted by \(\overline{B}\), which is a characterizing and original feature of the approach that we propose.
For every \(k\), \(p\in\mathbb{N}\), we denote by \(\left\|\cdot\right\|_{p}\) the usual norm on the Banach space \(L^{p}\big{(}0,T;\mathbb{R}^{k}\big{)}\). We denote by
\[H_{\Box}\text{ the Hilbert space }L^{2}\big{(}\left(0,T\right)\times\left(0,T \right);\mathbb{R}^{d}\big{)}\text{ endowed with the norm }\left\|\cdot\right\|_{2,\Box}.\]
Recall \(H=L^{2}\big{(}0,T;\mathbb{R}^{d}\big{)}\) and \(\Lambda=C\left(\left[0,T\right];\mathbb{R}^{d}\right)\). For every \(w\in\Lambda\), we consider a map \(B\left(w\right):\,[0,T]\times[0,T]\rightarrow\mathbb{R}^{d}\) subject to the next requirement.
**Assumption 1**.: _The function \(B\colon\Lambda\to H_{\Box}\) satisfies_
\[\left\|B\left(w_{1}\right)\right\|_{2,\Box}\leq C_{0}\left(1+\left\|w_{1} \right\|_{2}\right),\qquad\left\|B\left(w_{1}\right)-B\left(w_{2}\right) \right\|_{2,\Box}\leq C_{0}\left\|w_{1}-w_{2}\right\|_{2}, \tag{13}\]
_for every \(w_{1}\), \(w_{2}\in\Lambda\), for some constant \(C_{0}=C_{0}\left(d,T\right)>0\). Moreover, given \(w\in\Lambda\) and \(0<t\leq T\), for a.e. \(r\in\left(0,t\right)\) the function \(B\left(w\right)\left(r,\cdot\right)\in H\) is of Volterra-type, namely \(B\left(w\right)\left(r,\xi\right)=0\) for a.e. \(\xi\in\left(0,r\right)\), and depends on \(w\) only via its restriction \(w|_{\left(0,t\right)}\) to \(\left(0,t\right)\)._
In the sequel, we are going to progressively introduce stricter hypotheses on the drift map \(B\) (see, in particular, Assumptions 2-3), which will allow us to prove the main result on the Kolmogorov equation, see Theorem 9 in Section 4. In Example 1, we show a function \(B\), obtained by choosing \(b\) in (8) with an affine structure, that satisfies these requirements.
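As a preview of that example, the following rough bookkeeping (with merely indicative constants) shows why the concrete drift (8) satisfies the bounds in (13) whenever \(k_{1}\in L^{2}\left(0,T;\mathbb{R}\right)\): since \(\int_{r}^{T}k_{1}\left(\xi-r\right)^{2}\mathrm{d}\xi\leq\left\|k_{1}\right\|_{2}^{2}\) for every \(r\in\left(0,T\right)\) and \(b\) is \(L-\)Lipschitz in its second variable,
\[\left\|B\left(w_{1}\right)-B\left(w_{2}\right)\right\|_{2,\Box}^{2}=\int_{0}^{T}\!\int_{r}^{T}k_{1}\left(\xi-r\right)^{2}\left|b\left(r,w_{1}(r)\right)-b\left(r,w_{2}(r)\right)\right|^{2}\mathrm{d}\xi\,\mathrm{d}r\leq L^{2}\left\|k_{1}\right\|_{2}^{2}\left\|w_{1}-w_{2}\right\|_{2}^{2},\]
and the linear growth bound follows in the same way from \(\left|b\left(r,x\right)\right|\leq L\left(1+\left|x\right|\right)\). The Volterra-type property and the locality requirement in Assumption 1 are immediate, because of the indicator \(1_{\left\{\xi>r\right\}}\) in (8) and because \(B\left(w\right)\left(r,\cdot\right)\) depends on \(w\) only through \(w(r)\).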
Under Assumption 1, we can invoke the theorem of extension of uniformly continuous functions to uniquely define a continuous map \(\overline{B}\colon H\to H_{\Box}\) such that \(\overline{B}\big{|}_{\Lambda}=B\). Note that \(\overline{B}\) satisfies (13) for every \(w_{1}\), \(w_{2}\in H\). Given \(w\in H\) and \(r\in\left(0,T\right)\), we are going to write \(\overline{B}\left(r,w\right)=\overline{B}\left(w\right)\left(r,\cdot\right) \in H\): these maps are well defined for a.e. \(r\in\left(0,T\right)\).
For a fixed \(0<t\leq T\), we remark that also \(\overline{B}\left(r,w\right)\) is of Volterra-type in the sense of Assumption 1 for a.e. \(r\in\left(0,t\right)\), and that it depends on \(w\) only via \(w|_{\left(0,t\right)}\). For these reasons, in the sequel we will refer to Assumption 1 while talking about \(\overline{B}\).
Recall the spaces \(\mathcal{H}^{q}=L^{q}\big{(}\Omega;H\big{)},\,q\geq 2\), and the subspaces \(\mathcal{H}_{t}^{q}\subset\mathcal{H}^{q}\) of \(\mathcal{F}_{t}-\)measurable functions introduced in Section 2, as well as the random variables \(\Sigma_{s,t}\in\mathcal{H}_{t}^{q}\) in (6). For every \(0\leq s\leq t\leq T\) and \(\phi\in\mathcal{H}^{q}\), we are interested in the equation
\[w=\phi+\int_{s}^{t}\overline{B}\left(r,w\right)\mathrm{d}r+\int_{s}^{t}\sigma \left(r\right)\mathrm{d}W_{r}, \tag{14}\]
whose well-posedness in \(\mathcal{H}^{q}\) is given by the next result.
**Theorem 3**.: _Under Assumption 1, for every \(q\geq 2\), \(\phi\in\mathcal{H}^{q}\) and \(s,t\in[0,T]\), with \(s\leq t\), there exists a unique solution \(w_{t}^{s,\phi}\in\mathcal{H}^{q}\) of (14). In particular, if \(\phi\in\mathcal{H}_{s}^{q}\) then \(w_{t}^{s,\phi}\in\mathcal{H}_{t}^{q}\)._
_Furthermore, the following cocycle property holds in \(\mathcal{H}^{q}\):_
\[w_{t}^{s,\phi}=w_{t}^{u,w_{u}^{s,\phi}},\quad 0\leq s<u<t\leq T,\,\phi\in \mathcal{H}^{q}. \tag{15}\]
Proof.: Fix \(q\geq 2\), \(0\leq s\leq t\leq T\) and \(\phi\in\mathcal{H}_{s}^{q}\). Consider \(N=N\left(d,T\right)\in\mathbb{N}\) so big that \(C_{0}\sqrt{T/N}<1\), where \(C_{0}=C_{0}\left(d,T\right)\) is the constant in (13). Let us introduce an equispaced partition \(\{t_{k}\}_{k=0}^{N}\) of \([s,t]\) where \(t_{0}=s\) and \(t_{N}=t\): its mesh \(\Delta\leq T/N\). Define the mapping \(\Gamma_{t_{0}}^{t_{1}}\colon\mathcal{H}_{t_{1}}^{q}\to\mathcal{H}_{t_{1}}^{q}\) by
\[\Gamma_{t_{0}}^{t_{1}}w=\phi+\int_{t_{0}}^{t_{1}}\overline{B}\left(r,w\right) \mathrm{d}r+\int_{t_{0}}^{t_{1}}\sigma\left(r\right)\mathrm{d}W_{r},\quad w \in\mathcal{H}_{t_{1}}^{q}. \tag{16}\]
Under Assumption 1, \(\Gamma_{t_{0}}^{t_{1}}\) is well defined. Indeed, for every \(w\in\mathcal{H}_{t_{1}}^{q}\),
\[\left\|\Gamma_{t_{0}}^{t_{1}}w\right\|_{\mathcal{H}^{q}}^{q} =\mathbb{E}\left[\left\|\Gamma_{t_{0}}^{t_{1}}w\right\|_{2}^{q} \right]\leq 3^{q-1}\mathbb{E}\left[\left\|\phi\right\|_{2}^{q}+\Bigg{(} \int_{t_{0}}^{t_{1}}\left(\int_{0}^{T}\left|\overline{B}\left(w\right)\left(r,\xi\right)\right|^{2}\mathrm{d}\xi\right)^{\frac{1}{2}}\mathrm{d}r\Bigg{)}^{q }+\left\|\int_{t_{0}}^{t_{1}}\sigma\left(r\right)\mathrm{d}W_{r}\right\|_{2}^{ q}\right]\] \[\leq 3^{q-1}\mathbb{E}\left[\left\|\phi\right\|_{2}^{q}+C_{0}^{q }\Delta^{\frac{q}{2}}\left(1+\left\|w\right\|_{2}\right)^{q}+\left\|\int_{t_{0 }}^{t_{1}}\sigma\left(r\right)\mathrm{d}W_{r}\right\|_{2}^{q}\right]<\infty,\]
where we use Bochner's theorem in the first inequality and the first bound in (13), coupled with Jensen's inequality, in the second one. Analogously, using the second inequality in (13), we write
\[\left\|\Gamma_{t_{0}}^{t_{1}}w_{1}-\Gamma_{t_{0}}^{t_{1}}w_{2} \right\|_{\mathcal{H}^{q}}\leq\mathbb{E}\left[\Bigg{(}\int_{t_{0}}^{t_{1}} \left(\int_{0}^{T}\left|\overline{B}\left(w_{1}\right)-\overline{B}\left(w_{2} \right)\right|^{2}\left(r,\xi\right)\mathrm{d}\xi\right)^{\frac{1}{2}} \mathrm{d}r\Bigg{)}^{q}\right]^{\frac{1}{q}}\\ \leq C_{0}\sqrt{\Delta}\left\|w_{1}-w_{2}\right\|_{\mathcal{H}^{q }},\quad w_{1},\,w_{2}\in\mathcal{H}_{t_{1}}^{q}. \tag{17}\]
Hence, for our choice of \(N\in\mathbb{N}\), the map \(\Gamma_{t_{0}}^{t_{1}}\) is a contraction in \(\mathcal{H}_{t_{1}}^{q}\), whose unique fixed point is \(\overline{w}_{1}\). Noting that \(\overline{w}_{1}\) is the unique solution of (14) with \(t_{1}\) instead of \(t\), we denote it by \(w_{t_{1}}^{s,\phi}\).
Since the relation between constants in (17), which is necessary to make \(\Gamma_{t_{0}}^{t_{1}}\) a contraction, does not depend on the initial condition, under Assumption 1 the previous argument can be iterated to construct the solution \(w_{t}^{s,\phi}\) of (14). More precisely, define the map \(\Gamma_{t_{1}}^{t_{2}}\colon\mathcal{H}_{t_{2}}^{q}\to\mathcal{H}_{t_{2}}^{q}\) by
\[\Gamma_{t_{1}}^{t_{2}}w=\overline{w}_{1}+\int_{t_{1}}^{t_{2}}\overline{B} \left(r,w\right)\mathrm{d}r+\int_{t_{1}}^{t_{2}}\sigma\left(r\right)\mathrm{d} W_{r},\quad w\in\mathcal{H}_{t_{2}}^{q}.\]
Computations similar to those above show that \(\Gamma_{t_{1}}^{t_{2}}\) is well defined. Moreover,
\[\left\|\Gamma_{t_{1}}^{t_{2}}w_{1}-\Gamma_{t_{1}}^{t_{2}}w_{2}\right\|_{ \mathcal{H}^{q}}\leq C_{0}\sqrt{\Delta}\left\|w_{1}-w_{2}\right\|_{\mathcal{H} ^{q}},\quad w_{1},\,w_{2}\in\mathcal{H}_{t_{2}}^{q}.\]
Thus, \(\Gamma_{t_{1}}^{t_{2}}\) is a contraction in \(\mathcal{H}_{t_{2}}^{q}\), whose unique fixed point is \(\overline{w}_{2}=w_{t_{2}}^{t_{1},w_{t_{1}}^{s,\phi}}\). Now, by the Volterra-type property of \(\overline{B}\) and \(\sigma\), together with the standard features of the Bochner's and stochastic integrals (see (11)), we infer that
\[\left(\int_{t_{1}}^{t_{2}}\overline{B}\left(r,\overline{w}_{2}\right)\mathrm{d }r\right)\left(\xi\right)=\left(\int_{t_{1}}^{t_{2}}\sigma\left(r\right) \mathrm{d}W_{r}\right)\left(\xi\right)=0,\quad\text{for a.e. }\xi\in(0,t_{1}),\,\mathbb{P}-\text{a.s.}, \tag{18}\]
whence
\[\left.\overline{w}_{2}\right|_{(0,t_{1})}=\left.\overline{w}_{1}\right|_{(0,t_{1})},\quad\mathbb{P}-\text{a.s.}\]
Furthermore, \(\mathbb{P}-\)a.s., for a.e. \(r\in(s,t_{1})\), \(\overline{B}\left(r,\overline{w}_{1}\right)\) depends on \(\overline{w}_{1}\) only via \(\left.\overline{w}_{1}\right|_{(0,r)}\), which yields
\[\overline{B}\left(r,\overline{w}_{1}\right)=\overline{B}\left(r,\overline{w}_{2 }\right),\quad\text{for a.e. }r\in(s,t_{1}),\,\mathbb{P}-\text{a.s.} \tag{19}\]
Therefore, recalling (16),
\[\overline{w}_{2}=\phi+\int_{s}^{t_{1}}\overline{B}\left(r,\overline{w}_{1} \right)\mathrm{d}r+\int_{t_{1}}^{t_{2}}\overline{B}\left(r,\overline{w}_{2} \right)\mathrm{d}r+\int_{s}^{t_{2}}\sigma\left(r\right)\mathrm{d}W_{r}=\phi+\int_ {s}^{t_{2}}\overline{B}\left(r,\overline{w}_{2}\right)\mathrm{d}r+\int_{s}^{t_{2}} \sigma\left(r\right)\mathrm{d}W_{r}. \tag{20}\]
This shows that \(\overline{w}_{2}\) is a solution of (14) with \(t_{2}\) instead of \(t\).
To prove that \(\overline{w}_{2}\) is in fact the unique solution of this equation, we consider another random variable \(\widetilde{w}\in\mathcal{H}_{t_{2}}^{q}\) satisfying (20). Then, relying on the same properties of \(\overline{B}\) and \(\sigma\) as those used above, we deduce that
\[1_{(0,t_{1})}\widetilde{w}=1_{(0,t_{1})}\left(\phi+\int_{s}^{t_{1}}\overline{B }\left(r,1_{(0,t_{1})}\widetilde{w}\right)\mathrm{d}r+\int_{s}^{t_{1}}\sigma \left(r\right)\mathrm{d}W_{r}\right). \tag{21}\]
Moreover, we observe that also \(1_{(0,t_{1})}\overline{w}_{1}\in\mathcal{H}^{q}\) satisfies (21). Therefore, using Bochner's theorem and Jensen's inequality, by Assumption 1 we can compute
\[\left\|1_{(0,t_{1})}\left(\overline{w}_{1}-\widetilde{w}\right) \right\|_{\mathcal{H}^{q}}^{q} \leq\mathbb{E}\left[\left\|\int_{s}^{t_{1}}\left(\overline{B} \left(r,1_{(0,t_{1})}\overline{w}_{1}\right)-\overline{B}\left(r,1_{(0,t_{1}) }\widetilde{w}\right)\right)\mathrm{d}r\right\|_{2}^{q}\right]\] \[\leq\Delta^{\frac{q}{2}}\mathbb{E}\left[\left\|\overline{B} \left(1_{(0,t_{1})}\overline{w}_{1}\right)-\overline{B}\left(1_{(0,t_{1})} \widetilde{w}\right)\right\|_{2,\mathbb{O}}^{q}\right]\leq\Delta^{\frac{q}{2} }C_{0}^{q}\left\|1_{(0,t_{1})}\left(\overline{w}_{1}-\widetilde{w}\right) \right\|_{\mathcal{H}^{q}}^{q},\]
which allows us to conclude, recalling that \(\sqrt{\Delta}C_{0}<1\),
\[1_{(0,t_{1})}\widetilde{w}=1_{(0,t_{1})}\overline{w}_{1},\quad\mathbb{P}- \mathrm{a.s.}\]
Going back to (20), by (16) and the previous equality we have, \(\mathbb{P}-\mathrm{a.s.}\),
\[\widetilde{w}=\phi+\int_{s}^{t_{1}}\overline{B}\left(r,\overline{w}_{1}\right) \mathrm{d}r+\int_{s}^{t_{1}}\sigma\left(r\right)\mathrm{d}W_{r}+\int_{t_{1}}^ {t_{2}}\overline{B}\left(r,\widetilde{w}\right)\mathrm{d}r+\Sigma_{t_{1},t_{ 2}}=\overline{w}_{1}+\int_{t_{1}}^{t_{2}}\overline{B}\left(r,\widetilde{w} \right)\mathrm{d}r+\Sigma_{t_{1},t_{2}}.\]
It follows that \(\widetilde{w}\) is a fixed point of the map \(\Gamma_{t_{1}}^{t_{2}}\) in \(\mathcal{H}_{t_{2}}^{q}\): by uniqueness, we obtain \(\widetilde{w}=\overline{w}_{2}\). Hence \(\overline{w}_{2}\) is the unique solution of (14) with \(t_{2}\) instead of \(t\), which we denote by \(w_{t_{2}}^{s,\phi}\).
This step-by-step argument can be repeated to cover the whole interval \([s,t]\). In this way, we obtain the unique solution \(w_{t}^{s,\phi}\) of (14) in \(\mathcal{H}_{t}^{q}\). The same procedure also works when the initial condition \(\phi\in\mathcal{H}^{q}\), i.e., when \(\phi\) is not necessarily \(\mathcal{F}_{s}-\)measurable. In such a case, it provides a unique solution \(w_{t}^{s,\phi}\in\mathcal{H}^{q}\).
The cocycle property in (15) follows by a similar reasoning. Indeed, if we fix \(u\in(s,t)\), then by the Volterra-type property of \(\overline{B}\) and \(\sigma\) (cfr. (18)) we have
\[w_{t}^{u,w_{u}^{s,\phi}}\Big{|}_{(0,u)}=w_{u}^{s,\phi}\big{|}_{(0,u)},\quad \mathbb{P}-\mathrm{a.s.} \tag{22}\]
Invoking again Assumption 1 as in (19),
\[w_{t}^{u,w_{u}^{s,\phi}}=\phi+\int_{s}^{u}\overline{B}\left(r,w_ {u}^{s,\phi}\right)\mathrm{d}r+\int_{u}^{t}\overline{B}\left(r,w_{t}^{u,w_{u}^ {s,\phi}}\right)\mathrm{d}r+\int_{s}^{t}\sigma\left(r\right)\mathrm{d}W_{r}\\ =\phi+\int_{s}^{t}\overline{B}\left(r,w_{t}^{u,w_{u}^{s,\phi}} \right)\mathrm{d}r+\int_{s}^{t}\sigma\left(r\right)\mathrm{d}W_{r},\]
hence the equality in (15) is inferred by the uniqueness of the solution of (14). The proof is now complete.
**Remark 1**.: _The cocycle property in (15) (see also (22)) yields \(w_{t}^{s,\phi}\left(\xi\right)=w_{u}^{s,\phi}\left(\xi\right)\) for a.e. \(\xi\in(0,u)\), \(\mathbb{P}-\)a.s., for every \(0\leq s\leq u\leq t\leq T\) and \(\phi\in\mathcal{H}^{q}\), \(q\geq 2\)._
**Remark 2**.: _For every \(p\in(2,(1-\alpha)^{-1})\), the fractional kernel \(k_{2}\) in (4) belongs to the space \(L^{p}\big{(}0,T;\mathbb{R}\big{)}\)._
_As a consequence, according to [23, Lemma 8.27, Theorem 8.29], the stochastic integral \(\Sigma_{s,t}\) in (6) belongs to the space_
\[\mathcal{L}_{t}^{p}=\left(L_{t}^{p}\left(\Omega;L^{p}\right),\left\|\cdot \right\|_{\mathcal{L}^{p}}\right),\quad\text{ where }\quad L^{p}=L^{p}\big{(}0,T;\mathbb{R}^{d}\big{)}.\]
_As before, the subscript \(t\) in the previous expression indicates a space of \(\mathcal{F}_{t}-\)measurable random variables. Moreover, the following inequality holds (cfr. (7)):_
\[\left\|\Sigma_{s,t}\right\|_{\mathcal{L}^{p}}\leq C_{d,p}\left\|k_{2}\right\|_{ p}\sqrt{t-s},\quad\text{for some }C_{d,p}>0. \tag{23}\]
_We denote by_
\[L_{\Box}^{p}\text{ the Banach space }L^{p}\big{(}\left(0,T\right)\times\left(0,T\right);\mathbb{R}^{d}\big{)}\text{, endowed with the norm }\left\|\cdot\right\|_{p,\Box}.\]
_In addition to Assumption 1, suppose that \(B\colon\Lambda\to L_{\Box}^{p}\) and that it satisfies_
\[\left\|B\left(w_{1}\right)\right\|_{p,\Box}\leq C_{0,p}\left(1+\left\|w_{1} \right\|_{p}\right),\qquad\left\|B\left(w_{1}\right)-B\left(w_{2}\right)\right\| _{p,\Box}\leq C_{0,p}\left\|w_{1}-w_{2}\right\|_{p}, \tag{24}\]
_for every \(w_{1},\,w_{2}\in\Lambda\), for some constant \(C_{0,p}=C_{0,p}(d,T)>0\). Note that \(\overline{B}\colon H\to H_{\Box}\) satisfies (24) for every \(w_{1},\,w_{2}\in L^{p}\)._
_In this framework, one can argue as in the proof of Theorem 3 to infer that, for every \(\phi\in\mathcal{L}_{s}^{p}\), there exists a unique solution \(w_{t}^{s,\phi}\) of (14) belonging to the space \(\mathcal{L}_{t}^{p}\)._
The following corollary to Theorem 3 gives a Lipschitz-type dependence of the solution \(w_{t}^{s,\phi}\) of (14) on the initial condition \(\phi\), which combined with (15) allows to prove the \(\mathbb{F}-\)Markov property of the process \((w_{t}^{s,\phi})_{t\in[s,T]}\).
**Corollary 4**.: _Let \(q\geq 2\). Under Assumption 1, there exists a constant \(C_{1}=C_{1}\left(d,q,T\right)>0\) such that, for every \(0\leq s<t\leq T\),_
\[\left\|w_{t}^{s,\phi}-w_{t}^{s,\psi}\right\|_{\mathcal{H}^{q}}\leq C_{1}\left\| \phi-\psi\right\|_{\mathcal{H}^{q}},\quad\phi,\,\psi\in\mathcal{H}^{q}. \tag{25}\]
_In addition, for all \(s\in[0,T]\) and \(\phi\in\mathcal{H}_{s}^{q}\), the process \((w_{t}^{s,\phi})_{t\in[s,T]}\) is \(\mathbb{F}-\)Markov, and_
\[\mathbb{E}\left[\Phi\left(w_{u}^{s,\phi}\right)\Big{|}\mathcal{F}_{t}\right]= \mathbb{E}\left[\Phi\left(w_{u}^{t,\psi}\right)\right]\big{|}_{\psi=w_{t}^{s, \phi}},\quad\mathbb{P}-\text{a.s., }s\leq t\leq u\leq T,\,\Phi\in\mathcal{B}_{b}(H), \tag{26}\]
_where \(\mathcal{B}_{b}(H)\) denotes the space of bounded Borel measurable functions from \(H\) to \(\mathbb{R}\)._
Proof.: Fix \(q\geq 2\), \(0\leq s<t\leq T\) and consider \(N=N\left(d,T\right)\in\mathbb{N}\) so big that \(2C_{0}\sqrt{T/N}<2^{1/q}\), where \(C_{0}=C_{0}\left(d,T\right)\) is the constant in (13). Moreover, take an equispaced partition \(\{t_{k}\}_{k=0}^{N}\) of \([s,t]\) where \(t_{0}=s\) and \(t_{N}=t\). By (13)-(14), for every \(\phi,\,\psi\in\mathcal{H}^{q}\),
\[\left\|w_{t_{1}}^{s,\phi}-w_{t_{1}}^{s,\psi}\right\|_{2}^{q}\leq 2 ^{q-1}\left\|\phi-\psi\right\|_{2}^{q}+2^{q-1}\left(\frac{T}{N}\right)^{\frac{q }{2}}\left\|\overline{B}\left(w_{t_{1}}^{s,\phi}\right)-\overline{B}\left(w_{t _{1}}^{s,\psi}\right)\right\|_{2,\Box}^{q}\\ \leq 2^{q-1}\left\|\phi-\psi\right\|_{2}^{q}+2^{q-1}C_{0}^{q} \left(\frac{T}{N}\right)^{\frac{q}{2}}\left\|w_{t_{1}}^{s,\phi}-w_{t_{1}}^{s, \psi}\right\|_{2}^{q},\quad\mathbb{P}-\text{a.s.},\]
hence
\[\left\|w_{t_{1}}^{s,\phi}-w_{t_{1}}^{s,\psi}\right\|_{2}^{q}\leq 2^{q-1}\left(1- 2^{q-1}C_{0}^{q}\left(\frac{T}{N}\right)^{\frac{q}{2}}\right)^{-1}\left\|\phi- \psi\right\|_{2}^{q},\quad\mathbb{P}-\text{a.s.}\]
Thus, by the cocycle property in (15), for every \(\phi,\,\psi\in\mathcal{H}^{q}\),
\[\left\|w_{t}^{s,\phi}-w_{t}^{s,\psi}\right\|_{2}^{q} =\left\|w_{t_{N}}^{t_{N-1},w_{t_{N-1}}^{s,\phi}}-w_{t_{N}}^{t_{N- 1},w_{t_{N-1}}^{s,\psi}}\right\|_{2}^{q}\] \[\leq 2^{q-1}\left(1-2^{q-1}C_{0}^{q}\left(\frac{T}{N}\right)^{ \frac{q}{2}}\right)^{-1}\left\|w_{t_{N-1}}^{t_{N-2},w_{t_{N-2}}^{s,\phi}}-w_{t_ {N-1}}^{t_{N-2},w_{t_{N-2}}^{s,\psi}}\right\|_{2}^{q}\] \[\leq 2^{N(q-1)}\left(1-2^{q-1}C_{0}^{q}\left(\frac{T}{N}\right)^{ \frac{q}{2}}\right)^{-N}\left\|\phi-\psi\right\|_{2}^{q},\quad\mathbb{P}- \text{a.s.},\]
which shows (25) upon taking expectations and \(q-\)th root, as desired.
The Markov property of the process \((w_{t}^{s,\phi})_{t\in[s,T]}\), \(\phi\in\mathcal{H}_{s}^{q}\), is a consequence of (26). In turn, the equality in (26) can be readily obtained by paralleling the monotone class argument in [10, Theorem 9.14], which essentially relies on the cocycle property in (15) and the Lipschitz-continuous dependence in (25). Thus, the proof is complete.
### First-order differentiability in the initial data
In this subsection we focus on deterministic initial conditions for (14), i.e., \(\phi\in H\). From now on, we denote the Hilbert space \(\mathcal{H}^{2}=L^{2}(\Omega;H)\) simply by \(\mathcal{H}\).
In order to study the first-order Frechet differentiability of \(w_{t}^{s,\phi}\) in \(H\), we require hypotheses on \(B\) which are stronger than Assumption 1. In fact, we need some conditions on the Frechet differentiability of \(B\) in the normed space \(\left(\Lambda,\left\|\cdot\right\|_{2}\right)\). In the sequel, we write \(\Lambda_{2}\) for \(\left(\Lambda,\left\|\cdot\right\|_{2}\right)\) to have a compact notation.
**Assumption 2**.: _The map \(B\colon\Lambda\to H_{\Box}\) satisfies Assumption 1. Moreover, \(B\) is \(\Lambda_{2}-\)Frechet differentiable, and there exists a constant \(C_{0}=C_{0}\left(d,T\right)>0\) such that_
\[\left\|DB\left(w_{1}\right)\left(w_{2}\right)\right\|_{2,\Box}\leq C_{0}\left\| w_{2}\right\|_{2},\quad w_{1},w_{2}\in\Lambda, \tag{27}\]
_and_
\[\left\|DB\left(w_{1}\right)-DB\left(w_{2}\right)\right\|_{\mathcal{L}\left( \Lambda_{2};H_{\Box}\right)}\leq C_{0}\left\|w_{1}-w_{2}\right\|_{2}^{\gamma}, \quad w_{1},w_{2}\in\Lambda,\text{ for some }\gamma\in\left(0,1\right]. \tag{28}\]
Without loss of generality, we assume the constant \(C_{0}\) in (27)-(28) to be the same as the one in (13).
Under Assumption 2, precisely by (27) and the theorem of extension of uniformly continuous functions, for every \(w_{1}\in\Lambda\) it is possible to extend \(DB\left(w_{1}\right)\in\mathcal{L}\left(\Lambda;H_{\Box}\right)\) to an operator \(\overline{DB}\left(w_{1}\right)\in\mathcal{L}\left(H;H_{\Box}\right)\) satisfying (27) for all \(w_{2}\in H\). Moreover, by (28),
\[\left\|\overline{DB}\left(w_{1}\right)-\overline{DB}\left(w_{2}\right)\right\| _{\mathcal{L}\left(H;H_{\Box}\right)}=\left\|DB\left(w_{1}\right)-DB\left(w_{ 2}\right)\right\|_{\mathcal{L}\left(\Lambda_{2};H_{\Box}\right)}\leq C_{0} \left\|w_{1}-w_{2}\right\|_{2}^{\gamma},\quad w_{1},w_{2}\in\Lambda, \tag{29}\]
hence we can extend (without changing the notation)
\[\overline{DB}\colon H\to\mathcal{L}\left(H;H_{\Box}\right)\text{, with }\overline{DB}\text{ satisfying (27)-(29) for every }w_{1},w_{2}\in H. \tag{30}\]
**Theorem 5**.: _Under Assumption 2, for every \(0\leq s\leq t\leq T\), the mapping \(w_{t}^{s,\cdot}\in C^{1+\gamma}\left(H;\mathcal{H}\right)\). In particular, for every \(\phi,\,\psi\in H\), \(Dw_{t}^{s,\phi}\psi\) is the unique solution in \(\mathcal{H}\) of the following equation:_
\[Dw_{t}^{s,\phi}\psi=\psi+\int_{s}^{t}D\overline{B}\left(w_{t}^{s,\phi}\right) \left(r,Dw_{t}^{s,\phi}\psi\right)\mathrm{d}r. \tag{34}\]
_Furthermore, there exists a constant \(C_{2}=C_{2}(d,T)>0\) such that, for every \(\phi,\,\psi,\,\eta\in H\), \(\mathbb{P}-\text{a.s.},\)_
\[\left\|Dw_{t}^{s,\phi}\eta\right\|_{2}\leq C_{2}\left\|\eta\right\|_{2},\qquad \left\|Dw_{t}^{s,\phi}\eta-Dw_{t}^{s,\psi}\eta\right\|_{2}\leq C_{2}\left\|w_ {t}^{s,\phi}-w_{t}^{s,\psi}\right\|_{2}^{\gamma}\left\|\eta\right\|_{2}. \tag{35}\]
Proof.: Fix \(0\leq s\leq t\leq T\) and \(\phi\in H\). Firstly, we prove the well-posedness in \(\mathcal{H}\) of the equation
\[w=\psi+\int_{s}^{t}D\overline{B}\left(w_{t}^{s,\phi}\right)\left(r,w\right) \mathrm{d}r,\quad\psi\in H. \tag{36}\]
Consider \(N=N\left(d,T\right)\in\mathbb{N}\) so big that \(C_{0}\sqrt{T/N}<1\), where \(C_{0}=C_{0}\left(d,T\right)\) is the constant in Assumptions 1-2. In addition, take an equispaced partition \(\{t_{k}\}_{k=0}^{N}\) of \([s,t]\) where \(t_{0}=s\) and \(t_{N}=t\): its mesh \(\Delta\leq T/N\). By (30) (see also (27)) and Bochner's theorem, the following estimate holds:
\[\left\|\int_{t_{0}}^{t_{1}}D\overline{B}\left(w_{t}^{s,\phi}\right) \left(w_{1}-w_{2}\right)\left(r,\cdot\right)\mathrm{d}r\right\|_{\mathcal{H}} \leq\sqrt{\Delta}\mathbb{E}\left[\int_{t_{0}}^{t_{1}}\int_{0}^{T}\left|D \overline{B}\left(w_{t}^{s,\phi}\right)\left(w_{1}-w_{2}\right)\right|^{2} \left(r,\xi\right)\mathrm{d}\xi\,\mathrm{d}r\right]^{\frac{1}{2}}\\ \leq\sqrt{\Delta}\mathbb{E}\left[\left\|D\overline{B}\left(w_{t} ^{s,\phi}\right)\left(w_{1}-w_{2}\right)\right\|_{2,\Box}^{2}\right]^{\frac{1} {2}}\leq C_{0}\sqrt{\Delta}\left\|w_{1}-w_{2}\right\|_{\mathcal{H}},\quad w_{ 1},w_{2}\in\mathcal{H}. \tag{37}\]
Thus, employing a fixed point argument as in the proof of Theorem 3, we deduce the existence of a unique solution \(\overline{w}_{1}^{\psi}\in\mathcal{H}\) of (36) with \(t_{1}\) instead of \(t\), for every \(\psi\in H\).
We claim that the operator \(Dw_{t_{1}}^{s,\phi}\colon H\to\mathcal{H}\) defined by \(Dw_{t_{1}}^{s,\phi}\psi=\overline{w}_{1}^{\psi}\), \(\psi\in H\), is the Frechet differential of \(w_{t_{1}}^{s,\phi}\). Indeed, the linearity of \(Dw_{t_{1}}^{s,\phi}\) is straightforward, while the continuity is ensured by the following computation, which can be argued from (36) similarly to (37):
\[\left\|Dw_{t_{1}}^{s,\phi}\psi\right\|_{2}\leq\left(1-C_{0}\sqrt{T/N}\right)^ {-1}\left\|\psi\right\|_{2},\quad\mathbb{P}-\text{a.s., }\psi\in H. \tag{38}\]
Moreover, recalling (14)-(34),
\[\left\|w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}-Dw_{t_{1}}^{s, \phi}h\right\|_{\mathcal{H}}\leq\sqrt{\Delta}\mathbb{E}\left[\left\|\overline{ B}\left(w_{t_{1}}^{s,\phi+h}\right)-\overline{B}\left(w_{t_{1}}^{s,\phi} \right)-D\overline{B}\left(w_{t_{1}}^{s,\phi}\right)Dw_{t_{1}}^{s,\phi}h\right\| _{2,\Box}^{2}\right]^{\frac{1}{2}}\\ \leq\sqrt{T/N}\left(\mathbb{E}\left[\left\|D\overline{B}\left(w_{ t_{1}}^{s,\phi}\right)\left(w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}-Dw_{t_{1}}^{s, \phi}h\right)\right\|_{2,\Box}^{2}\right]^{\frac{1}{2}}\right.\\ +\left.\mathbb{E}\left[\left\|\int_{0}^{1}\left(D\overline{B} \left(w_{t_{1}}^{s,\phi}+u\left(w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi} \right)\right)-D\overline{B}\left(w_{t_{1}}^{s,\phi}\right)\right)\left(w_{t_{1 }}^{s,\phi+h}-w_{t_{1}}^{s,\phi}\right)\mathrm{d}u\right\|_{2,\Box}^{2}\right]^ {\frac{1}{2}}\right)\\ \leq\sqrt{T/N}C_{0}\left(\left\|w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{ s,\phi}-Dw_{t_{1}}^{s,\phi}h\right\|_{\mathcal{H}}+\mathbb{E}\left[\left\|w_{t_{1}}^{s, \phi+h}-w_{t_{1}}^{s,\phi}\right\|_{2}^{2(1+\gamma)}\right]^{\frac{1}{2}}\right),\quad h\in H, \tag{39}\]
where we apply Taylor's formula on \(\overline{B}\) for the second inequality and (30) together with Bochner's theorem for the third. Notice that \(H\subset\mathcal{H}^{q}\) for every \(q\geq 2\). Therefore, by Corollary 4 with \(q=2(1+\gamma)\), from (39) we infer that
\[\left\|w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}-Dw_{t_{1}}^{s,\phi}h\right\|_{ \mathcal{H}}\leq\sqrt{T/N}C_{0}C_{1}^{1+\gamma}\left(1-\sqrt{T/N}C_{0}\right)^ {-1}\left\|h\right\|_{2}^{1+\gamma}=\mathrm{o}\left(\left\|h\right\|_{2} \right),\quad h\in H, \tag{40}\]
for some constant \(C_{1}=C_{1}(\gamma,d,T)>0\). This shows that \(Dw_{t_{1}}^{s,\phi}\) is the Frechet differential of \(w_{t_{1}}^{s,\phi}\), as desired.
Next, consider
\[w=Dw_{t_{1}}^{s,\phi}\psi+\int_{t_{1}}^{t_{2}}D\overline{B}\left(w_{t_{2}}^{s, \phi}\right)(r,w)\,\mathrm{d}r,\quad\psi\in H: \tag{41}\]
the well-posedness of this equation in \(\mathcal{H}\) can be obtained via a fixed-point argument as in the above step. We denote by \(\overline{w}_{2}^{\psi}\in\mathcal{H}\), \(\psi\in H\), the unique solution of (41).
We argue that \(\overline{w}_{2}^{\psi}\) is the unique solution of (36) with \(t_{2}\) instead of \(t\), for every \(\psi\in H\). By the Volterra-type property of \(D\overline{B}\) in (32) and (41) we have, \(\mathbb{P}-\)a.s.,
\[\left.\overline{w}_{2}^{\psi}\right|_{(0,t_{1})}=\left.Dw_{t_{1}}^{s,\phi}\psi\right|_{(0,t_{1})}.\]
Furthermore, thanks to the relation \(w_{t_{2}}^{s,\phi}=w_{t_{2}}^{t_{1},w_{t_{1}}^{s,\phi}}\) in (15) and the properties of \(\overline{B}\) under Assumption 1 we can write, \(\mathbb{P}-\)a.s.,
\[\left.w_{t_{2}}^{s,\phi}\right|_{(0,t_{1})}=\left.w_{t_{1}}^{s,\phi}\right|_{(0,t_{1})}, \tag{42}\]
see Remark 1. Consequently, by the property of \(D\overline{B}\) in (33) and recalling that \(Dw_{t_{1}}^{s,\phi}\psi\) satisfies (36) with \(t_{1}\) instead of \(t\), from (41) we conclude that, \(\mathbb{P}-\)a.s.,
\[\overline{w}_{2}^{\psi}=\psi+\int_{s}^{t_{1}}D\overline{B}\left( w_{t_{1}}^{s,\phi}\right)\left(r,Dw_{t_{1}}^{s,\phi}\psi\right)\mathrm{d}r+ \int_{t_{1}}^{t_{2}}D\overline{B}\left(w_{t_{2}}^{s,\phi}\right)\left(r, \overline{w}_{2}^{\psi}\right)\mathrm{d}r\\ =\psi+\int_{s}^{t_{2}}D\overline{B}\left(w_{t_{2}}^{s,\phi} \right)\left(r,\overline{w}_{2}^{\psi}\right)\mathrm{d}r. \tag{43}\]
Hence \(\overline{w}_{2}^{\psi}\) solves (36) with \(t\) replaced by \(t_{2}\); to prove that it is in fact the unique solution, we consider another random variable \(\widetilde{w}\in\mathcal{H}\) satisfying (43). Then, by (32)-(33),
\[1_{(0,t_{1})}\widetilde{w}=1_{(0,t_{1})}\left(\psi+\int_{s}^{t_{1}}D \overline{B}\left(w_{t_{1}}^{s,\phi}\right)\left(r,1_{(0,t_{1})}\widetilde{w }\right)\mathrm{d}r\right). \tag{44}\]
We observe that also \(1_{(0,t_{1})}\overline{w}_{1}^{\psi}\in\mathcal{H}\) satisfies (44). Therefore, using Bochner's theorem and Jensen's inequality, by (30) we can compute
\[\left\|1_{(0,t_{1})}\left(\overline{w}_{1}^{\psi}-\widetilde{w} \right)\right\|_{\mathcal{H}}^{2} \leq\mathbb{E}\left[\left(\int_{s}^{t_{1}}\left\|D\overline{B} \left(w_{t_{1}}^{s,\phi}\right)\left(r,1_{(0,t_{1})}\left(\overline{w}_{1}^{ \psi}-\widetilde{w}\right)\right)\right\|_{2}\mathrm{d}r\right)^{2}\right]\] \[\leq\Delta\mathbb{E}\left[\left\|D\overline{B}\left(w_{t_{1}}^{s, \phi}\right)\left(1_{(0,t_{1})}\left(\overline{w}_{1}^{\psi}-\widetilde{w} \right)\right)\right\|_{2,\Box}^{2}\right]\leq\Delta C_{0}^{2}\left\|1_{(0,t_{ 1})}\left(\overline{w}_{1}-\widetilde{w}\right)\right\|_{\mathcal{H}}^{2}, \tag{45}\]
which allows us to conclude, recalling that \(\sqrt{\Delta}C_{0}<1\),
\[1_{(0,t_{1})}\widetilde{w}=1_{(0,t_{1})}\overline{w}_{1}^{\psi},\quad\mathbb{ P}-\text{a.s.}\]
Going back to (43), by (36) and the previous equality we have, \(\mathbb{P}-\)a.s.,
\[\widetilde{w}=\psi+\int_{s}^{t_{1}}D\overline{B}\left(w_{t_{1}}^{s,\phi} \right)\left(r,\overline{w}_{1}^{\psi}\right)\mathrm{d}r+\int_{t_{1}}^{t_{2}}D \overline{B}\left(w_{t_{2}}^{s,\phi}\right)(r,\widetilde{w})\,\mathrm{d}r= \overline{w}_{1}^{\psi}+\int_{t_{1}}^{t_{2}}D\overline{B}\left(w_{t_{2}}^{s, \phi}\right)(r,\widetilde{w})\,\mathrm{d}r.\]
It follows that \(\widetilde{w}\) satisfies (41): by uniqueness, we obtain \(\widetilde{w}=\overline{w}_{2}^{\psi}\). Hence \(\overline{w}_{2}^{\psi}\) is the unique solution of (36) in \(\mathcal{H}\) with \(t_{2}\) instead of \(t\).
We define the operator \(Dw_{t_{2}}^{s,\phi}\colon H\to\mathcal{H}\) by \(Dw_{t_{2}}^{s,\phi}\psi=\overline{w}_{2}^{\psi},\,\psi\in H,\) and claim that it is the Frechet differential of \(w_{t_{2}}^{s,\phi}\). To see this, note that the linearity of \(Dw_{t_{2}}^{s,\phi}\) is a consequence of the well-posedness of (43). As for the continuity, it is ensured by the following computations, where we use (30)-(38)-(41):
\[\left\|Dw_{t_{2}}^{s,\phi}\psi\right\|_{2}\leq\left\|Dw_{t_{1}} ^{s,\phi}\psi\right\|_{2}+\int_{t_{1}}^{t_{2}}\left\|D\overline{B}\left(w_{t_{2} }^{s,\phi}\right)(r,Dw_{t_{2}}^{s,\phi}\psi)\right\|_{2}\mathrm{d}r\\ \leq\left(1-C_{0}\sqrt{T/N}\right)^{-1}\left\|\psi\right\|_{2}+ \sqrt{\Delta}C_{0}\left\|Dw_{t_{2}}^{s,\phi}\psi\right\|_{2},\quad\mathbb{P}- \text{a.s., }\psi\in H,\]
whence
\[\left\|Dw_{t_{2}}^{s,\phi}\psi\right\|_{2}\leq\left(1-C_{0}\sqrt{T/N}\right)^{-2} \left\|\psi\right\|_{2},\quad\mathbb{P}-\text{a.s., }\psi\in H. \tag{46}\]
Moreover, by the cocycle property in (15) and reasoning as in (39), by (14)-(41) we obtain, for some constant \(c>0\),
\[\left\|w_{t_{2}}^{s,\phi+h}-w_{t_{2}}^{s,\phi}-Dw_{t_{2}}^{s,\phi }h\right\|_{\mathcal{H}}=\left\|w_{t_{2}}^{t_{1},w_{t_{1}}^{s,\phi+h}}-w_{t_{2} }^{t_{1},w_{t_{1}}^{s,\phi}}-Dw_{t_{1}}^{s,\phi}h-\int_{t_{1}}^{t_{2}}D \overline{B}\left(w_{t_{2}}^{s,\phi}\right)\left(r,Dw_{t_{2}}^{s,\phi}h\right) \mathrm{d}r\right\|_{\mathcal{H}}\\ \leq\left\|w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}-Dw_{t_{1}}^{s,\phi}h\right\|_{\mathcal{H}}+\left\|\int_{t_{1}}^{t_{2}}\left(\overline{B} \left(w_{t_{2}}^{s,\phi+h}\right)-\overline{B}\left(w_{t_{2}}^{s,\phi}\right) -D\overline{B}\left(w_{t_{2}}^{s,\phi}\right)Dw_{t_{2}}^{s,\phi}h\right) \left(r,\cdot\right)\mathrm{d}r\right\|_{\mathcal{H}}\\ \leq c\left\|h\right\|_{2}^{1+\gamma}=\mathrm{o}\left(\left\|h \right\|_{2}\right),\quad h\in H, \tag{47}\]
where we also employ (40) in the last inequality. This shows that \(Dw_{t_{2}}^{s,\phi}\) is the Frechet differential of \(w_{t_{2}}^{s,\phi}\), as desired.
Repeating this argument \(N-\)times, we deduce that the operator \(Dw_{t}^{s,\phi}\colon H\to\mathcal{H}\) defined by \(Dw_{t}^{s,\phi}\psi=\overline{w}_{N}^{\psi}\), where \(\overline{w}_{N}^{\psi}\) is the unique solution of (36) in \(\mathcal{H}\), for every \(\psi\in H\), is the Frechet differential of \(w_{t}^{s,\phi}\). In particular, the first bound in (35) is true, because (cfr. (38)-(46))
\[\left\|Dw_{t}^{s,\phi}\psi\right\|_{2}\leq\left(1-C_{0}\sqrt{T/N}\right)^{-N} \left\|\psi\right\|_{2}=\overline{C}\left\|\psi\right\|_{2},\quad\mathbb{P}- \text{a.s., }\phi,\psi\in H. \tag{48}\]
As regards the second inequality in (35), by (30), (34) and (48) we have, for every \(\phi\), \(\psi\), \(\eta\in H\), \(\mathbb{P}-\)a.s.,
\[\left\|Dw_{t_{1}}^{s,\phi}\eta-Dw_{t_{1}}^{s,\psi}\eta\right\|_{ 2}=\left\|\int_{s}^{t_{1}}\left(D\overline{B}\left(w_{t}^{s,\phi}\right)Dw_{t_ {1}}^{s,\phi}\eta-D\overline{B}\left(w_{t}^{s,\psi}\right)Dw_{t_{1}}^{s,\psi} \eta\right)\left(r,\cdot\right)\,\mathrm{d}r\right\|_{2}\\ \leq\sqrt{\Delta}\left(\left\|D\overline{B}\left(w_{t}^{s,\phi} \right)\left(Dw_{t_{1}}^{s,\phi}\eta-Dw_{t_{1}}^{s,\psi}\eta\right)\right\|_{2, \square}+\left\|\left(D\overline{B}\left(w_{t}^{s,\phi}\right)-D\overline{B} \left(w_{t}^{s,\psi}\right)\right)Dw_{t_{1}}^{s,\psi}\eta\right\|_{2,\square}\right)\] \[\leq C_{0}\sqrt{T/N}\left(\left\|Dw_{t_{1}}^{s,\phi}\eta-Dw_{t_ {1}}^{s,\psi}\eta\right\|_{2}+\overline{C}\left\|w_{t}^{s,\phi}-w_{t}^{s,\psi }\right\|_{2}^{\gamma}\left\|\eta\right\|_{2}\right),\]
where in the first equality we also use (33) and (42) with \(t\) instead of \(t_{2}\). It follows that
\[\left\|Dw_{t_{1}}^{s,\phi}\eta-Dw_{t_{1}}^{s,\psi}\eta\right\|_{2}\leq\left( 1-C_{0}\sqrt{T/N}\right)^{-1}C_{0}\overline{C}\sqrt{T/N}\left\|w_{t}^{s,\phi}- w_{t}^{s,\psi}\right\|_{2}^{\gamma}\left\|\eta\right\|_{2}.\]
By (41), we sequentially iterate this computation to obtain the second bound in (35) with
\[C_{2}=\max\{\overline{C},NC_{0}\overline{C}^{2}\sqrt{T/N}\}.\]
At this point, taking expectations and using Corollary 4 with \(q=2\gamma\) (recall that \(H\subset\mathcal{H}^{q}\)), by Jensen's inequality we infer that, for some constant \(C>0\),
\[\left\|Dw_{t}^{s,\phi}-Dw_{t}^{s,\psi}\right\|_{\mathcal{L}(H; \mathcal{H})}=\sup_{\left\|\eta\right\|_{2}\leq 1}\mathbb{E}\left[\left\|Dw_{t}^{s,\phi} \eta-Dw_{t}^{s,\psi}\eta\right\|_{2}^{2}\right]^{\frac{1}{2}}\leq C_{2}\mathbb{E }\left[\left\|w_{t}^{s,\phi}-w_{t}^{s,\psi}\right\|_{2}^{2\gamma}\right]^{\frac{1 }{2}}\\ \leq C\left\|\phi-\psi\right\|_{2}^{\gamma},\quad\phi,\,\psi\in H.\]
This shows that \(Dw_{t}^{s,\cdot}\in C^{\gamma}\left(H;\mathcal{L}(H;\mathcal{H})\right)\), completing the proof.
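Heuristically, (34) is the first variation equation of (14): the stochastic integral \(\Sigma_{s,t}\) does not depend on \(\phi\), so formally differentiating (14) in the direction \(\psi\) yields
\[Dw_{t}^{s,\phi}\psi=D_{\phi}\left[\phi+\int_{s}^{t}\overline{B}\left(r,w_{t}^{s,\phi}\right)\mathrm{d}r+\Sigma_{s,t}\right]\psi=\psi+\int_{s}^{t}D\overline{B}\left(w_{t}^{s,\phi}\right)\left(r,Dw_{t}^{s,\phi}\psi\right)\mathrm{d}r,\]
which is precisely (34); the analogous formal differentiation of (34) produces the second-order equation (58) of the next subsection.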
### Second-order differentiability in the initial data
Recalling the normed space \(\Lambda_{2}=\left(\Lambda,\left\|\cdot\right\|_{2}\right)\), in the sequel we identify \(\mathcal{L}(\Lambda_{2};\mathcal{L}(\Lambda_{2};H_{\square}))\) with the space \(\mathcal{L}(\Lambda_{2},\Lambda_{2};H_{\square})\) of bilinear forms from \(\Lambda_{2}\times\Lambda_{2}\) to \(H_{\square}\) in the usual way.
For the purpose of investigating the second-order Frechet differential in \(H\) of \(w_{t}^{s,\phi}\), we need to require another condition on \(B\).
**Assumption 3**.: _The map \(B\colon\Lambda\to H_{\square}\) satisfies Assumption 2. Moreover, \(B\) is twice \(\Lambda_{2}-\)Frechet differentiable, and there exists a constant \(C_{0}=C_{0}\left(d,T\right)>0\) such that_
\[\left\|D^{2}B\left(w_{1}\right)\left(w_{2},w_{3}\right)\right\|_{2,\square}\leq C _{0}\left\|w_{2}\right\|_{2}\left\|w_{3}\right\|_{2},\quad w_{1},w_{2},w_{3} \in\Lambda, \tag{49}\]
_and_
\[\left\|D^{2}B\left(w_{1}\right)-D^{2}B\left(w_{2}\right)\right\|_{\mathcal{L} \left(\Lambda_{2},\Lambda_{2};H_{\square}\right)}\leq C_{0}\left\|w_{1}-w_{2} \right\|_{2}^{\beta},\quad w_{1},w_{2}\in\Lambda,\text{ for some }\beta\in\left(0,1\right]. \tag{50}\]
Once again, we can assume that the constant \(C_{0}\) in (49)-(50) is the same as the one in (13) and (27)-(28). By (49), we invoke the theorem of extension of uniformly continuous functions to extend, for every \(w_{1},w_{2}\in\Lambda\), the map \(D^{2}B\left(w_{1}\right)\left(w_{2},\cdot\right)\in\mathcal{L}\left(\Lambda_{ 2};H_{\square}\right)\) to an operator \(\overline{D^{2}B}\left(w_{1}\right)\left(w_{2},\cdot\right)\in\mathcal{L} \left(H;H_{\square}\right)\) satisfying (49) for all \(w_{3}\in H\). It follows that, by linearity,
\[\left\|\overline{D^{2}B}\left(w_{1}\right)\left(w_{2}\right)- \overline{D^{2}B}\left(w_{1}\right)\left(w_{3}\right)\right\|_{\mathcal{L} \left(H;H_{\square}\right)}=\left\|\overline{D^{2}B}\left(w_{1}\right)\left(w_ {2}-w_{3}\right)\right\|_{\mathcal{L}\left(H;H_{\square}\right)}\\ \leq C_{0}\left\|w_{2}-w_{3}\right\|_{2},\quad w_{1},w_{2},w_{3} \in\Lambda,\]
hence we can extend (without changing notation) \(\overline{D^{2}B}(w_{1})\in\mathcal{L}(H,H;H_{\square})\), for all \(w_{1}\in\Lambda\). At this point, by (50) we infer that, for every \(w_{1},w_{2}\in\Lambda\),
\[\left\|\overline{D^{2}B}\left(w_{1}\right)-\overline{D^{2}B}\left(w_{2} \right)\right\|_{\mathcal{L}\left(H,H;H_{\square}\right)}=\left\|\overline{D^{ 2}B}\left(w_{1}\right)-\overline{D^{2}B}\left(w_{2}\right)\right\|_{\mathcal{L }\left(\Lambda_{2},\Lambda_{2};H_{\square}\right)}\leq C_{0}\left\|w_{1}-w_{2} \right\|_{2}^{\beta}, \tag{51}\]
whence, via another extension, from now on we consider
\[\overline{D^{2}B}\colon H\to\mathcal{L}(H,H;H_{\Box})\text{ satisfying (49)-(51) for every }w_{i}\in H,\,i=1,2,3. \tag{52}\]
We want to show that \(\overline{B}\) is twice \(H-\)Frechet differentiable, with \(D^{2}\overline{B}=\overline{D^{2}B}.\) By Taylor's formula applied to \(DB\),
\[\left(D\overline{B}\left(w_{2}\right)-D\overline{B}\left(w_{1} \right)-\overline{D^{2}B}\left(w_{1}\right)\left(w_{2}-w_{1}\right)\right)w_{3 }=r\left(w_{1},w_{2},w_{3}\right),\quad w_{1},w_{2},w_{3}\in\Lambda,\text{ where} \tag{53}\] \[r\left(x,y,z\right)=\left(\int_{0}^{1}\left(\overline{D^{2}B} \left(x+h\left(y-x\right)\right)-\overline{D^{2}B}\left(x\right)\right)\left(y -x\right)\mathrm{d}h\right)z,\quad x,\,y,\,z\in H.\]
We note that \(r\colon H\times H\times H\to H_{\square}\) is continuous. Indeed, for every \(x,y,z\in H\) and every sequence \(((x_{n},y_{n},z_{n}))_{n}\subset H\times H\times H\) such that \((x_{n},y_{n},z_{n})\to(x,y,z)\) as \(n\to\infty\), with some algebraic computations we obtain, by (52),
\[\left\|r\left(x_{n},y_{n},z_{n}\right)-r\left(x,y,z\right)\right\| _{2,\square}\leq 2C_{0}\left\|y_{n}-x_{n}\right\|_{2}\left\|z_{n}-z\right\|_{2}\\ +C_{0}\left\|z\right\|_{2}\left(2\left\|y_{n}-x_{n}+x-y\right\| _{2}+\left(\frac{1}{\beta+1}\left\|y_{n}-y+x-x_{n}\right\|_{2}^{\beta}+2\left\| x_{n}-x\right\|_{2}^{\beta}\right)\left\|y-x\right\|_{2}\right)\underset{n\to\infty}{ \longrightarrow}0.\]
It then follows from the continuity of \(D\overline{B}\) in \(H\) and (52) that (53) holds for every \(w_{1},w_{2},w_{3}\in H\). Moreover, observing that, by (52), \(\left\|r\left(x,y,\cdot\right)\right\|_{\mathcal{L}\left(H;H_{\square}\right)} \leq C_{0}(\beta+1)^{-1}\left\|y-x\right\|_{2}^{1+\beta}\), \(x,y\in H\), we conclude that
\[D\overline{B}\left(w_{2}\right)-D\overline{B}\left(w_{1}\right)- \overline{D^{2}B}\left(w_{1}\right)\left(w_{2}-w_{1}\right)=\mathrm{o}\left( \left\|w_{2}-w_{1}\right\|_{2}\right),\quad w_{1},w_{2}\in H.\]
Therefore \(\overline{B}\) is twice \(H-\)Frechet differentiable, with \(D^{2}\overline{B}=\overline{D^{2}B}\).
We also note that, for every \(w_{1}\), \(w_{2}\), \(w_{3}\in H\) and \(0<t\leq T,\)
\[D^{2}\overline{B}(w_{1})\left(w_{2},w_{3}\right)\left(r,\cdot\right)\in H\text{ is of Volterra-type, for a.e. }r\in(0,t), \tag{54}\]
and that
\[D^{2}\overline{B}(w_{1})\left(w_{2},w_{3}\right)\left(r,\cdot\right)\text{ depends on }w_{i}\text{ only via }w_{i}\big{|}_{\left(0,t\right)},\,i=1,2,3,\,\text{for a.e. }r\in(0,t)\text{:} \tag{55}\]
these properties are inherited from \(D\overline{B}\) (cfr. (32)-(33) in the discussion following Assumption 2).
In conclusion, we notice that, by (52) (see also (49)),
\[\left\|D^{2}\overline{B}\right\|_{\infty}=\sup_{w\in H}\left\|D^{2}\overline{B}( w)\right\|_{\mathcal{L}(H,H;H_{\Box})}\leq C_{0}. \tag{56}\]
As a consequence, by the mean value theorem we deduce that (29) (see also (30)) holds with \(\gamma=1\), i.e., under Assumption 3 the map \(D\overline{B}\colon H\to\mathcal{L}(H;H_{\Box})\) is globally Lipschitz-continuous. Since \(D\overline{B}\) is also bounded (see (27)-(30)), in what follows we suppose, without loss of generality, that
\[\text{under Assumption 3},\,D\overline{B}\colon H\to\mathcal{L}(H;H_{\Box})\text{ satisfies (30) with }\gamma=\beta. \tag{57}\]
The next result shows that, in the framework of this subsection, the solution \(w_{t}^{s,\phi}\) of (14), considered as a map from \(H\) to \(\mathcal{H}\), is twice \(H-\)Frechet differentiable.
**Theorem 6**.: _Under Assumption 3, for every \(0\leq s\leq t\leq T\), the mapping \(w_{t}^{s,\cdot}\in C^{2+\beta}\left(H;\mathcal{H}\right)\). In particular, for every \(\phi,\,\psi,\,\eta\in H\), \(D^{2}w_{t}^{s,\phi}\left(\psi,\eta\right)\) is the unique solution in \(\mathcal{H}\) of the following equation:_
\[D^{2}w_{t}^{s,\phi}\left(\psi,\eta\right)=\int_{s}^{t}\left(D^{2}\overline{B} \left(w_{t}^{s,\phi}\right)\left(Dw_{t}^{s,\phi}\psi,Dw_{t}^{s,\phi}\eta \right)+D\overline{B}\left(w_{t}^{s,\phi}\right)D^{2}w_{t}^{s,\phi}\left(\psi,\eta\right)\right)\left(r,\cdot\right)\mathrm{d}r. \tag{58}\]
_Furthermore, there exists a constant \(C_{3}=C_{3}(d,T)>0\) such that, for every \(\phi,\psi,\eta,\theta\in H\), \(\mathbb{P}-\)a.s.,_
\[\left\|D^{2}w_{t}^{s,\phi}\left(\eta,\theta\right)\right\|_{2}\leq C_{3}\left\| \eta\right\|_{2}\left\|\theta\right\|_{2},\qquad\left\|\left(D^{2}w_{t}^{s, \phi}-D^{2}w_{t}^{s,\psi}\right)\left(\eta,\theta\right)\right\|_{2}\leq C_{3} \left\|w_{t}^{s,\phi}-w_{t}^{s,\psi}\right\|_{2}^{\beta}\left\|\eta\right\|_{ 2}\left\|\theta\right\|_{2}. \tag{59}\]
Proof.: Fix \(0\leq s\leq t\leq T\) and \(\phi\in H\). We first want to prove the well-posedness in \(\mathcal{H}\) of the equation
\[w=\int_{s}^{t}\left(D^{2}\overline{B}\left(w_{t}^{s,\phi}\right)\left(Dw_{t}^{ s,\phi}\psi,Dw_{t}^{s,\phi}\eta\right)+D\overline{B}\left(w_{t}^{s,\phi} \right)w\right)\left(r,\cdot\right)\mathrm{d}r,\quad\psi,\,\eta\in H. \tag{60}\]
Consider \(N=N\left(d,T\right)\in\mathbb{N}\) so large that \(C_{0}\sqrt{T/N}<1\), where \(C_{0}=C_{0}\left(d,T\right)\) is the constant in Assumptions 1-2-3. In addition, take an equispaced partition \(\{t_{k}\}_{k=0}^{N}\) of \([s,t]\) where \(t_{0}=s\) and \(t_{N}=t\): its mesh \(\Delta\leq T/N\). Under Assumption 3, the bound in (37) holds and allows us to employ a fixed point argument as in the proof of Theorem 3 (see also Theorem 5) to deduce the existence of a unique solution \(\overline{w}_{1}^{\psi,\eta}\in\mathcal{H}\) of (60) with \(t_{1}\) instead of \(t\), for every \(\psi,\eta\in H\).
We claim that the operator \(D^{2}w_{t_{1}}^{s,\phi}\colon H\times H\to\mathcal{H}\) defined by \(D^{2}w_{t_{1}}^{s,\phi}(\psi,\eta)=\overline{w}_{1}^{\psi,\eta}\), \(\psi,\eta\in H\), is the second-order Frechet differential of \(w_{t_{1}}^{s,\phi}\). Indeed, considering that \(Dw_{t_{1}}^{s,\phi}\in\mathcal{L}(H;\mathcal{H})\), \(D\overline{B}(w_{t_{1}}^{s,\phi})\in\mathcal{L}(H;H_{\Box})\) and \(D^{2}\overline{B}(w_{t_{1}}^{s,\phi})\in\mathcal{L}(H,H;H_{\Box})\), the fact that \(D^{2}w_{t_{1}}^{s,\phi}\) is bilinear directly follows from (60). As for the boundedness, by (30)-(52) (see also (27)-(49)) and (35) we can compute, applying Bochner's theorem to (58), for some constant \(C_{2}=C_{2}(d,T)>0\),
\[\left\|D^{2}w_{t_{1}}^{s,\phi}\left(\psi,\eta\right)\right\|_{2} \leq C_{0}\sqrt{\Delta}\left(\left\|Dw_{t_{1}}^{s,\phi}\psi\right\|_{2}\left\|Dw _{t_{1}}^{s,\phi}\eta\right\|_{2}+\left\|D^{2}w_{t_{1}}^{s,\phi}(\psi,\eta) \right\|_{2}\right)\\ \leq C_{0}\sqrt{T/N}\left(C_{2}\left\|\psi\right\|_{2}\left\|\eta \right\|_{2}+\left\|D^{2}w_{t_{1}}^{s,\phi}(\psi,\eta)\right\|_{2}\right), \quad\mathbb{P}-\text{a.s.},\,\psi,\eta\in H. \tag{61}\]
Hence
\[\left\|D^{2}w_{t_{1}}^{s,\phi}\left(\psi,\eta\right)\right\|_{2}\leq\left(1-C_{ 0}\sqrt{T/N}\right)^{-1}C_{0}C_{2}\sqrt{T/N}\left\|\psi\right\|_{2}\left\|\eta \right\|_{2},\quad\mathbb{P}-\text{a.s.},\,\psi,\eta\in H. \tag{62}\]
We now observe that, by Taylor's formula applied to \(D\overline{B}\) (cfr. (53)), from (34)-(58) we have, for every \(h\in H\),
\[\left\|Dw_{t_{1}}^{s,\phi+h}-Dw_{t_{1}}^{s,\phi}-D^{2}w_{t_{1}}^{s,\phi}h \right\|_{\mathcal{L}(H;\mathcal{H})}\leq\mathbf{I}_{1}+\mathbf{II}_{1}+\mathbf{ III}_{1}+\mathbf{IV}_{1}, \tag{63}\]
where we set
\[\mathbf{I}_{1} =\sup_{\left\|\eta\right\|_{2}\leq 1}\mathbb{E}\left[\left\|\int_{s}^{t_ {1}}D\overline{B}\left(w_{t_{1}}^{s,\phi}\right)\left(Dw_{t_{1}}^{s,\phi+h} \eta-Dw_{t_{1}}^{s,\phi}\eta-D^{2}w_{t_{1}}^{s,\phi}\left(h,\eta\right)\right) \left(r,\cdot\right)\,\mathrm{d}r\right\|_{2}^{2}\right]^{\frac{1}{2}},\] \[\mathbf{II}_{1} =\sup_{\left\|\eta\right\|_{2}\leq 1}\mathbb{E}\left[\left\|\int_{s}^{t _{1}}\left(D^{2}\overline{B}\left(w_{t_{1}}^{s,\phi}\right)\left(w_{t_{1}}^{s, \phi+h}-w_{t_{1}}^{s,\phi}-Dw_{t_{1}}^{s,\phi}h,Dw_{t_{1}}^{s,\phi}\eta\right) \right)\left(r,\cdot\right)\,\mathrm{d}r\right\|_{2}^{2}\right]^{\frac{1}{2}},\] \[\mathbf{III}_{1} =\sup_{\left\|\eta\right\|_{2}\leq 1}\mathbb{E}\left[\left\|\int_{s}^{t _{1}}\left(D\overline{B}\left(w_{t_{1}}^{s,\phi+h}\right)-D\overline{B}\left( w_{t_{1}}^{s,\phi}\right)\right)\left(Dw_{t_{1}}^{s,\phi+h}\eta-Dw_{t_{1}}^{s, \phi}\eta\right)\left(r,\cdot\right)\,\mathrm{d}r\right\|_{2}^{2}\right]^{ \frac{1}{2}},\] \[\mathbf{IV}_{1} =\sup_{\left\|\eta\right\|_{2}\leq 1}\mathbb{E}\left[\left\|\int_{s}^{t _{1}}\left(\int_{0}^{1}\left(D^{2}\overline{B}\left(w_{t_{1}}^{s,\phi}+v \left(w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}\right)\right)\right.\right.\right.\] \[\left.\left.\left.-D^{2}\overline{B}\left(w_{t_{1}}^{s,\phi} \right)\right)\left(w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}\right)\mathrm{d}v \right)Dw_{t_{1}}^{s,\phi}\eta\,\left(r,\cdot\right)\mathrm{d}r\right\|_{2}^{2 }\right]^{\frac{1}{2}}.\]
By (30) (see, in particular, (27))
\[\left|\mathbf{I}_{1}\right|\leq C_{0}\sqrt{T/N}\sup_{\left\|\eta \right\|_{2}\leq 1}\mathbb{E}\left[\left\|Dw_{t_{1}}^{s,\phi+h}\eta-Dw_{t_{1}}^{s, \phi}\eta-D^{2}w_{t_{1}}^{s,\phi}\left(h,\eta\right)\right\|_{2}^{2}\right]^{ \frac{1}{2}}\\ =C_{0}\sqrt{T/N}\left\|Dw_{t_{1}}^{s,\phi+h}-Dw_{t_{1}}^{s,\phi}- D^{2}w_{t_{1}}^{s,\phi}h\right\|_{\mathcal{L}\left(H;\mathcal{H}\right)}.\]
Moreover, considering (29)-(35) (see also (57)) and Corollary 4, which we can apply with \(q=2(1+\beta)\) because \(\phi,h\in H\subset\mathcal{H}^{q}\), for some \(C_{1}=C_{1}(\beta,d,T)>0\) we can write
\[\left|\mathbf{III}_{1}\right| \leq\sqrt{\Delta}\sup_{\left\|\eta\right\|_{2}\leq 1}\mathbb{E} \left[\left\|\left(D\overline{B}\left(w_{t_{1}}^{s,\phi+h}\right)-D\overline{B }\left(w_{t_{1}}^{s,\phi}\right)\right)\left(Dw_{t_{1}}^{s,\phi+h}\eta-Dw_{t_{ 1}}^{s,\phi}\eta\right)\right\|_{2,\square}^{2}\right]^{\frac{1}{2}}\] \[\leq\left\|D^{2}\overline{B}\right\|_{\infty}C_{2}\sqrt{T/N}\sup_ {\left\|\eta\right\|_{2}\leq 1}\mathbb{E}\left[\left\|w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s, \phi}\right\|_{2}^{2(1+\beta)}\left\|\eta\right\|_{2}^{2}\right]^{\frac{1}{2}} \leq C_{0}C_{1}^{1+\beta}C_{2}\sqrt{T/N}\left\|h\right\|_{2}^{1+\beta},\]
where we also use the mean value theorem on \(D\overline{B}\) and (56). As for \(\mathbf{II}_{1}\), by (35)-(52) we compute
\[\left|\mathbf{II}_{1}\right| \leq\sqrt{\Delta}\sup_{\left\|\eta\right\|_{2}\leq 1}\mathbb{E} \left[\left\|D^{2}\overline{B}\left(w_{t_{1}}^{s,\phi}\right)\left(w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}-Dw_{t_{1}}^{s,\phi}h,Dw_{t_{1}}^{s,\phi}\eta\right) \right\|_{2,\square}^{2}\right]^{\frac{1}{2}}\] \[\leq C_{0}C_{2}\sqrt{\Delta}\sup_{\left\|\eta\right\|_{2}\leq 1} \mathbb{E}\left[\left\|w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}-Dw_{t_{1}}^{s, \phi}h\right\|_{2}^{2}\left\|\eta\right\|_{2}^{2}\right]^{\frac{1}{2}}\] \[\leq C_{0}C_{2}\sqrt{T/N}\left\|w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{ s,\phi}-Dw_{t_{1}}^{s,\phi}h\right\|_{\mathcal{H}}=\mathrm{o}\left(\left\|h\right\|_{2} \right).\]
Finally, again by (35)-(52) (see also (51)) and Corollary 4, employed with \(q=2(1+\beta)\), we have
\[\left|\mathbf{IV}_{1}\right| \leq\sqrt{\Delta}\,\sup_{\left\|\eta\right\|_{2}\leq 1}\mathbb{E}\left[\left(\int_{0}^{1}\left\|D^{2}\overline{B}\left(w_{t_{1}}^{s,\phi}+v\left(w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}\right)\right)-D^{2}\overline{B}\left(w_{t_{1}}^{s,\phi}\right)\right\|_{\mathcal{L}\left(H,H;H_{\square}\right)}\mathrm{d}v\right)^{2}\right.\\ \left.\times\left\|w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}\right\|_{2}^{2}\left\|Dw_{t_{1}}^{s,\phi}\eta\right\|_{2}^{2}\right]^{\frac{1}{2}}\\ \leq C_{0}C_{2}\sqrt{T/N}\sup_{\left\|\eta\right\|_{2}\leq 1}\mathbb{E}\left[\left\|w_{t_{1}}^{s,\phi+h}-w_{t_{1}}^{s,\phi}\right\|_{2}^{2(1+\beta)}\left\|\eta\right\|_{2}^{2}\right]^{\frac{1}{2}}\leq C_{0}C_{1}^{1+\beta}C_{2}\sqrt{T/N}\left\|h\right\|_{2}^{1+\beta}.\]
Going back to (63), we conclude that
\[\left\|Dw_{t_{1}}^{s,\phi+h}-Dw_{t_{1}}^{s,\phi}-D^{2}w_{t_{1}}^{s,\phi}h\right\|_ {\mathcal{L}\left(H;\mathcal{H}\right)}\leq\left(1-C_{0}\sqrt{T/N}\right)^{-1} \left(\mathbf{II}_{1}+\mathbf{III}_{1}+\mathbf{IV}_{1}\right)=\mathrm{o}\left( \left\|h\right\|_{2}\right),\quad h\in H. \tag{64}\]
This shows that \(D^{2}w_{t_{1}}^{s,\phi}\) is the second-order Frechet differential of \(w_{t_{1}}^{s,\phi}\), as desired.
Next, consider
\[w=D^{2}w_{t_{1}}^{s,\phi}\left(\psi,\eta\right)\,+\int_{t_{1}}^{t_{2}}\left(D^{2} \overline{B}\left(w_{t_{2}}^{s,\phi}\right)\left(Dw_{t_{2}}^{s,\phi}\psi,Dw_{t _{2}}^{s,\phi}\eta\right)+D\overline{B}\left(w_{t_{2}}^{s,\phi}\right)w\right) \left(r,\cdot\right)\mathrm{d}r,\quad\psi,\eta\in H. \tag{65}\]
Arguing as in the previous step, we infer the well-posedness of this equation in \(\mathcal{H}\): we denote by \(\overline{w}_{2}^{\psi,\eta}\in\mathcal{H}\) its unique solution, for every \(\psi,\,\eta\in H.\)
Given \(\psi,\,\eta\in H\), we now show that \(\overline{w}_{2}^{\psi,\eta}\) is the unique solution of (60) with \(t_{2}\) instead of \(t\). By the Volterra-type property of \(D^{2}\overline{B}\) [resp., \(D\overline{B}\)] in (54) [resp., (32)] and (65) we have, \(\mathbb{P}-\)a.s.,
\[\left.\overline{w}_{2}^{\psi,\eta}\right|_{(0,t_{1})}=D^{2}w_{t_{1}}^{s,\phi} (\psi,\eta)\Big{|}_{(0,t_{1})}.\]
Moreover, since \(Dw_{t_{2}}^{s,\phi}\psi\) satisfies (41), we infer that, \(\mathbb{P}-\)a.s.,
\[Dw_{t_{2}}^{s,\phi}\psi\Big{|}_{(0,t_{1})}=Dw_{t_{1}}^{s,\phi}\psi\Big{|}_{(0,t_{1})},\]
with an analogous result holding for \(\eta\). Consequently, recalling also (42) and Remark 1, by the property of \(D^{2}\overline{B}\) [resp., \(D\overline{B}\)] in (55) [resp., (33)], from (65) we obtain, \(\mathbb{P}-\)a.s.,
\[\overline{w}_{2}^{\psi,\eta} =\int_{s}^{t_{1}}\left(D^{2}\overline{B}\left(w_{t_{1}}^{s,\phi}\right)\left(Dw_{t_{1}}^{s,\phi}\psi,Dw_{t_{1}}^{s,\phi}\eta\right)+D\overline{B}\left(w_{t_{1}}^{s,\phi}\right)D^{2}w_{t_{1}}^{s,\phi}(\psi,\eta)\right)\left(r,\cdot\right)\mathrm{d}r\\ \qquad\qquad\qquad+\int_{t_{1}}^{t_{2}}\left(D^{2}\overline{B}\left(w_{t_{2}}^{s,\phi}\right)\left(Dw_{t_{2}}^{s,\phi}\psi,Dw_{t_{2}}^{s,\phi}\eta\right)+D\overline{B}\left(w_{t_{2}}^{s,\phi}\right)\overline{w}_{2}^{\psi,\eta}\right)\left(r,\cdot\right)\mathrm{d}r\\ =\int_{s}^{t_{2}}\left(D^{2}\overline{B}\left(w_{t_{2}}^{s,\phi}\right)\left(Dw_{t_{2}}^{s,\phi}\psi,Dw_{t_{2}}^{s,\phi}\eta\right)+D\overline{B}\left(w_{t_{2}}^{s,\phi}\right)\overline{w}_{2}^{\psi,\eta}\right)\left(r,\cdot\right)\mathrm{d}r, \tag{66}\]
where we also use the fact that \(D^{2}w_{t_{1}}^{s,\phi}(\psi,\eta)\) solves (60) with \(t_{1}\) instead of \(t\). Hence \(\overline{w}_{2}^{\psi,\eta}\) solves (60) with \(t\) replaced by \(t_{2}\). In order to prove that it is in fact the unique solution of this equation, we consider another random variable \(\widetilde{w}\in\mathcal{H}\) satisfying (66). Then, by (54)-(55),
\[1_{(0,t_{1})}\widetilde{w}=1_{(0,t_{1})}\left(\int_{s}^{t_{1}}\left(D^{2} \overline{B}\left(w_{t_{1}}^{s,\phi}\right)\left(Dw_{t_{1}}^{s,\phi}\psi,Dw_{ t_{1}}^{s,\phi}\eta\right)+D\overline{B}\left(w_{t_{1}}^{s,\phi}\right)1_{(0,t_{1})} \widetilde{w}\right)\left(r,\cdot\right)\mathrm{d}r\right). \tag{67}\]
We observe that also \(1_{(0,t_{1})}\overline{w}_{1}^{\psi,\eta}\in\mathcal{H}\) satisfies (67). Therefore we can perform the same computations as in (45) to deduce that
\[1_{(0,t_{1})}\widetilde{w}=1_{(0,t_{1})}\overline{w}_{1}^{\psi,\eta},\quad \mathbb{P}-\mathrm{a.s.}\]
Going back to (66), by the previous equality we have, \(\mathbb{P}-\)a.s.,
\[\widetilde{w} =\int_{s}^{t_{1}}\left(D^{2}\overline{B}\left(w_{t_{1}}^{s,\phi} \right)\left(Dw_{t_{1}}^{s,\phi}\psi,Dw_{t_{1}}^{s,\phi}\eta\right)+D\overline{ B}\left(w_{t_{1}}^{s,\phi}\right)\widetilde{w}\right)\left(r,\cdot\right) \mathrm{d}r\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\int_{t_{1}}^{t_{2}}\left(D^ {2}\overline{B}\left(w_{t_{2}}^{s,\phi}\right)\left(Dw_{t_{2}}^{s,\phi}\psi,Dw_{ t_{2}}^{s,\phi}\eta\right)+D\overline{B}\left(w_{t_{2}}^{s,\phi}\right) \widetilde{w}\right)\left(r,\cdot\right)\mathrm{d}r\] \[=\overline{w}_{1}^{\psi,\eta}+\int_{t_{1}}^{t_{2}}\left(D^{2} \overline{B}\left(w_{t_{2}}^{s,\phi}\right)\left(Dw_{t_{2}}^{s,\phi}\psi,Dw_{t_{2} }^{s,\phi}\eta\right)+D\overline{B}\left(w_{t_{2}}^{s,\phi}\right)\widetilde{w} \right)\left(r,\cdot\right)\mathrm{d}r.\]
It follows that \(\widetilde{w}\) satisfies (65): by uniqueness, we obtain \(\widetilde{w}=\overline{w}_{2}^{\psi,\eta}\). Hence \(\overline{w}_{2}^{\psi,\eta}\) is the unique solution of (60) in \(\mathcal{H}\) with \(t_{2}\) instead of \(t\).
We define the operator \(D^{2}w_{t_{2}}^{s,\phi}\colon H\times H\to\mathcal{H}\) by \(D^{2}w_{t_{2}}^{s,\phi}(\psi,\eta)=\overline{w}_{2}^{\psi,\eta}\), \(\psi,\eta\in H,\) and claim that it is the second-order Frechet differential of \(w_{t_{2}}^{s,\phi}\). Indeed, as we have argued for \(D^{2}w_{t_{1}}^{s,\phi}\), the map \(D^{2}w_{t_{2}}^{s,\phi}\) is bilinear thanks to the well-posedness of (66). As for the boundedness, arguing as in (61), by (62)-(65) we can
write, for every \(\psi\), \(\eta\in H\), \(\mathbb{P}-\)a.s.,
\[\left\|D^{2}w_{t_{2}}^{s,\phi}(\psi,\eta)\right\|_{2} \leq\left\|D^{2}w_{t_{1}}^{s,\phi}(\psi,\eta)\right\|_{2}\\ \qquad\qquad+\int_{t_{1}}^{t_{2}}\left\|\left(D^{2}\overline{B}\left(w_{t_{2}}^{s,\phi}\right)\left(Dw_{t_{2}}^{s,\phi}\psi,Dw_{t_{2}}^{s,\phi}\eta\right)+D\overline{B}\left(w_{t_{2}}^{s,\phi}\right)D^{2}w_{t_{2}}^{s,\phi}(\psi,\eta)\right)(r,\cdot)\right\|_{2}\mathrm{d}r\\ \leq C_{0}C_{2}\left(\left(1-C_{0}\sqrt{T/N}\right)^{-1}\sqrt{T/N}+\sqrt{\Delta}\right)\left\|\psi\right\|_{2}\left\|\eta\right\|_{2}+\sqrt{\Delta}C_{0}\left\|D^{2}w_{t_{2}}^{s,\phi}(\psi,\eta)\right\|_{2},\]
whence
\[\left\|D^{2}w_{t_{2}}^{s,\phi}(\psi,\eta)\right\|_{2}\leq 2C_{0}C_{2}\left(1-C_{0}\sqrt{T/N}\right)^{-2}\sqrt{T/N}\left\|\psi\right\|_{2}\left\|\eta\right\|_{2},\quad\mathbb{P}-\text{a.s., }\psi,\,\eta\in H. \tag{68}\]
Moreover, combining (41) with (65), we can argue as in (63) to infer that
\[\left\|Dw_{t_{2}}^{s,\phi+h}-Dw_{t_{2}}^{s,\phi}-D^{2}w_{t_{2}}^{s,\phi}h \right\|_{\mathcal{L}(H;\mathcal{H})}=\mathrm{o}\left(\left\|h\right\|_{2} \right),\quad h\in H,\]
which shows that \(D^{2}w_{t_{2}}^{s,\phi}\) is the second-order Frechet differential of \(w_{t_{2}}^{s,\phi}\), as desired.
This reasoning can be repeated \(N-\)times to deduce that the operator \(D^{2}w_{t}^{s,\phi}\colon H\times H\to\mathcal{H}\) defined by \(D^{2}w_{t}^{s,\phi}(\psi,\eta)=\overline{w}_{N}^{\psi,\eta}\), where \(\overline{w}_{N}^{\psi,\eta}\) is the unique solution of (60) in \(\mathcal{H}\), for every \(\psi\), \(\eta\in H\), is the second-order Frechet differential of \(w_{t}^{s,\phi}\). In particular, the first bound in (59) is true, because (cfr. (62)-(68))
\[\left\|D^{2}w_{t}^{s,\phi}\left(\psi,\eta\right)\right\|_{2} \leq NC_{0}C_{2}\left(1-C_{0}\sqrt{T/N}\right)^{-N}\sqrt{T/N} \left\|\psi\right\|_{2}\left\|\eta\right\|_{2}\eqqcolon\widetilde{C}\left\| \psi\right\|_{2}\left\|\eta\right\|_{2},\] \[\mathbb{P}-\text{a.s., }\phi,\psi,\eta\in H. \tag{69}\]
As for the second inequality in (59), by (30), (35), (52), (57), (58) and (69) we compute, for every \(\phi,\psi,\eta,\theta\in H\), \(\mathbb{P}\)-a.s.,
\[\left\|D^{2}w_{t_{1}}^{s,\phi}\left(\eta,\theta\right)-D^{2}w_{t_{1}}^{s,\psi}\left(\eta,\theta\right)\right\|_{2}\\ =\left\|\int_{s}^{t_{1}}\left(D^{2}\overline{B}\left(w_{t}^{s,\phi}\right)\left(Dw_{t}^{s,\phi}\eta,Dw_{t}^{s,\phi}\theta\right)-D^{2}\overline{B}\left(w_{t}^{s,\psi}\right)\left(Dw_{t}^{s,\psi}\eta,Dw_{t}^{s,\psi}\theta\right)+D\overline{B}\left(w_{t}^{s,\phi}\right)D^{2}w_{t_{1}}^{s,\phi}\left(\eta,\theta\right)-D\overline{B}\left(w_{t}^{s,\psi}\right)D^{2}w_{t_{1}}^{s,\psi}\left(\eta,\theta\right)\right)\left(r,\cdot\right)\mathrm{d}r\right\|_{2}\\ \leq\sqrt{\Delta}\Big(\left\|\left(D^{2}\overline{B}\left(w_{t}^{s,\phi}\right)-D^{2}\overline{B}\left(w_{t}^{s,\psi}\right)\right)\left(Dw_{t}^{s,\phi}\eta,Dw_{t}^{s,\phi}\theta\right)\right\|_{2,\square}+\left\|D^{2}\overline{B}\left(w_{t}^{s,\psi}\right)\left(\left(Dw_{t}^{s,\phi}-Dw_{t}^{s,\psi}\right)\eta,Dw_{t}^{s,\phi}\theta\right)\right\|_{2,\square}\\ \qquad+\left\|D^{2}\overline{B}\left(w_{t}^{s,\psi}\right)\left(Dw_{t}^{s,\psi}\eta,\left(Dw_{t}^{s,\phi}-Dw_{t}^{s,\psi}\right)\theta\right)\right\|_{2,\square}+\left\|\left(D\overline{B}\left(w_{t}^{s,\phi}\right)-D\overline{B}\left(w_{t}^{s,\psi}\right)\right)D^{2}w_{t_{1}}^{s,\phi}\left(\eta,\theta\right)\right\|_{2,\square}\\ \qquad+\left\|D\overline{B}\left(w_{t}^{s,\psi}\right)\left(D^{2}w_{t_{1}}^{s,\phi}\left(\eta,\theta\right)-D^{2}w_{t_{1}}^{s,\psi}\left(\eta,\theta\right)\right)\right\|_{2,\square}\Big)\\ \leq C_{0}\sqrt{T/N}\Big(\left(\widetilde{C}+3C_{2}\right)\left\|w_{t}^{s,\phi}-w_{t}^{s,\psi}\right\|_{2}^{\beta}\left\|\eta\right\|_{2}\left\|\theta\right\|_{2}+\left\|D^{2}w_{t_{1}}^{s,\phi}\left(\eta,\theta\right)-D^{2}w_{t_{1}}^{s,\psi}\left(\eta,\theta\right)\right\|_{2}\Big),\]
whence
\[\left\|D^{2}w_{t_{1}}^{s,\phi}\left(\eta,\theta\right)-D^{2}w_{t_ {1}}^{s,\psi}\left(\eta,\theta\right)\right\|_{2}\leq\left(1-C_{0}\sqrt{T/N} \right)^{-1}C_{0}\left(\widetilde{C}+3C_{2}\right)\sqrt{T/N}\left\|w_{t}^{s, \phi}-w_{t}^{s,\psi}\right\|_{2}^{\beta}\left\|\eta\right\|_{2}\left\|\theta \right\|_{2}.\]
By (65), we sequentially iterate this computation to obtain the second inequality in (59) with
\[C_{3}=\max\{\widetilde{C},N\left(1-C_{0}\sqrt{T/N}\right)^{-N}C_{0}(\widetilde{ C}+3C_{2})\sqrt{T/N}\}.\]
Thus, taking expectations and using Corollary 4 with \(q=2\), by Jensen's inequality we deduce that, for some constant \(c>0\),
\[\left\|D^{2}w_{t}^{s,\phi}-D^{2}w_{t}^{s,\psi}\right\|_{\mathcal{L}(H,H; \mathcal{H})}=\sup_{\left\|\eta\right\|_{2},\left\|\theta\right\|_{2}\leq 1} \mathbb{E}\left[\left\|D^{2}w_{t}^{s,\phi}\left(\eta,\theta\right)-D^{2}w_{t}^{ s,\psi}\left(\eta,\theta\right)\right\|_{2}^{2}\right]^{\frac{1}{2}}\\ \leq C_{3}\mathbb{E}\left[\left\|w_{t}^{s,\phi}-w_{t}^{s,\psi} \right\|_{2}^{2\beta}\right]^{\frac{1}{2}}\leq c\left\|\phi-\psi\right\|_{2}^{ \beta},\quad\phi,\,\psi\in H.\]
This shows that \(D^{2}w_{t}^{s,\cdot}\in C^{\beta}\left(H;\mathcal{L}(H,H;\mathcal{H})\right)\), completing the proof.
## 4 The Kolmogorov equation
Recall the definition of the map \(\sigma\colon[0,T]\to\mathcal{L}(\mathbb{R}^{d};H)\) in (5). Given \(u\colon[0,T]\times H\to\mathbb{R}\) and a terminal condition \(\Phi\colon H\to\mathbb{R}\), in this section we investigate the following _Kolmogorov backward equation_ in integral form:
\[u\left(t,\phi\right)=\Phi\left(\phi\right)+\int_{t}^{T}\left\langle\nabla u \left(r,\phi\right),B\left(r,\phi\right)\right\rangle_{H}\mathrm{d}r+\frac{1} {2}\int_{t}^{T}\mathrm{Tr}\left(D^{2}u\left(r,\phi\right)\sigma\left(r\right) \sigma\left(r\right)^{*}\right)\mathrm{d}r,\]
\[t\in[0,T]\,,\,\phi\in\Lambda. \tag{70}\]
Our aim is to find a solution of (70) via the random variables \(w_{T}^{t,\,\phi}\in\mathcal{H}\) satisfying (14) for every \(t\in[0,T]\) and \(\phi\in H\). This is done in Theorem 9, for which we need a couple of preparatory results.
**Lemma 7**.: _There exists a constant \(C_{\alpha,d}>0\) such that_
\[\left\|\int_{s}^{t}\left(\sigma\left(t\right)-\sigma\left(r\right)\right) \mathrm{d}W_{r}\right\|_{\mathcal{H}}\leq C_{\alpha,d}\left|t-s\right|^{ \alpha},\quad 0\leq s\leq t\leq T. \tag{71}\]
Proof.: Fix \(0\leq s\leq t\leq T\) and denote by \(\left(e_{k}\right)_{k=1,\ldots,d}\) the canonical basis of \(\mathbb{R}^{d}\). Using straightforward substitutions, by (5) we compute, for every \(k=1,\ldots,d\),
\[\left\|\left(\sigma\left(t\right)-\sigma\left(r\right)\right)e_{ k}\right\|_{2}^{2}=\int_{0}^{T}\left|k_{2}\left(\xi-t\right)1_{\left\{\xi>t\right\}}-k_ {2}\left(\xi-r\right)1_{\left\{\xi>r\right\}}\right|^{2}\mathrm{d}\xi\\ =\int_{0}^{t-r}\left|k_{2}\left(\xi\right)\right|^{2}\mathrm{d} \xi+\int_{0}^{T-t}\left|k_{2}\left(\xi+t-r\right)-k_{2}\left(\xi\right) \right|^{2}\mathrm{d}\xi,\quad r\in\left[s,t\right]. \tag{72}\]
Recalling that (see (4)) \(k_{2}\left(u\right)=\frac{1}{\Gamma\left(\alpha\right)}u^{\alpha-1}\), \(\alpha\in(1/2,1)\), \(u>0\), for every \(r\in\left[s,t\right]\) we have
\[\int_{0}^{t-r}\left|k_{2}\left(\xi\right)\right|^{2}\mathrm{d}\xi=\frac{1}{ \left(\Gamma(\alpha)\right)^{2}\left(2\alpha-1\right)}\left|t-r\right|^{2 \alpha-1},\]
and
\[\int_{0}^{T-t}\left|k_{2}\left(\xi+t-r\right)-k_{2}\left(\xi\right)\right|^{2} \mathrm{d}\xi\leq\frac{1}{\left(\Gamma(\alpha)\right)^{2}}\left(\int_{0}^{ \infty}\left(\left(\xi+1\right)^{\alpha-1}-\xi^{\alpha-1}\right)^{2}\mathrm{d }\xi\right)\left|t-r\right|^{2\alpha-1}.\]
Therefore the discussion at the end of Page 98 in [10] ensures that (71) holds with
\[C_{\alpha,d}=\frac{\sqrt{d}}{\Gamma\left(\alpha\right)}\left(\frac{1}{2\alpha }\right)^{\frac{1}{2}}\left(\frac{1}{2\alpha-1}+\int_{0}^{\infty}\left(\left( \xi+1\right)^{\alpha-1}-\xi^{\alpha-1}\right)^{2}\mathrm{d}\xi\right)^{\frac{1 }{2}},\]
completing the proof.
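As a side check, added here for the reader's convenience: the improper integral appearing in \(C_{\alpha,d}\) is indeed finite for \(\alpha\in\left(\frac{1}{2},1\right)\), so the constant above is well defined. Indeed,

\[\int_{0}^{\infty}\left(\left(\xi+1\right)^{\alpha-1}-\xi^{\alpha-1}\right)^{2}\mathrm{d}\xi<\infty,\]

since the integrand behaves like \(\xi^{2\alpha-2}\) as \(\xi\downarrow 0\), which is integrable because \(2\alpha-2>-1\), and like \(\left(1-\alpha\right)^{2}\xi^{2\alpha-4}\) as \(\xi\to\infty\), which is integrable because \(2\alpha-4<-1\).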
The following lemma analyzes some properties of the solution \(w_{t}^{s,\phi}\in\mathcal{L}_{t}^{p}\) of (14) in the framework of Remark 2. Recall that \(\mathcal{L}_{t}^{p}=L_{t}^{p}\left(\Omega;L^{p}\right),\) where \(L^{p}=L^{p}\big{(}0,T;\mathbb{R}^{d}\big{)}\), and that \(L_{\square}^{p}=L^{p}\big{(}\left(0,T\right)\times\left(0,T\right);\mathbb{R} ^{d}\big{)}\).
**Lemma 8**.: _Suppose that \(B\colon\Lambda\to L_{\square}^{p}\) satisfies Assumption 1 and (24), for some \(p\in\big{[}2,\left(1-\alpha\right)^{-1}\big{)}\). Then there exists a constant \(C_{1,p}=C_{1,p}\left(\alpha,d,T\right)>0\) such that_
\[\left\|w_{t}^{s,\phi}\right\|_{\mathcal{L}^{p}}\leq C_{1,p}\left(1+\left\| \phi\right\|_{p}\right),\quad 0\leq s\leq t\leq T,\,\phi\in L^{p}. \tag{73}\]
_Furthermore, for every \(\phi\in L^{p}\), there is a constant \(C_{\phi,p}=C_{\phi,p}(\alpha,d,T)>0\) such that_
\[\left\|w_{t}^{s,\phi}-\phi\right\|_{\mathcal{L}^{p}}\leq C_{\phi,p}\sqrt{t-s}, \quad 0\leq s\leq t\leq T. \tag{74}\]
When \(p=2\), the hypotheses of Lemma 8 reduce to Assumption 1 and \(\left\|\cdot\right\|_{\mathcal{L}^{p}}=\left\|\cdot\right\|_{\mathcal{H}}\).
Proof.: Fix \(0\leq s\leq t\leq T\) and \(\phi\in L^{p}\). Recall that, under the hypotheses of the lemma, the unique solution \(w_{t}^{s,\phi}\in\mathcal{H}\) of (14) belongs to the space \(\mathcal{L}_{t}^{p}\), see Remark 2.
Consider \(N=N(d,p,T)\in\mathbb{N}\) so large that \(C_{0,p}(2T/N)^{1-\frac{1}{p}}<1\), where \(C_{0,p}\) is the constant appearing in (24). Take an equispaced partition \(\left\{t_{k}\right\}_{k=0}^{N}\) of \(\left[s,t\right]\) with \(t_{0}=s\) and \(t_{N}=t\): its mesh \(\Delta\leq T/N\). By (14)-(24) we have, using Bochner's theorem and Jensen's inequality,
\[\left\|w_{t_{1}}^{s,\phi}\right\|_{\mathcal{L}^{p}}\leq\left\|\phi\right\|_{p} +C_{0,p}(2\Delta)^{1-\frac{1}{p}}\left(1+\left\|w_{t_{1}}^{s,\phi}\right\|_{ \mathcal{L}^{p}}\right)+\left\|\int_{s}^{t_{1}}\sigma\left(r\right)\mathrm{d} W_{r}\right\|_{\mathcal{L}^{p}},\]
which in turn implies, by (23), for some constant \(c=c(d,p,T)>0\),
\[\left\|w_{t_{1}}^{s,\phi}\right\|_{\mathcal{L}^{p}}\leq\left(1-C_{0,p}\left(2 T/N\right)^{1-\frac{1}{p}}\right)^{-1}\left(\left\|\phi\right\|_{p}+c\left\|k_{2} \right\|_{p}+C_{0,p}\left(2T/N\right)^{1-\frac{1}{p}}\right).\]
At this point, invoking \(N-\)times the cocycle property in (15) we obtain (73).
As for (74), using (23)-(24) we compute, for some constant \(C=C(d,p)>0\), recalling the notation \(\Sigma_{s,t}\) introduced in (6),
\[\left\|w_{t}^{s,\phi}-\phi\right\|_{\mathcal{L}^{p}} \leq\mathbb{E}\left[\left(\int_{s}^{t}\left\|\overline{B}\left(r,w_{t}^{s,\phi}\right)\right\|_{p}\mathrm{d}r\right)^{p}\right]^{\frac{1}{p}}+ \left\|\Sigma_{s,t}\right\|_{\mathcal{L}^{p}}\] \[\leq(t-s)^{1-\frac{1}{p}}\mathbb{E}\left[\int_{s}^{t}\mathrm{d}r \int_{0}^{T}\left|\overline{B}\left(w_{t}^{s,\phi}\right)\right|^{p}\left(r, \xi\right)\mathrm{d}\xi\right]^{\frac{1}{p}}+C\left\|k_{2}\right\|_{p}\sqrt{t-s}\] \[\leq\sqrt{t-s}\left(C\left\|k_{2}\right\|_{p}+2^{1-\frac{1}{p}}T^ {\frac{1}{2}-\frac{1}{p}}C_{0,p}\left(1+\left\|w_{t}^{s,\phi}\right\|_{ \mathcal{L}^{p}}\right)\right).\]
Thus, by (73) the proof is complete.
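Although it is not needed in the proof above, it may be worth recording, as an elementary side computation, one reason for the restriction \(p<\left(1-\alpha\right)^{-1}\): it is exactly the range of exponents for which the kernel \(k_{2}\) of (4) belongs to \(L^{p}\), so that quantities such as \(\left\|k_{2}\right\|_{p}\) in the estimates above are finite. Indeed,

\[\left\|k_{2}\right\|_{p}^{p}=\frac{1}{\left(\Gamma\left(\alpha\right)\right)^{p}}\int_{0}^{T}t^{\,p\left(\alpha-1\right)}\,\mathrm{d}t<\infty\quad\Longleftrightarrow\quad p\left(1-\alpha\right)<1\quad\Longleftrightarrow\quad p<\left(1-\alpha\right)^{-1}.\]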
We are now ready to prove the main result of the paper, which shows the connection between the solution \(w_{T}^{t,\phi}\), \(t\in\left[0,T\right]\), \(\phi\in H\), of (14) and the backward Kolmogorov equation in integral form (70).
**Theorem 9**.: _Suppose that \(B\colon\Lambda\to L_{\Box}^{p}\) satisfies Assumption 3 and (24), for some \(p\in\left(2,\left(1-\alpha\right)^{-1}\right)\). In addition, let the function \(r\mapsto B(r,\phi)\) belong to \(C\big{(}[0,T];H\big{)}\), for every \(\phi\in\Lambda\). Fix \(\Phi\in C_{b}^{2+\beta}\left(H\right)\) and define the map \(u\colon\,[0,T]\times H\to\mathbb{R}\) by_
\[u\left(t,\phi\right)=\mathbb{E}\left[\Phi\left(w_{T}^{t,\phi}\right)\right], \quad t\in\left[0,T\right],\,\phi\in H, \tag{75}\]
_where \(w_{T}^{t,\phi}\in\mathcal{H}\) is the unique solution of (14). Then \(u\in L^{\infty}\big{(}0,T;C_{b}^{2+\beta}\left(H\right)\big{)}\cap C([0,T] \times H;\mathbb{R})\) and solves the Kolmogorov backward equation in integral form (70)._
Proof.: The fact that the function \(u\) defined in (75) belongs to \(L^{\infty}\big{(}0,T;C_{b}^{2+\beta}\left(H\right)\big{)}\cap C([0,T]\times H; \mathbb{R})\) is one of the results contained in Lemma 11 (see Appendix A). Consequently, here we only focus on proving that \(u\) solves (70).
Fix \(0\leq s<t\leq T\) and \(\phi\in\Lambda\). Since \(\Lambda\subset\mathcal{H}_{s}^{q}\), \(q\geq 2\), we can use (26) in Corollary 4 to write
\[u\left(s,\phi\right)=\mathbb{E}\left[\mathbb{E}\left[\Phi\left(w_{T}^{s,\phi} \right)\left|\mathcal{F}_{t}\right|\right]\right]=\mathbb{E}\left[\mathbb{E} \left[\Phi\left(w_{T}^{t,\psi}\right)\right]\Big{|}_{\psi=w_{t}^{s,\phi}} \right]=\mathbb{E}\left[u\left(t,w_{t}^{s,\phi}\right)\right]. \tag{76}\]
Taylor's formula applied to the mapping \(u\left(t,\cdot\right)\in C_{b}^{2+\beta}\left(H\right)\) yields, denoting by \(h=w_{t}^{s,\phi}-\phi\in\mathcal{H}\),
\[u\left(t,w_{t}^{s,\phi}\right)-u\left(t,\phi\right)=\left\langle \nabla u\left(t,\phi\right),h\right\rangle_{H}+\frac{1}{2}\left\langle D^{2}u \left(t,\phi\right)h,h\right\rangle_{H}+r_{u\left(t,\cdot\right)}\left(\phi,w_{ t}^{s,\phi}\right),\quad\text{ where }\] \[r_{u\left(t,\cdot\right)}\left(x,y\right)=\int_{0}^{1}\left(1-r \right)\left\langle\left(D^{2}u\left(t,x+r\left(y-x\right)\right)-D^{2}u \left(t,x\right)\right)\left(y-x\right),y-x\right\rangle_{H}\mathrm{d}r,\quad x,\,y\in H. \tag{77}\]
To keep the notation simple, in this proof we denote by \(\overline{B}_{s,t}(w_{t}^{s,\phi})=\int_{s}^{t}\overline{B}(r,w_{t}^{s,\phi})\, \mathrm{d}r\in\mathcal{H}\). Using the expression in (14) for \(h=w_{t}^{s,\phi}-\phi\) and noticing that \(\mathbb{E}[\Sigma_{s,t}]=0\in H\) by [10, Proposition 4.28], we take expectations in the previous chain of equalities to obtain, from (76),
\[u\left(s,\phi\right)-u\left(t,\phi\right)= \left\langle\nabla u\left(t,\phi\right),\mathbb{E}\left[\int_{s} ^{t}\overline{B}\left(r,w_{t}^{s,\phi}\right)\mathrm{d}r\right]\right\rangle_{H}\] \[+\frac{1}{2}\mathbb{E}\left[\left\langle D^{2}u\left(t,\phi \right)\left(\overline{B}_{s,t}\left(w_{t}^{s,\phi}\right)+\Sigma_{s,t} \right),\overline{B}_{s,t}\left(w_{t}^{s,\phi}\right)+\Sigma_{s,t}\right\rangle _{H}\right]\!+\mathbb{E}\left[r_{u\left(t,\cdot\right)}\!\left(\phi,w_{t}^{s, \phi}\right)\right]. \tag{78}\]
For all \(N\in\mathbb{N}\), consider an equispaced partition \(\{t_{k}^{(N)}\}_{k=0}^{N}\) of \([s,T]\) with mesh \(\Delta_{N}\), where \(t_{0}^{(N)}=s\) and \(t_{N}^{(N)}=T\). By (78), we have
\[u\left(s,\phi\right)-\Phi\left(\phi\right)=\sum_{k=1}^{N}\left(u\left(t_{k-1}^{(N)},\phi\right)-u\left(t_{k}^{(N)},\phi\right)\right)\\ =\sum_{k=1}^{N}\left\langle\nabla u\left(t_{k}^{(N)},\phi\right),\mathbb{E}\left[\overline{B}_{t_{k-1}^{(N)},t_{k}^{(N)}}\left(w_{t_{k}^{(N)}}^{t_{k-1}^{(N)},\phi}\right)\right]\right\rangle_{H}\\ \quad+\frac{1}{2}\sum_{k=1}^{N}\mathbb{E}\left[\left\langle D^{2}u\left(t_{k}^{(N)},\phi\right)\left(\overline{B}_{t_{k-1}^{(N)},t_{k}^{(N)}}\left(w_{t_{k}^{(N)}}^{t_{k-1}^{(N)},\phi}\right)+\Sigma_{t_{k-1}^{(N)},t_{k}^{(N)}}\right),\overline{B}_{t_{k-1}^{(N)},t_{k}^{(N)}}\left(w_{t_{k}^{(N)}}^{t_{k-1}^{(N)},\phi}\right)+\Sigma_{t_{k-1}^{(N)},t_{k}^{(N)}}\right\rangle_{H}\right]\\ \quad+\sum_{k=1}^{N}\mathbb{E}\left[r_{u\left(t_{k}^{(N)},\cdot\right)}\left(\phi,w_{t_{k}^{(N)}}^{t_{k-1}^{(N)},\phi}\right)\right]=\mathbf{I}^{N}+\mathbf{II}^{N}+\mathbf{III}^{N}. \tag{79}\]
In the sequel, we omit the superscript \(N\) from the points of the partition to ease notation, i.e., we write \(t_{k}\) for \(t_{k}^{(N)}\). Firstly, we analyze \(\mathbf{I}^{N}\), which we decompose using the properties of the Bochner's integral as follows:
\[\mathbf{I}^{N}= \sum_{k=1}^{N}\left\langle\nabla u\left(t_{k},\phi\right),B\left(t_{k},\phi\right)\right\rangle_{H}\left(t_{k}-t_{k-1}\right)\\ +\sum_{k=1}^{N}\mathbb{E}\left[\int_{t_{k-1}}^{t_{k}}\left\langle\nabla u\left(t_{k},\phi\right),\overline{B}\left(r,w_{t_{k}}^{t_{k-1},\phi}\right)-B\left(r,\phi\right)\right\rangle_{H}\mathrm{d}r\right]\\ +\sum_{k=1}^{N}\int_{t_{k-1}}^{t_{k}}\left\langle\nabla u\left(t_{k},\phi\right),B\left(r,\phi\right)-B\left(t_{k},\phi\right)\right\rangle_{H}\mathrm{d}r\eqqcolon\mathbf{I}_{1}^{N}+\mathbf{I}_{2}^{N}+\mathbf{I}_{3}^{N}.\]
Note that \(\mathbf{I}_{1}^{N}\rightarrow\int_{s}^{T}\left\langle\nabla u(r,\phi),B(r,\phi)\right\rangle_{H}\mathrm{d}r\) as \(N\rightarrow\infty\) by Lemma 11 in Appendix A. Next, Jensen's inequality, (24), (74) and the continuous immersion \(L^{p}\!\left(\left(t_{k-1},t_{k}\right)\times\left(0,T\right);\mathbb{R}^{d}\right)\hookrightarrow L^{2}\!\left(\left(t_{k-1},t_{k}\right)\times\left(0,T\right);\mathbb{R}^{d}\right)\) yield, for some constant \(C_{\phi,p}=C_{\phi,p}(\alpha,d,T)>0\),
\[\left|\mathbf{I}_{2}^{N}\right| \leq\left\|\nabla u\right\|_{\infty}\sqrt{\Delta_{N}}\sum_{k=1}^{N}\mathbb{E}\left[\left(\int_{t_{k-1}}^{t_{k}}\mathrm{d}r\int_{0}^{T}\left|\overline{B}\left(w_{t_{k}}^{t_{k-1},\phi}\right)-B\left(\phi\right)\right|^{2}\left(r,\xi\right)\mathrm{d}\xi\right)^{\frac{1}{2}}\right]\\ \leq T^{\frac{1}{2}-\frac{1}{p}}\left(\Delta_{N}\right)^{1-\frac{1}{p}}\left\|\nabla u\right\|_{\infty}\sum_{k=1}^{N}\mathbb{E}\left[\left\|\overline{B}\left(w_{t_{k}}^{t_{k-1},\phi}\right)-B\left(\phi\right)\right\|_{p,\Box}\right]\\ \leq C_{0,p}T^{\frac{1}{2}-\frac{1}{p}}\left(\Delta_{N}\right)^{1-\frac{1}{p}}\left\|\nabla u\right\|_{\infty}\sum_{k=1}^{N}\left\|w_{t_{k}}^{t_{k-1},\phi}-\phi\right\|_{\mathcal{L}^{p}}\leq C_{\phi,p}\left\|\nabla u\right\|_{\infty}N\left(\Delta_{N}\right)^{\frac{3}{2}-\frac{1}{p}}\underset{N\rightarrow\infty}{\longrightarrow}0.\]
Here, we set \(\left\|\nabla u\right\|_{\infty}=\sup_{t\in[0,T]}\sup_{\phi\in H}\left\|\nabla u (t,\phi)\right\|_{2}\). Regarding \(\mathbf{I}_{3}^{N}\), we define the modulus of continuity of the map \(B(\cdot,\phi)\colon[0,T]\to H\) by
\[\mathfrak{w}\left(B(\cdot,\phi),\delta\right)=\sup_{\left|u-v\right|\leq\delta} \left.\left\|B\left(u,\phi\right)-B\left(v,\phi\right)\right\|_{2},\quad\delta>0.\]
Since, by hypothesis, \(B(\cdot,\phi)\) is continuous on the compact \([0,T]\), it is also uniformly continuous, hence we infer that \(\left|\mathbf{I}_{3}^{N}\right|\leq T\left\|\nabla u\right\|_{\infty}\mathfrak{w}\left(B\left(\cdot,\phi\right),\Delta_{N}\right)\underset{N\rightarrow\infty}{\longrightarrow}0.\) Therefore, we have just shown that
\[\lim_{N\rightarrow\infty}\mathbf{I}^{N}=\int_{s}^{T}\left\langle\nabla u(r, \phi),B(r,\phi)\right\rangle_{H}\mathrm{d}r. \tag{80}\]
Now we investigate \(\mathbf{II}^{N}\), which we split as follows:
\[2\mathbf{II}^{N}= \sum_{k=1}^{N}\mathbb{E}\left[\left\langle D^{2}u\left(t_{k},\phi \right)\overline{B}_{t_{k-1},t_{k}}\left(w_{t_{k}}^{t_{k-1},\phi}\right), \overline{B}_{t_{k-1},t_{k}}\left(w_{t_{k}}^{t_{k-1},\phi}\right)\right\rangle _{H}\right]\] \[+\sum_{k=1}^{N}\mathbb{E}\left[\left\langle D^{2}u\left(t_{k}, \phi\right)\overline{B}_{t_{k-1},t_{k}}\left(w_{t_{k}}^{t_{k-1},\phi}\right), \Sigma_{t_{k-1},t_{k}}\right\rangle_{H}\right]\] \[+\sum_{k=1}^{N}\mathbb{E}\left[\left\langle D^{2}u\left(t_{k}, \phi\right)\Sigma_{t_{k-1},t_{k}},\overline{B}_{t_{k-1},t_{k}}\left(w_{t_{k}}^ {t_{k-1},\phi}\right)\right\rangle_{H}\right]\] \[+\sum_{k=1}^{N}\mathbb{E}\left[\left\langle D^{2}u\left(t_{k}, \phi\right)\Sigma_{t_{k-1},t_{k}},\Sigma_{t_{k-1},t_{k}}\right\rangle_{H} \right]\eqqcolon\mathbf{II}_{1}^{N}+\mathbf{II}_{2}^{N}+\mathbf{II}_{3}^{N}+ \mathbf{II}_{4}^{N}.\]
Let us set \(\left\|D^{2}u\right\|_{\infty}=\sup_{t\in[0,T]}\sup_{\phi\in H}\left\|D^{2}u( t,\phi)\right\|_{\mathcal{L}(H;H)}\). By (24)-(73), arguing similarly to \(\mathbf{I}_{2}^{N}\) we have, for some \(c>0\),
\[\left|\mathbf{II}_{1}^{N}\right| \leq\left\|D^{2}u\right\|_{\infty}\sum_{k=1}^{N}\mathbb{E}\left[ \left\|\overline{B}_{t_{k-1},t_{k}}\left(w_{t_{k}}^{t_{k-1},\phi}\right) \right\|_{2}^{2}\right]\] \[\leq T^{1-\frac{2}{p}}\Delta_{N}^{1-\frac{2}{p}}\left\|D^{2}u \right\|_{\infty}\sum_{k=1}^{N}\mathbb{E}\left[\left\|\overline{B}\left(w_{t_ {k}}^{t_{k-1},\phi}\right)\right\|_{p,\square}^{2}\right]\left(t_{k}-t_{k-1} \right)\leq c\Delta_{N}^{1-\frac{2}{p}}\left\|D^{2}u\right\|_{\infty}\left(1+ \left\|\phi\right\|_{p}^{2}\right).\]
Moreover, by Holder's inequality and (7), for some \(\tilde{c}>0\),
\[\left|\mathbf{II}_{2}^{N}\right|\leq\left\|D^{2}u\right\|_{\infty}\sum_{k=1}^{ N}\left\|\overline{B}_{t_{k-1},t_{k}}\left(w_{t_{k}}^{t_{k-1},\phi}\right) \right\|_{\mathcal{H}}\left\|\Sigma_{t_{k-1},t_{k}}\right\|_{\mathcal{H}} \leq\left\|D^{2}u\right\|_{\infty}\widetilde{c}\,T^{\frac{3}{2}-\frac{1}{p}} \left\|k_{2}\right\|_{2}\Delta_{N}^{\frac{1}{2}-\frac{1}{p}}\left(1+\left\| \phi\right\|_{p}\right).\]
Since the second bound holds for \(\mathbf{II}_{3}^{N}\), too, we see that \(\mathbf{II}_{i}^{N}\to 0\) as \(N\rightarrow\infty\), \(i=1,2,3\).
As for \(\mathbf{II}_{4}^{N}\), we write it as the following sum:
\[\mathbf{II}_{4}^{N}= \sum_{k=1}^{N}\mathbb{E}\left[\left\langle D^{2}u\left(t_{k},\phi\right)\int_{t_{k-1}}^{t_{k}}\sigma\left(t_{k}\right)\mathrm{d}W_{r},\int_{t_{k-1}}^{t_{k}}\sigma\left(t_{k}\right)\mathrm{d}W_{r}\right\rangle_{H}\right]\\ +\sum_{k=1}^{N}\mathbb{E}\left[\left\langle D^{2}u\left(t_{k},\phi\right)\int_{t_{k-1}}^{t_{k}}\left(\sigma\left(r\right)-\sigma\left(t_{k}\right)\right)\mathrm{d}W_{r},\int_{t_{k-1}}^{t_{k}}\sigma\left(t_{k}\right)\mathrm{d}W_{r}\right\rangle_{H}\right]\\ +\sum_{k=1}^{N}\mathbb{E}\left[\left\langle D^{2}u\left(t_{k},\phi\right)\int_{t_{k-1}}^{t_{k}}\sigma\left(t_{k}\right)\mathrm{d}W_{r},\int_{t_{k-1}}^{t_{k}}\left(\sigma\left(r\right)-\sigma\left(t_{k}\right)\right)\mathrm{d}W_{r}\right\rangle_{H}\right]\\ +\sum_{k=1}^{N}\mathbb{E}\left[\left\langle D^{2}u\left(t_{k},\phi\right)\int_{t_{k-1}}^{t_{k}}\left(\sigma\left(r\right)-\sigma\left(t_{k}\right)\right)\mathrm{d}W_{r},\int_{t_{k-1}}^{t_{k}}\left(\sigma\left(r\right)-\sigma\left(t_{k}\right)\right)\mathrm{d}W_{r}\right\rangle_{H}\right]\\ \eqqcolon\mathbf{II}_{4,1}^{N}+\mathbf{II}_{4,2}^{N}+\mathbf{II}_{4,3}^{N}+\mathbf{II}_{4,4}^{N}.\]
By [10, Proposition 4.30], we have, for every \(k=1,\ldots,N\),
\[D^{2}u\left(t_{k},\phi\right)\int_{t_{k-1}}^{t_{k}}\sigma\left(t_{k}\right) \mathrm{d}W_{r}=\int_{t_{k-1}}^{t_{k}}D^{2}u\left(t_{k},\phi\right)\sigma \left(t_{k}\right)\mathrm{d}W_{r},\quad\mathbb{P}-\text{a.s.},\]
whence, by [10, Corollary 4.29] and Lemma 11,
\[\mathbf{II}_{4,1}^{N}=\sum_{k=1}^{N}\mathrm{Tr}\left(D^{2}u\left(t_{k},\phi\right)\sigma\left(t_{k}\right)\sigma\left(t_{k}\right)^{\ast}\right)\left(t_{k}-t_{k-1}\right)\underset{N\rightarrow\infty}{\longrightarrow}\int_{s}^{T}\mathrm{Tr}\left(D^{2}u\left(r,\phi\right)\sigma\left(r\right)\sigma\left(r\right)^{\ast}\right)\mathrm{d}r.\]
Furthermore, Holder's inequality, (71) in Lemma 7 and [10, Proposition 4.20] yield, for \(i=2,3\), for some constants \(c_{1},c_{2}>0\),

\[\left|\mathbf{II}_{4,i}^{N}\right|\leq\left\|D^{2}u\right\|_{\infty}\sum_{k=1}^{N}\left\|\int_{t_{k-1}}^{t_{k}}\left(\sigma\left(r\right)-\sigma\left(t_{k}\right)\right)\mathrm{d}W_{r}\right\|_{\mathcal{H}}\left\|\int_{t_{k-1}}^{t_{k}}\sigma\left(t_{k}\right)\mathrm{d}W_{r}\right\|_{\mathcal{H}}\leq c_{1}\left\|D^{2}u\right\|_{\infty}N\Delta_{N}^{\alpha+\frac{1}{2}}\leq c_{2}\Delta_{N}^{\alpha-\frac{1}{2}}\underset{N\rightarrow\infty}{\longrightarrow}0.\]
Analogous estimates show that \(\mathbf{II}_{4,4}^{N}\to 0\) as \(N\rightarrow\infty\), as well. Thus,
\[\lim_{N\rightarrow\infty}\mathbf{II}^{N}=\frac{1}{2}\int_{s}^{T}\mathrm{Tr}\left(D^{2}u\left(r,\phi\right)\sigma\left(r\right)\sigma\left(r\right)^{\ast}\right)\mathrm{d}r. \tag{81}\]
Finally, we study the remainder term \(\mathbf{III}^{N}\) in (79). To do this, we employ the fact that \(D^{2}u\left(t,\cdot\right):H\rightarrow\mathcal{L}\left(H;H\right)\) is \(\beta-\)Holder continuous uniformly in time, see (94) in Lemma 11. We choose \(\tilde{\beta}\in\left(0,\beta\right)\) such that \(2+\tilde{\beta}<p\); by the expression of \(r_{u\left(t_{k},\cdot\right)}\) in (77) we deduce that
\[\left|\mathbf{III}^{N}\right| \leq\sum_{k=1}^{N}\int_{0}^{1}\mathbb{E}\left[\left\|D^{2}u\left(t_{k},\phi+r\left(w_{t_{k}}^{t_{k-1},\phi}-\phi\right)\right)-D^{2}u\left(t_{k},\phi\right)\right\|_{\mathcal{L}\left(H;H\right)}\left\|w_{t_{k}}^{t_{k-1},\phi}-\phi\right\|_{2}^{2}\right]\mathrm{d}r\\ \leq C\sum_{k=1}^{N}\mathbb{E}\left[\left\|w_{t_{k}}^{t_{k-1},\phi}-\phi\right\|_{2}^{2+\tilde{\beta}}\right]\leq CT^{\left(\frac{1}{2}-\frac{1}{p}\right)\left(2+\tilde{\beta}\right)}\sum_{k=1}^{N}\mathbb{E}\left[\left\|w_{t_{k}}^{t_{k-1},\phi}-\phi\right\|_{p}^{2+\tilde{\beta}}\right]\\ \leq C\sum_{k=1}^{N}\left(t_{k}-t_{k-1}\right)^{1+\frac{\tilde{\beta}}{2}}\underset{N\rightarrow\infty}{\longrightarrow}0, \tag{82}\]
where in the last passage we use Lemma 8 and Jensen's inequality. Here \(C>0\) is a constant allowed to change from line to line. Combining (80), (81), (82) in (79), we obtain
\[u\left(s,\phi\right)-\Phi\left(\phi\right)=\int_{s}^{T}\left\langle\nabla u \left(r,\phi\right),B\left(r,\phi\right)\right\rangle_{H}\mathrm{d}r+\frac{1} {2}\int_{s}^{T}\mathrm{Tr}\left(D^{2}u\left(r,\phi\right)\sigma\left(r\right) \sigma\left(r\right)^{\ast}\right)\mathrm{d}r,\]
i.e., (70). Thus, the proof is complete.
**Remark 3**.: _Under the hypotheses of Theorem 9, for every \(\phi\in\Lambda\) the function \(u\left(\cdot,\phi\right):\left[0,T\right]\rightarrow\mathbb{R}\) defined in (75) is absolutely continuous on \(\left[0,T\right]\), because the integrands on the right-hand side of (70) are bounded on \(\left[0,T\right]\). Thus, the fundamental theorem of calculus shows that \(u\colon\left[0,T\right]\times H\rightarrow\mathbb{R}\) satisfies the following Kolmogorov backward equation in differential form:_
\[\begin{cases}\partial_{t}u\left(t,\phi\right)+\left\langle\nabla u\left(t, \phi\right),B\left(t,\phi\right)\right\rangle_{H}+\frac{1}{2}\mathrm{Tr}\left( D^{2}u\left(t,\phi\right)\sigma\left(t\right)\sigma\left(t\right)^{\ast} \right)=0,&\text{for a.e. }t\in\left(0,T\right),\,\phi\in\Lambda,\\ u\left(T,\phi\right)=\Phi\left(\phi\right),&\phi\in H.\end{cases}\]
**Remark 4**.: _All the arguments and computations leading to Theorem 9 continue to hold when the power \(\alpha\) of the kernel \(k_{2}\) in (4) varies in \(\left[1,\frac{3}{2}\right)\), i.e., \(k_{2}\) is the continuous kernel in \(\mathbb{R}_{+}\) given by_
\[k_{2}(t)=\frac{1}{\Gamma(\alpha)}t^{\alpha-1},\quad t\geq 0,\text{ for some }\alpha\in\left[1,\frac{3}{2}\right).\]
_We have however decided to present the theory in the case \(\alpha\in\left(\frac{1}{2},1\right)\) to emphasize the fact that our approach is able to handle rough kernels with explosions at \(t=0\)._
**Example 1**.: Given two continuous maps \(A\colon[0,T]\to\mathbb{R}^{d\times d}\) and \(b\colon[0,T]\to\mathbb{R}^{d}\), define \(B\colon\Lambda\to H_{\Box}\) by (cfr. (8))
\[B(w)\colon[0,T]\times[0,T]\to\mathbb{R}^{d}\quad\text{ such that }\quad B(w)(t,\xi)=1_{\{\xi>t\}}k_{2}(\xi-t) \,\left(A(t)w(t)+b(t)\right),\,t,\xi\in[0,T], \tag{83}\]
for every \(w\in\Lambda.\) We now show that \(B\) satisfies all the hypotheses of Theorem 9.
For every \(t\in(0,T]\) and \(r\in(0,t)\), from the definition in (83) it is immediate to see that \(B(w)(r,\xi)=0,\,\xi\in(0,r)\), and that \(B(w)(r,\cdot)\) depends on \(w\) only via \(w\big{|}_{(0,t)}.\) Denote by \(\left\|A\right\|_{\infty}=\sup_{t\in[0,T]}\left|A(t)\right|\) and by \(\left\|b\right\|_{\infty}=\sup_{t\in[0,T]}\left|b(t)\right|\), where \(\left|A(t)\right|\) is the operator norm in \(\mathbb{R}^{d\times d}.\) Computing, for every \(w_{1}\), \(w_{2}\in\Lambda\),
\[\left\|B(w_{1})\right\|_{\Box}^{2}\leq\int_{0}^{T}\left(\int_{0}^ {T}\left|k_{2}(\xi-t)\right|^{2}1_{\{\xi>t\}}\left(\left\|b\right\|_{\infty}+ \left\|A\right\|_{\infty}\left|w_{1}(t)\right|\right)^{2}\mathrm{d}\xi\right) \mathrm{d}t\\ \leq 2T\max\left\{\left\|b\right\|_{\infty}^{2},\left\|A\right\|_{ \infty}^{2}\right\}\left\|k_{2}\right\|_{2}^{2}(1+\left\|w_{1}\right\|_{2}^{2}),\]
and
\[\left\|B(w_{2})-B(w_{1})\right\|_{\Box}^{2}\leq\left\|A\right\|_{\infty}^{2} \int_{0}^{T}\left(\int_{0}^{T}\left|k_{2}(\xi-t)\right|^{2}1_{\{\xi>t\}}\left| w_{2}(t)-w_{1}(t)\right|^{2}\mathrm{d}\xi\right)\mathrm{d}t\leq\left\|A \right\|_{\infty}^{2}\left\|k_{2}\right\|_{2}^{2}\left\|w_{2}-w_{1}\right\|_{2 }^{2},\]
we deduce that Assumption 1 is satisfied. Since the previous computations can be repeated for every \(p\in(2,(1-\alpha)^{-1})\), condition (24) in Remark 2 is verified as well.
As for Assumption 2, evidently the operator \(DB(w_{1})\in\mathcal{L}\big{(}\Lambda_{2};H_{\Box}\big{)}\) defined by
\[[DB(w_{1})(w_{2})](t,\xi)=1_{\{\xi>t\}}k_{2}(\xi-t)A(t)w_{2}(t),\quad t,\xi \in[0,T],\,w_{2}\in\Lambda_{2}, \tag{84}\]
is the \(\Lambda_{2}-\)Frechet differential of \(B\) in \(w_{1}\), for any \(w_{1}\in\Lambda\). Indeed,
\[B(w_{1}+h)-B(w_{1})-DB(w_{1})(h)=0,\quad w_{1},h\in\Lambda.\]
Moreover, from (84) we have, for every \(w_{1},w_{2}\in\Lambda\),
\[\left\|DB(w_{1})(w_{2})\right\|_{\Box}\leq\left\|A\right\|_{\infty}\left\|k_{2 }\right\|_{2}\left\|w_{2}\right\|_{2},\qquad\left\|DB(w_{1})-DB(w_{2})\right\| _{\mathcal{L}(\Lambda_{2};H_{\Box})}=0,\]
which in particular gives (28) with \(\gamma=1\).
The requirements of Assumption 3 are trivially satisfied (with \(\beta=1\)) because, given the affine structure of this example, \(D^{2}B(w_{1})=0\in\mathcal{L}(\Lambda_{2},\Lambda_{2};H_{\Box})\), \(w_{1}\in\Lambda\).
In conclusion, for every \(w\in\Lambda\), the map \(t\mapsto B(t,w)=B(w)(t,\cdot)\) is continuous from \([0,T]\) to \(H\). Indeed, denoting by \(\tilde{b}(t)\) the \(\mathbb{R}^{d}-\)valued continuous function \(A(t)w(t)+b(t)\), by (72) and the two following equations we have, for any \(r,t\in[0,T]\),
\[\left\|B(t,w)-B(r,w)\right\|_{2}^{2}=\int_{0}^{T}\left|k_{2}\,( \xi-t)\,1_{\{\xi>t\}}\tilde{b}(t)-k_{2}\,(\xi-r)\,1_{\{\xi>r\}}\tilde{b}(r) \right|^{2}\mathrm{d}\xi\\ \leq 2\big{\|}\tilde{b}\big{\|}_{\infty}^{2}\!\int_{0}^{T}\!\left|k_{ 2}\,(\xi-t)1_{\{\xi>t\}}\!-\!k_{2}\,(\xi-r)1_{\{\xi>r\}}\right|^{2}\mathrm{d} \xi\!+\!2\left\|k_{2}\right\|_{2}^{2}\!\left|\tilde{b}(t)-\tilde{b}(r)\right|^{ 2}\!\leq\!L\!\left(\!\left|t-r\right|^{2\alpha-1}\!\!+\left|\tilde{b}(t)-\tilde {b}(r)\right|^{2}\right),\]
for some constant \(L>0\).
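To make the example concrete, here is a minimal sketch of the resulting state equation in the scalar case \(d=1\), with \(A\equiv a\in\mathbb{R}\) and \(b\equiv 0\); these particular choices are made only for illustration. Evaluating (14) pointwise, with \(\overline{B}\) induced by (83) and \(\sigma\) as in (5), one finds, for \(0\leq s\leq t\leq T\) and a.e. \(\xi\in(0,T)\),

\[w_{t}^{s,\phi}\left(\xi\right)=\phi\left(\xi\right)+a\int_{s}^{t\wedge\xi}k_{2}\left(\xi-r\right)w_{t}^{s,\phi}\left(r\right)\mathrm{d}r+\int_{s}^{t\wedge\xi}k_{2}\left(\xi-r\right)\mathrm{d}W_{r},\]

with the convention that both integrals vanish when \(t\wedge\xi\leq s\): a linear stochastic Volterra equation driven by the fractional kernel \(k_{2}\).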
## 5 The mild Kolmogorov equation
A classical approach to the study of the Kolmogorov equation is its mild formulation, see for example [9, Section 6.5] and [10, Section 9.5]. Contrary to the strategy adopted in the previous section, where we have constructed a solution to (70) via a stochastic equation (cfr. Theorem 9), for the mild Kolmogorov equation we look for a _direct_ solution. With the term _direct_, we mean a solution which is determined by a fixed point argument and hence does not rely on the underlying stochastic PDE.
In this section, we first present a formal reasoning leading to the mild form of (70), see (88). After that, in Subsection 5.1 we explain some difficulties in proving the well-posedness of such a mild formulation,
which are essentially due to the structure of the noise. Since it is not the purpose of this section to present a general theory with abstract hypotheses, we limit ourselves to observing that the mild Kolmogorov equation cannot be solved for a class of interesting drifts \(b\) using common techniques (cfr. Lemma 10). Finally, in Subsection 5.2, we highlight the theoretical importance of the mild Kolmogorov equation. In particular, we sketch a procedure, relying on the mild form, typically used to prove uniqueness in law for a stochastic PDE under weak regularity requirements on the coefficients. We only mention that the relation between the transition semigroup of an SDE and the corresponding mild Kolmogorov equation can also be exploited for numerical applications, as recently investigated by [13] in the Brownian case and [4] in the case of isotropic, stable Levy processes.
Let \(\mathcal{C}=C_{b}\left(H;\mathbb{R}\right)\) and consider the backward Kolmogorov equation in differential form, formally written as
\[\begin{cases}\partial_{s}v\left(s,x\right)+\left\langle b\left(s,x\right), \nabla v\left(s,x\right)\right\rangle_{H}+\frac{1}{2}\mathrm{Tr}\left(D^{2}v \left(s,x\right)\sigma\left(s\right)\sigma\left(s\right)^{\ast}\right)=0, \qquad s\in\left[0,T\right),\,x\in H,\\ v\left(T,x\right)=\phi\left(x\right),\quad\phi\in\mathcal{C}.\end{cases} \tag{85}\]
Here, \(H\) and \(\sigma\) are those of the previous sections (see, in particular, (5)), whereas the drift \(b\colon\left[0,T\right]\times H\to H\) is a bounded measurable map which could be non-smooth.
We reformulate (85) in order to study it in the space \(\mathcal{C}\). Let \(u\left(t,x\right)\coloneqq v\left(T-t,x\right)\): \(u\) solves the forward equation
\[\begin{cases}\partial_{t}u\left(t,x\right)=\mathcal{A}_{T-t}u\left(t,x\right) +\left\langle b\left(T-t,x\right),\nabla u\left(t,x\right)\right\rangle_{H}, \qquad t\in\left(0,T\right],\,x\in H,\\ u\left(0,x\right)=\phi\left(x\right),\qquad\phi\in\mathcal{C},\end{cases} \tag{86}\]
where we set
\[\mathcal{A}_{T-t}f\left(x\right)=\frac{1}{2}\mathrm{Tr}\left(D^{2}f\left(x \right)\sigma\left(T-t\right)\sigma\left(T-t\right)^{\ast}\right).\]
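Since \(\sigma\left(T-t\right)\) maps the finite-dimensional space \(\mathbb{R}^{d}\) into \(H\), the trace above is a finite sum; the following rewriting is added only as an elementary aside. Denoting by \(\left(e_{k}\right)_{k=1,\ldots,d}\) the canonical basis of \(\mathbb{R}^{d}\) and using \(\mathrm{Tr}\left(AB\right)=\mathrm{Tr}\left(BA\right)\),

\[\mathcal{A}_{T-t}f\left(x\right)=\frac{1}{2}\sum_{k=1}^{d}\left\langle D^{2}f\left(x\right)\sigma\left(T-t\right)e_{k},\sigma\left(T-t\right)e_{k}\right\rangle_{H},\]

so that only the second derivatives of \(f\) along the \(d\) directions \(\sigma\left(T-t\right)e_{k}\) enter the operator.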
Fix \(s\in\left[0,T\right]\). For every \(t\in\left[s,T\right]\), we define the linear evolution operator \(R_{T}\left(t,s\right):\mathcal{C}\rightarrow\mathcal{C}\) by
\[\left(R_{T}\left(t,s\right)\phi\right)\left(x\right)=\mathbb{E}\left[\phi \left(x+\int_{s}^{t}\sigma\left(T-r\right)\mathrm{d}W_{r}\right)\right], \quad x\in H,\,\phi\in\mathcal{C},\]
where \(W\) is an \(\mathbb{R}^{d}-\)valued, standard Brownian motion as the one introduced in Section 2. Consider the auxiliary equation
\[\begin{cases}\partial_{t}z\left(t,x\right)=\mathcal{A}_{T-t}z\left(t,x\right),\qquad t\in\left(s,T\right],\,x\in H,\\ z\left(s,x\right)=\phi\left(x\right),\quad\phi\in\mathcal{C};\end{cases} \tag{87}\]
if \(\phi\in C_{b}^{2+\beta}(H)\), then Theorem 9 and Remark 3 imply that the function \((R_{T}(t,s)\phi)(x)\) solves this Cauchy problem for almost every \(t\in\left(s,T\right)\), for every \(x\in\Lambda\). At this point, we can introduce the mild formulation of the Kolmogorov equation (70):
\[u\left(t,x\right)=\left(R_{T}\left(t,0\right)\phi\right)\left(x\right)+\int_{ 0}^{t}\left(R_{T}\left(t,s\right)\left\langle b\left(T-s,\cdot\right),\nabla u \left(s,\cdot\right)\right\rangle_{H}\right)\left(x\right)\mathrm{d}s,\quad \phi\in\mathcal{C}. \tag{88}\]
Note that, heuristically speaking, (88) corresponds to the Kolmogorov equation (86). Indeed, if \(u\left(t,x\right)\) solves (88), then a formal application of the Leibniz integral rule and (87) yield
\[\partial_{t}u(t,\cdot) =\partial_{t}R_{T}\left(t,0\right)\phi+R_{T}\left(t,t\right) \left\langle b\left(T-t,\cdot\right),\nabla u\left(t,\cdot\right)\right\rangle _{H}+\int_{0}^{t}\partial_{t}R_{T}\left(t,s\right)\left\langle b\left(T-s, \cdot\right),\nabla u\left(s,\cdot\right)\right\rangle_{H}\mathrm{d}s\] \[=\mathcal{A}_{T-t}R_{T}\left(t,0\right)\phi+\left\langle b\left(T -t,\cdot\right),\nabla u\left(t,\cdot\right)\right\rangle_{H}+\int_{0}^{t} \mathcal{A}_{T-t}R_{T}\left(t,s\right)\left\langle b\left(T-s,\cdot\right), \nabla u\left(s,\cdot\right)\right\rangle_{H}\mathrm{d}s\] \[=\mathcal{A}_{T-t}u+\left\langle b\left(T-t,\cdot\right),\nabla u \left(t,\cdot\right)\right\rangle_{H}.\]
As we have already mentioned, the aim is to prove directly, i.e., by a fixed point argument not relying on a stochastic equation, that (88) admits a solution of class, e.g., \(C\left(\left[0,T\right];\mathcal{C}\right)\). In this regard, the regularity properties of the evolution operator \(R_{T}\left(t,s\right)\) are paramount, hence we now discuss them.
According to [10, Proposition 4.28], the \(H-\)valued random variable \(\int_{s}^{t}\sigma\left(T-r\right)\mathrm{d}W_{r}\) is Gaussian, centered, with covariance operator
\[Q_{T}\left(t,s\right)=\int_{s}^{t}\sigma\left(T-r\right)\sigma\left(T-r\right)^ {*}\mathrm{d}r=\int_{T-t}^{T-s}\sigma\left(\tau\right)\sigma\left(\tau\right)^ {*}\mathrm{d}\tau. \tag{89}\]
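As a side computation, based only on (5) and (89) and recorded here because it lies behind Lemma 10 below: for every \(v\in H\) and a.e. \(\xi\in\left(0,T\right)\),

\[\left[Q_{T}\left(t,s\right)v\right]\left(\xi\right)=\int_{T-t}^{T-s}1_{\left\{\tau<\xi\right\}}k_{2}\left(\xi-\tau\right)\left(\int_{\tau}^{T}k_{2}\left(\eta-\tau\right)v\left(\eta\right)\mathrm{d}\eta\right)\mathrm{d}\tau,\]

since \(\left[\sigma\left(\tau\right)z\right]\left(\xi\right)=1_{\left\{\xi>\tau\right\}}k_{2}\left(\xi-\tau\right)z\) for \(z\in\mathbb{R}^{d}\) and \(\sigma\left(\tau\right)^{*}v=\int_{\tau}^{T}k_{2}\left(\eta-\tau\right)v\left(\eta\right)\mathrm{d}\eta\). This double convolution with \(k_{2}\) already suggests that \(\mathrm{Range}\left(Q_{T}\left(t,s\right)\right)\) consists of rather regular functions, a fact made precise in Lemma 10.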
This covariance operator is not as simple as it would be in the case of constant \(\sigma\). In fact, in such a case it would be easy to see that \(R_{T}\left(t,s\right)\phi,\,\phi\in\mathcal{C}\), is differentiable in the direction \(\sigma\) (and only in this direction). In our framework with a time-varying \(\sigma\), the question of the directions of differentiability of \(R_{T}\left(t,s\right)\phi,\,\phi\in\mathcal{C}\), is much more complex. Nevertheless, it has to be addressed, because the directional differentiability of \(R_{T}\left(t,s\right)\phi\) is essential to solve (88) directly. This may be seen in various ways, one of which is the change of variable
\[\theta_{T}\left(t,x\right)=\left\langle b\left(T-t,x\right),\nabla u\left(t, x\right)\right\rangle_{H},\]
that leads to the study of the equation
\[\theta_{T}\left(t,x\right)=\left\langle b\left(T-t,x\right), \nabla\left(R_{T}\left(t,0\right)\phi\right)\left(x\right)\right\rangle_{H}\\ +\int_{0}^{t}\left\langle b\left(T-t,x\right),\nabla\left(R_{T} \left(t,s\right)\theta_{T}\left(s,\cdot\right)\right)\left(x\right)\right\rangle _{H}\mathrm{d}s,\quad\phi\in\mathcal{C}. \tag{90}\]
If we can prove that, for some \(C,\epsilon>0\),
\[\sup_{x\in H}\left|\left\langle b\left(T-t,x\right),\nabla\left(R_{T}\left(t, s\right)\psi\right)\left(x\right)\right\rangle_{H}\right|\leq\frac{C}{\left|t-s \right|^{1-\epsilon}}\left\|\psi\right\|_{\infty},\quad 0\leq s<t\leq T,\, \psi\in\mathcal{C}, \tag{91}\]
then we may try to set up a fixed point argument for the \(\theta_{T}-\)equation (90) in a suitable space of bounded, measurable functions. This would in turn give a solution for equation (88) by simply setting
\[u\left(t,x\right)=\left(R_{T}\left(t,0\right)\phi\right)\left(x\right)+\int_{ 0}^{t}\left(R_{T}\left(t,s\right)\theta_{T}\left(s,\cdot\right)\right)\left(x \right)\mathrm{d}s.\]
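To spell out the last claim, here is a short formal verification, relying on no assumptions beyond (88) and (90): taking the gradient in the formula above and pairing with \(b\left(T-t,x\right)\) gives

\[\left\langle b\left(T-t,x\right),\nabla u\left(t,x\right)\right\rangle_{H}=\left\langle b\left(T-t,x\right),\nabla\left(R_{T}\left(t,0\right)\phi\right)\left(x\right)\right\rangle_{H}+\int_{0}^{t}\left\langle b\left(T-t,x\right),\nabla\left(R_{T}\left(t,s\right)\theta_{T}\left(s,\cdot\right)\right)\left(x\right)\right\rangle_{H}\mathrm{d}s=\theta_{T}\left(t,x\right)\]

by (90); substituting this identity back into the formula for \(u\) shows that \(u\) satisfies (88).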
### 5.1 The gradient estimate
Using the Gaussian structure of the \(H-\)valued random variable
\[Z_{T}\left(t,s\right)=\int_{s}^{t}\sigma\left(T-r\right)\mathrm{d}W_{r}, \quad 0\leq s<t\leq T,\]
and denoting by \(Q_{T}(t,s)^{-1}\) the pseudo-inverse of \(Q_{T}(t,s)\), one can prove, via the Cameron-Martin formula (see, e.g., [10, Theorem 2.23]), that
\[\left\langle b\left(T-t,x\right),\nabla\left(R_{T}\left(t,s\right)\psi\right) \left(x\right)\right\rangle_{H}=\mathbb{E}\left[\left\langle Q_{T}\left(t,s \right)^{-1}b\left(T-t,x\right),Z_{T}\left(t,s\right)\right\rangle_{H}\psi \left(x+Z_{T}\left(t,s\right)\right)\right],\quad\psi\in\mathcal{C},\]
if
\[b\left(T-t,x\right)\in\mathrm{Range}\left(Q_{T}\left(t,s\right)\right).\]
This is not the most general condition to obtain the existence of such a directional derivative. Indeed, we could split \(Q_{T}\left(t,s\right)^{-1}\) and use the fact that \(Q_{T}\left(t,s\right)^{-1/2}Z_{T}\left(t,s\right)\) has good properties, which reduces the problem to investigating \(b\left(T-t,x\right)\in\mathrm{Range}\,\left(Q_{T}\left(t,s\right)^{1/2}\right)\). However, handling the square root is even more difficult and thus, for the time being, we analyze the more restrictive condition.
When the previous condition holds, arguing as in (7), for some \(c>0\) we have
\[\sup_{x\in H}\left|\left\langle b\left(T-t,x\right),\nabla\left(R_ {T}\left(t,s\right)\psi\right)\left(x\right)\right\rangle_{H}\right| \leq\left\|\psi\right\|_{\infty}\sup_{x\in H}\mathbb{E}\left[ \left|\left\langle Q_{T}\left(t,s\right)^{-1}b\left(T-t,x\right),Z_{T}\left(t, s\right)\right\rangle_{H}\right|\right]\] \[\leq c\left\|\psi\right\|_{\infty}\left\|k_{2}\right\|_{2}\left(t-s \right)^{1/2}\sup_{x\in H}\left\|Q_{T}\left(t,s\right)^{-1}b\left(T-t,x\right) \right\|_{2}.\]
Therefore a sufficient condition for the gradient estimate (91) is
\[\sup_{x\in H}\left\|Q_{T}\left(t,s\right)^{-1}b\left(T-t,x\right)\right\|_{2}\leq \frac{C}{\left|t-s\right|^{\frac{3}{2}-\epsilon}},\quad 0\leq s<t\leq T,\text{ for some }C>0.\]
For a general \(b\), given the potentially very strong degeneracy of \(Q_{T}\left(t,s\right)\), we do not see any hope of proving the gradient estimate (91). A particular case that, a priori, may look promising is when the Volterra drift is of the same kind as the noise part, namely (cfr. (5))
\[\left[b\left(t,x\right)\right](\xi)=\bar{\beta}\left(x\right)k_{2}\left(\xi-t \right)1_{\left\{t<\xi\right\}}=\left[\sigma(t)\bar{\beta}(x)\right]\left(\xi \right),\quad\xi\in\left[0,T\right],\text{ for some }\bar{\beta}\in\mathcal{B}_{b} \left(H;\mathbb{R}^{d}\right).\]
In this case, since \(b\left(T-t,x\right)=\sigma\left(T-t\right)\bar{\beta}\left(x\right)\), we need to prove that
\[\sigma\left(T-t\right)e_{k}\in\text{Range}\left(Q_{T}\left(t,s\right)\right), \quad k=1,\ldots,d, \tag{92}\]
and that
\[\left\|Q_{T}\left(t,s\right)^{-1}\sigma\left(T-t\right)e_{k}\right\|_{2}\leq \frac{C}{\left|t-s\right|^{\frac{3}{2}-\epsilon}},\quad 0\leq s<t\leq T,\,k=1, \ldots,d,\text{ for some }C>0,\]
where \((e_{k})_{k=1,\ldots,d}\) is the canonical basis of \(\mathbb{R}^{d}\). Recalling that, by (89), \(Q_{T}\left(t,s\right)=\int_{T-t}^{T-s}\sigma\left(\tau\right)\sigma\left(\tau\right)^{*}\mathrm{d}\tau\), at first glance one might think that (92) holds. However, it does not, as the necessary condition given by the next lemma shows.
**Lemma 10**.: _Let \(0\leq s<t\leq T\) and suppose that \(f\in\text{Range}\left(Q_{T}(t,s)\right)\subset H\). Then \(f=g\) almost everywhere in \((0,T)\), where \(g\colon(0,T)\to\mathbb{R}^{d}\) is a continuous function such that \(g=0\) in \((0,T-t)\)._
Proof.: Fix \(0\leq s<t\leq T\). Consider \(f\in\text{Range}\left(Q_{T}(t,s)\right)\), so that there exists \(v\in H\) such that, by (89), \(f=\int_{T-t}^{T-s}\sigma\left(\tau\right)\sigma\left(\tau\right)^{*}v\,\mathrm{d}\tau.\) In particular, for every \(k=1,\ldots,d\), denoting by \(\boldsymbol{\cdot}\) the scalar product in \(\mathbb{R}^{d}\), by the standard properties of Bochner's integral we obtain
\[f\boldsymbol{\cdot}e_{k}=\left(\int_{T-t}^{T-s}\left(\sigma\left(\tau\right) ^{*}v\right)\ k_{2}\left(\cdot-\tau\right)1_{\left\{\cdot>\tau\right\}} \mathrm{d}\tau\right)\boldsymbol{\cdot}e_{k}=\int_{T-t}^{T-s}\left\langle \sigma\left(\tau\right)e_{k},v\right\rangle_{H}k_{2}\left(\cdot-\tau\right)1_{ \left\{\cdot>\tau\right\}}\mathrm{d}\tau.\]
Furthermore, recalling (4), for a.e. \(\xi\in(0,T)\) we have
\[\left(f\boldsymbol{\cdot}e_{k}\right)\left(\xi\right)=\frac{1}{\Gamma\left( \alpha\right)}\int_{T-t}^{T-s}1_{\left\{\tau<\xi\right\}}\left\langle\sigma \left(\tau\right)e_{k},v\right\rangle_{H}\left(\xi-\tau\right)^{\alpha-1} \mathrm{d}\tau.\]
We denote by \(g_{k}\) the function appearing on the right-hand side of the previous equation, i.e.,
\[g_{k}\left(\xi\right)=\frac{1}{\Gamma\left(\alpha\right)}\int_{T-t}^{T-s}1_{ \left\{\tau<\xi\right\}}\left\langle\sigma\left(\tau\right)e_{k},v\right\rangle _{H}\left(\xi-\tau\right)^{\alpha-1}\mathrm{d}\tau,\quad\xi\in\left(0,T\right).\]
We want to show the continuity of \(g_{k}\) on the interval \(\left[T-t,T\right)\): this ensures that \(g_{k}\) is continuous on the whole \((0,T)\), since trivially \(g_{k}=0\) on \((0,T-t]\). We first write
\[g_{k}\left(\xi\right)=\int_{0}^{\xi}1_{\left\{\tau>T-t\right\}}\left\langle \sigma\left(\tau\right)e_{k},v\right\rangle_{H}\left(\xi-\tau\right)^{\alpha-1 }\mathrm{d}\tau,\quad\xi\in\left[T-t,T-s\right],\]
and notice that, as \(\sigma(\cdot)e_{k}\in C([0,T];H)\) (see (72) in the proof of Lemma 7), the mapping \(\langle\sigma(\cdot)e_{k},v\rangle_{H}\) is continuous on \([0,T]\). Therefore we invoke [18, Theorem 2.2 (i), Chapter 2] to conclude that \(g_{k}\) is continuous on \(\left[T-t,T-s\right]\). Secondly, since
\[g_{k}\left(\xi\right)=\int_{T-t}^{T-s}\left\langle\sigma\left(\tau\right)e_{k}, v\right\rangle_{H}\left(\xi-\tau\right)^{\alpha-1}\mathrm{d}\tau,\quad\xi\in\left[T-s,T \right),\]
the continuity of \(g_{k}\) on \(\left[T-s,T\right)\) can be inferred employing the dominated convergence theorem. Thus, \(g_{k}\) is continuous on \((0,T)\). This shows that the components \(f\boldsymbol{\cdot}e_{k}\), \(k=1,\ldots,d\), of the function \(f\colon[0,T]\to\mathbb{R}^{d}\) are almost everywhere equal on \((0,T)\) to continuous functions \(g_{k}\), which completes the proof.
**Remark 5**.: _Lemma 10 prevents us from choosing another interesting drift \(b(t,x)\), namely_

\[\left[b\left(t,x\right)\right]\left(\xi\right)=\bar{\beta}\left(x\right)1_{\left(t,T\right)}\left(\xi\right),\quad\xi\in\left[0,T\right],\text{ for some }\bar{\beta}\in\mathcal{B}_{b}(H;\mathbb{R}^{d}).\]

_Indeed, for \(0\leq s<t<T\) and \(\bar{\beta}(x)\neq 0\), the function \(b\left(T-t,x\right)=\bar{\beta}(x)1_{\left(T-t,T\right)}\) has a jump at \(\xi=T-t\), so it cannot coincide almost everywhere on \((0,T)\) with a continuous function vanishing on \((0,T-t)\); by Lemma 10, it therefore does not belong to \(\mathrm{Range}\left(Q_{T}(t,s)\right)\)._
### 5.2 Concerning regularization by noise via the Kolmogorov equation
One of the main interests of the Kolmogorov equation lies in the theory of regularization by noise: both in finite and infinite dimensions, it has been shown that a sufficiently regular solution to the Kolmogorov equation allows one to prove suitable uniqueness results for the underlying stochastic differential equation (see examples in [7, 8, 12, 25, 27]). In contrast with Sections 3-4, one deals with a stochastic PDE
\[\mathrm{d}X_{t}=b(t,X_{t})\,\mathrm{d}t+\sigma\left(t\right)\mathrm{d}W_{t}, \quad X_{0}=x\in H, \tag{93}\]
which, a priori, is not well posed, because \(b\colon[0,T]\times H\to H\) is subject to weak regularity assumptions not including Lipschitz continuity. The aim is to prove the uniqueness in law of a mild solution to (93). A typical approach to achieve this takes the following steps:
1. Write the Kolmogorov equation in mild form (88) associated with (93) and prove the existence of solutions by a fixed point argument.
2. Possibly after a regularization procedure (see an example in infinite dimensions in [12, Theorem 2.9, Section 2.3.3]), apply Ito's formula to \(u\left(T-t,X_{t}\right)\), where \(u\) solves (88) and \(X_{t}\) is any solution of (93), prove that the local martingale term is a martingale and obtain an expression for \[\mathbb{E}\left[\phi\left(X_{t}\right)\right].\] In this way, one deduces that two solutions have the same marginals. A control on the gradient of \(u\), like the one discussed in Subsection 5.1, may help in this step to prove that the local martingale term is a martingale.
3. Apply specific arguments (see [25], [26]) to obtain uniqueness in law.
Under suitable assumptions on \(b\) which guarantee the well-posedness of (88) (whence Step 1 follows), the details of Steps 2-3 will be the subject of future research.
## Appendix A Regularity of the solution (75) of the Kolmogorov equation
In this appendix, we present an auxiliary lemma, namely Lemma 11, containing regularity results about the solution \(u\colon[0,T]\times H\to\mathbb{R}\) of the Kolmogorov backward equation (70) defined in (75). Such a lemma plays a key role in the proof of Theorem 9.
**Lemma 11**.: _Suppose that \(\Phi\in C_{b}^{2+\beta}\left(H\right)\) and that Assumption 3 holds. Then, the map \(u\colon[0,T]\times H\to\mathbb{R}\) defined in (75) belongs to \(L^{\infty}\big{(}0,T;C_{b}^{2+\beta}\left(H\right)\big{)}\cap C([0,T]\times H; \mathbb{R})\). In particular, there exists a constant \(C_{d,T,\beta,\Phi}>0\) such that_
\[\left\|D^{2}u\left(t,\phi\right)-D^{2}u\left(t,\psi\right)\right\|_{\mathcal{ L}\left(H;H\right)}\leq C_{d,T,\beta,\Phi}\left\|\phi-\psi\right\|_{2}^{\beta}, \quad\phi,\psi\in H,\,t\in[0,T]\,. \tag{94}\]
_Furthermore, the map \((t,\phi,\psi)\mapsto\left\langle\nabla u(t,\phi),\psi\right\rangle_{H}\) [resp., \((t,\phi,\psi,\eta)\mapsto\left\langle D^{2}u(t,\phi)\psi,\eta\right\rangle_{H}\)] is continuous in \([0,T]\times H\times H\) [resp., \([0,T]\times H\times H\times H\)]._
Proof.: We start off by proving that \(u\in C([0,T]\times H;\mathbb{R})\). Consider \(t\in[0,T],\,\phi\in H\) and two sequences \((t_{n})_{n}\subset[0,T]\) and \((\phi_{n})_{n}\subset H\) such that \(t_{n}\to t\) and \(\phi_{n}\to\phi\) as \(n\to\infty\). Since \(\nabla\Phi\colon H\to H\) is bounded, by the mean value theorem we compute, recalling the definition of \(u\) in (75),
\[\left|u(t_{n},\phi_{n})-u(t,\phi)\right|\leq\mathbb{E}\left[\left| \Phi\left(w_{T}^{t_{n},\phi_{n}}\right)-\Phi\left(w_{T}^{t_{n},\phi}\right) \right|\right]+\mathbb{E}\left[\left|\Phi\left(w_{T}^{t_{n},\phi}\right)-\Phi \left(w_{T}^{t,\phi}\right)\right|\right]\\ \leq\left\|\nabla\Phi\right\|_{\infty}\left(\left\|w_{T}^{t_{n}, \phi_{n}}-w_{T}^{t_{n},\phi}\right\|_{\mathcal{H}}+\left\|w_{T}^{t_{n},\phi}-w_ {T}^{t,\phi}\right\|_{\mathcal{H}}\right). \tag{95}\]
By (25) in Corollary 4, we infer that \(\lim_{n\to\infty}\left\|w_{T}^{t_{n},\phi_{n}}-w_{T}^{t_{n},\phi}\right\|_{ \mathcal{H}}=0\). As for \(\left\|w_{T}^{t_{n},\phi}-w_{T}^{t,\phi}\right\|_{\mathcal{H}}\), we first assume that \(t_{n}>t\). Then, by the flow property in (15) and Corollary 4 we have, for some constants \(c_{1},c_{2}>0\) which might depend on \(\phi\),
\[\left\|w_{T}^{t_{n},\phi}-w_{T}^{t,\phi}\right\|_{\mathcal{H}}=\left\|w_{T}^{t _{n},\phi}-w_{T}^{t_{n},w_{t_{n}}^{t,\phi}}\right\|_{\mathcal{H}}\leq c_{1} \left\|w_{t_{n}}^{t,\phi}-\phi\right\|_{\mathcal{H}}\leq c_{2}\sqrt{\left|t_{n }-t\right|},\]
where the last inequality is due to Lemma 8, see (74). An analogous argument shows that the previous bound holds even in the case \(t_{n}\leq t\), therefore \(\lim_{n\to\infty}\left\|w_{T}^{t_{n},\phi}-w_{T}^{t,\phi}\right\|_{\mathcal{H }}=0\). Going back to (95), we conclude that \(\lim_{n\to\infty}\left|u(t_{n},\phi_{n})-u(t,\phi)\right|=0\), hence \(u\colon[0,T]\times H\to\mathbb{R}\) is continuous, as desired.
We now prove that \(u\in L^{\infty}\big{(}0,T;C_{b}^{2+\beta}\left(H\right)\big{)}\). Since \(\Phi\in C_{b}^{2+\beta}\left(H\right)\), there exists a constant \(C_{\Phi}>0\) such that
\[\left\|D^{2}\Phi\left(\phi\right)-D^{2}\Phi\left(\psi\right)\right\|_{ \mathcal{L}\left(H;H\right)}\leq C_{\Phi}\left\|\phi-\psi\right\|_{2}^{\beta}, \quad\phi,\psi\in H. \tag{96}\]
Obviously, from the boundedness of \(\Phi\) we have \(\left\|u\right\|_{\infty}=\sup_{t\in[0,T]}\sup_{\phi\in H}\left|u(t,\phi) \right|<\infty\). First, we want to show that, for every \(t\in[0,T]\), \(u(t,\cdot)\in C_{b}^{1}(H)\), with
\[\left\langle\nabla u\left(t,\phi\right),\psi\right\rangle_{H}=\mathbb{E}\left[ \left\langle\nabla\Phi\left(w_{T}^{t,\phi}\right),Dw_{T}^{t,\phi}\psi\right \rangle_{H}\right],\quad\phi,\,\psi\in H. \tag{97}\]
To see this, by Taylor's formula applied to \(\Phi\) we compute, for every \(\phi\), \(h\in H\),
\[\mathbb{E}\left[\left|\Phi\left(w_{T}^{t,\phi+h}\right)-\Phi \Big{(}\,w_{T}^{t,\phi}\,\Big{)}-\left\langle\nabla\Phi\left(w_{T}^{t,\phi} \right),Dw_{T}^{t,\phi}h\right\rangle_{H}\right]\right]\] \[\leq\left\|\nabla\Phi\right\|_{\infty}\mathbb{E}\left[\left\|w_{T }^{t,\phi+h}-w_{T}^{t,\phi}-Dw_{T}^{t,\phi}h\right\|_{2}\right]\] \[\qquad\qquad\qquad+\mathbb{E}\left[\left|\int_{0}^{1}\left\langle \nabla\Phi\left(w_{T}^{t,\phi}+r\left(w_{T}^{t,\phi+h}-w_{T}^{t,\phi}\right) \right)-\nabla\Phi\left(w_{T}^{t,\phi}\right),w_{T}^{t,\phi+h}-w_{T}^{t,\phi} \right\rangle_{H}\mathrm{d}r\right|\right]\] \[\leq\left\|\nabla\Phi\right\|_{\infty}\left\|w_{T}^{t,\phi+h}-w_ {T}^{t,\phi}-Dw_{T}^{t,\phi}h\right\|_{\mathcal{H}}+\left\|D^{2}\Phi\right\|_{ \infty}\left\|w_{T}^{t,\phi+h}-w_{T}^{t,\phi}\right\|_{\mathcal{H}}^{2}= \mathrm{o}\left(\left\|h\right\|_{2}\right). \tag{98}\]
Here, for the second inequality we use the Lipschitz continuity of the map \(\nabla\Phi\colon H\to H\) -guaranteed by the mean value theorem- and for the third equality we invoke Corollary 4 and Theorem 5. This shows (97), from which we deduce the continuity of the function \(\nabla u(t,\cdot)\colon H\to H\). In particular, by (35), there exists a constant \(C_{1}=C_{1}(d,T)\) such that \(\left\|\nabla u\right\|_{\infty}\leq C_{1}\left\|\nabla\Phi\right\|_{\infty}\).
We also note that, arguing as in (98) and thanks to the estimates of \(\left\|w_{T}^{t,\phi+h}-w_{T}^{t,\phi}-Dw_{T}^{t,\phi}h\right\|_{\mathcal{H}}\) in the proof of Theorem 5 (see, for instance, (40)-(47)), for every \(M>0\) we have
\[\sup_{t\in[0,T]}\sup_{\left\|\phi\right\|_{2},\left\|\psi\right\|_{2}\leq M} \mathbb{E}\left[\left|\Phi\left(w_{T}^{t,\phi+h\psi}\right)-\Phi\Big{(}\,w_{T} ^{t,\phi}\,\Big{)}-h\left\langle\nabla\Phi\left(w_{T}^{t,\phi}\right),Dw_{T}^{ t,\phi}\psi\right\rangle_{H}\right]\right]=\mathrm{o}\left(h\right), \quad h\in\mathbb{R}. \tag{99}\]
which gives the continuity of the map \((t,\phi,\psi)\mapsto\left\langle\nabla u(t,\phi),\psi\right\rangle_{H}\) in \([0,T]\times H\) as \(u\in C([0,T]\times H;\mathbb{R})\).
Secondly, we claim that \(u\left(t,\cdot\right)\) is twice Frechet differentiable in \(H\), with
\[\left\langle D^{2}u\left(t,\phi\right)\psi,\eta\right\rangle_{H}= \mathbb{E}\left[\left\langle D^{2}\Phi\left(w_{T}^{t,\phi}\right)Dw_{T}^{t,\phi }\psi,Dw_{T}^{t,\phi}\eta\right\rangle_{H}+\left\langle\nabla\Phi\left(w_{T}^ {t,\phi}\right),D^{2}w_{T}^{t,\phi}\left(\psi,\eta\right)\right\rangle_{H} \right],\] \[\phi,\psi,\eta\in H. \tag{100}\]
Indeed, recalling (97), an application of Taylor's formula on \(\nabla\Phi\) yields
\[\left|\left\langle\nabla u\left(t,\phi+h\right)-\nabla u\left(t,\phi\right)-D^{2}u\left(t,\phi\right)h,\psi\right\rangle_{H}\right|\] \[=\left|\mathbb{E}\Big{[}\left\langle\nabla\Phi\left(w_{T}^{t,\phi+h}\right),Dw_{T}^{t,\phi+h}\psi\right\rangle_{H}-\left\langle\nabla\Phi\left(w_{T}^{t,\phi}\right),Dw_{T}^{t,\phi}\psi\right\rangle_{H}\] \[\qquad\qquad-\left\langle D^{2}\Phi\left(w_{T}^{t,\phi}\right)Dw_{T}^{t,\phi}h,Dw_{T}^{t,\phi}\psi\right\rangle_{H}-\left\langle\nabla\Phi\left(w_{T}^{t,\phi}\right),D^{2}w_{T}^{t,\phi}\left(h,\psi\right)\right\rangle_{H}\Big{]}\right|\] \[\leq\mathbb{E}\left[\left|\left\langle D^{2}\Phi\left(w_{T}^{t,\phi}\right)\left(w_{T}^{t,\phi+h}-w_{T}^{t,\phi}-Dw_{T}^{t,\phi}h\right),Dw_{T}^{t,\phi}\psi\right\rangle_{H}\right|\right]\] \[\qquad+\mathbb{E}\left[\left|\left\langle\nabla\Phi\left(w_{T}^{t,\phi}\right),Dw_{T}^{t,\phi+h}\psi-Dw_{T}^{t,\phi}\psi-D^{2}w_{T}^{t,\phi}\left(h,\psi\right)\right\rangle_{H}\right|\right]\] \[\qquad+\mathbb{E}\left[\left|\left\langle\nabla\Phi\left(w_{T}^{t,\phi+h}\right)-\nabla\Phi\left(w_{T}^{t,\phi}\right),\left(Dw_{T}^{t,\phi+h}-Dw_{T}^{t,\phi}\right)\psi\right\rangle_{H}\right|\right]+R_{\Phi}\left(\phi,\psi,h\right)\] \[=:\left(\mathbf{I}_{1}+\mathbf{II}_{1}+\mathbf{III}_{1}+R_{\Phi}\right)\left(\phi,\psi,h\right), \tag{101}\]
for every \(\phi,\psi,h\in H.\) Here, we denote by
\[R_{\Phi}\left(\phi,\psi,h\right)= \mathbb{E}\left[\left|\left\langle\int_{0}^{1}\left(D^{2}\Phi\left(w _{T}^{t,\phi}+r\left(w_{T}^{t,\phi+h}-w_{T}^{t,\phi}\right)\right)-D^{2}\Phi \left(w_{T}^{t,\phi}\right)\right)\left(w_{T}^{t,\phi+h}-w_{T}^{t,\phi}\right) \mathrm{d}r,Dw_{T}^{t,\phi}\psi\right\rangle_{H}\right|\right].\]
Using (35), (96) and Corollary 4, for some constant \(c_{3}>0\) we compute
\[R_{\Phi}\left(\phi,\psi,h\right) \leq C_{\Phi}C_{1}\mathbb{E}\left[\left\|w_{T}^{t,\phi+h}-w_{T}^{ t,\phi}\right\|_{2}^{1+\beta}\right]\left\|\psi\right\|_{2}\leq C_{\Phi}C_{1} \left\|w_{T}^{t,\phi+h}-w_{T}^{t,\phi}\right\|_{\mathcal{H}}^{1+\beta}\left\| \psi\right\|_{2}\] \[\leq c_{3}\left\|\psi\right\|_{2}\left\|h\right\|_{2}^{1+\beta}, \quad\phi,\psi,h\in H,\]
where we also employ Jensen's inequality noticing that \(1+\beta\leq 2\). Next,
\[\left|\mathbf{I}_{1}\left(\phi,\psi,h\right)\right|\leq C_{1}\left\|D^{2}\Phi \right\|_{\infty}\left\|\psi\right\|_{2}\left\|w_{T}^{t,\phi+h}-w_{T}^{t,\phi }-Dw_{T}^{t,\phi}h\right\|_{\mathcal{H}},\quad\phi,\psi,h\in H,\]
and
\[\left|\mathbf{II}_{1}\left(\phi,\psi,h\right)\right|\leq\left\|\nabla\Phi \right\|_{\infty}\left\|\psi\right\|_{2}\left\|Dw_{T}^{t,\phi+h}-Dw_{T}^{t, \phi}-D^{2}w_{T}^{t,\phi}\left(h,\cdot\right)\right\|_{\mathcal{L}\left(H; \mathcal{H}\right)},\quad\phi,\psi,h\in H.\]
Finally, by Corollary 4 and (35) (recall that, under Assumption 3, we take \(\gamma=\beta\) in (30), see (57))
\[\left|\mathbf{III}_{1}\left(\phi,\psi,h\right)\right|\leq C_{1}\left\|D^{2} \Phi\right\|_{\infty}\left\|\psi\right\|_{2}\mathbb{E}\left[\left\|w_{T}^{t, \phi+h}-w_{T}^{t,\phi}\right\|_{2}^{1+\beta}\right]\leq\tilde{c}\left\|D^{2} \Phi\right\|_{\infty}\left\|\psi\right\|_{2}\left\|h\right\|_{2}^{1+\beta}, \quad\phi,\psi,h\in H,\]
for some \(\tilde{c}>0\). Going back to (101), by Theorem 6, the previous estimates let us write, for some constant \(C>0\),
\[\left\|\nabla u\left(t,\phi+h\right)-\nabla u\left(t,\phi\right) -D^{2}u\left(t,\phi\right)h\right\|_{2}=\sup_{\left\|\psi\right\|_{2}\leq 1} \left|\left\langle\nabla u\left(t,\phi+h\right)-\nabla u\left(t,\phi\right)-D^{ 2}u\left(t,\phi\right)h,\psi\right\rangle_{H}\right|\] \[\quad\leq C\left(\left\|w_{T}^{t,\phi+h}-w_{T}^{t,\phi}-Dw_{T}^{ t,\phi}h\right\|_{\mathcal{H}}+\left\|Dw_{T}^{t,\phi+h}-Dw_{T}^{t,\phi}-D^{ 2}w_{T}^{t,\phi}\left(h,\cdot\right)\right\|_{\mathcal{L}\left(H;\mathcal{H} \right)}+\left\|h\right\|_{2}^{1+\beta}\right)\] \[=\mathrm{o}\left(\left\|h\right\|_{2}\right),\quad\phi,h\in H, \tag{102}\]
which proves (100). In particular, by (35)-(59), there is a constant \(C_{2}=C_{2}(d,T)>0\) such that
\[\left\|D^{2}u\right\|_{\infty}\leq C_{2}\left(\left\|D^{2}\Phi\right\|_{\infty} +\left\|\nabla\Phi\right\|_{\infty}\right).\]
In addition, arguing as in (102) (see also (99)) and thanks to the estimates of \(\left\|Dw_{T}^{t,\phi+h}-Dw_{T}^{t,\phi}-D^{2}w_{T}^{t,\phi}\left(h,\cdot \right)\right\|_{\mathcal{L}\left(H;\mathcal{H}\right)}\) in the proof of Theorem 6 (see, for instance, (64)), for every \(M>0\) we have
\[\sup_{t\in[0,T]}\sup_{\left\|\phi\right\|_{2},\left\|\psi\right\|_{2},\left\| \eta\right\|_{2}\leq M}\left|\left\langle\nabla u\left(t,\phi+h\psi\right)- \nabla u\left(t,\phi\right)-hD^{2}u\left(t,\phi\right)\psi,\eta\right\rangle_{H }\right|=\mathrm{o}\left(h\right),\quad h\in\mathbb{R}.\]
Since we have proved that \(\left\langle\nabla u(t,\phi),\psi\right\rangle_{H}\) is continuous in \([0,T]\times H\times H\), the previous equation ensures that the map \((t,\phi,\psi,\eta)\mapsto\left\langle D^{2}u(t,\phi)\psi,\eta\right\rangle_{H}\) is continuous in \([0,T]\times H\times H\times H\), as desired.
In conclusion, we prove that \(u\left(t,\cdot\right)\in C_{b}^{2+\beta}\left(H\right)\). From (100), for every \(\phi_{1},\phi_{2}\in H\),
\[\left\langle\left(D^{2}u\left(t,\phi_{1}\right)-D^{2}u\left(t, \phi_{2}\right)\right)\psi,\eta\right\rangle_{H}\] \[\quad=\mathbb{E}\left[\left\langle\left(D^{2}\Phi\left(w_{T}^{t, \phi_{1}}\right)-D^{2}\Phi\left(w_{T}^{t,\phi_{2}}\right)\right)Dw_{T}^{t,\phi _{1}}\psi,Dw_{T}^{t,\phi_{1}}\eta\right\rangle_{H}\right]\] \[\quad\quad+\mathbb{E}\left[\left\langle D^{2}\Phi\left(w_{T}^{t, \phi_{2}}\right)Dw_{T}^{t,\phi_{1}}-Dw_{T}^{t,\phi_{2}}\right)\psi,Dw_{T}^{t, \phi_{1}}\eta\right\rangle_{H}\right]\] \[\quad\quad+\mathbb{E}\left[\left\langle D^{2}\Phi\left(w_{T}^{t, \phi_{2}}\right)Dw_{T}^{t,\phi_{2}}\psi,\left(Dw_{T}^{t,\phi_{1}}-Dw_{T}^{t, \phi_{2}}\right)\eta\right\rangle_{H}\right]\] \[\quad\quad+\mathbb{E}\left[\left\langle\nabla\Phi\left(w_{T}^{t, \phi_{1}}\right)-\nabla\Phi\left(w_{T}^{t,\phi_{2}}\right),D^{2}w_{T}^{t,\phi _{1}}\left(\psi,\eta\right)\right\rangle_{H}\right]\] \[\quad=\left(\mathbf{I}_{2}+\mathbf{II}_{2}+\mathbf{III}_{2}+\mathbf{ IV}_{2}+\mathbf{V}_{2}\right)\left(\phi_{1},\phi_{2},\psi,\eta\right),\quad\psi,\eta\in H.\]
To keep the notation short, in what follows we consider arbitrary \(\psi,\eta\in H\), we do not write \(\left(\phi_{1},\phi_{2},\psi,\eta\right)\), and we denote by \(c=c(d,T,\beta)>0\) a constant that might change from line to line. Observe that, by (35)-(96), Corollary 4 and Jensen's inequality,
\[\left|\mathbf{I}_{2}\right|\leq c\,C_{\Phi}\left\|\psi\right\|_{2}\left\|\eta \right\|_{2}\left\|\phi_{1}-\phi_{2}\right\|_{2}^{\beta}.\]
Moreover, by (35) (see also (57)),
\[\left|\mathbf{II}_{2}\right|\leq c\left\|D^{2}\Phi\right\|_{\infty}\left\|\psi\right\|_{2}\left\|\eta\right\|_{2}\left\|\phi_{1}-\phi_{2}\right\|_{2}^{\beta}.\]
An analogous estimate holds for \(\left|\mathbf{III}_{2}\right|\), too. As for the remaining addends, by (59) we have
\[\left|\mathbf{IV}_{2}\right|\leq c\left\|D^{2}\Phi\right\|_{\infty}\left\| \psi\right\|_{2}\left\|\eta\right\|_{2}\left\|\phi_{1}-\phi_{2}\right\|_{2},\]
and
\[\left|\mathbf{V}_{2}\right|\leq c\left\|\nabla\Phi\right\|_{\infty}\left\| \psi\right\|_{2}\left\|\eta\right\|_{2}\left\|\phi_{1}-\phi_{2}\right\|_{2}^{ \beta}.\]
Thus, the function \(D^{2}u\left(t,\cdot\right):H\rightarrow\mathcal{L}\left(H;H\right)\) is \(\beta\)-Hölder continuous uniformly in time, and the proof is complete.
|
2303.17972 | $\varepsilon$ KÚ <MASK>: Integrating Yorùbá cultural greetings
into machine translation | This paper investigates the performance of massively multilingual neural
machine translation (NMT) systems in translating Yor\`ub\'a greetings
($\varepsilon$ k\'u [MASK]), which are a big part of Yor\`ub\'a language and
culture, into English. To evaluate these models, we present IkiniYor\`ub\'a, a
Yor\`ub\'a-English translation dataset containing some Yor\`ub\'a greetings,
and sample use cases. We analysed the performance of different multilingual NMT
systems including Google and NLLB and show that these models struggle to
accurately translate Yor\`ub\'a greetings into English. In addition, we trained
a Yor\`ub\'a-English model by finetuning an existing NMT model on the training
split of IkiniYor\`ub\'a and this achieved better performance when compared to
the pre-trained multilingual NMT models, although they were trained on a large
volume of data. | Idris Akinade, Jesujoba Alabi, David Adelani, Clement Odoje, Dietrich Klakow | 2023-03-31T11:16:20Z | http://arxiv.org/abs/2303.17972v2 | # \(\varepsilon\) ku <mask>: Integrating Yoruba cultural greetings into machine translation
###### Abstract
This paper investigates the performance of massively multilingual neural machine translation (NMT) systems in translating Yoruba greetings (\(\varepsilon\) ku <mask>1), which are a big part of Yoruba language and culture, into English. To evaluate these models, we present IkiniYoruba, a Yoruba-English translation dataset containing some Yoruba greetings, and sample use cases. We analysed the performance of different multilingual NMT systems including Google Translate and NLLB and show that these models struggle to accurately translate Yoruba greetings into English. In addition, we trained a Yoruba-English model by finetuning an existing NMT model on the training split of IkiniYoruba and this achieved better performance when compared to the pre-trained multilingual NMT models, although they were trained on a large volume of data.
Footnote 1: For simplicity of notation in the title, we make use of \(\varepsilon\) – the Beninese Yórbá letter representation of E (which is used in Nigeria), and <mask> provides the context of greeting.
## 1 Introduction
In recent years, multilingual neural machine translation (NMT) models have shown remarkable improvement in translating both high and low-resource languages and have become widely used in various applications (Kudugunta et al., 2019; Aharoni et al., 2019; NLLB Team et al., 2022; Bapna et al., 2022). Despite this progress, NMT models still struggle to accurately translate idiomatic expressions (Fadaee et al., 2018; Baziotis et al., 2022), cultural concepts such as proverbs (Alkhresheh and AlMaaytah, 2018; Adelani et al., 2021), and common greetings, particularly in African languages like Yoruba- a west African language, which has a rich cultural heritage.
Table 1 illustrates a Yoruba sentence containing frequently used greeting phrases by the Yoruba people, and the corresponding translations generated from three multilingual NMT systems, which are: Meta's NLLB (NLLB Team et al., 2022), Google Translate2, and our own model.
Footnote 2: [https://translate.google.com/](https://translate.google.com/) evaluated on 23rd January 2023
An examination of NLLB and Google Translate's model outputs reveals that they all fail to produce accurate translations for the input sentence. One possible explanation for this is the lack of sufficient training data including these types of greetings, even though they were trained on a large volume of multilingual data. Furthermore, \(k\dot{u}\), a common word in these kinds of greetings, has two main interpretations that could mean either death or a compliment, depending on the context. Similarly, the syntactic frame of occurrence also determines the meaning of the verb (the type of complement and adjunct), and this is due to the ambiguous nature of Yoruba verbs. Hence, it is possible that these models were trained on data with \(k\dot{u}\) having the meaning death.
To address this issue, this paper introduces a new dataset dubbed IkiniYoruba, a Yoruba-English translation dataset of popular Yoruba greetings. We evaluate the performance of existing multilingual NMT systems on this dataset, and the results demonstrate that although current multilingual
\begin{table}
\begin{tabular}{l} \hline \hline
**Source:** E ku ojümo, e si ku dédede asikó yííí. \\
**Target:** Good morning and compliment for this period. \\ \hline
**NLLB:** You have died, and you have died to this hour. \\
**Google Translate:** Die every day, and die at this time. \\
**Our Model:** Good morning and compliment for this time. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Translation outputs of 3 different NMT models.
NMT systems are good at translating Yoruba sentences into English, they struggle to accurately translate Yoruba greetings, highlighting the need for further research in translating such cultural concepts for low-resource African languages.
## 2 Yoruba cultural greetings
Yoruba is a language spoken by the Yoruba people. It is native to Nigeria, Benin and Togo with an estimate of over 40 million speakers Eberhard et al. (2020). Yoruba makes use of 25 Latin letters excluding the Latin characters (c, q, v, x and z), and additional letters (ẹ, gb, ṣ, ọ). Yoruba is a tonal language with three tones: low, middle and high. These tones are represented by the grave (e.g. “à”), optional macron (e.g. “ā”) and acute (e.g. “á”) accents respectively.
Greetings are inseparable from the Yoruba people since they are important for first impressions and are even considered to be a part of Yoruba identity. After the abolition of the slave trade at the beginning of the 19th century, the Yoruba indigenes who were rescued by the British warship settled in Freetown, a place in present-day Sierra Leone. People began to call them _a ku_, which is a fragment attached to all forms of greetings in Yoruba Webster (1966). This is because while an English speaker will say _good morning_, _happy birthday_, _merry Christmas_, and so on, the Yoruba people would say _e kaaro_, _e ku ojo ibi_, and _e ku odun kresimesi_. The recurrence of _e ku_ in their everyday conversation resulted in the appellation _a ku_.
_E ku_ has the same semantic importance as 'good-', 'merry-' and 'happy-' in English greetings. Without the fragment _e ku_ in the communication frame of greeting, the cultural knowledge shared by interlocutors will be lost.
Structurally, _e ku_ can be syntactically explained to have a subject-predicate relationship, rather than being a single lexeme or a prefix as claimed by most scholars. Using the paradigmatic relationship (de Saussure, 1983; Asher and Simpson, 1994) lens, \(e\) can be replaced with any pronoun or nominal item (as described by interlocutors) with a +human feature and still fit in perfectly. The +human feature is necessary because compliments are mainly for humans and _ku_ requires a selection restriction to sieve out the non-human elements. Table 2 shows some of these constructions. It is equally important to note here that _e ku_ can also be used for supernatural or metaphysical beings, which in this form sounds like personification.
_Ku_, on the other hand, is a transitive predicate that requires a complement. This complement could be a noun that signifies time like _aaro_ (morning), a noun that denotes season like _pirinin_/_otuu_ (cold), a noun that points to a celebration like _kresimesi_ (Christmas), a nominalized verb that describes an event or action like _ijooko_ (sitting), and many more. Omitting the complement in a greeting construction will alter the interpretation of the expression, which may also change the meaning of _ku_ to death.
## 3 Related Work
The development of machine translation systems for low-resource languages such as Yoruba has seen a significant amount of research efforts in recent years. One major area of focus has been on curating translation datasets for these languages, which are collected using either automatic or manual methods. Examples of automatically collected datasets that include Yoruba are JW300 Agic and Vulic (2019), CCMatrix Schwenk et al. (2021), and CCAligned El-Kishky et al. (2020). On the other hand, examples of manually translated datasets for Yoruba include MENYO-20k Adelani et al. (2021), MAFAND-MT Adelani et al. (2022), FLORES-101 Goyal et al. (2022), and NTREX Federmann et al. (2022). These datasets have been instrumental in the study, development, and improvement of machine translation systems for Yoruba.
For example, Adelani et al. (2021) investigated how domain data quality and the use of diacritics, a crucial aspect of Yoruba orthography, impact Yoruba-English translations. Adebara et al. (2022) examined the effectiveness of Yoruba-English machine translation in translating bare nouns (BN), by comparing the results obtained from using statistical machine translation methods and neural approaches. Adelani et al. (2022) investigated how to effectively leverage pre-trained models for transla
\begin{table}
\begin{tabular}{l l l} \hline \hline Greeting & Person & Meaning \\ \hline _O kú irin_ & 2nd person singular & Compliment for walking \\ _A kú ode_ & 1st person plural & Compliment for attending a party \\ _Wún kú ijooko_ & 3rd person plural & Compliment for sitting \\ \hline \hline \end{tabular}
\end{table}
Table 2: Some _E kú_ constructions
tion of African languages including Yoruba. Despite the attempts to create datasets and develop translation systems for Yoruba, to the best of our knowledge, only Adelani et al. (2021) has examined a cultural aspect of Yoruba by evaluating their models on Yoruba proverbs, which are a significant part of Yoruba tradition. However, this research has not looked into how these models perform on another cultural aspect which is Yoruba greetings. Furthermore, there appear to be no prior works that have evaluated machine translation performance specifically for this aspect of the language and for other languages. Therefore, in this work, we investigate the performance of Yoruba-English translation models on Yoruba greetings.
## 4 IkiniYoruba corpus
Greetings dataset: We introduce **IkiniYoruba**, a Yoruba-English translation dataset for Yoruba greetings and their usage in various contexts, containing \(960\) parallel instances. The data curation process involved three key stages. Firstly, we gathered commonly used Yoruba greetings that cover a variety of situations such as time, season, celebration, and more, as outlined in Section 2, resulting in a total of \(160\) Yoruba greetings. Secondly, we created \(5\) different example sentences for each greeting, where the greetings are used in context, by native speakers of the language, resulting in \(800\) use cases in total. Lastly, we asked an expert translator to translate the seed data and the use cases into English. We split the created data into train/dev/test splits with \(100/20/40\) seed greeting instances. For each instance in a split, the \(5\) example sentences created are assigned to the same split.
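For concreteness, a minimal sketch of this split logic is given below; it is illustrative only, and the variable names and data layout are our assumptions rather than the released format of IkiniYoruba.

```python
import random

def split_ikini(seed_greetings, use_cases, sizes=(100, 20, 40), seed=0):
    """Split the 160 seed greetings 100/20/40 and keep each greeting's
    5 in-context sentences in the same split as its seed greeting."""
    rng = random.Random(seed)
    order = list(range(len(seed_greetings)))
    rng.shuffle(order)
    n_train, n_dev = sizes[0], sizes[1]
    index_splits = {
        "train": order[:n_train],
        "dev": order[n_train:n_train + n_dev],
        "test": order[n_train + n_dev:],
    }
    splits = {}
    for name, idxs in index_splits.items():
        pairs = []
        for i in idxs:
            greeting = seed_greetings[i]          # a (yoruba, english) pair
            pairs.append(greeting)                # the seed greeting itself
            pairs.extend(use_cases[greeting])     # its 5 in-context sentence pairs
        splits[name] = pairs
    return splits                                 # train/dev/test of 600/120/240 pairs
```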
Conversational dataset: For our experiments, we used the movie transcripts subset of the MENYO-20k (Adelani et al., 2020) dataset, which is a human-translated English-Yoruba dataset for movie transcripts. We selected this dataset because it consists of conversational data.
Table 3 shows the sample sentences in the IkiniYoruba dataset and Movie Transcript datasets, while Table 4 highlights the statistics of these datasets.
## 5 Experiments
### Experimental Setup
Greetings play a crucial role in Yoruba culture and are widely used in daily conversations by Yoruba people. For every action, there is a customary way of greeting or complimenting those involved using the phrase _E ki_. In this work, we compare several existing translation systems and evaluate their performance on Yoruba greetings. We demonstrate the effectiveness of these translation systems by testing them on movie transcripts, which are conversational in nature. Below, we outline our experiments.
Translation Models:In this study, we evaluate the performance of three multilingual NMT systems. These systems were pre-trained on various languages, and they are Google multilingual NMT, the distilled version of Meta's NLLB (NLLB Team et al., 2022) with 600M parameters, and a publicly available M2M-100 (Fan et al., 2020) with 418M parameters fine-tuned on the MENYO-20k dataset. We generated translations for the test sets using the Google Translate web application3, while for Meta's M2M-100 and NLLB models, we used the
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Data**} & \multicolumn{3}{c}{**Number of Sentences**} \\ & **train** & **dev** & **test** \\ \hline _IkiniYoruba_ & \(600\) & \(120\) & \(240\) \\ _Movie Transcript_ & – & – & \(775\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: The split of the data
\begin{table}
\begin{tabular}{l c} \hline \hline
**Yoruba** & **English** \\ \hline \multicolumn{3}{l}{**IkiniYoruba- Seed Greetings**} \\ E kiti ife & Thanks for the love \\ Okb a refo & Safe ride \\ \multicolumn{3}{l}{**IkiniYoruba- Greetings with contexts**} \\ E ki ti, Ire la o ma ba ara & Thanks for the love, may \\ wa se. & we continue to celebrate \\ a one another. & \\ A ó ma fojd sona lati rfi & Looking forward to seeing \\ yin, okb a refo & you, safe ride. \\ \hline \multicolumn{3}{l}{**Movie Transcript**} \\ E kaasán ma. & Good afternoon ma. \\ E nile sál Mo mò yin & Hello sir! I know you \\ Femi ki lo sele báyii? & Femi what is it now? \\ Gbogbo nnkan a dara, a jo & Everything will be fine, \\ wa nfnû e ni & we’re in this together \\ \hline \hline \end{tabular}
\end{table}
Table 3: Sample sentence pairs from the IkiniYoruba and the Movie Transcripts datasets.
HuggingFace transformers4 library.
Footnote 4: [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers)
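Since Google Translate is only accessible through its web interface, only the two open models can be scripted. The snippet below sketches how they can be queried for Yoruba-to-English translation with the transformers library; the checkpoint names and generation settings are illustrative assumptions and may differ from the exact setup used here.

```python
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          M2M100ForConditionalGeneration, M2M100Tokenizer)

yo_sentence = "E ku ojumo, e si ku dedede asiko yii."  # illustrative input (tone marks omitted)

# M2M-100 (418M): Yoruba has language code "yo", English "en".
m2m_tok = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
m2m = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
m2m_tok.src_lang = "yo"
batch = m2m_tok(yo_sentence, return_tensors="pt")
out = m2m.generate(**batch, forced_bos_token_id=m2m_tok.get_lang_id("en"), max_length=64)
print(m2m_tok.batch_decode(out, skip_special_tokens=True))

# NLLB-200 distilled 600M: Yoruba is "yor_Latn", English is "eng_Latn".
nllb_tok = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="yor_Latn")
nllb = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
batch = nllb_tok(yo_sentence, return_tensors="pt")
out = nllb.generate(**batch,
                    forced_bos_token_id=nllb_tok.convert_tokens_to_ids("eng_Latn"),
                    max_length=64)
print(nllb_tok.batch_decode(out, skip_special_tokens=True))
```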
Data preprocessing and evaluation: To standardize the format of the two parallel datasets, we converted the Yoruba texts in the datasets to Unicode Normalization Form Composition (NFC). To automatically assess the performance of the models, we used the BLEU (Papineni et al., 2002) score as implemented in SacreBLEU5 (Post, 2018).
Footnote 5: case:mixed|eff:no|tok:13a|smooth:exp|version:2.3.1
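A minimal sketch of this preprocessing and scoring step is shown below; the example strings are placeholders, and the sacrebleu defaults match the reported signature.

```python
import unicodedata
import sacrebleu

def nfc(text: str) -> str:
    # Compose base letters and combining tone marks consistently (Unicode NFC).
    return unicodedata.normalize("NFC", text)

# Illustrative hypotheses and (single) references; in practice these are the model
# outputs and the English side of the test split.
hypotheses = [nfc(h) for h in ["Good morning and compliment for this time."]]
references = [[nfc(r) for r in ["Good morning and compliment for this period."]]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)  # defaults: 13a tokenizer, exp smoothing
print(f"BLEU = {bleu.score:.2f}")
```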
### Experimental results
Table 5 shows the results of evaluating the three different models on the two datasets: IkiniYoruba test split and Movie Transcripts. The models obtained impressive performance on the Movie Transcript data with high BLEU scores but poorly on the IkiniYoruba data with significantly lower scores. This highlights their inability to translate Yoruba cultural content such as greetings. The best-performing model, M2M-100, had a BLEU score of \(34.70\) on Movie Transcript data as it was trained on this same data by its authors. However, it had a score of \(4.3\) on greetings data. The second-best model, Google Translate, was \(3.65\) points below the best model on Movie Transcript. It performed better on greetings data with a score of \(9.47\), though still lower compared to its performance on Movie Transcript data.
In addition, we finetuned the M2M-100 model on IkiniYoruba, Movie Transcripts, and a combination of both data sources and evaluated the models on the IkiniYoruba test split. Our results show that finetuning the M2M-100 on Movie Transcripts improves the model's performance on IkiniYoruba by \(1.92\) BLEU points compared to the original M2M-100. However, the best performance was achieved when the M2M-100 was finetuned on the IkiniYoruba training split, with a BLEU score of \(29.67\). Finetuning the M2M-100 on the combination of both datasets did not result in any improvement. We do not evaluate the M2M-100 model finetuned on MovieTranscript data on the MovieTranscript data, as this would result in evaluating on the same data used for training.
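A compact sketch of such a finetuning setup with transformers is given below. The hyperparameters, the tiny in-memory dataset (one pair borrowed from Table 3 for illustration), and the use of the raw facebook/m2m100_418M checkpoint as the starting point are assumptions; the model finetuned here actually started from a publicly available M2M-100 already adapted on MENYO-20k.

```python
from datasets import Dataset
from transformers import (DataCollatorForSeq2Seq, M2M100ForConditionalGeneration,
                          M2M100Tokenizer, Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="yo", tgt_lang="en")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

def preprocess(example):
    # example = {"yo": ..., "en": ...}; the truncation length is an assumption.
    return tokenizer(example["yo"], text_target=example["en"], max_length=128, truncation=True)

# In practice this would be the IkiniYoruba training split; here a single illustrative pair.
raw_train = Dataset.from_dict({"yo": ["E kaasan ma."], "en": ["Good afternoon ma."]})
train_dataset = raw_train.map(preprocess, remove_columns=["yo", "en"])

args = Seq2SeqTrainingArguments(
    output_dir="m2m100-ikiniyoruba",
    learning_rate=5e-5,               # illustrative hyperparameters
    per_device_train_batch_size=8,
    num_train_epochs=10,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```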
To understand the performance of individual models on the IkiniYoruba test set, we conducted human evaluations of the translated outputs from Google Translate, NLLB, M2M-100, and M2M-100 finetuned on the IkiniYoruba dataset. We asked three native Yoruba speakers fluent in English to rate the 240 sentences for each system on two criteria: adequacy (on a Likert scale of 1 to 5) and cultural content preservation - CCP (binary scale of 0 or 1). Here, adequacy describes how much of the meaning of the reference translation was preserved in the MT output, and CCP indicates whether the greetings/compliments within the translation are preserved or not. The results show that the NMT systems struggle at translating Yoruba greetings accurately, and they confirm the results of the automatic evaluation, showing that M2M-100 fine-tuned on IkiniYoruba outperforms all other models. Overall, we observed that human evaluation shows moderate agreement with automatic evaluation.
### Qualitative analysis of translation outputs
In Table 6, we present some translation outputs from the different models for 5 Yoruba sentences sampled from the IkiniYoruba test split.
Google Translate and NLLB perform well in some cases by generating translations that were similar and contextually appropriate, for instance, in the second and third examples. Google Translate gave the most similar output to the target sentence in the first example. Our model in this instance translated 'odun' (meaning 'year' in isolation or 'celebration' when it occurs alone with e ku) and 'ajinde' (meaning 'resurrection' in isolation) quite independently. Hence, 'resurrection celebration' appears in the output. NLLB fails in this example, but in the second example it gives the closest contextual interpretation, while our model got everything right except 'apeje', which is translated as 'reception' instead of 'feasting'.
Our model outperforms Google Translate and NLLB in the third and fourth examples. It generated nearly identical output to the target sentence, thereby showing the preservation of both cultural content and semantic interpretation ability learned from the training data. In contrast, both Google Translate and NLLB were unsuccessful in producing the correct translation. The third example is an inquiry about well-being and it is, therefore, appropriate to use the word 'fine', and not 'peace'. In the fourth example, our model also shows to have an understanding of the contextual usage of _kai_ as a compliment which both Google Translate and NLLB failed to do. In addition, similar to the
automatic evaluation result, our model generated better outputs when compared to M2M-100 which was the base model on which it was trained, confirming the ability of the model to learn from a few training instances even for low-resource languages such as Yoruba (Adelani et al., 2022).
However, all the models failed in the last example. The models incorporated the concept of celebration or birthday in their output, but none of them were able to produce output that was exactly or semantically equivalent to the target sentence. A mistake common to all the model output except for M2M-100, is that they tried to translate 'Oluwadamilare'6 which is a name of a person and should not be translated. Hence, there is a need for more effort in solving this greetings translation task, either by creating more data or developing better approaches at translating these greetings into English.
Footnote 6: translates to: ‘the lord justifies me’, but the models still failed in this case.
## 6 Conclusion
In this study, we analyzed the performance of machine translation models in translating Yoruba greetings into English. To achieve this objective, we introduced a novel dataset called IkiniYoruba, which contains a collection of Yoruba greetings and their respective sentence use cases. We evaluated three publicly available machine translation models on this dataset and found that, despite their ability to translate other Yoruba texts, they failed to accurately translate Yoruba greetings, which are a crucial aspect of Yoruba culture. In future research, we aim to expand the IkiniYoruba dataset by adding more profession-based greetings and exploring ways to enhance the performance of machine translation models with these data.
## Limitations
One of the main limitations of our study is the lack of parallel data for Yoruba greetings. Hence, we had to create IkiniYoruba, which has 960 parallel sentences and may not be representative of all the greetings in Yoruba language including profession-based greetings. In addition, our study did not explore the use of verb disambiguation methods or external knowledge bases, to enhance the performance of our models. We leave these for future
\begin{table}
\begin{tabular}{l l} \hline \hline & \(\mathbf{yo\rightarrow en}\) \\ \hline **Target** & We greet the Christians a happy Easter. \\ \hline Google T. & We wish Christians a happy Easter. \\ NLLB & Celebrations are celebrated on New Year's Eve. \\ Our Model & We greet the hardworking people the resurrection celebration. \\ \hline \hline \end{tabular}
\end{table}
Table 6: Sample translation outputs of the different models for sentences from the IkiniYoruba test split.
research.
## Acknowledgements
We appreciate Dr. Ezekiel Soremekun for the initial discussion that led to this work. We are grateful for the feedback from Dr. Rachel Bawden, Vagrant Gautam and anonymous reviewers from AfricaNLP and C3NLP. Moreover, we would like to thank Timileyin Adewusi, Ganiyat Afolabi, and Oluwatosin Koya who took part in the human evaluation process. Jesujoba Alabi was partially funded by the BMBF project SLIK under the Federal Ministry of Education and Research grant 01IS22015C. David Adelani acknowledges the support of the DeepMind Academic Fellowship programme.
|
2309.06824 | Beyond Adapting SAM: Towards End-to-End Ultrasound Image Segmentation
via Auto Prompting | End-to-end medical image segmentation is of great value for computer-aided
diagnosis dominated by task-specific models, usually suffering from poor
generalization. With recent breakthroughs brought by the segment anything model
(SAM) for universal image segmentation, extensive efforts have been made to
adapt SAM for medical imaging but still encounter two major issues: 1) severe
performance degradation and limited generalization without proper adaptation,
and 2) semi-automatic segmentation relying on accurate manual prompts for
interaction. In this work, we propose SAMUS as a universal model tailored for
ultrasound image segmentation and further enable it to work in an end-to-end
manner denoted as AutoSAMUS. Specifically, in SAMUS, a parallel CNN branch is
introduced to supplement local information through cross-branch attention, and
a feature adapter and a position adapter are jointly used to adapt SAM from
natural to ultrasound domains while reducing training complexity. AutoSAMUS is
realized by introducing an auto prompt generator (APG) to replace the manual
prompt encoder of SAMUS to automatically generate prompt embeddings. A
comprehensive ultrasound dataset, comprising about 30k images and 69k masks and
covering six object categories, is collected for verification. Extensive
comparison experiments demonstrate the superiority of SAMUS and AutoSAMUS
against the state-of-the-art task-specific and SAM-based foundation models. We
believe the auto-prompted SAM-based model has the potential to become a new
paradigm for end-to-end medical image segmentation and deserves more
exploration. Code and data are available at https://github.com/xianlin7/SAMUS. | Xian Lin, Yangyang Xiang, Li Yu, Zengqiang Yan | 2023-09-13T09:15:20Z | http://arxiv.org/abs/2309.06824v2 | SAMUS: Adapting Segment Anything Model for Clinically-Friendly and Generalizable Ultrasound Image Segmentation
###### Abstract
Segment anything model (SAM), an eminent universal image segmentation model, has recently gathered considerable attention within the domain of medical image segmentation. Despite the remarkable performance of SAM on natural images, it grapples with significant performance degradation and limited generalization when confronted with medical images, particularly with those involving objects of low contrast, faint boundaries, intricate shapes, and diminutive sizes. In this paper, we propose SAMUS, a universal model tailored for ultrasound image segmentation. In contrast to previous SAM-based universal models, SAMUS pursues not only better generalization but also lower deployment cost, rendering it more suitable for clinical applications. Specifically, based on SAM, a parallel CNN branch is introduced to inject local features into the ViT encoder through cross-branch attention for better medical image segmentation. Then, a position adapter and a feature adapter are developed to adapt SAM from natural to medical domains and from requiring large-size inputs (1024\(\times\)1024) to small-size inputs (256\(\times\)256) for more clinical-friendly deployment. A comprehensive ultrasound dataset, comprising about 30k images and 69k masks and covering six object categories, is collected for verification. Extensive comparison experiments demonstrate SAMUS's superiority against the state-of-the-art task-specific models and universal foundation models under both task-specific evaluation and generalization evaluation. Moreover, SAMUS is deployable on entry-level GPUs, as it has been liberated from the constraints of long sequence encoding. The code, data, and models will be released at [https://github.com/xianlin7/SAMUS](https://github.com/xianlin7/SAMUS).
## 1 Introduction
Medical image segmentation, a crucial technology to discern and highlight specific organs, tissues, and lesions within medical images, serves as an integral component of computer-aided diagnosis systems [14]. Numerous deep-learning models have been proposed for automatic medical image segmentation, showcasing substantial potential [17, 18]. However, these models are tailored for specific objects and necessitate retraining when applied to other objects, resulting in great inconvenience for clinical use.
Segment anything model (SAM), serving as a versatile foundation model for vision segmentation, has garnered considerable acclaim owing to its remarkable segmentation capabilities across diverse objects and robust zero-shot generalization capacity [15]. According to user prompts, including points, bounding boxes, and coarse masks, SAM is capable of segmenting the corresponding objects. Therefore, through simple prompting, SAM can be effortlessly adapted to various segmentation applications. This paradigm enables the integration of multiple individual medical image segmentation tasks into a unified framework (_i.e._, a universal model), greatly facilitating clinical deployment [10].
Despite constructing the largest dataset to date (_i.e._, SA-1B), SAM encounters a rapid performance degradation in the medical domain due to the scarcity of reliable clinical annotations [10]. Some foundation models have been proposed to adapt SAM to medical image segmentation by tuning SAM on medical datasets [18, 19]. However, the same as SAM, they perform a no-overlap 16x tokenization on the input images before feature modeling, which destroys the local information crucial for identifying small targets and boundaries, making them struggle to segment clinical objects with complex/threadlike shapes, weak boundaries, small sizes, or low contrast. Besides, most of them require inputs with the size of \(1024\times 1024\), causing a substantial burden on GPU consumption due to the generated long input sequence.
In this paper, we present SAMUS to transfer the exceptional segmentation performance and strong generalization ability of SAM to the domain of medical image segmentation, while reducing computational complexity. SAMUS inherits the ViT image encoder, prompt encoder, and mask decoder of SAM, with tailored designs to the image encoder. First, we shorten the sequence length of the ViT-branch by reducing the input size to lower the computational complexity. Then, a feature adapter and a position adapter are developed to fine-tune the ViT image encoder from natural to medical domains. To complement local (_i.e._, low-level) information in the ViT image encoder, we introduce a parallel CNN-branch image encoder, running alongside the ViT-branch and propose a cross-branch attention module to enable each patch in the ViT-branch to assimilate local information from the CNN-branch. Furthermore, we construct a large ultrasound dataset called US30K to comprehensively evaluate the efficacy of SAMUS. Experimental
results demonstrate that SAMUS outperforms the state-of-the-art methods in both task-specific and universal medical image segmentation. More importantly, SAMUS exhibits remarkable generalization capabilities, while considerably reducing the training cost compared to SAM. The contributions can be summarized as follows:
* A foundation model, SAMUS, designed for universal ultrasound image segmentation, requiring much fewer GPU resources compared to SAM.
* A CNN-branch image encoder and a cross-branch attention module to complement local information effectively to the ViT image encoder of SAM.
* A feature adapter and a position adapter to fine-tune the ViT-branch image encoder, further optimizing SAM for the medical domain
* A large ultrasonic dataset comprising 30,106 images with 68,570 masks to thoroughly evaluate the effectiveness of SAMUS.
## 2 Related Works
### Visual Tuning
With the surprising development of foundation models in computer vision, a series of visual tuning approaches have been proposed to adapt these foundation models to downstream tasks. Generally, recent visual tuning approaches can be categorized into five main categories, including fine-tuning, parameter tuning, remapping tuning, prompt tuning, and adapt tuning [23]. Specifically, fine-tuning methods involve either adjusting the entire parameter set of pre-trained models or selectively fine-tuning specific parts of pre-trained models [12]. Parameter tuning methods directly modify the weights or biases of model parameters [13]. Remapping methods transfer the learned information from pre-trained models to downstream models through knowledge distillation, weight-based remapping, or architecture-based remapping [14]. Prompt tuning introduces the knowledge of downstream tasks by either incorporating a set of learnable parameters with the inputs or designing a sub-network to generate visual prompts [23]. Adapter tuning, the most widely-adopted strategy, facilitates the learning of downstream tasks by incorporating additional learnable parameters with frozen pre-trained models [23].
### Adapt SAM to Medical Image Segmentation
SAM has demonstrated remarkable performance in natural images but struggles with some medical image segmentation tasks, especially on objects with complex shapes, blurred boundaries, small sizes, or low contrast [10]. To bridge this gap and enable SAM to adapt effectively to the medical image domain, several methods have been proposed to tune SAM using limited downstream medical datasets. MedSAM trains SAM on medical images at an acceptable cost by freezing the image encoder and the prompt encoder, focusing on tuning the mask decoder of SAM [12]. SAMed applies the low-rank-based (LoRA) strategy on the image encoder to tune SAM at a lower computational cost, making it more feasible for medical image segmentation [14]. MSA adopts two down-ReLU-up adapters on each transformer layer of the ViT image encoder to introduce task-specific information [23]. As illustrated in Fig. 1, compared to current SAM-based foundation models, the proposed SAMUS focuses more on complementing local features and reducing GPU consumption, which is crucial for accurate and easy-to-deploy medical image segmentation in clinical scenarios.
## 3 Methods
### Overview
As depicted in Fig. 2, the overall architecture of SAMUS is inherited from SAM, retaining the structure and parameters of the prompt encoder and the mask decoder without any adjustment. Comparatively, the image encoder is carefully modified to address the challenges of inadequate local features and excessive computational memory consumption, making it more suitable for clinically-friendly segmentation. Major modifications include reducing the input size, overlapping the patch embedding, introducing adapters to the ViT branch, adding a CNN branch, and introducing cross-branch attention (CBA). Specifically, the input spatial resolution is scaled down from \(1024\times 1024\) pixels to \(256\times 256\) pixels, resulting in a substantial reduction in GPU memory cost due to the shorter input sequence in transformers. The overlapped patch embedding uses the same parameters as the patch embedding in SAM while its patch stride is half of the original stride, better preserving the information at patch boundaries. Adapters in the ViT branch include a position adapter and five feature adapters. The position adapter accommodates the global position embedding to the shorter sequences caused by the smaller input size. The first feature adapter follows the overlapped patch embedding to align input features with the required feature distribution of the pre-trained ViT image encoder. The remaining feature adapters are attached to the residual connections of the feed-forward network in the global transformer to fine-tune the pre-trained image encoder. In terms of the CNN branch, it is parallel to the ViT branch, providing complementary local information to the latter through the CBA module, which takes the ViT-branch features as the query and builds global dependency with features from the CNN branch. It should be noted that CBA is only integrated into each global transformer. Finally,
Figure 1: Structure comparison of different SAM-based foundation models for medical image segmentation.
the outputs of both the two branches are combined as the final image feature embedding of SAMUS.
### Adapters in the ViT Branch
To facilitate the generalization of the trained image encoder (_i.e._, the ViT branch) of SAM to smaller input sizes and the medical image domain, we introduce a position adapter and five feature adapters. These adapters can effectively tune the ViT branch while only requiring much fewer parameters. Specifically, the position adapter is responsible for adjusting the positional embedding to match the resolution of the embedded sequence. It begins by downsampling the positional embedding through a max pooling with the stride and kernel size as 2, achieving the same resolution as the embedded sequence. Subsequently, a convolution operation with a kernel size of \(3\times 3\) is applied to tune the position embedding, further aiding the ViT branch in better handling smaller inputs. All feature adapters have the same structure that comprises three components: a down linear projection, an activation function, and an up linear projection. The procedure of each feature adapter can be formulated as:
\[\mathcal{A}(x)=\mathcal{G}(xE_{d})E_{u}, \tag{1}\]
where \(\mathcal{G}\) represents the GELU activation function, \(E_{d}\in\mathbb{R}^{d\times\frac{d}{4}}\) and \(E_{u}\in\mathbb{R}^{\frac{d}{4}\times d}\) are the projection matrices, and \(d\) is the dimension of the feature embedding. Through these simple operations, feature adapters enable the ViT branch to better adapt to the feature distribution of medical image domains.
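A minimal PyTorch sketch of the two adapter types, following Eq. (1) and the description above, is given below; the tensor layouts, the reduction factor of 4, and the module names are our reading and assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class FeatureAdapter(nn.Module):
    """Eq. (1): A(x) = GELU(x E_d) E_u, with a 4x channel reduction."""
    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)   # E_d in R^{d x d/4}
        self.act = nn.GELU()
        self.up = nn.Linear(dim // reduction, dim)     # E_u in R^{d/4 x d}

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, d) token embeddings
        return self.up(self.act(self.down(x)))

class PositionAdapter(nn.Module):
    """Max-pool SAM's positional embedding to the smaller token grid, then tune it with a 3x3 conv."""
    def __init__(self, dim: int):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, pos_embed: torch.Tensor) -> torch.Tensor:
        # pos_embed: (1, d, 64, 64); a channel-first layout is assumed here
        return self.conv(self.pool(pos_embed))         # -> (1, d, 32, 32)
```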
### The CNN Branch
The CNN branch consists of sequentially-connected convolution-pooling blocks. Specifically, the inputs pass through a single convolution block initially, followed by three convolution-pooling blocks. Then, the feature maps in the CNN branch share the same spatial resolution as those of the ViT branch. In the remaining part of the CNN branch, such single convolution blocks are repeated four times in sequence. More details are illustrated in Fig. 2. This minimalist and lightweight design of the CNN branch is intended to prevent overfitting during training.
### Cross-branch Attention
The cross-branch attention (CBA) module creates a bridge between the CNN branch and the ViT branch to further complement missing local features with the ViT branch. For a pair of feature maps from the ViT branch \(F_{v}\) and the CNN branch \(F_{c}\), cross-branch attention in the single head can be formulated as:
\[\mathcal{F}(F_{v},F_{c})=(\mathcal{S}(\frac{F_{v}E_{q}(F_{c}E_{k})^{T}}{\sqrt {d_{m}}})+R)(F_{c}E_{v}), \tag{2}\]
where \(\mathcal{S}\) represents the Softmax function. \(E_{q}\in\mathbb{R}^{d\times d_{m}}\), \(E_{k}\in\mathbb{R}^{d\times d_{m}}\), and \(E_{v}\in\mathbb{R}^{d\times d_{m}}\) are the learnable weight matrices used to project \(F_{c}\) and \(F_{v}\) to different feature subspaces. \(R\in\mathbb{R}^{hw\times hw}\) is the relative position embedding, and \(d_{m}\) is the dimension of CBA. The final output of CBA is a linear combination of the outputs of \(g\) such single-head attention operations.
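A single-head PyTorch sketch of Eq. (2) is shown below; the multi-head combination and any output projection are omitted, and the shapes and names are assumptions.

```python
import math
import torch
import torch.nn as nn

class CrossBranchAttention(nn.Module):
    """Single-head form of Eq. (2): queries from the ViT branch, keys/values from the CNN branch,
    with a learnable relative-position term R added to the attention map."""
    def __init__(self, dim: int, dim_m: int, num_tokens: int):
        super().__init__()
        self.q = nn.Linear(dim, dim_m, bias=False)     # E_q
        self.k = nn.Linear(dim, dim_m, bias=False)     # E_k
        self.v = nn.Linear(dim, dim_m, bias=False)     # E_v
        self.rel_pos = nn.Parameter(torch.zeros(num_tokens, num_tokens))  # R in R^{hw x hw}
        self.scale = 1.0 / math.sqrt(dim_m)

    def forward(self, f_vit: torch.Tensor, f_cnn: torch.Tensor) -> torch.Tensor:
        # f_vit, f_cnn: (B, hw, d) flattened feature maps from the two branches
        attn = torch.softmax(self.q(f_vit) @ self.k(f_cnn).transpose(-2, -1) * self.scale, dim=-1)
        attn = attn + self.rel_pos                     # add the relative position embedding R
        return attn @ self.v(f_cnn)                    # (B, hw, d_m)
```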
### Training Strategies
Before training, SAMUS initializes the parameters that are inherited from SAM using the weights trained on SA-1B. The remaining parameters are initialized randomly. During the training process, only the parameters from adapters, the CNN branch, and the CBA module are updated, while other parameters are kept frozen. The training process is supervised using a combined loss function, comprising the dice loss and the binary cross-entropy loss. For ease of use, SAMUS only uses the simplest positive point prompt. We mimic the process of experts providing prompts by randomly sampling a point in the foreground area of the label. SAMUS is trained by the Adam optimizer with an initial learning rate of 0.0001 and a batch size of 8 for 200 epochs.
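The two task-specific ingredients of this recipe, the random positive point prompt and the combined Dice + BCE supervision, can be sketched as follows; the equal loss weighting and the (x, y) point ordering are assumptions.

```python
import torch
import torch.nn.functional as F

def sample_point_prompt(mask: torch.Tensor):
    """Mimic an expert click: draw one random pixel from the foreground of the ground-truth mask."""
    ys, xs = torch.nonzero(mask > 0, as_tuple=True)
    i = torch.randint(len(ys), (1,))
    point = torch.stack([xs[i], ys[i]], dim=-1).float()   # (1, 2), (x, y) order assumed
    label = torch.ones(1)                                  # 1 = positive (foreground) point
    return point, label

def dice_bce_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Combined Dice + binary cross-entropy supervision (equal weighting assumed)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(-2, -1))
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1)) + eps)
    bce = F.binary_cross_entropy_with_logits(logits, target.float(), reduction="mean")
    return dice.mean() + bce
```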
## 4 Experiments
### Datasets
To comprehensively evaluate the effectiveness of SAMUS, we have constructed a large ultrasonic dataset named US30K as summarized in Table 1, containing data from seven publicly-available datasets, including TN3K (Gong et al. 2023), DDTI (Pedraza et al. 2015), TG3K (Wunderling et al. 2017), BUSI (Al-Dhabyani et al. 2020), UDIAT (Yap
Figure 2: Overview of the proposed SAMUS.
et al. 2020), CAMUS (Leclerc et al. 2019), and HMC-QU (Kiranyaz et al. 2020). The data in TN3K and TG3K is partitioned into train, validation, and test sets following TRFE (Gong et al. 2023). BUSI is randomly split into 7:1:2 for training, validation, and testing, respectively. CAMUS is divided into a train set and a test set first according to the challenge (Leclerc et al. 2019). Then, we randomly select 10\(\%\) patients from the train set to validate the model and the rest data as the final training data. To evaluate the generalization of different models, the other datasets in US30K are unseen during the training and validation process. To evaluate the segmentation performance and generalization ability of SAMUS against the state-of-the-art (SOTA) task-specific methods, several SOTA methods are implemented and trained on TN3K, BUSI, and CAMUS datasets individually for comparison. Furthermore, the comparison between SAMUS and other foundation models is conducted by training them on the entire US30K dataset and evaluating them on separate tasks.
### Compare with SOTA Task-specific Methods
**Comparison methods:** Twelve SOTA task-specific approaches are selected for comparison, covering CNN-based, transformer-based, and CNN-Transformer hybrid approaches. CNN-based methods include U-Net (Ronneberger, Fischer, and Brox 2015), CPFNet (Feng et al. 2020), CA-Net (Gu et al. 2020), CE-Net (Gu et al. 2019), and AAU-Net (Chen et al. 2022). Transformer-based methods include SwinUnet (Cao et al. 2022), SETR (Zheng et al. 2021), and MISSFormer (Huang et al. 2022). CNN-Transformer hybrid methods include TransUNet (Chen et al. 2021), TransFuse (Zhang, Liu, and Hu 2021), FAT-Net (Wu et al. 2022), and H2Former (He et al. 2023).
**Quantitative results:** Quantitative results of different task-specific approaches on TN3K, BUSI, CAMUS-LV, CAMUS-MYO, and CAMUS-LA are summarized in Table 2. Among these state-of-the-art approaches, H2Former achieves the best performance on TN3K and CAMUS-MYO, leading to the average Dice scores of \(82.48\%\) and \(87.31\%\) respectively. TransUNet, CA-Net, and FAT-Net achieve the best performance on BUSI, CAMUS-LV, and CAMUS-LA respectively, with the average Dice scores of \(82.22\%\), \(93.59\%\), and \(91.55\%\). Comparatively, SAMUS consistently achieves better performance on all five tasks including TN3K, BUSI, CAMUS-LV, CAMUS-MYO, and CAMUS-LA, with the average Dice scores of \(84.45\%\),
\begin{table}
\begin{tabular}{c c c c c c|c c} \hline \hline Dataset & Slice number & Mask number & Train slice & Validation slice & Test slice & Segmentation target \\ \hline TN3K & 3493 & 3493 & 2303 & 576 & 614 & Thyroid nodule \\ DDTI & 637 & 637 & - & - & 637 & Thyroid nodule \\ TG3K & 3585 & 3585 & 3226 & 359 & - & Thyroid gland \\ BUSI & 647 & 647 & 454 & 64 & 129 & Breast cancer \\ UDIAT & 163 & 163 & - & - & 163 & Breast cancer \\ CAMUS & 19232 & 57696 & 15315 & 1949 & 1968 & LV, MYO, LA \\ HMC-QU & 2349 & 2349 & - & - & 2349 & MYO \\ \hline US30K & 30106 & 68570 & 21298 & 2948 & 5860 & Above 6 categories \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the datasets in US30K. LV, MYO, and LA are short for the left ventricle, myocardium, and left atrium.
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{TN3K} & \multicolumn{2}{c|}{BUSI} & \multicolumn{2}{c|}{CAMUS-LV} & \multicolumn{2}{c|}{CAMUS-MYO} & \multicolumn{2}{c}{CAMUS-LA} \\ & Dice & HD & Dice & HD & Dice & HD & Dice & HD & Dice & HD \\ \hline U-Net & 79.01 & 34.12 & 78.11 & 33.60 & 93.56 & 9.90 & 86.86 & 16.87 & 91.00 & 12.91 \\ CPFNet & 79.43 & 33.07 & 80.56 & 27.98 & 93.32 & 9.63 & 86.68 & 16.51 & 91.51 & 12.26 \\ CA-Net & 80.52 & 33.65 & 81.88 & 28.67 & 93.59 & 9.77 & 87.21 & **16.24** & 91.28 & 12.22 \\ CE-Net & 80.37 & 32.79 & 81.60 & 29.19 & 93.31 & 9.65 & 86.47 & 16.66 & 91.14 & 12.39 \\ AAU-Net & 82.28 & 30.53 & 80.81 & 28.96 & 93.32 & 9.97 & 86.98 & 16.49 & 91.35 & 12.12 \\ SwinUnet & 70.08 & 44.13 & 67.23 & 47.02 & 91.72 & 12.80 & 84.46 & 20.25 & 89.80 & 14.74 \\ SETR & 67.80 & 44.11 & 68.22 & 40.37 & 92.82 & 11.34 & 86.20 & 18.27 & 90.52 & 13.91 \\ MISSFormer & 79.42 & 32.85 & 78.43 & 33.10 & 93.25 & 9.94 & 86.57 & 16.50 & 91.18 & 11.82 \\ TransUNet & 81.44 & 30.98 & 82.22 & 27.54 & 93.54 & 9.60 & 87.20 & 16.36 & 91.37 & 12.10 \\ TransFuse & 78.50 & 32.44 & 73.52 & 34.95 & 93.30 & 10.07 & 86.77 & 17.25 & 90.68 & 12.46 \\ FAT-Net & 80.45 & 32.77 & 82.16 & 28.55 & 93.59 & **9.20** & 87.19 & 15.93 & 91.55 & 12.05 \\ H2Former & 82.48 & 30.58 & 81.48 & 27.84 & 93.44 & 9.79 & 87.31 & 16.60 & 90.98 & 11.92 \\ SAMUS & **84.45** & **28.22** & **85.77** & **25.49** & **93.73** & 9.79 & **87.46** & 16.74 & **91.58** & **11.60** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative comparison of our SAMUS and SOTA task- specific methods on segmenting thyroid nodule (TN3K), breast cancer (BUSI), left ventricle (CAMUS-LV), myocardium (CAMUS-MYO), and left atrium (CAMUS- LA). The performance is evaluated by the Dice score (%) and Hausdorff distance (HD). The best results are marked in bold.
85.77\(\%\), 93.73\(\%\), 87.46\(\%\), and 91.58\(\%\) respectively. It validates the effectiveness of adapting SAM to the medical image domain by SAMUS.
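For reference, the two metrics reported in Table 2 can be computed as follows for a pair of binary masks. The snippet below is an illustrative sketch rather than the evaluation code used in this work; in particular, the Hausdorff distance is computed here on all foreground voxel coordinates via SciPy, whereas boundary-based or percentile (e.g., HD95) variants are also common.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice score (%) between two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 100.0  # both masks empty: treat as perfect agreement
    return 100.0 * 2.0 * np.logical_and(pred, gt).sum() / denom

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground voxels of two masks."""
    p, g = np.argwhere(pred > 0), np.argwhere(gt > 0)
    if len(p) == 0 or len(g) == 0:
        return float("inf")
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```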
**Qualitative results:** Qualitative segmentation results of different methods, including U-Net (Ronneberger, Fischer, and Brox 2015), AAU-Net (Chen et al. 2022), MISSFormer (Huang et al. 2022), H2Former (He et al. 2023), and SAMUS, are illustrated in Fig. 4. Visually, segmenting the ultrasound images poses a challenge due to their low contrast, non-uniform features, and vague object boundaries. Existing methods struggle to accurately distinguish the target from the background, leading to extensive false negatives and/or false positives. Comparatively, SAMUS demonstrates superiority in preserving the integrity of target regions and reducing false positives. It is attributed to the inherent advantages of the SAM framework, as well as the specific adjustments and designs introduced in SAMUS.
**Generalization ability:** Quantitative comparison on the generalization performance of different task-specific methods is illustrated in Fig. 3. Among comparison methods, H2Former, TransUNet, and TransFuse achieve the best performance on DDTI, UDIAT, and HMC-QU respectively. Comparatively, SAMUS surpasses the best comparison method on each dataset with an average increase of \(7.06\%\), \(12.22\%\), and \(7.42\%\) in Dice respectively. Comparing the performance between seeable and unseen datasets, SAMUS encounters the least performance degradation in contrast to other comparison methods across three different segmentation tasks. One interesting observation is that on the breast cancer segmentation task, the performance of SAMUS on the unseen dataset (_i.e._, UDIAT) is even better than that of the best comparison method on the seeable dataset (_i.e._, BUSI). It demonstrates the exceptional generalization ability of SAMUS in handling unseen domains, showcasing its robustness and adaptability across diverse medical image segmentation scenarios.
### Compare with SOTA Foundation Models
**Comparison methods:** Four SOTA foundation models are selected for comparison, including the original SAM Kirillov et al. (2023), MedSAM Ma and Wang (2023), SAMed Zhang and Liu (2023), and MSA Wu et al. (2023).
**Quantitative results:** To validate the universal performance of SAMUS as a foundation model on diverse downstream tasks, we conduct a comparison among foundation models on the US30K dataset. As summarized in Table 3, SAM, the model trained on SA-1B, exhibits significant performance degradation on medical image segmentation without tuning. With simple fine-tuning on the mask decoder of SAM based on the US30K dataset, MedSAM considerably improves the performance of SAM. MSA, the best-performing model among comparison foundation models, effectively improves the segmentation performance of SAM with an average increase of \(53.08\%\), \(27.65\%\), \(62.77\%\), \(53.05\%\), and \(74.52\%\)
Figure 4: Qualitative comparisons between SAMUS and task-specific methods. From top to bottom are examples of segmenting thyroid nodule, breast cancer, and myocardium.
Figure 5: Qualitative comparisons between SAMUS and foundation models. From top to bottom are examples of segmenting thyroid nodule, breast cancer, and myocardium.
Figure 3: Comparison between SAMUS and task-specific methods evaluated on seeable (marked in blue) and unseen datasets (marked in orange).
in Dice across TN3K, BUSI, CAMUS-LV, CAMUS-MYO, and CAMUS-LA. Compared to MSA, SAMUS consistently achieves remarkable improvements across the above five datasets, with superior Dice scores of \(83.05\%\), \(84.54\%\), \(91.13\%\), \(83.11\%\), and \(92\%\), respectively. It validates the effectiveness of the CNN branch and the CBA module in SAMUS especially for complementing local information which is crucial for medical image segmentation.
**Qualitative results:** Qualitative segmentation results of different foundation models, including SAM, MedSAM, SAMed, MSA, and SAMUS, are presented in Fig. 5. Without tuning in medical images, SAM completely loses the ability to segment everything. Through applying tuning methods to SAM, MedSAM, SAMed, and MSA can somewhat restore the segmentation capability of SAM. However, they still struggle to accurately delineate segmentation boundaries in ultrasound images, resulting in extensive false negatives and false positives. In contrast, SAMUS exhibits superior performance by accurately locating segmentation boundaries, even for the low-contrast ones. It is consistent with the analysis that complementing local information with the image encoder is helpful, especially for boundary/shape preservation in medical image segmentation.
**Generalization ability:** Comparison of different foundation models on unseen domains is summarized in Fig. 6. In general, the generalization performance of foundation models trained on US30K in the medical image segmentation tasks is far better than that of the original SAM. In terms of the three pairs of segmentation tasks, namely thyroid nodule segmentation, breast cancer segmentation, and myocardium segmentation, all foundation models encounter severe performance degradation on myocardium segmentation and generalize well on breast cancer segmentation. SAMUS consistently achieves the best performance across all three unseen datasets, leading to superior Dice scores of \(66.78\%\), \(78.06\%\), and \(56.77\%\) for the segmentation of thyroid nodule, breast cancer, and myocardium, respectively. It underscores the exceptional generalization ability of SAMUS, outperforming other foundation models consistently and substantially on unseen domains.
**Deployment cost:** We conducted a comprehensive evaluation of SAMUS and other foundation models on deployment efficiency, including GPU memory cost, model parameters, computational complexity, inference speed, segmentation performance, and generalization performance. For ease of comparison, the GPU memory is tested with the batch size set to 1 during training and measured in gigabytes (G). The computational complexity is measured in giga floating-point operations (GFLOPs) and the inference speed in frames per second (FPS). The segmentation performance is measured by the average Dice score across all seeable datasets and the generalization performance is evaluated based on the average Dice score across all unseen datasets. All the above indicators are normalized and depicted with a radar plot as shown in Fig. 7. Among comparison models, SAMed exhibits the lowest GPU memory cost, model
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{TN3K} & \multicolumn{2}{c|}{BUSI} & \multicolumn{2}{c|}{CAMUS-LV} & \multicolumn{2}{c|}{CAMUS-MYO} & \multicolumn{2}{c}{CAMUS-LA} \\ & Dice & HD & Dice & HD & Dice & HD & Dice & HD & Dice & HD \\ \hline SAM & 29.59 & 134.87 & 54.01 & 82.39 & 28.18 & 196.64 & 29.42 & 184.10 & 17.28 & 193.70 \\ MedSAM & 71.09 & 42.91 & 77.75 & 34.26 & 87.52 & 15.28 & 76.07 & 25.72 & 88.06 & 15.70 \\ SAMed & 80.40 & 31.29 & 74.82 & 34.60 & 87.67 & 13.24 & 82.60 & 19.48 & 90.92 & 12.60 \\ MSA & 82.67 & 29.15 & 81.66 & 28.87 & 90.95 & **11.29** & 82.47 & 19.28 & 91.80 & **11.59** \\ SAMUS & **83.05** & **28.82** & **84.54** & **27.24** & **91.13** & 11.76 & **83.11** & **18.99** & **92.00** & 12.08 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Quantitative comparison of our SAMUS and other foundation models on seeable US30K data. The performance is evaluated by the Dice score (%) and Hausdorff distance (HD).
Figure 6: Segmentation and generalization ability comparison of our SAMUS and other foundation models on seeable (in light color) and unseen (in dark color) US30K data.
Figure 7: Comparison of SAMUS and foundation models on GPU memory cost, model parameters, computational complexity, inference speed, performance, and generalization.
parameters, computational complexity, and the fastest inference speed. However, its segmentation and generalization performance is inferior to both MSA and SAMUS. Though SAMUS has more parameters than other models, its GPU memory cost and computational complexity are the second-lowest and its inference speed is the second-fastest, indicating that SAMUS is a more clinically-friendly model. Moreover, the deployment performance of SAMUS is quite close to that of the most easy-to-deploy method (_i.e._, SAMed) with much better segmentation and generalization performance.
### Ablation Study
**Effectiveness of each component in SAMUS:** Four components in SAMUS, including the CNN branch, CBA, the feature adapter, and the position adapter, are introduced to the original SAM sequentially and trained on the TN3K and BUSI datasets for evaluation. As summarized in Table 4, coupling any component of SAMUS can effectively improve the segmentation performance and generalization ability of SAM on medical tasks. Even a simple position adapter can improve SAM by \(50.6\%\), \(38.1\%\), \(26.77\%\), and \(30.54\%\) in Dice on TN3K, DDTI, BUSI, and UDIAT, respectively. By introducing local features, the CNN branch obtains more significant performance improvement than the position adapter. Besides, simply adding the outputs of the CNN branch and the ViT branch for fusion is not the best option. Introducing CBA can further promote the exploration of local features, thereby achieving an increase of \(1.48\%\) and \(2.11\%\) in Dice on TN3K and BUSI compared to the CNN branch alone. By coupling all four components, SAMUS achieves the best segmentation performance and generalization.
**Effect of different prompts for SAMUS:** To analyze the effect of prompts, we evaluate the performance of SAMUS trained on US30K under different point prompts (Table 5). In general, SAMUS is robust to point locations and numbers. For objects with large within-class representation variations (_e.g._, thyroid nodule, left ventricle, and myocardium), the segmentation performance varies within \(1\%\) in Dice under single-point prompts in different locations. Moreover, the performance can be improved greatly by introducing multipoint prompts. Comparatively, for objects with homogeneous features (_e.g._, breast cancer and left atrium), performance variations under different single-point prompts are within the range of \(\pm 0.3\%\), and introducing multipoint prompts would not necessarily bring performance gains. One possible reason is that using more points may produce information redundancy or exclusion.
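To make the prompt settings concrete, the sketch below shows one plausible way the single-point (pt1-pt3) and multipoint (5/10 points) prompts could be drawn from the foreground of a ground-truth mask. The function name, the uniform sampling, and the positive-label convention are illustrative assumptions, not the exact procedure used in our experiments.

```python
import numpy as np

def sample_point_prompts(mask: np.ndarray, num_points: int = 1, seed: int = 0):
    """Randomly sample point prompts from the foreground of a binary mask.

    Returns (points, labels): (row, col) coordinates and all-positive labels.
    """
    rng = np.random.default_rng(seed)
    fg = np.argwhere(mask > 0)                  # all foreground pixel coordinates
    if len(fg) == 0:
        raise ValueError("mask has no foreground region to prompt from")
    idx = rng.choice(len(fg), size=min(num_points, len(fg)), replace=False)
    points = fg[idx]                            # shape: (num_points, 2)
    labels = np.ones(len(points), dtype=int)    # 1 = positive (foreground) click
    return points, labels

# pt1/pt2/pt3 correspond to different random seeds; multipoint prompts use num_points=5 or 10.
```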
## 5 Conclusion
In this paper, we propose SAMUS, a universal foundation model derived from SAM, for clinically-friendly and generalizable ultrasound image segmentation. Specifically, we present a parallel CNN branch image encoder, a feature adapter, a position adapter, and a cross-branch attention module to enrich the features for small-size objects and boundary areas while reducing GPU consumption. Furthermore, we construct a large ultrasound image dataset US30K, consisting of 30,106 images and 68,570 masks for evaluation and potential clinical usage. Experiments on both seeable and unseen domains demonstrate the outstanding segmentation ability and strong generalization ability of SAMUS. Moreover, the GPU memory cost of SAMUS is merely 28\(\%\) of that required to train the entire SAM, and SAMUS is about 3\(\times\) faster than SAM for inference.
\begin{table}
\begin{tabular}{c c c c|c c|c c|c c|c c} \hline \hline \multicolumn{4}{c|}{Components} & \multicolumn{2}{c|}{TN3K} & \multicolumn{2}{c|}{DDTI} & \multicolumn{2}{c|}{BUSI} & \multicolumn{2}{c}{UDIAT} \\ CNN Branch & CBA & F-Adapter & P-Adapter & Dice & HD & Dice & HD & Dice & HD & Dice & HD \\ \hline ✗ & ✗ & ✗ & ✗ & 29.59 & 134.87 & 25.57 & 116.23 & 54.01 & 82.39 & 49.18 & 104.43 \\ ✓ & ✗ & ✗ & ✗ & 82.17 & 31.41 & 68.31 & 48.66 & 81.42 & 29.50 & 82.24 & 22.53 \\ ✓ & ✓ & ✗ & ✗ & 83.65 & 28.47 & 72.71 & 35.76 & 83.53 & 30.26 & 80.87 & 25.60 \\ ✗ & ✗ & ✓ & ✗ & 83.64 & 29.83 & 70.38 & 45.29 & 84.53 & 26.30 & 81.25 & 23.18 \\ ✗ & ✗ & ✗ & ✓ & 80.19 & 32.12 & 63.67 & 53.86 & 80.78 & 29.00 & 79.72 & 24.71 \\ ✓ & ✓ & ✓ & ✓ & **84.45** & **28.22** & **74.66** & **21.03** & **85.77** & **25.49** & **83.17** & **21.25** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study on different component combinations of SAMUS on the thyroid nodule and breast cancer segmentation. F-Adapter and P-Adapter represent the feature adapter and the position adapter respectively.
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Prompt} & \multicolumn{2}{c|}{TN3K} & \multicolumn{2}{c|}{DDTI} & \multicolumn{2}{c|}{BUSI} & \multicolumn{2}{c|}{CAMUS-LV} & \multicolumn{2}{c|}{CAMUS-MYO} & \multicolumn{2}{c}{CAMUS-LA} \\ & Dice & HD & Dice & HD & Dice & HD & Dice & HD & Dice & HD & Dice & HD \\ \hline pt1 & 83.05 & 28.82 & 66.78 & 44.35 & 84.54 & 27.24 & 91.13 & 11.76 & 83.11 & 18.99 & 92.00 & **12.08** \\ pt2 & 83.13 & 28.67 & 67.07 & 44.22 & 84.43 & 27.89 & 91.45 & 11.40 & 82.45 & 19.34 & 92.01 & 12.11 \\ pt3 & 82.96 & 28.71 & 67.00 & 44.37 & 84.32 & **26.96** & 90.53 & 11.93 & 82.94 & 18.97 & **92.03** & 12.15 \\
5 points & **83.98** & **28.24** & **68.93** & **43.87** & **85.20** & 27.71 & **92.95** & **10.60** & **87.20** & **17.19** & 91.68 & 12.46 \\
10 points & 83.45 & 28.95 & 68.12 & 45.29 & 84.61 & 29.05 & 92.26 & 10.76 & 87.15 & 17.45 & 91.29 & 12.63 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study of different prompts. Pt1, pt2, and pt3 represent the single-point prompt in different (randomly determined) foreground positions. Multipoint prompts are generated by random sampling on the foreground areas. |
2309.17097 | Benchmarking Collaborative Learning Methods Cost-Effectiveness for
Prostate Segmentation | Healthcare data is often split into medium/small-sized collections across
multiple hospitals and access to it is encumbered by privacy regulations. This
brings difficulties to use them for the development of machine learning and
deep learning models, which are known to be data-hungry. One way to overcome
this limitation is to use collaborative learning (CL) methods, which allow
hospitals to work collaboratively to solve a task, without the need to
explicitly share local data.
In this paper, we address a prostate segmentation problem from MRI in a
collaborative scenario by comparing two different approaches: federated
learning (FL) and consensus-based methods (CBM).
To the best of our knowledge, this is the first work in which CBM, such as
label fusion techniques, are used to solve a problem of collaborative learning.
In this setting, CBM combine predictions from locally trained models to obtain
a federated strong learner with ideally improved robustness and predictive
variance properties.
Our experiments show that, in the considered practical scenario, CBMs provide
equal or better results than FL, while being highly cost-effective. Our results
demonstrate that the consensus paradigm may represent a valid alternative to FL
for typical training tasks in medical imaging. | Lucia Innocenti, Michela Antonelli, Francesco Cremonesi, Kenaan Sarhan, Alejandro Granados, Vicky Goh, Sebastien Ourselin, Marco Lorenzi | 2023-09-29T09:47:18Z | http://arxiv.org/abs/2309.17097v2 | # Benchmarking Collaborative Learning Methods Cost-Effectiveness for Prostate Segmentation
###### Abstract
Healthcare data is often split into medium/small-sized collections across multiple hospitals and access to it is encumbered by privacy regulations. This brings difficulties to use them for the development of machine learning and deep learning models, which are known to be data-hungry. One way to overcome this limitation is to use collaborative learning (CL) methods, which allow hospitals to work collaboratively to solve a task, without the need to explicitly share local data.
In this paper, we address a prostate segmentation problem from MRI in a collaborative scenario by comparing two different approaches: federated learning (FL) and consensus-based methods (CBM).
To the best of our knowledge, this is the first work in which CBM, such as label fusion techniques, are used to solve a problem of collaborative learning. In this setting, CBM combine predictions from locally trained models to obtain a federated strong learner with ideally improved robustness and predictive variance properties.
Our experiments show that, in the considered practical scenario, CBMs provide equal or better results than FL, while being highly cost-effective. Our results demonstrate that the consensus paradigm may represent a valid alternative to FL for typical training tasks in medical imaging.
Keywords:Collaborative Learning Cost-Effectiveness Prostate Segmentation.
## 1 Introduction
Prostate cancer is the most frequently diagnosed cancer in men in more than half of the countries worldwide [1]. While accurate prostate segmentation is crucial for effective radiotherapy planning [2], traditional manual segmentation is expensive, time-consuming, and dependent on the observer [3]. Automated or semi-automated methods are needed for efficient and reliable prostate segmentation [4], and deep learning is nowadays the main tool for solving the segmentation task [5]. Hospital data are highly sensitive and are difficult to collect in data silos for centralized training. This makes their use in notoriously data-hungry deep learning systems problematic. For this reason, collaborative learning (CL) is emerging as a powerful approach: it allows different decentralized entities to
collaborate in solving a task, and researchers are exploring ways to do this by keeping the local data private [6].
Federated learning (FL) [7] has gained great attention since its introduction. FL solves a collaborative training problem in which a model is collectively optimized by different clients, each of them owning a local private dataset [8]. Through different training rounds, a server orchestrates local optimization and aggregation of trained parameters across clients. Since training data is kept on the client's side, FL addresses the problems of data privacy and governance. Nevertheless, FL still poses several challenges in real-world applications [9, 10], including 1) the sensitivity of the optimization result to the heterogeneity of system and data distribution across clients, and 2) the need for a large number of communication rounds, making communication cost a critical aspect. Moreover, from a practical perspective, FL systems are costly, since they are based on the setup and maintenance of complex computational infrastructures in hospitals, and thus require the availability of local resources and personnel [11, 12, 13].
Consensus-based methods (CBM) are a class of algorithms widely explored in machine learning, where the outputs from an ensemble of weak-predictors are aggregated to define a strong-predictor, outperforming the weak experts in terms of predictive robustness [14]. In medical imaging, CBM are often at the core of state-of-the-art approaches for image segmentation tasks [15, 16].
In this paper, we propose a comparison of these two different collaborative methods. Our specific focus is on collaborative prostate segmentation applied to magnetic resonance images (MRI). Unlike FL, in CBM independent models are locally trained by each client only once and, at testing time, a strong predictor is obtained by aggregating the output of the local models. In contrast to FL, the setup of a CBM system in a hospital is straightforward, since no coordination in training is needed. Moreover, CBM provides data privacy and governance guarantees akin to FL, because no private information is shared during training, and model parameters are shared only once after training. Note that CBM have been coupled with FL training in previous works [17, 18, 19, 20, 21, 22, 23]. Nevertheless, most of these approaches are still based on distributed optimization, and thus they require setting up the whole FL infrastructure in hospitals, while the CBM approach we analyze here overcomes this limitation.
We present in this work a thorough benchmark of these models based on a cross-silo collaborative prostate segmentation task. The contributions of this paper are the following:
* We generate a distributed scenario based on natural data splits from a large collection of prostate MRI datasets currently available to the community, thus defining a realistic federated simulation.
* We define novel metrics to compare FL and CBM in terms of accuracy, robustness, cost-effectiveness, and utility.
* We apply the two CL approaches to this federated scenario and evaluate them in terms of accuracy and new-proposed metrics.
The paper is structured as follows. In Section 2 we present the data and the learning models used for the benchmark, i.e., federated learning and consensus-based methods, and present the experiments and evaluation methods adopted in this work. Section 3 presents the experiment setting and results. Finally, Section 4 discusses our findings and future perspectives.
## 2 Benchmark definition
Starting from a large publicly available collection of data for prostate segmentation, we first define the federated setting by partitioning the data based on image acquisition characteristics and protocols. This allows us to obtain splits with controlled inter-center heterogeneity, thus simulating a realistic collaborative training scenario. We further define experiments to evaluate segmentation accuracy, cost-effectiveness, robustness to data heterogeneity, and utility for clients. Finally, we apply the differential privacy (DP) paradigm to different methods and we analyze how they respond to it.
### Distributed Scenario
We gathered data provided by 3 major publicly available datasets on prostate cancer imaging analysis, and by 1 private dataset:
* **Decathlon Prostate**[24] provides 32 prostate MRIs for training.
* **Promise12**[25] consists of 50 training cases obtained with different scanners. Of those, 27 cases were acquired by using an endorectal coil.
* **ProstateX**[26] contains prostate MRIs acquired by using two different scanners (Skyra and Triotin, both from Siemens). Segmentations of 194 cases are available [27].
* **Private Hospital Dataset** (PrivateDS) is composed of 36 MRIs collected by using a Siemens Aera scanner during a project on active surveillance for prostate cancer detection. An expert radiologist produced prostate masks. This dataset is used as an independent test set.
Datasets were split as in Table 1, to define centers characterized by specific image acquisition properties, thus obtaining heterogeneous image distributions among centers. The common preprocessing pipeline applied to all the data comprised flipping, cropping/padding to the same dimension, and intensity normalization. N4 bias correction was also applied to the data from Promise12 in N03 in order to compensate for the intensity artifacts introduced by the endorectal coil.
### Collaborative Learning Frameworks
In our scenario we consider \(M\) hospitals, each having a local dataset \(\mathcal{D}_{i}=\{z_{k,i}\}_{k=1}^{N_{i}}\). Given \(z\), a volumetric MRI, and a vector of parameters \(\theta\), we define a segmentation problem in which a model \(g\) produces binary masks \(h_{z}=g(z,\theta)\). Each hospital is a client indexed by \(i\in\{1,\dots,M\}\), and the local training consists of solving a loss minimization problem for a local loss function \(\mathcal{L}_{i}(\cdot)\).
#### 2.2.2 Federated learning.
FL is a collaborative optimization problem defined by:
\[\theta_{g}=\operatorname*{arg\,min}_{\theta}(\mathcal{L}(\theta))\text{ s.t. }\mathcal{L}(\theta)\coloneqq\sum_{i=1}^{M}p_{i}\mathcal{L}_{i}(\theta). \tag{1}\]
In FL, local losses are weighted by \(p_{i}\), such that \(\sum_{i=1}^{M}p_{i}=1\), where the weights \(p_{i}\) are arbitrarily set, for example, based on the local dataset size. Different strategies on how to optimize the weights have been proposed in the literature, with the aim of mitigating the impact of data heterogeneity or client drift. In this paper, we consider the following FL strategies from the state-of-the-art:
* **FedAvg[28]** is the backbone of FL optimization where, at round \(r\), each client locally executes a number of stochastic gradient descent steps, and sends the partially optimized model \(\theta_{i}^{r}\) to the server. The received models are weighted and averaged by the server into a global one, \(\theta_{g}^{r+1}\), which is then sent back to the clients to initialize the next optimization round. This process is repeated for \(R\) rounds until convergence.
* **FedProx[29]** tackles the problem of federated optimization with data heterogeneity across clients. This approach extends FedAvg by introducing a proximal term to the local objective function to penalize model drift from the global optimization during local training. The proximal term is controlled by a trade-off hyperparameter, \(\mu\), through the following optimization problem: \[\mathcal{L}_{i}^{r}(\theta_{i}^{r})\coloneqq\frac{1}{N_{i}}\sum_{k=1}^{N_{i}}\mathcal{L}(z_{k,i},\theta_{i}^{r})+\frac{\mu}{2}||\theta_{i}^{r}-\theta_{g}^{r}||^{2}.\] (2) A minimal code sketch of the local update and server aggregation for both strategies is given after this list.
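The sketch below illustrates the two strategies above in simplified PyTorch-style code for a single round: each client performs local SGD steps, adding the FedProx proximal term of Eq. (2) when \(\mu>0\) (\(\mu=0\) recovers FedAvg), and the server forms the weighted average of Eq. (1). It is a schematic illustration only; plain SGD is used for brevity (the experiments use AdamW), `clients` is assumed to be a list of (model, loader, loss_fn) triples, and this is not the Fed-BioMed implementation used for the experiments.

```python
import torch

def local_update(model, loader, loss_fn, global_params, mu=0.0, lr=1e-3, steps=20):
    """One client's local optimization; mu = 0 gives FedAvg, mu > 0 gives FedProx."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    batches = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(batches)
        except StopIteration:
            batches = iter(loader)
            x, y = next(batches)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        if mu > 0:  # FedProx proximal term: (mu / 2) * ||theta_i - theta_g||^2, Eq. (2)
            prox = sum(((p - g.detach()) ** 2).sum()
                       for p, g in zip(model.parameters(), global_params))
            loss = loss + 0.5 * mu * prox
        loss.backward()
        opt.step()
    return [p.detach().clone() for p in model.parameters()]

def fed_round(global_params, clients, weights, mu=0.0):
    """One FL round: broadcast the global model, collect local updates, average them."""
    updates = []
    for model, loader, loss_fn in clients:
        with torch.no_grad():
            for p, g in zip(model.parameters(), global_params):
                p.copy_(g)                      # initialize the client with theta_g^r
        updates.append(local_update(model, loader, loss_fn, global_params, mu=mu))
    return [sum(w * u[k] for w, u in zip(weights, updates))   # weighted average, Eq. (1)
            for k in range(len(global_params))]
```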
#### 2.2.3 Consensus-based methods.
With CBM, a global federated ensemble of weak predictors is composed by aggregating the outputs from the different local models. During _training_, each client fully optimizes the segmentation model \(g(z,\theta_{i})\) on its local dataset \(D_{i}\), by independently minimizing the local objective function \(\mathcal{L}_{i}\). Trained local models are subsequently centralized and, for a given test image \(z^{\prime}\) at _inference_ time, the segmentation masks from all the local models are computed and aggregated by applying an ensembling strategy:
\[h_{z^{\prime}}=\texttt{ensembling}(\{h_{i}(z^{\prime})\}|_{i=1}^{M})\text{ s.t. }h_{i}(z^{\prime})=g(z^{\prime},\theta_{i}). \tag{3}\]
\begin{table}
\begin{tabular}{l c l l c c} \hline \hline ID & \#Samples & Dataset & Subset Selection & Training & Test \\ \hline N01 & 32 & Decathlon & Full Dataset & Y & Y \\ N02 & 23 & Promise12 & No Endorectal Coil & Y & Y \\ N03 & 27 & Promise12 & Only Endorectal Coil & Y & Y \\ N04 & 184 & ProstateX & Only Scanner Skyra & Y & Y \\ N05 & 5 & ProstateX & Only Scanner Triotin & N & Y \\ N06 & 36 & PrivateDS & Full Dataset & N & Y \\ \hline \hline \end{tabular}
\end{table}
Table 1: Description of the different centers here considered for the distributed learning scenario, derived by partitioning the four datasets Decathlon, ProstateX, Promise12, and PrivateDS.
Among the different approaches to ensembling proposed in the literature [30], in this work we consider:
* **Majority Voting**[31] (MV) is a simple merging method that assigns to each voxel the label predicted by the majority of the local models.
* **Staple**[32] optimizes a consensus based on Expectation-Maximization (E-M) defined by the following iterative process:
* the E-step computes a probabilistic estimate of the true segmentation, that is a weighted average of each local prediction;
* the M-step assigns a performance level to each individual segmentation, which will be used as weights for the next E-step.
* **Uncertainty-Based Ensembling** (UBE) is based on weighted averaging of local decisions, in which the weights represent the uncertainty of each local model on the prediction task. As uncertainty can be quantified in different ways, in this work we adopt dropout [33] to compute a measure of the global uncertainty of each local model for the segmentation of a testing image \(z\). In particular, here the uncertainty is computed as the total voxel-wise variance at inference time, defined as: \(p_{i}=\sum_{x\in\Omega}\text{Var}\,(g(z,\theta_{i}))[x]\), where \(\Omega\) is the set of voxels in \(z\), \(\text{Var}\,(\cdot)[x]\) is the sampling variance estimated from \(S\) stochastic forward passes of the model, computed at voxel \(x\). A minimal code sketch of the MV and UBE aggregation rules is given after this list.
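The following sketch, referenced above, illustrates the two simplest aggregation rules (the Staple E-M loop is omitted). For UBE, the weights are taken here to be inversely proportional to each model's total predictive variance \(p_{i}\), so that less uncertain models contribute more; this normalization choice is an illustrative assumption rather than the exact weighting used in our experiments.

```python
import numpy as np

def majority_voting(preds):
    """preds: list of binary masks with the same shape; voxel-wise majority label."""
    votes = np.stack(preds, axis=0)                       # shape: (M, ...) local decisions
    return (votes.mean(axis=0) > 0.5).astype(np.uint8)    # ties go to background

def uncertainty_based_ensembling(prob_samples):
    """prob_samples: one array of shape (S, ...) per model, from S stochastic dropout passes."""
    means = [s.mean(axis=0) for s in prob_samples]             # per-model mean foreground probability
    p = np.array([s.var(axis=0).sum() for s in prob_samples])  # p_i: total voxel-wise variance
    w = 1.0 / (p + 1e-8)                                       # assumed: lower uncertainty -> larger weight
    w /= w.sum()
    fused = sum(wi * mi for wi, mi in zip(w, means))           # weighted average of local decisions
    return (fused > 0.5).astype(np.uint8)
```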
### Experiments details
The benchmark is based on four experiments, quantifying a different aspect for comparison between different strategies. The experiments are characterized by the same baseline model used for segmentation, which is presented below.
**Segmentation accuracy** was quantified through 5-fold cross-validation across all nodes, by testing all training strategies for each unique combination of training/testing split. The final result was obtained by averaging across all splits.
Additionally, N05 and N06 from Table 1 were not used for training and were reserved exclusively as independent test sets. The performance of the trained model was evaluated using the Dice Score (DSC) and a Normalised Surface Distance (NSD), following the guidelines from the Decathlon Segmentation Challenge [34].
We benchmarked the following strategies. _Local_: model trained only on the data from a single node, without aggregation; _Centralized_: model trained on the aggregated data from all the centers; _Federated_: federated training using both FedAvg and FedProx as FL strategies; _Consensus_: ensembling of prediction using the CBMs strategies presented above.
**Cost-effectiveness** was investigated in terms of training and inference time and communication bandwidth [35, 36, 37]. For estimating the bandwidth we consider the amount of data exchanged through the network during the training phase; this value depends on the model size, that in our setting is constant among all the experiments, and the number of exchanges, which is strategy-dependent.
**Model robustness** was assessed to compare FL and CBM with respect to varying data heterogeneity across clients. To this end, we evaluated the change in
performance of the methods when removing N03 from the experiment. We expect a large variation in performance depending on the presence of N03, being this client the only one with images acquired with an endorectal coil, thus introducing large heterogeneity in the collaborative segmentation task.
**Clients Utility** refers to the evaluation of how beneficial it is for an individual client to participate in a collaborative method, and which specific method would bring the most value to that client. To determine this, we consider the accuracy of different models on various test sets.
Let's consider a client labeled as \(l\). We have two models: a local model denoted as \(\mathcal{M}_{l}\), and a collaborative model denoted as \(\mathcal{M}_{c}\). We evaluate the performance of these models on two different test sets: \(\mathcal{T}_{l}\), which is the local test set specific to client \(l\), and \(\mathcal{T}_{e}\), which is the union of all test sets excluding \(\mathcal{T}_{l}\).
To compare the utility of the two models, we examine two metrics:
* variation in accuracy on the local test set: This is computed as the difference between the accuracy of the collaborative model on the local test set (DSC\({}_{\mathcal{M}_{e},\mathcal{T}_{l}}\)) and the accuracy of the local model on the same test set (DSC\({}_{\mathcal{M}_{l},\mathcal{T}_{l}}\)).
* variation in accuracy on the external test sets: This is calculated as the difference between the accuracy of the collaborative model on the combined external test sets (DSC\({}_{\mathcal{M}_{e},\mathcal{T}_{e}}\)) and the accuracy of the local model on the same combined external test sets (DSC\({}_{\mathcal{M}_{l},\mathcal{T}_{e}}\)).
By analyzing these two metrics, we can quantify the impact of using either the local or collaborative methods on both internal and external datasets. Ideally, a positive value for both metrics indicates that collaboration is beneficial for the client in all scenarios. However, it is more common to observe that collaboration improves model generalization but may affect local performance. Therefore, striking a balance between these two values is crucial.
In summary, the client's utility aims to determine the most advantageous approach for a client by comparing the accuracy variations of local and collaborative models on local and external test sets, respectively.
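In code, the two utility indicators reduce to differences of Dice scores between the collaborative and the local model. The sketch below is purely illustrative; the dictionary layout of the scores is an assumption.

```python
def client_utility(dsc, client):
    """dsc[model][test_set] holds Dice scores; models are 'local_<client>' and 'collab'."""
    local, collab = f"local_{client}", "collab"
    external = [t for t in dsc[collab] if t != client]   # every test set except the client's own
    delta_local = dsc[collab][client] - dsc[local][client]
    delta_external = (sum(dsc[collab][t] for t in external) / len(external)
                      - sum(dsc[local][t] for t in external) / len(external))
    return delta_local, delta_external                   # positive values favour collaboration
```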
**Privacy mechanisms** such as differential privacy (DP) [38] have been proposed in the literature to quantify the privacy that a protocol provides and to train a model in a privacy-preserving manner. In the context of DP, the term "budget" refers to the amount of privacy protection available for the entire federated learning process, and represents the cumulative privacy loss allowed during the training phase. This budget is typically defined as a function of \(\epsilon\), where \(\epsilon\) is used to control the strength of privacy guarantees for each round of federated learning updates. Here we compared the accuracy we can obtain by spending a fixed privacy budget \(\epsilon\) while protecting different collaborative methods.
A common **baseline model** was defined to obtain comparable results across strategies. We employed a 3D UNet architecture with residual connections [5]. The training was based on the optimization of the DICE Loss, by using the ADAMW optimizer for all experiments [39]. The UNet implementation is available
in the MONAI library3. We fixed model hyper-parameters and maintained consistency in the amount of training, loss type, and optimizer used across all configurations. Hyperparameter search was performed by varying training parameters for all experiments (see Appendix 0.Table 5) and selecting those performing best on average across the local models, obtaining a learning rate of 0.001, a batch size of \(B=8\), and a dropout value of 0.3. All the experiments were executed using Fed-BioMed [40], an open-source platform that simulates the FL infrastructure. The code for running the experiments is available on the GitHub page of the author.
Footnote 3: [https://monai.io/index.html](https://monai.io/index.html)
The number of epochs and rounds were defined using a standard strategy [41], which ensured comparable numbers of training steps among local and federated training for each node. Specifically, the number of rounds \(R\) for FL methods was defined as follows: \(R=E\cdot N_{T}/M/B/s\), where \(E\) is the number of epochs required to train the model locally, \(s=20\) is the fixed number of local SGD steps, and \(N_{T}\) is the total number of samples in the training set.
## 3 Results
**Segmentation Accuracy.** Table 2 presents the average DSC among the 5-Fold evaluations obtained with the different collaborative learning strategies, while an illustrative example of the results on a sample image is available in Figure 1. The best results are indicated in **bold**. Similar results are obtained with the NSD metric and can be found in Appendix 0.Table 1 and Table 2. Details about standard deviation among the K runs can be found in Appendix 0.Table 3.
Overall, CBM obtain better or at least comparable results than FL: the last row in Table 2 shows that UBE is on average the best-performing method, but all the CBM provide very similar results. In general, distributed methods highly outperform local methods, which fail to generalize.
**Cost-Effectiveness.** We consider the total training time for FL and the longest time for local training across clients for CBM. Federated training is roughly three times longer than CBM training (\(\sim 2\) hours vs \(\sim 30\) minutes).
Figure 1: A representation of the segmentation task on a sample image using different strategies. In white, the ground truth; in red, the segmentation provided by each training approach.
Among the CBM methods, UBE is associated with the largest testing time, having to perform many inferences to estimate the uncertainty map. MV is the most efficient among them, taking about three times as long as FL at inference (though still in the order of seconds). However, we note that testing time is orders of magnitude lower than training time, making its impact irrelevant in a real-world application. The amount of exchanged data for FL is equal to \(2\cdot M\cdot m_{s}\cdot R\), where \(R\) is the number of rounds and \(m_{s}\) is the model size. For CBM, it is only \(M\cdot m_{s}\), resulting in a difference of \(M\cdot m_{s}\cdot(2\cdot R-1)\). Considering the UNet used in the experiment, \(m_{s}=30MB\), the difference between FL and CBM is roughly 9.25 GB.
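A quick back-of-the-envelope check of these figures, using the values of Table 3 (\(M=4\) training clients, \(m_{s}=30\) MB; the number of rounds \(R\) is not stated above and is inferred here from the reported 9600 MB):

```python
M, m_s = 4, 30                   # training clients and model size in MB (Table 3)
R = 9600 // (2 * M * m_s)        # rounds implied by the reported FL bandwidth -> 40
fl_bw = 2 * M * m_s * R          # 9600 MB exchanged during federated training
cbm_bw = M * m_s                 # 120 MB exchanged by CBM (one upload per client)
print(R, fl_bw, cbm_bw, round((fl_bw - cbm_bw) / 1024, 2))   # 40 9600 120 9.26 (GB)
```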
**Model Robustness.** The performance of local models reported in Table 2 (panel "Local") allows to appreciate the heterogeneity across clients. As expected, N03 emerges as the client with the highest heterogeneity from this analysis, given the drop in testing performance of the models locally trained on the other clients. As shown in Appendix Table 4, CBM leads to an average absolute DSC variation of 1.7%, 2.4%, and 2.7%, for respectively UBE, MV, and Staple, as compared to the 3.1% and 5.7% DSC change respectively associated with FedAvg and FedProx. A graphical representation of this property is available in Appendix Figure 1. This result denotes the improved robustness of CBM to clients' heterogeneity. The overall results obtained after removing N03 are compatible with those shown in Table 2, and confirm the positive performances of CBM as compared to FL.
**Clients Utility.** Figure 2 presents a comparison of the utility of different collaborative methods for the four clients in the experiment.
\begin{table}
\begin{tabular}{c|c c c c||c||c c||c c c} \hline \hline & \multicolumn{4}{c||}{**Local**} & \multicolumn{1}{c||}{**Centralized**} & \multicolumn{2}{c||}{**Federated**} & \multicolumn{3}{c}{**Consensus**} \\ & N01 & N02 & N03 & N04 & & FedAvg & FedProx & UBE & Staple & MV \\ \hline N01-test & 0.86 & 0.64 & 0.49 & 0.44 & 0.92 & 0.85 & 0.70 & **0.89** & 0.83 & 0.84 \\ N02-test & 0.80 & 0.69 & 0.66 & 0.73 & 0.90 & 0.82 & 0.75 & 0.85 & **0.87** & **0.87** \\ N03-test & 0.64 & 0.72 & 0.75 & 0.44 & 0.83 & 0.70 & 0.75 & 0.73 & 0.75 & **0.76** \\ N04-test & 0.79 & 0.66 & 0.62 & 0.88 & 0.91 & **0.88** & 0.84 & 0.87 & 0.86 & 0.86 \\ N05 & 0.57 & 0.68 & 0.71 & 0.73 & 0.77 & 0.71 & 0.67 & **0.72** & 0.68 & 0.68 \\ N06 & 0.75 & 0.63 & 0.61 & 0.75 & 0.83 & **0.82** & 0.80 & 0.80 & **0.82** & **0.82** \\ \hline Average & 0.73 & 0.67 & 0.64 & 0.66 & 0.86 & 0.80 & 0.75 & **0.81** & 0.80 & 0.80 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the 5-fold DSC obtained in the segmentation task by different training strategies.
\begin{table}
\begin{tabular}{c|c c c c||c||c c||c c c} \hline \hline & \multicolumn{4}{c||}{**Local**} & \multicolumn{1}{c||}{**Centralized**} & \multicolumn{2}{c||}{**Federated**} & \multicolumn{3}{c}{**Consensus**} \\ & N01 & N02 & N03 & N04 & & FedAvg & FedProx & UBE & Staple & MV \\ \hline Train. time (min) & 22 & 35 & 38 & 36 & 421 & 116 & 116 & **38** & **38** & **38** \\ Inf. time (sec) & 0.4 & 0.4 & 0.4 & 0.4 & 0.4 & **0.3** & **0.3** & 16.3 & 3.7 & 0.9 \\ Train. Bandwidth (MB) & 30 & 30 & 30 & 30 & 0 & 9600 & 9600 & **120** & **120** & **120** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of costs of different training strategies in terms of training and inference time and training bandwidth.
For all clients and all methods, collaborative methods lead to improvements in model generalization when evaluated on external test sets. This implies that collaborating with other clients helps to enhance the overall performance of the models on unseen data. Additionally, it is worth noting that even for small clients like N02, collaborative methods also result in improved local performance. This suggests that even clients with limited local data can benefit from participating in the collaboration. Surprisingly, even the largest client, N04, still experiences advantages by joining the collaboration. This indicates that size alone does not diminish the benefits of collaborative methods and that even clients
Figure 2: The chart shows the utility of collaborative methods with respect to local models when used on the local test sets (red bar) or external test sets (blue bar) for each client indicated in the sub-captions. Each histogram corresponds to a different client. For all clients, collaborative methods improved generalization by a difference of up to 25%, while decreasing local performance by at most 15% and in some cases even improving it. A significant degree of heterogeneity can be observed in the impact on generalization and local performance among different test sets as well as different methods.
with substantial local datasets can gain value from collaboration. Overall, in this particular experiment, the performance of CBM is comparable to that of FL. However, the UBE method consistently demonstrates the most substantial improvements across various metrics, making it the preferred choice among the collaborative methods evaluated.
**Privacy mechanisms.** The privacy analysis is performed in the framework of Renyi Differential Privacy (RDP) [42], a relaxation of the classical definition [43] allowing a convenient way to keep track of the cumulative privacy loss. This allows us to quantify the privacy budget \(\epsilon\) corresponding to SGD optimization with parameters defined as for the baseline model of Section 2.3.
Following [44], the DP Gaussian mechanism was defined with noise \(\sigma=4\). One can show that the privacy budget for obtaining the results presented in Table 2 is \(\epsilon_{CBM}=5.2\) in the CBM scenario, and \(\epsilon_{FL}=7.9\) in the federated one, denoting the lower privacy cost of CBM. CBM is also characterized by a lower privacy cost in relation to our chosen performance metric: DSC. We compared how DSC on unseen data evolved for the ensembling method MV and the FedAvg aggregation strategy when the privacy budget \(\epsilon\) varied between \(0.5\) and \(5.5\). Figure 3 shows that the CBM method achieved a higher DSC than FL with a lower privacy budget: CBM reached a plateau at roughly \(\epsilon=3\) while FL reached a plateau only after \(\epsilon>4\), and at a lower DSC value.
Figure 3: The chart compares the accuracy reached by different methods when spending a privacy budget \(\epsilon\) for differential privacy. The two compared methods are majority voting (MV) for CBM and federated averaging (FedAvg) for FL. CBM obtain on average better performances when \(\epsilon\) is fixed, and it already reaches the plateau with \(\epsilon\approx 3\).
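To illustrate how such a budget can be tracked, the sketch below composes the Rényi-DP guarantee of the Gaussian mechanism over a number of noisy releases and converts it to an \((\epsilon,\delta)\) guarantee. It deliberately ignores subsampling amplification, gradient clipping, and the exact release schedule, so the values it prints are not the \(\epsilon_{CBM}=5.2\) and \(\epsilon_{FL}=7.9\) reported above; it only conveys the accounting idea that fewer shared updates consume less budget.

```python
import numpy as np

def rdp_gaussian(alpha, sigma):
    """Renyi-DP of order alpha for one Gaussian-mechanism release with unit L2 sensitivity."""
    return alpha / (2.0 * sigma ** 2)

def eps_from_rdp(sigma, num_releases, delta=1e-5, alphas=np.arange(2, 256)):
    """Compose RDP over the releases and convert to an (eps, delta)-DP guarantee."""
    candidates = [num_releases * rdp_gaussian(a, sigma) + np.log(1.0 / delta) / (a - 1.0)
                  for a in alphas]
    return float(min(candidates))

# With sigma = 4: a model shared once (CBM-style) vs. updates shared over many rounds (FL-style).
print(eps_from_rdp(sigma=4.0, num_releases=1))
print(eps_from_rdp(sigma=4.0, num_releases=40))
```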
## 4 Conclusions
In this paper, we proposed a realistic benchmark for collaborative learning methods for prostate segmentation. To this end, we used a collection of large public and private prostate MRI datasets to simulate a realistic distributed scenario across hospitals and we defined experiments and metrics to compare local training with different collaborative learning methods, namely FL and CBM, in terms of performances, cost-effectiveness, robustness and privacy of the models. For the considered scenario of cross-silo federated prostate segmentation, our results show that CBM represent a reliable alternative to FL in terms of performances, while being highly competitive in terms of robustness, and superior in cost-effectiveness when considering the practical implementation and required resources. Indeed, CBM avoid synchronization of training across hospitals, while the setup of an FL infrastructure is costly and time-consuming, and often prohibitive for typical hospital applications.
By simply sharing locally trained models and applying CBM to local predictions, we can rely on established theory from the state of the art in multi-atlas segmentation to obtain competitive results at a much lower cost.
Our preliminary results on privacy-preserving methods based on differential privacy show that CBM can guarantee a stronger level of privacy protection.
Moreover, secure aggregation techniques could be used at inference time for CBM in order to avoid sharing the whole model, adding another privacy layer to the framework. Other FL schemes could be included in our benchmark, such as SCAFFOLD [45] or FedOpt[46], to better account for heterogeneity. Nevertheless, given previous benchmark results on similar medical imaging tasks [41], we do not expect a substantial change in the overall message of this study, especially concerning the comparison of cost-effectiveness between FL and CBM paradigms. Different consensus strategies could be implemented in the future, for example, to account for voxel-wise uncertainty across local models. The benchmark here proposed focuses on a cross-silo setup, typical of FL applications in hospitals proposed so far. Future investigations could extend our study to include a larger number of clients, thus allowing to better exploit the robustness guarantees associated with consensus strategies.
|
2302.14834 | DAG-Inducing Problems and Algorithms | Consider the execution of a sequential algorithm that requires the program to
converge to an optimal state, and then terminate/stutter. To design such an
algorithm, we need to ensure that the state space that it traverses forms a
directed acyclic graph (DAG) and its sink nodes are optimal states. However, if
we run the same algorithm on multiple computing nodes running in parallel, and
without synchronization, it may not reach an optimal state. In most parallel
processing algorithms designed in the literature, a synchronization primitive
is assumed. Synchronization ensures that the nodes read fresh value, and the
execution proceeds systematically, such that the subject algorithm traverses a
DAG induced among the global states.
With this observation, we investigate the conditions that guarantee that the
execution of an algorithm is correct even if it is executed in parallel and
without synchronization. To this end, we introduce DAG-inducing problems and
DAG-inducing algorithms. We show that induction of a $\prec$-DAG (induced among
the global states -- that forms as a result of a partial order induced among
the local states visited by individual nodes) is a necessary and sufficient
condition to allow an algorithm to run in asynchrony.
In the paper, we first give a comprehensive description of DAG-inducing
problems and DAG-inducing algorithms, along with some simple examples. Then we
show some properties of an algorithm that is tolerant to asynchrony, which
include the above-mentioned condition. | Arya Tanmay Gupta, Sandeep S Kulkarni | 2023-02-28T18:31:34Z | http://arxiv.org/abs/2302.14834v3 | # DAG-Inducing Problems and Algorithms
###### Abstract.
In this paper, we show that in a parallel processing system, if a directed acyclic graph (DAG) can be induced in the state space and execution is _enforced_ along that DAG, then synchronization cost can be eliminated. Specifically, we show that in such systems, correctness is preserved even if the nodes execute asynchronously and rely on old/inconsistent information of other nodes. We present two variations for inducing DAGs - _DAG-inducing problems_, where the problem definition itself induces a DAG, and _DAG-inducing algorithms_, where a DAG is induced by the algorithm.
We demonstrate that the dominant clique (DC) problem and shortest path (SP) problem are DAG-inducing problems. Among these, DC allows self-stabilization, whereas the algorithm that we present for SP does not. We demonstrate that maximal matching (MM) and 2-approximation vertex cover (VC) are not DAG-inducing problems. However, DAG-inducing algorithms can be developed for them. Among these, the algorithm for MM allows self-stabilization and the 2-approx. algorithm for VC does not. Our algorithm for MM converges in \(2n\) moves and does not require a synchronous environment, which is an improvement over the existing algorithms in the literature. Algorithms for DC, SP and 2-approx. VC converge in \(2m\), \(2m\) and \(n\) moves respectively. We also note that DAG-inducing problems are more general than, and encapsulate, lattice linear problems (Garg, SPAA 2020). Similarly, DAG-inducing algorithms encapsulate lattice linear algorithms (Gupta and Kulkarni, SSSS 2022).
DAG-inducing problems, DAG-inducing algorithms, asynchrony, dominant clique, shortest path, maximal matching, 2-approximation vertex cover
Footnote †: Computer Science and Engineering, Michigan State University
## 1. Introduction
A parallel/distributed algorithm consists of multiple processes that solve a problem. In these algorithms, processes need to coordinate with each other. This could be achieved via shared memory (where the data is stored centrally and processes have direct access to all data) or message passing (where processes share their information via messages). In these programs, as the level of parallelization increases, the need for synchronization increases as well.
Consider the example of the minimal dominating set problem. An algorithm for this problem is as follows: if node \(i\) can leave the dominating set while ensuring that \(i\) and its neighbours stay dominated, then \(i\) moves out. Similarly, if \(i\) or one of the neighbours of \(i\) is not dominated, then \(i\) turns itself in. If this algorithm is run in an interleaving fashion (where one node executes at a time), it will result in a minimal dominating set. However, consider the case where nodes execute asynchronously on a complete graph of two nodes \(i\) and \(j\). Suppose that \(i\) and \(j\) are both out of the dominating set and read each other's state simultaneously. Node \(i\) executes first and changes its state to \(IN\). However, when \(j\) executes, it still holds old/inconsistent information about the state of \(i\), so \(j\) also changes its state to \(IN\), leading to a non-optimal state. Effectively, without proper synchronization, \(i\) and \(j\) can repeat this execution forever without converging. Thus, if this algorithm is run in parallel, where each node executes at its own pace without synchronizing with others, the system may fail to find a minimal dominating set.
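The toy simulation below (an illustration, not taken from any existing implementation) runs this naive rule on the two-node complete graph, once in an interleaved fashion with fresh reads and once with both nodes acting on a stale snapshot; only the former reaches and retains a minimal dominating set.

```python
IN, OUT = True, False

def next_state(i, view):
    """Naive minimal-dominating-set rule for node i on a two-node complete graph."""
    j = 1 - i
    if view[i] == IN and view[j] == IN:    # i can leave: j keeps both nodes dominated
        return OUT
    if view[i] == OUT and view[j] == OUT:  # i (and j) are undominated: i turns itself in
        return IN
    return view[i]

# Interleaved execution (one node at a time, fresh reads): converges to one node IN.
state = [OUT, OUT]
for _ in range(4):
    for i in (0, 1):
        state[i] = next_state(i, state)
print("interleaved :", state)              # [True, False] -- a minimal dominating set

# Asynchronous execution on old/inconsistent reads: both nodes flip together forever.
state = [OUT, OUT]
for _ in range(4):
    snapshot = list(state)                 # both nodes read the same stale values
    state = [next_state(0, snapshot), next_state(1, snapshot)]
print("stale reads :", state)              # oscillates between [True, True] and [False, False]
```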
If an algorithm can be run without synchronization and still lead to a correct answer then it can highly benefit from available concurrency permitting each node to execute at its own pace. While the embarrassingly parallel algorithms, in which there is no data dependency among the nodes, do fit this requirement, most problems require that the data of neighbouring nodes is correlated. We study problems and algorithms where such _asynchronous execution_ can be permitted. Towards this end, in this paper, we introduce _DAG-inducing problems_ and _DAG-inducing algorithms_.
### Permitting Asynchronous Execution
When an algorithm executes without synchronization, it is possible that some nodes use an old value of other nodes. Thus, if we allow asynchrony, then we have to ensure that the algorithm provides a correct answer even if a node executes based on old values of variables. Next, we discuss some introductory background on asynchronous systems that ensure correctness. This work includes lattice linear problems (Bauer and Krotzsch, 2007), eventually lattice linear algorithms (Bauer and Krotzsch, 2007) and fully lattice linear algorithms (Bauer and Krotzsch, 2007).
Garg (Bauer and Krotzsch, 2007) introduced modelling of problems such that the state space forms a lattice. These problems are called _lattice linear problems_. The local states in such problems form a total order, and as a result, the global states form a partial order. A node moves ahead only if it determines that it is violating the global state, i.e., the system will not converge if it retains its state. The induction of a lattice ensures that if some node \(i\) is reading an old state of node \(j\), we would only have moved \(up\) in the lattice. As a result, the nodes move only in one direction, allowing the system to execute in asynchrony. In these problems, there is only one optimal state.
Gupta and Kulkarni (Gupta and Kulkarni, 2007) studied problems that contain multiple optimal states and require self-stabilization. The algorithms in (Gupta and Kulkarni, 2007) induce single or multiple disjoint lattices in a subset of the state space. The supremum of each lattice is an optimal state, and so self-stabilization is allowed. These algorithms guarantee that the system traverses from an arbitrary state to one of the states in
the lattices and from there, traverses that lattice to reach an optimal state. Because of the kind of partial order induced among the states, these algorithms are called _eventually lattice linear algorithms_. Furthermore, Gupta and Kulkarni (Gupta and Kulkarni, 2007) provided a _fully lattice linear algorithm_ for minimal dominating set. This algorithm is capable of inducing multiple disjoint lattices in the entire state space while guaranteeing that every supremum is an optimal state. Hence, starting from an arbitrary state, the algorithm traverses one of the lattices and reaches an optimal state.
We note that the class of DAG-inducing problems (respectively, DAG-inducing algorithms) identified in this paper is a superset of lattice linear problems (respectively, lattice linear algorithms).
### Contributions of the paper
In this paper, we study the problems that can be represented by a predicate which induces a DAG among the states, and the problems in which a DAG can be induced algorithmically. Inducing a DAG is extremely valuable as it allows the nodes to run asynchronously. In other words, each node can execute its actions by reading the variables of other nodes at its own pace and take action based on old values while still preserving correctness. Furthermore, inducing a DAG is more general than inducing lattices (as considered in (Becker, 1993; Becker, 1993)). Specifically, inducing a DAG permits more non-determinism (more design choices) than the case where lattices are induced. Our specific contributions are listed as follows.
* We introduce the classes of DAG-inducing problems and DAG-inducing algorithms.
* We show that the dominant clique problem and the shortest path problem are DAG-inducing problems. Among these, the dominant clique problem allows self-stabilization, whereas the algorithm that we develop for the shortest path problem is not self-stabilizing.
* We demonstrate that the maximal matching problem and \(2\)-approximate vertex cover are not DAG-inducing problems. We present DAG-inducing algorithms for them. The algorithm for maximal matching allows self-stabilization and the \(2\)-approximation algorithm for vertex cover does not.
* We study an upper bound on the convergence time of an algorithm traversing a DAG of states. We show how inducing a \(\prec\)-DAG in the state space is crucial to allow asynchrony.
* Algorithms for the dominant clique and shortest path, both, converge in \(2m\) moves. The algorithm for the maximal matching converges in \(2n\) moves and the \(2\)-approximation algorithm for vertex cover converges in \(n\) moves.
### Organization of the paper
We discuss the notations and definitions used in this paper in Section 2. In Section 3, we study the characteristics of DAG-inducing problems with examples. In particular, we study the dominant clique problem (Section 3.3) and the shortest path problem (Section 3.4) as DAG-inducing problems. In Section 4, we study the characteristics of DAG-inducing algorithms for non-DAG-inducing problems with examples. Specifically, we study the maximal matching problem (Section 4.2) and \(2\)-approximation vertex cover (Section 4.3). In Section 5, we study the properties of an algorithm traversing a DAG of global states. In particular, we study significance of inducing a DAG to allow asynchrony (Section 5.1) and the time complexity of an algorithm traversing a DAG of global states (Section 5.2). We study the related work in Section 6. Finally, we conclude in Section 7.
## 2. Preliminaries
In this paper, we are mainly interested in graph algorithms where the input is a graph \(G\), \(V(G)\) is the set of its nodes, \(E(G)\) is the set of its edges, \(n=|V(G)|\), and \(m=|E(G)|\). For a node \(i\in V(G)\), \(Adj_{i}\) is the set of nodes connected to \(i\) by an edge, and \(Adj_{i}^{x}\) is the set of nodes within \(x\) hops from \(i\), excluding \(i\). \(deg(i)=|Adj_{i}|\). \(dis(i,j)\) is the length of the shortest path from node \(i\) to node \(j\).
Each node \(i\) is associated with a set of variables. The algorithms are written in terms of rules, where each _rule_ for every process \(i\) is of the form \(g\longrightarrow a_{c}\) where the _guard_\(g\) is a proposition consisting of the variables of \(i\) along with the variables of other nodes. If at least one of the guards \(g\) hold true for \(i\), we say that \(i\) is _enabled_. We say that \(i\) makes a _move_ when \(i\) is enabled and updates its variables by executing the corresponding action \(a_{c}\). A _round_ is the minimum time-frame where each node evaluates its guards at least once and takes corresponding action (if it is enabled). An algorithm is _silent_ if no node is enabled when \(G\) reaches an optimal state.
We denote by \(S\) the set of all global states. A global state \(s\in S\) is represented as a vector where \(s[i]\) denotes the variables of node \(i\); \(s[i]\) itself is a vector of the variables of \(i\).
**Scheduler/Daemon.** A _scheduler/daemon_ is an abstraction whose function is to choose one, some, or all nodes in each time-step, throughout the execution, so that the chosen nodes can evaluate their guards and take the corresponding action. A _central scheduler_ chooses only one node per time-step. A _distributed scheduler_ chooses an arbitrary subset of \(V(G)\) per time-step. A _synchronous scheduler_ chooses all the nodes in \(V(G)\) in each time-step.
**Computation with Synchronization.** A computation of an algorithm \(A\) (with synchronization) is a sequence of the form \(\langle s_{0},s_{1},\cdots\rangle\) where each global state \(s_{t}\) identifies the values of the variables of all nodes. Elaborated, a computation is of the form \(\langle\langle s_{0}[1],s_{0}[2],\cdots,s_{0}[n]\rangle,\langle s_{1}[1],s_{1}[2],\cdots,s_{1}[n]\rangle,\cdots\rangle\) where \(s_{t}[i]\) denotes the variables of node \(i\) in global state \(s_{t}\). Here, \(s_{t+1}\) (\(\forall t\geq 0\)) is obtained by executing an action of some node \(i\) in state \(s_{t}\). Thus, a transition \((s_{\ell},s_{\ell+1})\) corresponds to \((\langle s_{\ell}[1],s_{\ell}[2],\cdots,s_{\ell}[n]\rangle,\langle s_{\ell+1}[1],s_{\ell+1}[2],\cdots,s_{\ell+1}[n]\rangle)\).
We assume that node \(i\) executes its action in the _current state_\(s_{\ell}\) (i.e., it executes based on fresh values of variables as in \(s_{\ell}\)) and while node \(i\) is executing no other node is updating its state. Clearly, this definition is very restrictive, as it permits only a single node to execute at a time. It can be easily extended to the case where two nodes can execute simultaneously under proper synchronization.
However, in this paper, we are interested in asynchronous computations. This means that while node \(i\) is executing, other arbitrary nodes could be updating their state as well. We study such computations without synchronization, next.
**Computation without Synchronization.** When node \(i\) executes without synchronization, it reads the states of other nodes at its own pace and then updates its own variables. This means that by the time \(i\) executes its action, the states of other nodes may have changed. Effectively, node \(i\) will be reading old values of variables in \(Adj_{i}^{x}\). In other words, the state read by node \(i\) may appear to \(i\) as \(\langle s_{\alpha_{1}}[1],s_{\alpha_{2}}[2],\cdots,s_{\alpha_{n}}[n]\rangle\) where \(\forall j,\alpha_{j}\leq\ell\). Node \(i\) can execute its action based on such a (possibly inconsistent) state and update its own state. Then, \(s_{\ell+1}\) is obtained from \(s_{\ell}\) by changing the state of node \(i\), where node \(i\) views the global state as \(\langle s_{\alpha_{1}}[1],s_{\alpha_{2}}[2],\cdots,s_{\alpha_{n}}[n]\rangle\) (with \(\forall j,\alpha_{j}\leq\ell\)) and executes its action.
**Self stabilization**. An algorithm \(A\) is _self-stabilizing_ with respect to the subset \(S_{o}\) of the set \(S\) of global states iff
* **Convergence:** Starting from an arbitrary state, any sequence of computations of \(A\) reaches a state in \(S_{o}\).
* **Closure:** Any computation of \(A\) starting from \(S_{o}\) always stays in \(S_{o}\).
## 3. Natural DAG Induction: Dag-Inducing Problems
In this section, we discuss properties of problems where a DAG can be induced naturally. Let \(P\) be a problem (e.g., the shortest path problem) that is defined by a predicate \(\mathcal{P}\), i.e., \(P\) requires that \(\mathcal{P}\) is true. It follows that if \(\mathcal{P}\) is false in the current global state \(s\), i.e., if \(\mathcal{P}(s)\) is false, then there exists at least one node that must change its state in order to solve \(P\). The problems/algorithms considered in this section rely on the requirement that the execution of such node(s) is critical for reaching an optimal state.
### Embedding a \(<\)-DAG among global states
To explain the embedding of a \(<\)-DAG, recall that a global state \(s\) is of the form \(\langle s[1],s[2],\cdots,s[n]\rangle\), where \(s[i]\) denotes the state of node \(i\). Furthermore, under asynchrony, in a transition \((s_{\ell},s_{\ell+1})\), when node \(i\) reads the variables of another node \(j\), it may read \(s_{\alpha_{j}}[j]\), where \(\alpha_{j}\leq\ell\), instead of the latest state \(s_{\ell}[j]\), i.e., it may read an older value of node \(j\). The \(<\)-DAG captures how the state of a node may change during its execution.
First, we define a partial order \(<_{l}\) among the local states of a node. (Intuitively, node \(i\) can go from state \(s[i]\) to \(s^{\prime}[i]\) iff \(s[i]<_{l}s^{\prime}[i]\).) In a partial order, we allow the possibility that \(\neg(s[i]<_{l}s^{\prime}[i])\wedge\neg(s^{\prime}[i]<_{l}s[i])\). We demonstrate this later in this section.
Using \(<_{l}\), we define a partial order \(<_{g}\) among global states as follows. We say that \(s<_{g}s^{\prime}\) iff \((\forall i:s[i]=s^{\prime}[i]\lor s[i]<_{l}s^{\prime}[i])\wedge(\exists i:s[i]<_{l}s^{\prime}[i])\), and \(s=s^{\prime}\) iff \(\forall i:s[i]=s^{\prime}[i]\). For brevity, we use \(<\) to denote both \(<_{l}\) and \(<_{g}\): \(<\) corresponds to \(<_{l}\) while comparing local states, and to \(<_{g}\) while comparing global states. We use the symbol '\(>\)' as the opposite of '\(<\)', i.e., \(s>s^{\prime}\) iff \(s^{\prime}<s\). Similarly, we use the symbols '\(\leq\)' and '\(\geq\)'; e.g., \(s\leq s^{\prime}\) iff \(s=s^{\prime}\lor s<s^{\prime}\). We call the DAG formed from such a partial order a \(<\)-_DAG_.
_Definition 3.1_.: \(<\)_DAG_. Given a partial order \(<_{l}\) on local states of \(s[i]\), the \(<\)-DAG corresponding to \(<_{l}\) is defined by the partial order: \(s<s^{\prime}\) iff \((\forall i\;s[i]\leq_{l}s^{\prime}[i])\wedge(\exists i\;s[i]<_{l}s^{\prime}[i])\).
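As an illustration of Definition 3.1, the following sketch (the helper names are ours) lifts a user-supplied local order \(<_{l}\) to the global order \(<_{g}\): \(s<s^{\prime}\) holds iff every local state weakly increases and at least one strictly increases.

```python
from typing import Any, Callable, Sequence

def less_global(s: Sequence[Any], s_prime: Sequence[Any],
                less_local: Callable[[Any, Any], bool]) -> bool:
    """s <_g s' iff (forall i: s[i] = s'[i] or s[i] <_l s'[i]) and (exists i: s[i] <_l s'[i])."""
    assert len(s) == len(s_prime)
    strictly_above_somewhere = False
    for a, b in zip(s, s_prime):
        if a == b:
            continue
        if less_local(a, b):
            strictly_above_somewhere = True
        else:
            return False  # some component decreased or is incomparable under <_l
    return strictly_above_somewhere

# Example with "proper subset" as the local order (as for clique.i in Section 3.3):
# less_global(({1}, {2}), ({1, 2}, {2}), lambda a, b: a < b)  -> True
```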
_Remark_.: A \(<\)-DAG can be induced in any problem/state space. It will permit asynchronous execution only if it guarantees reachability to an optimal state. We demonstrate this later in the section.
In (Bordes et al., 2017; Bordes et al., 2017; Bordes et al., 2017), the discrete structure in the state space \(S\) is constrained to be a lattice. This is achieved by requiring a total order among the local states of a node. If local states were totally ordered, we would obtain a lattice instead, where join and meet could be defined in the natural way: the meet (respectively, join) of two states \(s_{1}\) and \(s_{2}\) is a state \(s_{3}\) where \(\forall i,s_{3}[i]\) is equal to \(\min(s_{1}[i],s_{2}[i])\) (respectively, \(\max(s_{1}[i],s_{2}[i])\)). If the local states were totally ordered, then \(\min(s_{1}[i],s_{2}[i])\) (and \(\max(s_{1}[i],s_{2}[i])\)) is either \(s_{1}[i]\) or \(s_{2}[i]\). However, if local states are partially ordered, \(\min(s_{1}[i],s_{2}[i])\) (respectively, \(\max(s_{1}[i],s_{2}[i])\)) may not be defined. Hence, the resultant structure may not be a lattice.
### General properties
Suppose that problem \(P\) is defined by a predicate \(\mathcal{P}\). For some problems, when \(\mathcal{P}(s)\) is false in a state \(s\), we can identify at least one node, say \(i\), such that its execution is required in order to solve \(P\). In other words, if node \(i\) does not execute (i.e., \(s[i]\) remains unchanged), then \(\mathcal{P}\) remains false. Thus, we have the following definition.
_Definition 3.2_.: _[_3_]_ \(\textsc{Impedensable}(i,s,\mathcal{P})\equiv\neg\mathcal{P}(s)\wedge(\forall s^{\prime}>s:s^{\prime}[i]=s[i]\implies\neg\mathcal{P}(s^{\prime}))\).
_Remark_.: We use the term _impedensable_ as a combination of the words impediment (obstacle) and indispensable (essential), as the nodes that are impedensable are essential to make progress but they also prevent the system from making progress if they do not execute. This term is similar to the notion of a node being _forbidden_ described in (Bordes et al., 2017), which comes from predicate detection background (Bordes et al., 2017). We changed the notation to avoid misinterpretation due to the English meaning of the word _forbidden_ while reading the paper.
The above definition states that in a state where \(\mathcal{P}(s)\) is false, we can identify a _specific_ node \(i\) that must change its state to reach an optimal state. If \(\mathcal{P}\) is false in state \(s\) then it will remain false if node \(i\) does not execute, even if other nodes execute. The problems where this constraint cannot be applied are studied in Section 4.
If no node is _impedensable_ in \(s\), then \(\mathcal{P}(s)\) is true, i.e., \(s\) is an optimal state. Based on the above definition and given that \(S\) forms a \(<\)-DAG, we define DAG-inducing problems as follows.
_Definition 3.3_.: _DAG-inducing problem (DIP)_. A problem \(P\) is DAG-inducing iff there exists a predicate \(\mathcal{P}\) and a \(<\)-DAG such that
* \(P\) requires that we reach a state where \(\mathcal{P}\) is true, and
* \(\mathcal{P}\) is DAG-inducing with respect to the \(<\)-DAG induced in \(S\), i.e., \(\forall s:\neg\mathcal{P}(s)\Rightarrow\exists i:\textsc{Impedensable}(i,s, \mathcal{P})\).
For a DAG-inducing problem \(P\), we can design an algorithm \(A\) by following these guidelines (Bordes et al., 2017); a generic skeleton is sketched after the discussion below.
* Each node \(i\) checks if it is impedensable.
* If \(\mathcal{P}(s)\) is false then there is at least one node that is impedensable.
* If \(\mathcal{P}(s)\) is true then no node is impedensable.
* If \(i\) is impedensable, then \(i\) increments its value with respect to \(<_{l}\), otherwise, no action is taken.
If the above rules are followed, then \(A\) will reach a state where \(\mathcal{P}\) is true if there exists a state \(o\) which satisfies \(\mathcal{P}\) and \(o\) is reachable from the initial state. Definition 3.1 ensures that this property is satisfied even if the nodes read old values: if \(\mathcal{P}\) is false in the state observed by the given node, it will also be false in the current state. Thus such an algorithm finds the lowest state in the DAG where \(\mathcal{P}\) is true.
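Putting these guidelines together, a generic node program can be sketched as follows (the helper names `impedensable` and `increment` are hypothetical placeholders for the problem-specific test and update).

```python
def step(i, state, impedensable, increment):
    """One evaluation by node i.

    impedensable(i, state): True iff node i must change its state for P to become true.
    increment(i, state):    a local state s'[i] with state[i] <_l s'[i].
    Nodes that are not impedensable take no action, so an optimal state is silent.
    """
    if impedensable(i, state):
        state[i] = increment(i, state)
        return True
    return False
```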
**Definition 3.4**.: _Successors of a global state._ A state \(s^{\prime}\) is a successor of a state \(s\) iff \(s^{\prime}\) is reachable from \(s\) in the induced DAG. Formally, \(\textsc{Successors}(s)\equiv\{s^{\prime}:s^{\prime}>s\}\).
**Definition 3.5**.: \(\textsc{Successors}(s,s^{\prime})\equiv s^{\prime}\in\textsc{Successors}(s)\)_._
**Definition 3.6**.: _Terminal Successors of a global state._ A state \(s^{\prime}\) is a terminal successor of a state \(s\) iff \(s^{\prime}\) is a successor of \(s\) and \(s^{\prime}\) has no successor. Formally, \(\textsc{Terminal-Successors}(s)\equiv\{s^{\prime}:\textsc{Successors}(s,s^{\prime})\wedge\textsc{Successors}(s^{\prime})=\phi\}\).
**Definition 3.7**.: \(\textsc{Terminal-Successors}(s,s^{\prime})\equiv s^{\prime}\in\textsc{Terminal-Successors}(s)\).
Continuing from Definition 3.3, we have
**Definition 3.8**.: _Self-stabilizing DIP._ \(P\) is a self-stabilizing DIP if and only if all terminal successors of every state are optimal states, i.e., \(\forall s,s^{\prime}\in S:\textsc{Terminal-Successors}(s,s^{\prime})\Rightarrow\mathcal{P}(s^{\prime})=true\).
In the next two subsections, we show that the dominant clique problem and the shortest path problem are DAG-inducing problems. We also note that these are not lattice linear problems, as the discrete structure that is induced among the global states is not a lattice.
### Dominant Clique (DC) problem
**Definition 3.9**.: _Dominant clique._ In the dominant clique problem, the input is an arbitrary graph \(G\) such that for the variable \(clique.i\) of each node \(i\), \(clique.i\subseteq Adj_{i}\cup\{i\}\) and \(\{i\}\subseteq clique.i\). The task is to compute maximal cliques such that for any node \(i\), \((1)\) all the nodes in \(clique.i\) form a clique, i.e., \(\forall j,k\in clique.i,j\neq k:k\in Adj_{j}\), and \((2)\) there exists no clique \(c\) in \(G\) such that \(clique.i\) is a proper subset of \(c\), i.e., \(\nexists j\in Adj_{i}:j\notin clique.i\wedge(\forall k\in clique.i:k\in Adj_{j})\).
Thus, the DC problem can be defined by the following predicate.
\(\mathcal{P}_{dc}\equiv\forall i:(\forall j,k\in clique.i,j\neq k:k\in Adj_{j})\wedge(\nexists j\in Adj_{i}:j\notin clique.i\wedge(\forall k\in clique.i:k\in Adj_{j}))\)
An impedensable node \(i\) in a state \(s\) is a node for which \((1)\) all the nodes in \(clique.i\) do not form a clique, or otherwise \((2)\) there exists some node \(k\) in \(Adj_{i}\) such that \(clique.i\cup\{k\}\) is a valid clique, but \(k\) is not in \(clique.i\). Formally,
\(\textsc{Impedensable-DC}(i,s,\mathcal{P}_{dc})\equiv(\exists j\in clique.i\setminus\{i\}:(j\notin Adj_{i}\lor(\exists k\in clique.i,k\neq j:j\notin Adj_{k})))\lor(\exists j\in Adj_{i}:j\notin clique.i\wedge(\forall k\in clique.i:k\in Adj_{j}))\).
The algorithm is defined as follows. If there exists some node \(k\) in \(Adj_{i}\) which forms a clique with every node in \(clique.i\), but is not in \(clique.i\), then \(k\) is added to \(clique.i\). If otherwise all the nodes in \(clique.i\) do not form a clique, then \(clique.i\) is reset to be \(\{i\}\).
**Algorithm 1**.: _Rules for node \(i\) in state \(s\)._
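The rules themselves are displayed as a separate algorithm box in the original layout; the following Python sketch is our rendering of the rule described in the preceding paragraph (the structure `adj` maps each node to its set of neighbours and `clique` maps each node \(i\) to its current \(clique.i\); all names are illustrative).

```python
def forms_clique(nodes, adj):
    """True iff every pair of distinct nodes in `nodes` is adjacent."""
    return all(k in adj[j] for j in nodes for k in nodes if j != k)

def dc_rule(i, clique, adj):
    """One move of node i for the dominant clique problem; returns True iff i moved."""
    if not forms_clique(clique[i], adj):
        clique[i] = {i}                      # clique.i is not a clique: reset to {i}
        return True
    for k in sorted(adj[i] - clique[i]):
        if all(j in adj[k] for j in clique[i]):
            clique[i] = clique[i] | {k}      # k forms a clique with all of clique.i: add it
            return True
    return False                             # node i is not impedensable: no action
```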
The DC problem induces a partial order among the local states. As an instance, the partial order induced among the local states of node \(v_{1}\) (of the graph in Figure 1 (a)) is shown in Figure 1 (b).
_Remark:_ Notice from Figure 1 that the states \(\{v_{1},v_{2}\}\) and \(\{v_{1},v_{3}\}\) are not related under the \(<_{l}\) relation. If we restricted \(<_{l}\) to be a total order, then all possible local states would be related under \(<_{l}\).
As a result, the global states in the graph presented in Figure 1 (a) form a DAG that we show in Figure 2. Note that the state space for this instance has a total \(512\) states. In the figure, we only show the states where the second guard is false in all the nodes. Observe that all the global states where the second guard is true in some nodes will converge to one of the states present in this figure, and then it will converge to one of the terminal successors of this graph.
**Lemma 3.10**.: _The Dominant Clique problem is a DAG-inducing problem._
Proof.: For a node \(i\), \(clique.i\) contains the nodes that \(i\) is connected with, and the nodes in \(clique.i\) should form a clique. A global state does not manifest a dominant clique if at least one node \(i\) in \(s\) does not store a set of nodes forming a maximal clique with itself, i.e., \((1)\) \(clique.i\) is not a maximal clique, that is, \(\exists j\in Adj_{i}\setminus clique.i\) such that \(clique.i\cup\{j\}\) forms a valid clique, or \((2)\) the nodes in \(clique.i\) do not form a clique.
Next, we need to show that if some node \(i\) in state \(s\) is violated, then \(\forall s^{\prime}>s\), if \(s^{\prime}[i]=s[i]\), then \(s^{\prime}\) will not manifest a dominant clique. This is straightforward from the definition itself: if a node \(i\) is impedensable, then \(i\) does not store a set of nodes forming a maximal clique with itself. If \(i\) is impedensable in \(s\), and \(i\) has the same state in some \(s^{\prime}>s\), then \(i\) stays impedensable in \(s^{\prime}\) as well, since it still does not store a set of nodes forming a maximal clique.
We define the state value and rank as follows.
\[\textsc{State-Value-DC}(i,s)=\begin{cases}|C|-|clique.i|,\text{ where }C\text{ is the largest clique of which }clique.i\text{ is a subset}&\text{if }clique.i\text{ is a clique,}\\ |Adj_{i}|+1&\text{otherwise.}\end{cases}\]
\(\textsc{Rank-DC}(s)=\sum_{i\in V(G)}\textsc{State-Value-DC}(i,s)\).
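For small instances, the state value and rank can be computed by brute force; the sketch below (exponential in the neighbourhood size, and purely illustrative) enumerates the extensions of \(clique.i\) to find the largest clique containing it.

```python
from itertools import combinations

def forms_clique(nodes, adj):
    return all(k in adj[j] for j in nodes for k in nodes if j != k)

def state_value_dc(i, clique, adj):
    """|C| - |clique.i| for the largest clique C extending clique.i; |Adj_i| + 1 otherwise."""
    if not forms_clique(clique[i], adj):
        return len(adj[i]) + 1
    candidates = [k for k in adj[i] - clique[i] if all(j in adj[k] for j in clique[i])]
    for r in range(len(candidates), 0, -1):
        if any(forms_clique(clique[i] | set(extra), adj)
               for extra in combinations(candidates, r)):
            return r
    return 0

def rank_dc(clique, adj):
    return sum(state_value_dc(i, clique, adj) for i in clique)
```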
**Theorem 3.11**.: _Algorithm 1 is a converging algorithm for the dominant clique problem._
Proof.: We need to show that (1) Algorithm 1 traverses a DAG of global states, (2) for all suboptimal states, \(\exists\) a terminal successor, and (3) all terminal global states are optimal states.
Figure 1. (a) Input graph. (b) Partial order induced among the local states of node 1.
Let the current state be \(s\). If \(s\) is suboptimal, then for at least one of the nodes \(i\): (1) \(clique.i\) is not a maximal clique, that is, \(\exists j\in Adj_{i}\setminus clique.i\) such that \(clique.i\cup\{j\}\) forms a valid clique, or (2) the nodes in \(clique.i\) do not form a clique.
In the case that \(s\) is suboptimal and the first case holds true for some node \(i\), then under Algorithm 1, \(i\) will include a node \(j\) in \(clique.i\) which forms a clique with the nodes already present in \(clique.i\), which reduces the state value of \(i\) by at least 1.
In the case that \(s\) is suboptimal and the second case holds true for some node \(i\), then under Algorithm 1, \(i\) will change \(clique.i\) to be \(\{i\}\), which reduces the state value of \(i\) from \(|Adj_{i}|+1\) to some value less than or equal to \(|Adj_{i}|\).
Thus, under Algorithm 1, an arbitrary graph will follow a DAG of states, and if it transitions from a state \(s\) to another state \(s^{\prime}\), then we have \(s^{\prime}>s\) and the rank of \(s^{\prime}\) is less than the rank of \(s\).
The state value of any node is non-negative, and thus the rank of any global state is also non-negative. When an impedensable node \(i\) makes an execution, its state value reduces, until it becomes 0. Thus, if there is a global state \(s\) with rank greater than 0, then there exists at least one impedensable node in it. When any such node executes in \(s\), then \(s\) transitions to some state with rank less than that of \(s\). This shows that for every suboptimal global state, there exists at least one terminal successor.
Suppose that \(s\) is a terminal successor. Then there does not exist an impedensable node in \(s\), so no node will change its state; this implies that \(\mathcal{P}(s)\) is true and \(s\) manifests a dominant clique. Thus, we have that all terminal states are optimal states.
### Shortest Path (SP) Problem
Definition 3.12.: _Shortest path._ In the shortest path problem, the input is a weighted arbitrary connected graph \(G\) (all edge weights are positive) and a destination node \(d_{es}\). Every node \(i\) stores \(p.i\) (initialized with \(\top\)) and \(d.i\) (initialized with \(\infty\)). The task is to compute, \(\forall i\in V(G)\), the length \(d.i\) of a shortest path from \(i\) to \(d_{es}\), and the parent \(p.i\) through which an entity would reach \(d_{es}\) starting from \(i\).
The positive weights assigned for every edge denote the cost that it would take to move from node \(i\) to node \(j\), given that \(\{i,j\}\in E(G)\). In this problem, if we consider the local state of a node \(i\) to be represented only by the variable \(d.i\) then the local states of the nodes will form a total order. Consequently, the resultant discrete structure formed among the global states will be a lattice. This was shown in (Bordes and Schuster, 2010). On the other hand, in applications such as source routing (Krause et al., 2010) where the source node specifies the path that should be taken, the local states form a partial order. Hence, the global states form a DAG as opposed to a lattice. For brevity, we only represent the next hop in \(p.i\). The SP problem can be represented by the following predicate, where, \(w(i,j)\) is the weight of edge \(\{i,j\}\).
\(\mathcal{P}_{sp}\equiv\forall i:(d.i=dis(i,d_{es})=\min\{dis(j,d_{es})+w(i,j):j\in Adj_{i}\})\wedge(p.i=\arg\min\{dis(j,d_{es})+w(i,j):j\in Adj_{i}\})\).
An impedensable node \(i\) in a state \(s\) is a node for which its current parent is not a direct connection to the shortest path from \(i\) to \(d_{es}\). Formally,
\(\textsc{Impedensable}(i,s,\mathcal{P}_{sp})\equiv(d.i\neq 0\wedge i=d_{es})\vee(\exists j\in Adj_{i}:d.i>d.j+w(i,j))\).
The algorithm is defined as follows. If an impedensable node \(i\) is \(d_{es}\), then \(d.i\) is updated to 0 and \(p.i\) is updated to \(i\). Otherwise, \(d.i\) and \(p.i\) are updated with respect to the node \(j\in Adj_{i}\) for which \(d.j+w(i,j)\) is minimum.
**Algorithm 2**.: _Rules for node \(i\)._
\[\textsc{Impedensable}(i,s,\mathcal{P}_{sp})\longrightarrow\begin{cases}d.i=0,\;p.i=i&\text{if }i=d_{es}\\ \langle d.i,p.i\rangle=\langle d.j+w(i,j),\,j\rangle,\text{ where }j=\arg\min\{d.k+w(i,k):k\in Adj_{i}\}&\text{otherwise}\end{cases}\]
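A minimal Python rendering of this rule follows (names are ours; `w[(i, j)]` is the weight of edge \(\{i,j\}\), assumed to be stored for both orderings, and `INF` models the initial \(d.i=\infty\)).

```python
INF = float("inf")

def impedensable_sp(i, d, p, w, adj, dest):
    """Guard: i is the destination with d.i != 0, or some neighbour offers a shorter path."""
    if i == dest and d[i] != 0:
        return True
    return any(d[i] > d[j] + w[(i, j)] for j in adj[i])

def sp_rule(i, d, p, w, adj, dest):
    """One move of node i for the shortest path problem; returns True iff i moved."""
    if not impedensable_sp(i, d, p, w, adj, dest):
        return False
    if i == dest:
        d[i], p[i] = 0, i
    else:
        j = min(adj[i], key=lambda k: d[k] + w[(i, k)])
        d[i], p[i] = d[j] + w[(i, j)], j
    return True
```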
As a result, the global states form a DAG. We show an example in Figure 3: Figure 3 (a) is the input graph and Figure 3 (b) is the induced DAG. In this figure, a global state is represented as \(\langle(p.o_{1},d.o_{1}),...,(p.o_{4},d.o_{4})\rangle\).
_Remark:_ Notice that in this case (Figure 3), the DAG among the global states is induced because \(o_{1}\) has a choice which node to choose as its parent. If we remove the parent information from representing the local state, this discrete structure will get reduced to a lattice. Also, if we restrict the choice of \(o_{1}\) (and all nodes) to, e.g., choose only the node with higher ID as a parent in the case of conflict, then this discrete structure will get reduced to a lattice.
Lemma 3.13.: _The shortest path problem is a DAG-inducing problem._
Figure 2. DAG, assuming that initial state is \(\langle\{1\},\{2\},\{3\}\rangle\). In all these states, the second guard of Algorithm 1 is false. Observe that any other state will converge to one of these states and then converge to one of the optimal states in this DAG. Transitive edges are not shown in this DAG.
Proof.: For a node \(i\), \(d.i\) contains the distance of \(d_{es}\) from node \(i\). A global state \(s\) does not manifest all correct distances if, for at least one node \(i\) in \(s\), \((1)\) \(dis(i,d_{es})\neq d.i\), that is, \(i\) does not store the length of a shortest path from \(i\) to \(d_{es}\), or (2) the parent of \(i\) is not a valid direct connection on a shortest path from \(i\) to \(d_{es}\).
Next, we need to show that if some node \(i\) in state \(s\) is violated, then \(\forall s^{\prime}>s\), if \(s^{\prime}[i]=s[i]\), then \(s^{\prime}\) will not manifest all shortest paths. This is straightforward from the definition itself: if a node \(i\) is impedensable, then either \(i=d_{es}\) and it is not pointing to itself through \(p.i\), or there is at least one other node \(j\) such that \(d.i>d.j+w(i,j)\). If \(i\) is impedensable in \(s\), and \(i\) has the same state in some \(s^{\prime}>s\), then \(i\) stays impedensable in \(s^{\prime}\) as well, since it does not store the length of a shortest path in \(d.i\).
Since the local states of all nodes have to be initialized as \(d.i=\infty,p.i=\top\), this solution is not self-stabilizing.
We define the state value and rank as follows.
\[\textsc{State-Value-SP}(i,s)=d.i-dis(i,d_{es}).\]
\[\textsc{Rank-SP}(s)=\sum_{i\in\mathcal{V}(G)}\textsc{State-Value-SP}(i,s).\]
**Theorem 3.14**.: _Algorithm 2 solves the shortest path problem on a connected positive weighted graph._
Proof.: We need to show that (1) Algorithm 2 traverses a DAG of global states, (2) for all suboptimal states, \(\exists\) a terminal successor, and (3) all terminal global states are optimal states.
Let the current state be \(s\). If \(s\) is suboptimal, then for at least one of the nodes \(i\): (1) \(p.i\neq i\wedge i=d_{es}\), that is, \(i\) is the destination node and is not pointing to itself, or (2) \(dis(i,d_{es})\neq d.i\), that is, \(i\) does not store the length of a shortest path from \(i\) to \(d_{es}\).
In the case that \(s\) is suboptimal and the first case holds true for some node \(i\), then under Algorithm 2, \(i\) updates \(d.i\) to 0 and \(p.i\) to \(i\), which reduces the state value of \(i\) to 0.
In the case that \(s\) is suboptimal and the second case holds true for some node \(i\), then under Algorithm 2, \(i\) will reduce its \(d.i\) value and update \(p.i\), which reduces the state value of \(i\) at least by 1.
Thus, under Algorithm 2, an arbitrary graph will follow a DAG of states, and if it transitions from a state \(s\) to another state \(s^{\prime}\), then we have \(s^{\prime}>s\) and the rank of \(s^{\prime}\) is less than the rank of \(s\).
If no node is impedensable, then this implies that all nodes have computed the shortest distance in their \(d.i\) variable, and thus the rank is 0. Thus if there is a global state \(s\) with rank greater than 0, then there exists at least one impedensable node in it. When any node performs execution in \(s\) then \(s\) transitions to some state with rank less than \(s\). This shows that for every suboptimal global state, there exists at least one terminal successor.
Suppose that \(s\) is a terminal successor. Then there does not exist an impedensable node in \(s\), so no node will execute; this implies that \(\mathcal{P}(s)\) is true and \(s\) manifests a correct shortest path evaluation for all nodes. Thus, we have that all terminal states are optimal states.
### Limitations of modelling problems as DAG-inducing problems
Unlike the DAG-inducing problems where the problem description creates a DAG among the states in \(S\), there are problems where the states do not form a DAG naturally. E.g., the maximal matching (MM) problem is not a DAG-inducing problem. This can be illustrated through a simple instance of a 3-node network forming a simple path \(\langle A,B,C\rangle\). Initially, no node is paired with any other node. Here, an MM can be obtained by matching \(A\) and \(B\); thus, \(C\) is not impedensable. Another maximal matching can be obtained by matching \(B\) and \(C\), in which case \(A\) is not impedensable. Thus the problem itself does not define which node is impedensable. Similarly, it can be shown that, e.g., vertex cover (both minimal vertex cover and 2-approximation vertex cover) is not a DAG-inducing problem.
We observe that it is possible to induce a DAG in non-DAG-inducing problems algorithmically. In the next section (Section 4), we study the properties of a DAG-inducing algorithm, which is capable of inducing a DAG in a non-DAG-inducing problem, along with examples for demonstration.
## 4. Imposed DAG induction: DAG-inducing _algorithms_
In this section, we discuss problems that cannot be represented by a predicate under which the global states form a DAG. This means that the problem does not naturally define which node is impedensable. Effectively, there may be multiple optimal states. We start by describing the general properties of such problems.
### General Properties
In DAG-inducing problems, we observe that \(\mathcal{P}(s)\) is of the form \(\forall i\ \mathcal{P}_{i}(s)\), where a node \(i\) is impedensable in state \(s\) if and only if \(\mathcal{P}_{i}(s)\) is false. But if \(P\) is not a DAG-inducing problem, then a node violating \(\mathcal{P}_{i}(s)\) may not always be impedensable (cf. the example of maximal matching as demonstrated in Section 3.5). In these cases, to utilize the definition of impedensable, a DAG is induced algorithmically. In other words, the algorithm creates rules to determine which nodes are impedensable, and creates a partial order among
Figure 3. (a) Input graph. (b) DAG induced among the global states in evaluating the shortest path problem for the graph shown in (a); a global state is represented as \(\langle(p.v_{1},d.v_{1}),\ldots,(p.v_{4},d.v_{4})\rangle\). Transitive edges are not shown.
the local states, and thereby induces a DAG among the global states. The algorithm must also ensure that an optimal state is reached while traversing the induced DAG.
Next, we consider how we can define the DAG induced by an algorithm \(A\). \(A\) creates rules to identify \(<_{l}\). Specifically, \(A\) changes the state of node \(i\) from \(s_{0}[i]\) to \(s_{1}[i]\) only if \(s_{0}[i]<_{l}s_{1}[i]\). Consequently, the definition of \(<_{g}\) is identical to that in Section 2. If \(<_{g}\) identified in this fashion is a DAG, then we call it the DAG induced by \(A\). Using the DAG induced by \(A\) in this manner, we define DAG-inducing algorithms, next.
Definition 4.1 ().: _DAG-inducing algorithms (DIA)._ Algorithm \(A\) is a DIA for a problem \(P\), represented by a predicate \(\mathcal{P}\), iff
* \(P\) requires that we reach a state where \(\mathcal{P}\) is true.
* \(\mathcal{P}\) is DAG-inducing with respect to the partial order induced in \(S\) by \(A\), i.e., \(\forall s\in S:\neg\mathcal{P}(s)\Rightarrow\exists i:\textsc{Impedensable}(i,s,\mathcal{P})\).
* The guard of any action of \(A\) checks that \(\mathcal{P}(s)\) is false (i.e., if \(g\) is a guard of an action in \(A\) then \(g\Rightarrow\neg\mathcal{P}(s)\)).
Continuing from Definition 4.1, we have
Definition 4.2 ().: _Self-stabilizing DIA._ \(A\) is a self-stabilizing DIA if and only if \(\forall s,s^{\prime}\in S:\textsc{Terminal-Successors}(s,s^{\prime})\Rightarrow\mathcal{P}(s^{\prime})=true\).
_Remark_: Note that this definition appears identical to Definition 3.8. The only difference is that the partial order to define terminal successors is based on the DAG induced by \(A\).
### Maximal Matching (MM) problem
As discussed above, MM is one of the problems which is not a DAG-inducing problem. However, we find that a DAG-inducing algorithm can be developed for this problem, which we discuss in the following. We start by defining the MM problem as follows.
Definition 4.3 ().: _Maximal matching._ In the maximal matching problem, the input is an arbitrary graph \(G\). The state \(match.i\) of each node \(i\) has the domain \(Adj_{i}\cup\{\top\}\). The task is to compute the matchings such that for any node \(i\), \((1)\) \(match.i\neq\top\Rightarrow match.(match.i)=i\), and \((2)\) if \(match.i=\top\), then there must not exist a \(j\) in \(Adj_{i}\) such that \(match.j=\top\).
We use the following macros. A node \(i\) is _wrongly matched_ if \(i\) is pointing to some node \(j\), but \(j\) is pointing to some node \(k\neq i\). A node \(i\) is _matchable_ if \(i\) is not pointing to any node, i.e., \(match.i=\top\), and there exists a node \(j\) adjacent to \(i\) which is also not pointing to any node. A node \(i\) is being _pointed to_, or \(i\) is _i-pointed_, if \(i\) is not pointing to any node, and there exists a node \(j\) adjacent to \(i\) which is pointing to \(i\). A node \(i\) sees that another node is being pointed to, or \(i\) sees _else-pointed_, if some node \(j\) around \(i\) (in the 2-hop neighbourhood of \(i\)) is pointing to another node \(k\) and \(k\) is not pointing to anyone. A node is _unsatisfied_ if it is wrongly matched or matchable. A node \(i\) is _impedensable_ if \(i\) is i-pointed, or otherwise, given that \(i\) does not see else-pointed, \(i\) is the highest ID node which is unsatisfied in its 2-hop neighbourhood.
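The rules of Algorithm 3 appear as a separate algorithm box in the original layout; the macros above can be rendered in Python as follows (the encoding of \(\top\) and all names are ours; `adj2[i]` denotes the 2-hop neighbourhood of \(i\), excluding \(i\)).

```python
TOP = None  # stands for "⊤": the node points to no one

def wrongly_matched(i, match):
    j = match[i]
    return j is not TOP and match[j] is not TOP and match[j] != i

def matchable(i, match, adj):
    return match[i] is TOP and any(match[j] is TOP for j in adj[i])

def i_pointed(i, match, adj):
    return match[i] is TOP and any(match[j] == i for j in adj[i])

def sees_else_pointed(i, match, adj2):
    # some node j within 2 hops of i points to a node that itself points to no one
    return any(match[j] is not TOP and match[match[j]] is TOP for j in adj2[i])

def unsatisfied(i, match, adj):
    return wrongly_matched(i, match) or matchable(i, match, adj)

def impedensable_mm(i, match, adj, adj2):
    if i_pointed(i, match, adj):
        return True
    return (not sees_else_pointed(i, match, adj2)
            and unsatisfied(i, match, adj)
            and all(j < i for j in adj2[i] if unsatisfied(j, match, adj)))
```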
Lemma 4.4 ().: _Algorithm 3 induces a DAG in the global state space._
Proof.: The DAG is induced in the global state space with respect to the state values. Let \(s\) be a suboptimal state that the input graph is in. A node \(i\) is impedensable in \(s\) \((1)\) if \(i\) is wrongly matched, \((2)\) if \(i\) is matchable, or \((3)\) if \(i\) is being pointed at by another node \(j\), but \(i\) does not point back to \(j\) or any other node.
Next we show that if some node \(i\) is impdensable in some state \(s\), then for any state \(s^{\prime}:s^{\prime}>s\), if \(s^{\prime}[i]=s[i]\), then \(s^{\prime}\) will not form a maximal matching under Algorithm 3.
In the case that \(i\) is wrongly matched and impedensable in \(s\), and it is pointing to the same node in \(s^{\prime}\) as well, then \(i\) stays impedensable in \(s^{\prime}\), because any other node will not take back its
Figure 4. (a) Input graph. (b) The state transition diagram, a DAG, assuming that initial global state is \(\langle\top,\top,\top,\top\rangle\). In every state, the first row shows the global state, the second row shows the respective local state values of nodes and the rank of the global state. Observe that any other state will converge to one of these states and then converge to one of the optimal states in this DAG. Transitive edges are not shown for brevity.
pointer before \(i\) does under Algorithm 3. Thus \(s^{\prime}\) does not have a correct matching.
In the case that, in \(s\), \(i\) is being pointed to by some node \(j\) but \(i\) does not point to any node, and \(i\) stays in the same state in \(s^{\prime}\), then \(i\) stays impedensable in \(s^{\prime}\), because once a node points towards an unmatched node, it does not retreat its pointer under Algorithm 3. Also, any node in \(Adj_{i}^{2}\) will not execute until \(i\) does. Thus \(s^{\prime}\) does not form a correct matching.
Finally, in the case that \(i\) is matchable and impedensable in \(s\), and it stays the same in \(s^{\prime}\), then it is still impedensable, as any other node in \(Adj_{i}\) will not initiate matching with it under Algorithm 3. Also, any node in \(Adj_{i}^{2}\) will not execute until \(i\) does. Thus \(s^{\prime}\) does not form a correct matching.
We define the state value and rank as follows.
\[\textsc{State-Value-MM}(i,s)=\begin{cases}3&\text{if Wrongly-Matched}(i).\\ 2&\text{if Matchable}(i)\land\neg\text{I-Pointed}(i).\\ 1&\text{if I-Pointed}(i).\\ 0&otherwise.\end{cases}\]
\[\textsc{Rank-MM}(s)=\sum_{i\in V(G)}\textsc{State-Value-MM}(i,s).\]
**Theorem 4.5**: _Algorithm 3 is a DAG-inducing algorithm for the maximal matching problem._
Proof.: We show that (1) Algorithm 3 traverses a DAG that has the properties mentioned in Lemma 4.4, (2) for all suboptimal states \(\exists\) a terminal successor, and (3) all terminal global states are optimal states.
If \(s\) is suboptimal, then for at least one of the nodes \(i\): (1) \(i\) is wrongly matched, (2) \(i\) is matchable, or (3) \(i\) is being pointed at by another node \(j\), but \(i\) does not point back to \(j\) or any other node.
In the case that \(s\) is suboptimal and some node \(i\) is being pointed to by some node \(j\) and \(i\) does not point to any node, then under Algorithm 3, \(i\) will point back to \(j\), and thus the state value of \(i\) will get reduced from \(1\) to \(0\).
In the case that \(s\) is suboptimal and some node \(j\) is wrongly matched or matchable with \(\neg\textsc{Else-Pointed}(j)\), then at least one node (e.g., a node with the highest ID which is wrongly matched or matchable) will be unsatisfied and impedensable. Suppose that \(i\) is unsatisfied and impedensable. Here \(i\) is either wrongly matched or matchable. If \(i\) is wrongly matched, then \(i\) will change its pointer and start pointing to \(\top\), in which case its state value will change from \(3\) to \(2\), \(1\), or \(0\). If \(i\) is matchable, then \(i\) will start pointing to some node \(i^{\prime}\) in \(Adj_{i}\), in which case the state value of \(i\) will change from \(2\) to \(0\) and the state value of \(i^{\prime}\) will change from \(2\) to \(1\).
In all the above cases, we have that under Algorithm 3, if \(s\) is a suboptimal state, then its rank will be of some value greater than zero because at least one of the nodes will be impedensable. \(s\) will transition to some state \(s^{\prime}\) whose rank is less than that of \(s\). Thus, we have that Algorithm 3 transitions \(s\) to \(s^{\prime}\) and thus decreases the rank of the system. This shows that (1) Algorithm 3 traverses a DAG that has the properties as mentioned in Lemma 4.4, (2) for all suboptimal states \(\exists\) a terminal successor.
In the case that \(s\) is an optimal state, none of the nodes will be impedensable. So no node will change its state, and this state manifests a maximal matching. This shows that all terminal successors are optimal states.
### \(2\)-approximation vertex cover (VC)
In this section, we consider the problem of \(2\)-approximation vertex cover to demonstrate that DAG-inducing algorithms can be applied for approximation algorithms for NP-Hard problems as well.
**Definition 4.6**: _Vertex cover._ In the vertex cover problem, the input is an arbitrary graph \(G\) with every node having a domain \(\{IN,OUT\}\). The output is a minimal set \(\mathcal{V}\) such that for every edge \(\{i,j\}\in E(G)\), either \(i\in\mathcal{V}\) or \(j\in\mathcal{V}\) or both.
The standard two approximation algorithm for the VC problem can be stated as follows. _Choose an uncovered edge \(\{A,B\}\), select both \(A\) and \(B\), repeat until all edges are covered._ Since the minimum VC must contain either \(A\) or \(B\), the selected VC is at most twice the size of the minimum VC.
While the above algorithm is sequential in nature, a lattice linear distributed version of it was presented in (Bordes and Sorn, 2002); the algorithm in Figure 5 presents that version, slightly
modified; we only show how the vertex cover is formed. Only the reachable states are shown. The global states in the lattice are written as \((st.o_{1},st.o_{2},\cdots,st.o_{8})\).
#### 4.3.1. Correctness of the algorithm in Figure 5
If an impedensable node has an uncovered edge, and assume that \(\{i,j\}\) is an uncovered edge with \(j\) being the highest ID node which is out (note that \(i\) is out), then \(i\) turns both \(i\) and \(j\) into the VC. If otherwise \(i\) evaluates that all its edges are covered, then it declares that it is done (\(i\) sets \(done.i\) to true), while staying out of the VC.
This follows directly from the 2-approximation algorithm for VC. Evaluating impedensability over the 3-hop neighbourhood ensures that race conditions do not arise during execution, from the perspective of the 2-approximation algorithm for VC.
#### 4.3.2. Why the algorithm in Figure 5 is lattice linear
Under this algorithm, when any impedensable node evaluates that one of its edges is uncovered, it selects itself and a node with the highest ID to move into the VC. As a result, the local states form a total order, and therefore the partial order that results among the global states, because of this total order among the local states, is a lattice.
#### 4.3.3. Transforming into a DAG-inducing algorithm
If the chosen node \(j\) is the highest ID node in \(Adj_{i}\) that is not in the VC and is not done, then the resultant discrete structure will be a lattice (Bordes and Sorn, 2002). However, ideally, the chosen node \(j\) can be any node in \(Adj_{i}\) which is not in the VC and is not done. If we let \(i\) choose an arbitrary neighbour, in this way, to add to the VC, the resulting structure will be a DAG but not a lattice. Thus, the resulting algorithm will be a DAG-inducing algorithm. We write this as Algorithm 4 for the sake of completeness.
Algorithm 4.: _Follow the algorithm in Figure 5. Change line 9 to \(j=x:x\in Adj_{i}\wedge done.x=false\) to enable node \(i\) to choose an arbitrary vertex \(j\)._
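The following sketch isolates the selection step that distinguishes the lattice linear algorithm of Figure 5 from the DAG-inducing Algorithm 4 (the remainder of the rule and the impedensable test over the 3-hop neighbourhood are omitted; the variable names and structure are ours, since Figure 5 itself is not reproduced in the text).

```python
def pick_partner_lattice(i, adj, in_vc, done):
    """Figure 5 (lattice linear): deterministically pick the highest-ID eligible neighbour."""
    eligible = [x for x in adj[i] if not in_vc[x] and not done[x]]
    return max(eligible) if eligible else None

def pick_partner_dag(i, adj, in_vc, done, choose=min):
    """Algorithm 4 (DAG-inducing): any eligible neighbour may be chosen;
    `choose` stands in for the nondeterministic choice."""
    eligible = [x for x in adj[i] if not in_vc[x] and not done[x]]
    return choose(eligible) if eligible else None
```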
Thus, it follows that there are cases where lattice linear algorithms force certain determinism to induce a lattice structure. By contrast, DAG-inducing algorithms permit more non-determinism.
Example 4.7 ().: In Figure 7, we show the DAG induced by Algorithm 4 (where a node can choose any of its neighbours to turn into the VC) in the state space of the graph presented in Figure 6 (a).
## 5. Properties of DAG-induction
### DAG-induction to obtain asynchrony
Theorem 5.1 ().: _An algorithm converges in asynchronous systems iff it traverses a DAG of global states (given that an optimal state is reachable, starting from the set of provided initial states)._
Proof.: _If_: This follows from the definition of DAG-inducing problems and DAG-inducing algorithms.
_Only If_: An algorithm \(A\) must guarantee two things under any scheduler: (1) an upper bound on convergence time, and (2) correctness in all executions.
For any scheduler, to guarantee convergence with correctness, for any pair of global states \(s,s^{\prime}\in S\), if there is a path from \(s\) to \(s^{\prime}\), then there must not be a path from \(s^{\prime}\) to \(s\). Otherwise, there will always be a possibility that \(A\) traverses from \(s^{\prime}\) to \(s\) and then from \(s\) to \(s^{\prime}\); this can continue for an arbitrary number of times, and that is why an upper bound on the convergence time cannot be guaranteed.
To guarantee convergence within an upper time bound, we must guarantee for a finite value \(k\) that the rank of the system will decrease in every \(k\) steps. That is, within \(k\) time steps, if the system is in state \(s\), then it moves to some state \(s^{\prime}\) such that the rank of \(s^{\prime}\) is less than the rank of \(s\). If the states do not form a DAG, then again there is a possibility for the system to transition from \(s^{\prime}\) to \(s\) or otherwise, never reach \(s^{\prime}\).
Thus, we have that if the algorithm is running under an asynchronous scheduler and the states do not form a DAG, then one of the above properties cannot be guaranteed.
### Time Complexity Properties of an algorithm traversing a \(<\)-DAG
Theorem 5.2 ().: _The convergence time is the longest path in the DAG induced among the global states._
Proof.: We have that any suboptimal state \(s\) will transition to a state \(s^{\prime}\) where \(s^{\prime}>s\). Thus every transition takes the system higher in the induced DAG. Therefore, the convergence time is the length of the longest path in the \(<\)-DAG induced among the global states.
Figure 6. Execution of the algorithm in Figure 5: (a) input graph, and (b) lattice induced in the input graph. Transitive edges are not shown for brevity.
Figure 7. Execution of Algorithm 4: DAG induced in the graph as shown in Figure 6 (a). Transitive edges are not shown for brevity.
Since the local states form a partial order, the upper bound on the convergence time can also be evaluated in terms of the size of a local state, as follows.
Corollary 5.3 ().: _Suppose that in each node, at most \(r\) variables \(var_{1},\ldots,var_{r}\) (with domain sizes \(m^{\prime}_{1},\ldots,m^{\prime}_{r}\), respectively) contribute independently to the formation of the induced partial order. Then an algorithm traversing a \(<\)-DAG will converge in \(n\times\left(\sum_{j=1}^{r}\left(m^{\prime}_{j}-1\right)\right)\) moves._
Corollary 5.4 ().: _(From Theorem 3.11 and Corollary 5.3) Algorithm 1 converges in \(\sum\limits_{i\in V(G)}deg(i)=2m\) moves. In terms of rounds, it converges in \(\Lambda\) rounds, where \(\Lambda\) is the maximum degree of the input graph._
Corollary 5.5 ().: _(From Theorem 3.14 and Corollary 5.3) Algorithm 2 converges in \(2n\) moves._
Corollary 5.6 ().: _(From Theorem 4.5 and Corollary 5.3) Algorithm 3 converges in \(2n\) moves._
Corollary 5.7 ().: _(_8_)_ _Algorithm 4 converges in \(n\) moves._
## 6. Related Work
**Lattices and DAGs among states in distributed systems**: In (Brandt et al., 2015), the authors have described problems which possess a predicate under which the states naturally form a lattice. Problems like the stable marriage problem, job scheduling, market clearing price and others are studied in (Brandt et al., 2015). In (Brandt et al., 2015) and (Brandt et al., 2015), the authors have studied lattice linearity in, respectively, housing market problem and several dynamic programming problems.
In (Brandt et al., 2015), the authors have extended the theory in (Brandt et al., 2015) to develop eventually lattice linear self-stabilizing algorithms for some non-lattice linear problems. Such algorithms induce single or multiple disjoint lattices in a subset of the state space of the transition system. In (Brandt et al., 2015), the authors presented a fully lattice linear algorithm for the minimal dominating set problem, where the algorithm induces single or multiple disjoint lattices among the global states.
In this paper, we introduce DAG-inducing problems and DAG-inducing algorithms. We observed that induction of a DAG provides greater choices to individual nodes as compared to their lattice linear counterparts, and thus provides more nondeterminism.
We demonstrate that the dominant clique problem and the shortest path problem are DAG-inducing problems. Among these, the dominant clique problem allows self-stabilization, whereas the (algorithm for the) shortest path problem does not. We demonstrate that the maximal matching problem and 2-approximation vertex cover are not DAG-inducing problems. We present DAG-inducing algorithms for them. Among these, the algorithm for maximal matching allows self-stabilization and the 2-approximation algorithm for vertex cover does not. We study an upper bound on the convergence time of an algorithm traversing a DAG of states. We show why inducing a \(<\)-DAG in the state space allows nodes to perform execution in asynchrony based on old values of other nodes. The algorithms for the dominant clique problem and shortest path problem both converge in \(2m\) moves. The algorithm for the maximal matching problem converges in \(2n\) moves, and the 2-approximation algorithm for vertex cover converges in \(n\) moves.
**Maximal Matching**: A distributed self-stabilizing algorithm for the maximal matching problem is presented in (Kumar et al., 2016); this algorithm converges in \(O(n^{3})\) moves. The algorithm in (Brandt et al., 2015) converges in \(O(\log^{4}n)\) moves under a synchronous scheduler. The algorithm for maximal matching presented in (Brandt et al., 2015) converges in \(n+1\) rounds. Hedetniemi et al. (Hedetniemi et al., 2001) showed that the algorithm presented in (Kumar et al., 2016) converges in \(2m+n\) moves.
In this paper, the DAG-inducing algorithm for maximal matching that we present converges in \(2n\) moves and does not require a synchronous environment. This is an improvement as compared to the results presented in the literature.
## 7. Conclusion
In this paper, we introduce the class of problems called DAG-inducing problems and the class of algorithms called DAG-inducing algorithms.
The key idea in these problems and algorithms is that when the algorithm is in a suboptimal state, there is at least one process that is _impedensable_; those are the processes whose execution is essential to reach the optimal state.
DAG-inducing problems exhibit a DAG among their states based on the problem definition. In the case that a problem is not DAG-inducing, a DAG-inducing algorithm may be designed for it. In both cases, since the execution of the impedensable nodes causes the program to move _up_ in the DAG, correctness is guaranteed in asynchronous execution (where nodes may have to rely on old information).
As an example, we study the dominant clique (DC) problem and the shortest path (SP) problem and show that they are DAG-inducing problems. We demonstrated an algorithm which traverses a DAG of states and reaches an optimal state, starting from an arbitrary state. Thus, we have that the DC problem is a self-stabilizing DAG-inducing problem. In contrast, the algorithm that we present for SP problem is not self-stabilizing.
We also study the maximal matching (MM) problem and 2-approximation vertex cover. These are non-DAG-inducing problems. We present a DAG-inducing algorithm for the MM problem, which induces a \(<\)-DAG of states and reaches an optimal state, starting from an arbitrary state. Thus, the algorithm that we presented for the MM problem is a self-stabilizing DAG-inducing algorithm. On the other hand, the algorithm presented for 2-approximation vertex cover is not self-stabilizing. It remains an open problem whether a distributed DAG-inducing self-stabilizing algorithm can be developed for 2-approximation vertex cover.
In the literature, there are works on lattice linear problems and lattice linear algorithms, where a (\(<\)-)lattice is induced under a predicate or an algorithm, respectively. Since a lattice is a subclass of a DAG, from the observations that we make in this paper, we have that lattice linear problems are a proper subset of the class of DAG-inducing problems, and lattice linear algorithms are a proper subset of the class of DAG-inducing algorithms. This can be observed from the fact that the discrete structure that is induced among the states of an instance of the DC problem is a DAG but not a lattice (cf. Figure 1), and similarly, the discrete structure induced among the states of an instance of the MM problem under Algorithm 3 is also a DAG
but not a lattice (cf. Figure 4). The same can be observed in Figures 3 and 7.
We analyzed the time complexity bounds of an algorithm traversing a DAG of states (whether present naturally in the problem or imposed by the algorithm). As corollaries of this analysis, we obtain the time complexity bounds for all the algorithms present in this paper. We also observed that asynchrony can be allowed if and only if a DAG is induced among the global states.
# Toward Lossless Homomorphic Encryption for Scientific Computation
###### Abstract.
This paper presents a comprehensive investigation into encrypted computations using the CKKS (Cheon-Kim-Kim-Song) scheme, with a focus on multi-dimensional vector operations and real-world applications. Through two meticulously designed experiments, the study explores the potential of the CKKS scheme in Super Computing and its implications for data privacy and computational efficiency. The first experiment reveals the promising applicability of CKKS to matrix multiplication, indicating marginal differences in Euclidean distance and near-to-zero mean square error across various matrix sizes. The second experiment, applied to a wild-fire dataset, illustrates the feasibility of using encrypted machine learning models without significant loss in accuracy. The insights gleaned from the research set a robust foundation for future innovations, including the potential for GPU acceleration in CKKS computations within TenSEAL. Challenges such as noise budget computation, accuracy loss in multiplication, and the distinct characteristics of arithmetic operations in the context of CKKS are also discussed. The paper serves as a vital step towards understanding the complexities and potentials of encrypted computations, with broad implications for secure data processing and privacy preservation in various scientific domains.
## 1. Introduction
Homomorphic encryption (HE) has emerged as a transformative cryptographic paradigm, enabling arithmetic computations to be executed on encrypted data without necessitating decryption. This innovative approach holds immense promise for secure data processing, where privacy can be preserved, and insights can be garnered without revealing sensitive information (Krishnan et al., 2017). The HE-enabled computation is particularly promising for scientific applications with the emergence of HPC cloud [], where the data and computation are moved to the non-HPC domain. Among the myriad of homomorphic encryption schemes, the Cheon-Kim-Kim-Song (CKKS) encryption scheme (Cheon and Kim, 2017) is particularly noteworthy for its ability to handle floating-point numbers, aligning with the requirements of diverse applications including scientific simulations, medical data analysis, and financial computations (Beng et al., 2018).
The adoption of homomorphic encryption, and specifically the CKKS scheme, depends on the complex interplay between computational efficiency, precision, and accuracy. When performing arithmetic operations on encrypted data, inconsistencies and discrepancies may arise when compared with operations on plain data. These variations can lead to substantial deviations from the anticipated results, posing intricate challenges in balancing computation time, precision, and security constraints (Beng et al., 2018; Chen et al., 2018).
The CKKS scheme, while powerful in its ability to handle floating-point numbers, suffers from computational inefficiencies that render it slow and currently unsuitable for certain scientific computations. This slowness arises from multiple factors, including the complexity of encryption and decryption operations, the management of noise during computations, and the need for bootstrapping techniques to refresh ciphertext. Additionally, the precision requirements of scientific applications often demand meticulous control over numerical representations, further complicating computations and leading to longer processing times. The interplay between these factors poses challenges in terms of computational overhead and real-time processing capabilities, and ongoing research is needed to enhance the efficiency and applicability of CKKS for demanding scientific scenarios.
In addition to its performance limitation, CKKS exhibits a notable characteristic known as lossy precision. While CKKS is designed to work well with real-world applications involving large-scale computations on encrypted data, its lossy precision property can pose challenges when applied to scientific applications that demand high accuracy and exactness. Scientific simulations, data analysis, and mathematical computations often require strict preservation of precision to ensure reliable results. The lossy nature of CKKS encryption may introduce errors that could compromise the validity and reliability of outcomes in such contexts. That said, to ensure the use of CKKS in the scientific domain, an understanding of the accuracy resulting from the CKKS scheme is critical.
The goal of our study is to provide solutions for scientific applications to take advantage of homomorphic encryption schemes when the computation needs to be conducted securely. Toward this goal, we first perform a feasibility study of applying real-valued homomorphic encryption schemes (e.g., CKKS) in the scientific domain. We undertake an exhaustive examination unearthing the differences in decimal point precision of vector operations with CKKS. We discern the discrepancies that emerge, unravel the oscillating behavior in floating-point representations, and evaluate the broader implications for practical deployment (Beng et al., 2018; Chen et al., 2018).
In our work, we perform arithmetic operations on singular vectors and multidimensional vectors within extensive encrypted vector datasets. By utilizing the correlation of encrypted ciphertext, we execute these calculations and gather the results, all within the encrypted domain. This method is tailored specifically to our research domain, drawing inspiration from existing schemes operating directly on integer vectors that support addition, linear transformation, and weighted inner products (Krishnan et al., 2017). The ability to conduct such operations has the potential to expand the practical
applicability of encrypted data processing, broadening the horizons of secure data analytics.
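As a concrete illustration, the following minimal sketch uses the TenSEAL CKKS API to add, multiply, and take an inner product of encrypted vectors; the parameter values shown are typical defaults from the TenSEAL documentation, not necessarily the ones used in our experiments.

```python
import tenseal as ts

# Create a CKKS context; poly_modulus_degree and coeff_mod_bit_sizes govern precision,
# the noise budget, and how many multiplications can be chained.
ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()  # required for rotations, e.g., inside dot products

enc_a = ts.ckks_vector(ctx, [1.5, 2.5, 3.5])
enc_b = ts.ckks_vector(ctx, [0.5, 1.0, 1.5])

enc_sum = enc_a + enc_b     # element-wise addition on ciphertexts
enc_prod = enc_a * enc_b    # element-wise multiplication (consumes noise budget)
enc_dot = enc_a.dot(enc_b)  # encrypted inner product

print(enc_sum.decrypt())    # ~ [2.0, 3.5, 5.0]  (approximate: CKKS is lossy)
print(enc_dot.decrypt())    # ~ [8.5]
```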
Implementing our experimental framework on the Wildfire dataset (Han et al., 2017; Wang et al., 2018), a compendium of multifaceted numerical information, we encrypt and subsequently decrypt the data to discern variances from the baseline outcomes. The evaluation encompasses standard methods such as Linear Regression and Decision Tree, as well as Encrypted Linear Regression (Han et al., 2017) through the TenSEAL (Bauer et al., 2016) library, illustrating a remarkable improvement in performance despite increased training time (Bauer et al., 2016; Wang et al., 2018).
Concluding with an outlook on future developments, we anticipate progressing towards GPU-accelerated Homomorphic encryption and enhancing TenSEAL's feature set. This exploration constitutes a seminal step in understanding the multifaceted landscape of encrypted vector arithmetic, laying a steadfast foundation for further inquiry and technological advancement in this burgeoning field (Han et al., 2017; Wang et al., 2018).
The key contributions of this research are summarised below:
1. We conduct a systematic evaluation that covers a wide range of parameters affecting the precision of CKKS on floating point operations. The results show that the choices of the global scale and the polynomial degree play more important roles in the final accuracy of the floating-point computation.
2. We leverage the CKKS encryption scheme to demonstrate its application on matrix multiplication. Our extensive analysis underscores the scheme's potential for applicability in real-world scenarios, marking a significant step in secure computations on encrypted data. We show that the CKKS scheme produces close-to-noise-free results for the matrix multiplication, enabling a potential adaption in the scientific domain.
3. We implement encrypted logistic regression models using CKKS. Our work applies the CKKS encryption scheme to existing logistic regression algorithms, creating an EncryptedLR class that embodies privacy-preserving computations. While not a novel contribution, this implementation highlights the adaptability of CKKS in safeguarding data privacy in standard modeling techniques, providing a valuable exploration of its real-world applicability. The result shows that with and without the CKKS scheme the models achieve similar accuracy (94%).
## 2. Background and Related Work
### CKKS Scheme in Homomorphic Encryption
The Cheon-Kim-Kim-Song (CKKS) scheme represents a pioneering advancement in Homomorphic Encryption (HE), enabling the computation of encrypted floating-point numbers without decryption, unlike most HE schemes that operate on integer arithmetic (Chen et al., 2017). This novel ability to perform approximate arithmetic on encrypted data allows a favorable balance between efficiency and precision.
CKKS is distinct in its design, optimized for scientific applications, and its capacity to handle complex computations on encrypted data confidentially. Key parameters in the scheme include the polynomial modulus degree, impacting both approximation level and noise growth. Higher degrees lead to increased precision at the expense of computational costs. Other essential parameters involve scaling factors and levels in the modulus switching chain, which control the scheme's efficiency and security.
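For concreteness, the snippet below sketches how such a CKKS context is typically configured with TenSEAL. It is a minimal sketch: the polynomial modulus degree, coefficient modulus bit sizes, and global scale shown here are illustrative values within the parameter ranges studied later, not a recommended setting.

```python
import tenseal as ts

# Create a CKKS context: the polynomial modulus degree and the chain of
# coefficient modulus bit sizes bound the available noise budget and the
# multiplicative depth; the global scale sets the fixed-point precision.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],  # illustrative ~200-bit chain
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # required by rotation-based ops (dot, matmul)

# Encrypt two floating-point vectors and operate on them without decryption.
a = ts.ckks_vector(context, [0.123456, -0.654321, 0.5])
b = ts.ckks_vector(context, [0.111111, 0.222222, -0.3])
print((a + b).decrypt())  # element-wise addition
print((a * b).decrypt())  # element-wise multiplication (consumes one level)
```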
In our experiment with the wildfire dataset, CKKS was carefully optimized by balancing polynomial modulus degree and other parameters to achieve the desired efficiency and precision. This study might be the first to quantitatively explore the application of CKKS, presenting significant evidence of its feasibility in real-world scientific applications and reporting the trade-offs involved.
Nevertheless, limitations exist, including the CKKS scheme's inherent complexity, tuning requirements, and potential noise susceptibility in successive computations. These challenges might restrict its adaptability in some practical scenarios, emphasizing the need for continued research and development.
### SEAL and TenSEAL Libraries
The Microsoft Simple Encrypted Arithmetic Library (SEAL) and its tensor extension, TenSEAL, have revolutionized the field of Homomorphic Encryption (HE). SEAL provides a set of efficient tools for managing HE operations, effectively bridging the gap between theoretical cryptographic techniques and practical implementations. Its modular design, flexibility in choosing parameters, and ease of deployment have made it accessible to various domains (Khan et al., 2017). TenSEAL builds upon the foundation laid by SEAL, extending its capabilities to handle tensor operations securely. This means the mathematical manipulations commonly used in deep learning and data analysis can now be performed directly on encrypted data. The combination of SEAL and TenSEAL offers a rich environment for developing encrypted computation solutions, maintaining data privacy, and enabling secure collaboration among parties. TenSEAL's optimizations for tensor arithmetic mark a significant advancement in encrypted deep learning, giving researchers and engineers a valuable tool for privacy-preserving data analysis (Bauer et al., 2016), (Bauer et al., 2016).
### Graphics Processing Units in HE
Graphics Processing Units (GPUs) have become a crucial component in accelerating HE operations. Unlike traditional CPUs, GPUs are designed to handle parallel processing, distributing computation across multiple cores. This capability enables GPUs to tackle HE's computational complexity and facilitate real-time operations. Leveraging GPUs in HE has allowed for an immense acceleration of complex mathematical computations, including polynomial multiplications and fast Fourier transformations, vital components in HE. By significantly reducing computation times, GPUs have extended the range of practical applications for HE, making it more accessible for large-scale data processing. Future research and integration between GPUs and specific HE schemes like CKKS could lead to groundbreaking improvements in computational speed and efficiency (Khan et al., 2017).
### Floating-Point Analysis of the CKKS Scheme
The CKKS (Cheon-Kim-Kim-Song) scheme, introduced at Asiacrypt 2017, has become one of the most widely implemented approximate homomorphic encryption schemes. Its floating-point behavior has been a subject of extensive analysis, with researchers aiming to
understand how noise grows through computation, and how the scheme's precision and efficiency are affected (Krause et al., 2017).
A critical aspect of working with the CKKS scheme is ensuring that the evaluation output is within a tolerable error of the corresponding plaintext computation. This requires a nuanced understanding of the noise growth in both encoding and homomorphic operations. Comprehensive analyses, such as the average-case analysis and refinements to prior worst-case noise analyses, have led to heuristic estimates that closely model observed noise growth. However, the complexity and occasional underestimation of noise growth indicate a need for implementation-specific noise analyses (Krause et al., 2017).
Furthermore, research into high-precision bootstrapping and optimal minimax polynomial approximation has improved the message precision in the bootstrapping operation of the RNS-CKKS scheme (Krause et al., 2017). Advances such as the composite function method and the improved multi-interval Remez algorithm have reduced approximation errors, improving precision and expanding the utility of the RNS-CKKS scheme.
An insightful example of the CKKS scheme in practical use can be seen in convolutional neural networks (CNNs), where approximate activation functions over homomorphic encryption have been applied to increase the classification accuracy for inference processing (Krause et al., 2017). Such applications of the CKKS scheme exemplify how the understanding and refinement of precision in HE can lead to tangible improvements in various fields, including machine learning and data privacy.
By leveraging insights into the CKKS scheme, developers and researchers can explore the broad applications and benefits of Homomorphic Encryption (HE). From large-scale data analysis to secure and privacy-preserving computations, the ongoing development and refinement of these techniques promise to enhance both security and efficiency in processing encrypted data. In conclusion, the application of CKKS demonstrates a significant advancement in encrypted computation, emphasizing its broad potential and underscoring the need for continuous exploration of its limitations and opportunities for further optimization.
## 3. Methodology and Implementation
In an era of cloud computing where privacy and security are paramount, the utilization of Homomorphic Encryption (HE) in scientific data management and applications presents a compelling avenue for research. This study explores the feasibility of applying HE schemes to encrypted data computations while preserving the confidentiality and integrity of the data. In particular, the trade-off between encryption efficiency and decryption precision will be quantitatively studied to avoid the two obvious extremes: (i) A highly efficient HE scheme usually implies a low precision (i.e., only a small number of decimal digits are identical between a decrypted value and the original plain value), (ii) A highly precise HE scheme usually entails significant, if not unacceptable, computation time. We will exemplify our quantitative approach through two concrete workloads: multidimensional vector arithmetic and encrypted predictive modeling for wildfire detection.
### Data Collection and Preprocessing
The methodology in this study comprises two main components, both aiming to explore and assess the potential of HE in scientific applications:
**Multidimensional Arrays of Vectors.** This component emphasizes the ability of HE to handle complex arithmetic operations on encrypted multidimensional vectors. The experiments are designed to provide insights into how HE might be employed in scientific computations, enabling data processing without compromising privacy. The input random vectors are generated to form multidimensional matrices that mimic real-world scenarios.
**Wildfire Detection.** The exploration of HE's feasibility in real-world applications is exemplified through a wildfire dataset (Krause et al., 2017). The first step is data cleaning, which ensures data consistency and handles missing values. In the data integration stage, we merge various datasets into a unified representation. In the final file-handling stage, data are encrypted for further processing.
### Encrypted Data Analysis
**Feature Engineering.** The feature engineering process was executed on the wildfire dataset, focusing on both computational experiments and predictive modeling aspects. Essential features such as temperature, area, and vegetation indexes were extracted from the dataset. In the multicollinearity check stage, correlations among features were analyzed to maintain independence, especially for the implementation of the logistic regression model.
**Model Development.** A logistic regression model was adopted as the baseline. In the _EncryptedLR Class_, an encrypted version of logistic regression is developed using the CKKS scheme. The sigmoid activation function was approximated by
\[\text{sigmoid}(x)=0.5+0.197\cdot x-0.004\cdot x^{3}\]
The weight and bias of the model were encrypted using CKKS. The decryption function was applied to the CKKS ciphertext to retrieve the original values after computation. The encrypted computation ensures data privacy, a paramount requirement in modern applications (Krause et al., 2017). The models were trained and evaluated on both plain and encrypted data. The CKKS scheme's parameters were chosen to minimize noise and ensure accurate results (Krause et al., 2017) (more details in the Evaluation section). A comprehensive analysis was conducted on three arithmetic operations: \((+,-,*)\) for multidimensional vectors using the CKKS scheme. The operations were performed considering the CKKS properties, enabling complex computations while maintaining data privacy and integrity.
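A minimal sketch of the encrypted forward pass is given below; the class and attribute names are illustrative, but the degree-3 sigmoid approximation is the one stated above, and `dot`, `polyval`, and `decrypt` are standard TenSEAL CKKSVector operations (a context with Galois keys, as in the earlier snippet, is assumed).

```python
class EncryptedLR:
    """Sketch of logistic regression operating on TenSEAL CKKS ciphertexts."""

    def __init__(self, weight, bias):
        # Model parameters; encrypted as CKKS vectors per the description
        # above (plain lists also work with the same call pattern).
        self.weight = weight
        self.bias = bias

    @staticmethod
    def sigmoid(enc_x):
        # Encrypted degree-3 approximation: 0.5 + 0.197*x - 0.004*x^3.
        return enc_x.polyval([0.5, 0.197, 0, -0.004])

    def forward(self, enc_x):
        # Encrypted affine transform followed by the approximated activation.
        enc_out = enc_x.dot(self.weight) + self.bias
        return EncryptedLR.sigmoid(enc_out)
```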
### Implementation
The implementation part of the research consists of two main components. The first one involves the Homomorphic Encryption (HE) CKKS scheme on multidimensional vectors with decimal points, and the second focuses on the encrypted logistic regression model training. These implementations are essential in assessing the effectiveness of encrypted computations.
#### 3.3.1. He CKKS Scheme on Multidimensional Vectors of Decimal Points
Algorithm 1 provides a detailed outline of the CKKS scheme applied to multidimensional vectors. The main procedure orchestrates a series of experiments with different parameters \(n,d,p\), where \(n\) is the number of dimensions, \(d\) is the size of the vectors, and \(p\) is the decimal precision.
**Procedure RunExperiment:** is responsible for running an individual experiment, where the vectors are generated, encrypted, and arithmetic operations are performed. Depending on whether the fine-tuned configuration is selected, a different context configuration is applied.
**Procedure GenerateVectors:** produces random vectors \(\mathbf{A},\mathbf{B}\) of dimensions \(n\times d\) with \(p\) decimal places. These vectors are then encrypted, and arithmetic operations such as addition, subtraction, and multiplication are executed.
**Procedure RunArithmeticOperations:** performs the encrypted operations on vectors and computes the matching decimals, accuracy percentage, and accuracy loss. The results are finally written into a CSV file. The overall methodology integrates various aspects of encrypted computations, ranging from context configurations to the generation of results, offering a comprehensive understanding of encrypted vector arithmetic dynamics.
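One possible realization of the matching-decimals and accuracy-loss metrics used by RunArithmeticOperations is sketched below; the helper names and the digit-by-digit comparison strategy are illustrative choices, not the exact implementation.

```python
import numpy as np

def matching_decimals(plain, decrypted, max_digits=12):
    """Per-element count of leading decimal digits on which the plain and
    decrypted results agree; returns the average, minimum, and maximum."""
    counts = []
    for p, d in zip(np.ravel(plain), np.ravel(decrypted)):
        n = 0
        while n < max_digits and round(p, n + 1) == round(d, n + 1):
            n += 1
        counts.append(n)
    return float(np.mean(counts)), int(np.min(counts)), int(np.max(counts))

def accuracy_loss(plain, decrypted):
    """Euclidean distance between the plain and decrypted result vectors."""
    return float(np.linalg.norm(np.ravel(plain) - np.ravel(decrypted)))
```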
#### 3.3.2. Encrypted Logistic Regression Model Training
Algorithm 2 describes the processes for encrypted logistic regression model training. The procedures encapsulate the initialization, forward pass, backward pass, and parameter updates for an encrypted logistic regression model.
**Procedure InitializeLR:** initiates the logistic regression model by randomly assigning values to the weights and setting the bias to zero.
**Procedure Forward:** represents the forward pass, applying the sigmoid activation function to obtain the prediction for the given input vector.
**Procedure InitializeEncryptedLR:** initializes the encrypted logistic regression model and gradient accumulators for weight and bias updates.
**Procedure EncryptedForward:** conducts the encrypted forward pass, while _Procedure EncryptedBackward_ computes the encrypted backward pass, determining the gradient updates for weights and bias.
**Procedure UpdateParameters:** updates the model parameters based on the computed gradients and resets the gradient accumulators for the next iteration. The methodology employed in this algorithm integrates encrypted computations into the training of a logistic regression model, advancing state-of-the-art techniques in secure data processing and model training.
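The gradient accumulation and update steps of Algorithm 2 can be sketched as follows; the function names, the learning rate, and the decision to fold the sigmoid derivative into the (prediction − label) term are illustrative assumptions, with the arguments being TenSEAL CKKS vectors.

```python
def encrypted_backward(enc_x, enc_out, enc_y, delta_w, delta_b):
    """Accumulate encrypted gradient contributions for one training sample."""
    out_minus_y = enc_out - enc_y          # encrypted prediction error
    delta_w = delta_w + enc_x * out_minus_y
    delta_b = delta_b + out_minus_y
    return delta_w, delta_b

def update_parameters(weight, bias, delta_w, delta_b, count, lr=1.0):
    """Apply the averaged encrypted gradients; the caller then resets the
    gradient accumulators and the sample count for the next iteration."""
    weight = weight - delta_w * (lr / count)
    bias = bias - delta_b * (lr / count)
    return weight, bias
```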
The methodology outlined in this study offers a methodical investigation into the utilization of Homomorphic Encryption (HE) within scientific data management and applications. Through a dual approach, focusing on the arithmetic of multidimensional vectors and the modeling of real-world data, the research illuminates the functional applications of HE in contemporary scientific contexts. Notably, the work emphasizes both the assurance of data privacy and the pioneering possibilities for the secure manipulation of sensitive information. By detailing explicit algorithms, this methodology furnishes a foundational framework, demonstrating the practicality and integrity of HE for encrypted data processing and predictive modeling. This serves as a valuable reference for subsequent investigations in this domain.
```
procedure Main
    for (n, d, p) in experiments do
        RunExperiment(n, d, p)
    end for
end procedure

procedure RunExperiment(n, d, p)
    for i = 0 to 1 do
        if fineTuned then
            Configure context with specific parameters
        else
            Configure context with different parameters
        end if
        (A, B) <- GenerateVectors(n, d, p)
        enc_A, enc_B <- Encrypt(A), Encrypt(B)
        RunArithmeticOperations(A, B, enc_A, enc_B)
    end for
end procedure

procedure RunArithmeticOperations(A, B, enc_A, enc_B)
    for op in {+, -, *} do
        plainResult, encResult <- perform operation op on (A, B) and (enc_A, enc_B)
        Compute average, minimum, and maximum matching decimals
        Compute accuracy percentage
        Compute accuracy loss
        Write results to CSV
    end for
end procedure

procedure GenerateVectors(n, d, p)
    Generate random vectors A, B in R^(n x d) with p decimal places
    return A, B
end procedure

procedure CalculateAccuracyLoss(N, T)
    Compute the accuracy loss between the NumPy vectors N and the TenSEAL vectors T
    return accuracy loss
end procedure
```
**Algorithm 1** HE CKKS Scheme on Multidimensional Vectors of Decimal Points
## 4. Evaluation
### Experimental Setup
The experimental environment consists of a Cloudlab (Khan et al., 2017) node with the following specifications: 32 nodes (Intel Skylake, 20 cores, 2 disks, GPU), two Intel Xeon CPUs, 192GB RAM, dual disks (1 TB SAS HD and 480 GB SATA SSD), NVIDIA 12GB PCI P100 GPU, dual NICs, running on Ubuntu 22.04 OS. In the initial experiments, the data are generated with different dimensions, sizes, and decimal places, facilitating the study of these parameters' impact on encrypted computation. The dataset encompasses various combinations of these parameters, yielding multiple scenarios of addition, subtraction, and multiplication operations. In the second experiment, the data consist of a wildfire dataset. The data used in this research consist of historical wildfires, weather conditions, vegetation indexes, and weather forecasts. These datasets were collected and categorized into different regions and then subjected to preprocessing steps to handle missing values and properly format dates. Examples of the datasets include:
- Historical Wildfires: Records of past wildfire occurrences, including the date and region.
- Weather Data: Information about weather conditions such as temperature, area, min, max, mean, and variance.
- Vegetation Index: Historical vegetation index information.
- Weather Forecasts: Forecasts of weather conditions for specific regions and times.
### Evaluation Results
The evaluation was systematically carried out through two sequential experiments, encompassing a diverse array of multidimensional matrix vectors with CKKS.
#### 4.2.1. Experiment 1: Multidimensional Vector Operations
This experiment was meticulously designed to perform a comprehensive analysis involving multidimensional matrix vectors:
**Single Dimension-Decimal Points Comparison** In this phase, arithmetic operations were executed on single-dimensional vectors of size 10. For each parameter, we choose different values indicated in (Zhou et al., 2017) as a reasonable spectrum in practice. We fix the number of decimal places of the values in the original vectors (i.e., the plaintext) and randomly generate floating-point values from (-1, 1) to represent normalized scientific data. We present the results for three vector operations, namely addition, subtraction, and multiplication. To provide a clear estimate of the results, we use the averaged matching decimal points (i.e., averaged across the 10 elements in the vector) to indicate how accurate the results are with and without the CKKS scheme (note: the integer parts are always identical).
Table 1 summarizes the matching decimals of the results for each type of vector operation with the CKKS scheme. The rows are ordered by the number of matching decimal points. Configurations with a larger global scale tend to yield a higher number of matching decimal points. In addition, for all three operators, the polynomial modulus degree of 8192 seems to consistently produce the highest number of matching decimal points.
**MultiDimensional Analysis**
Next, we conduct matrix-matrix multiplication with the CKKS scheme. The investigation was expanded to compare multidimensional vectors across a series of sizes. A thorough collection and analysis of results was conducted, culminating in Figure 1, where we pick the best parameter combination from Table 1. We compute the averaged mean square error and the Euclidean distance between the original resulting matrix and the matrix decrypted from the CKKS scheme. Figure 1 shows that the two matrices have marginal differences in terms of Euclidean distance and near-zero MSE across all matrix sizes considered. These evaluation results provide a promising outlook on applying CKKS to matrix multiplication, one of the dominant building blocks of scientific computation.
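A minimal sketch of this comparison is given below, assuming one possible realization of the matrix product in which each row of A is encrypted as a CKKS vector and multiplied with the plain matrix B via TenSEAL's `matmul` (a context with Galois keys, as in the earlier snippet, is assumed); the MSE and Euclidean-distance helpers mirror the metrics reported in Figure 1.

```python
import numpy as np
import tenseal as ts

def encrypted_matmul_error(context, A, B):
    """Compare the NumPy product A @ B against its encrypted counterpart."""
    baseline = A @ B
    rows = []
    for row in A:
        enc_row = ts.ckks_vector(context, row.tolist())
        rows.append(enc_row.matmul(B.tolist()).decrypt())  # enc-vector x plain-matrix
    decrypted = np.array(rows)
    mse = float(np.mean((baseline - decrypted) ** 2))
    euclidean = float(np.linalg.norm(baseline - decrypted))
    return mse, euclidean
```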
#### 4.2.2. Experiment 2: Wildfire Dataset
Informed by the results of Experiment 1, this experiment tests using the CKKS scheme with a wildfire dataset, consisting of historical wildfires, weather conditions, vegetation indexes, and weather forecasts to test its efficacy and to understand how the CKKS encryption scheme could improve security in real-world applications.
**Comparison of Model Accuracy** The comparison of different models, including Linear Regression, Normalized Linear Regression, and Decision Trees, both in their plain and encrypted forms, was performed using the Coefficient of Determination \(R^{2}\) as the metric. The results are shown in Figure 2. The baseline Linear Regression has an \(R^{2}\) value of 0.11, while the encrypted version is slightly lower at 0.10. Normalized Linear Regression exhibits a higher value of 0.31 for the baseline and 0.30 for the encrypted version. The Decision
Tree has the highest baseline value of 0.40, with the encrypted version slightly lower at 0.38.
These results indicate that employing the CKKS encryption scheme allows for computations on encrypted data without a significant loss in accuracy. Different models show minor variations in \(R^{2}\) when encrypted, but none exhibit a substantial drop, supporting the notion that a variety of models can be used with encrypted data without a major loss in accuracy.
This experiment demonstrates the feasibility of applying CKKS encryption to machine learning models, enabling privacy-preserving
| Arithmetic Operation | Coefficient Bit Size | Modulus Degree | Global Scale | Averaged Matching Decimal Points |
| --- | --- | --- | --- | --- |
| * | 54 | 2048 | 2^16 | 0.0 |
| * | 75 | 4096 | 2^20 | 0.0 |
| * | 90 | 4096 | 2^25 | 0.0 |
| * | 122 | 8192 | 2^21 | 0.0 |
| * | 206 | 8192 | 2^40 | 0.25 |
| * | 206 | 8192 | 2^21 | 0.375 |
| * | 300 | 32768 | 2^40 | 5.0 |
| * | 200 | 16384 | 2^40 | 6.0 |
| * | 160 | 8192 | 2^40 | 6.5 |
| * | 200 | 8192 | 2^40 | 6.5 |
| * | 54 | 2048 | 2^16 | 0.0 |
| + | 75 | 4096 | 2^20 | 2.0 |
| + | 206 | 8192 | 2^21 | 2.438 |
| + | 122 | 8192 | 2^21 | 3.0 |
| + | 90 | 4096 | 2^25 | 4.0 |
| + | 300 | 32768 | 2^40 | 4.0 |
| + | 206 | 8192 | 2^40 | 4.25 |
| + | 200 | 16384 | 2^40 | 5.0 |
| + | 160 | 8192 | 2^40 | 5.438 |
| - | 122 | 8192 | 2^21 | 1.0 |
| - | 75 | 4096 | 2^20 | 2.0 |
| - | 206 | 8192 | 2^21 | 2.75 |
| - | 90 | 4096 | 2^25 | 4.0 |
| - | 300 | 32768 | 2^40 | 4.0 |
| - | 200 | 16384 | 2^40 | 4.0 |
| - | 160 | 8192 | 2^40 | 4.875 |
| - | 200 | 8192 | 2^40 | 5.167 |
| - | 206 | 8192 | 2^40 | 6.0 |

Table 1. Comparison of Accuracy among different trends of CKKS parameters
Figure 1. Mean Square Error and Euclidean Distance
computation in sensitive areas like healthcare or finance. The slight decrease in \(R^{2}\) for encrypted models may warrant further investigation but does not overshadow the potential benefits of using encrypted data.
#### 4.2.3. Accuracy Comparison Over Epochs
The accuracy rate over different epochs was analyzed, comparing Encrypted Linear Regression, Linear Regression, and Decision Tree models, as depicted in Figure 3. The accuracy of the Encrypted Linear Regression model starts at around 51.6% and experiences a slight drop before a drastic increase to approximately 94.9% in the fourth epoch, stabilizing thereafter. This behavior may signify the model learning crucial features during the fourth epoch, leading to a substantial accuracy improvement and suggesting convergence.
In contrast, the plain Linear Regression model's accuracy shows a steady increase from 60% to 80% across five epochs, indicating a well-behaved and continuous convergence pattern.
The Decision Tree model also converges steadily but more slowly, with accuracy increasing from 50% to 70%. This more gradual increase may suggest that linear models are better suited for this particular dataset or problem.
**Impacts of Encryption:** Comparing the plain and Encrypted Linear Regression reveals surprising dynamics. The encrypted model's convergence pattern is more erratic, with a sudden jump in accuracy in the fourth epoch, while the plain model converges more steadily. The noise or characteristics introduced by encryption may create these dynamics, but the final higher accuracy of the encrypted model is promising.
The experiment uncovers insights into the convergence behavior of different models in plain and encrypted forms. The erratic behavior of the Encrypted Linear Regression may warrant further investigation, but the result emphasizes the feasibility of using encrypted computation in machine learning. The experiment underscores the potential to achieve competitive performance with encrypted data without significant compromises on accuracy, enhancing security and privacy.
#### 4.2.4. Overhead Comparison and Optimization Insights
Detailed insights into the overhead of plain baseline experiments versus encrypted experiments were extracted. The evaluation revealed that while the plain Linear Regression experiment may take less than 10 minutes, encrypted Linear Regression operations and subsequent decryption can require 45 minutes of training.
We observe that encrypting only the independent variables \(X,Y\) significantly reduced the time overhead. The typical time for this experiment was reduced to 45 minutes on our node, as illustrated in Figure 4. The evaluation of encrypted computation both on multidimensional vectors and real-world datasets, such as the wildfire dataset, has been a comprehensive and insightful undertaking. By embracing the complexity inherent in encrypted computations, significant insights were derived, demonstrating the efficacy of this approach in secure data processing. The exploration of the wildfire dataset went beyond theory to emphasize the applicability of the techniques and findings across various scenarios. This not only underlines the robustness of the methodologies but also opens up promising avenues for future research and applications in the field of Super Computing. Also, the use of GPU acceleration could further enhance the findings of this study. By leveraging the parallel processing capabilities of GPUs, the computational time for encrypted models like Tenseal Encrypted Linear Regression can be significantly reduced. This improvement aligns with the real-world need for data privacy without sacrificing efficiency, presenting a promising avenue for advancing encrypted computation paradigms.
Figure 4. Overhead Comparison and Optimization Insights
Figure 3. Accuracy Comparison Over Epochs
Figure 2. Comparison of Model Accuracy
## 5. Challenges and Insights
**Noise Budget in CKKS.** A significant challenge encountered in our study was the computation of the noise budget in the CKKS (Cheon-Kim-Kim-Song) scheme. While the noise budget is a critical parameter to understand the available room for computations before decryption errors occur, TenSEAL, the library used, does not provide a built-in method to compute it. The lack of this functionality posed limitations on our ability to gauge and control the noise during the experiment, potentially influencing the accuracy of the results.
**Operational Characteristics.** Our findings reveal distinct characteristics for different arithmetic operations. A compromise had to be made to ensure accurate operations at the expense of time and computation cost. The nature of the CKKS scheme made the implementation of division infeasible without compromising encryption.
**Underlying Mechanisms.** In CKKS, real numbers are encoded into polynomials, with the input plaintext vector represented through evaluations of the polynomial at complex roots of unity. The encryption of the polynomial into two new polynomials, further incorporating randomized elements, adds more noise. The polynomial operations of addition and multiplication introduce varying degrees of noise, with multiplication being particularly noisy. These inherent characteristics of the CKKS scheme shape the experiment's outcomes and present both challenges and opportunities for optimization.
Future work might focus on developing methods to compute noise budgets in TenSEAL, optimizing the handling of multiplication, and exploring feasible ways to implement division. This exploration contributes to the understanding of the interplay between encryption, arithmetic operations, and the constraints of existing cryptographic tools, illuminating paths for further research and development.
## 6. Conclusion
We conduct an extensive examination of encrypted computations through two meticulously designed experiments. We test practical applications of scientific research within the context of the CKKS scheme. By applying CKKS to various multi-dimensional vector operations, we provide insights to the community on whether CKKS can be useful in the scientific domain, and how much accuracy loss one might expect. Our research sets a valuable first step for ongoing exploration and innovation in Super Computing, aligning with the contemporary emphasis on data privacy and computational efficiency. The insights gained lay a robust foundation for future endeavors, including the exciting possibility of GPU acceleration in CKKS computations within TenSEAL, potentially influencing broader technological advances.
|
2309.16389 | A Universal Framework for Holographic MIMO Sensing | This paper addresses the sensing space identification of arbitrarily shaped
continuous antennas. In the context of holographic multiple-input
multiple-output (MIMO), a.k.a. large intelligent surfaces, these antennas offer
benefits such as super-directivity and near-field operability. The sensing
space reveals two key aspects: (a) its dimension specifies the maximally
achievable spatial degrees of freedom (DoFs), and (b) the finite basis spanning
this space accurately describes the sampled field. Earlier studies focus on
specific geometries, bringing forth the need for extendable analysis to
real-world conformal antennas. Thus, we introduce a universal framework to
determine the antenna sensing space, regardless of its shape. The findings
underscore both spatial and spectral concentration of sampled fields to define
a generic eigenvalue problem of Slepian concentration. Results show that this
approach precisely estimates the DoFs of well-known geometries, and verify its
flexible extension to conformal antennas. | Charles Vanwynsberghe, Jiguang He, Mérouane Debbah | 2023-09-28T12:37:37Z | http://arxiv.org/abs/2309.16389v1 | # A Universal Framework for
###### Abstract
This paper addresses the sensing space identification of arbitrarily shaped continuous antennas. In the context of holographic multiple-input multiple-output (MIMO), a.k.a. large intelligent surfaces, these antennas offer benefits such as super-directivity and near-field operability. The sensing space reveals two key aspects: (a) its dimension specifies the maximally achievable spatial degrees of freedom (DoFs), and (b) the finite basis spanning this space accurately describes the sampled field. Earlier studies focus on specific geometries, bringing forth the need for extendable analysis to real-world conformal antennas. Thus, we introduce a universal framework to determine the antenna sensing space, regardless of its shape. The findings underscore both spatial and spectral concentration of sampled fields to define a generic eigenvalue problem of Slepian concentration. Results show that this approach precisely estimates the DoFs of well-known geometries, and verify its flexible extension to conformal antennas.
Holographic MIMO, large intelligent surface, degrees of freedom, Helmholtz equation, plane waves, conformal antennas, Slepian functions
## I Introduction
The core principle of holographic multiple-input multiple-output (MIMO) is based on the concept of packing elements more densely than what the Nyquist criterion requires to build an antenna. Also referred to as large intelligent surface (LIS) [1], such an antenna is ultimately thought of as a continuous object when element packing is extremely dense, resulting in infinitesimal inter-element spacing. Higher directivity, signal-to-noise ratio, and near-field communication performance are expected from the deployment of LISs [2].
Holographic MIMO communication is fundamentally characterized by the continuous analogue of the singular value decomposition of the channel propagation model between a pair of transmitting and receiving LISs [3]. Two important aspects stem from it. First, the highest number of independent data streams between the two LISs, named _spatial degrees of freedom_ (DoFs), dictates the maximal multiplexing gain of MIMO communications. The DoFs correspond to the number of non-degenerate singular values of the decomposition [1]. Despite dense packing, physical-based limitation exists as the DoFs do not simply scale with the number of elements in the LIS. Second, MIMO communication performance is optimal when the transmitted (resp. received) data streams are multiplied by the right (resp. left) singular functions1 of the decomposition.
Footnote 1: With continuous analogue of the singular value decomposition [4], we refer to (left and right) singular _functions_ rather than singular _vectors_, for the well-justified reason that the resulting basis is infinite-dimensional.
Performance analysis based on this singular value decomposition has a caveat, as it evidences the performance of the modeled channel only. In this paper, we rather focus on the _intrinsic_ performance of the receiving antenna, irrespective of the propagation channel that could be involved. Earlier works based on sampling theory follow that logic, to extract the maximal reachable DoFs of linear, square and cubic LISs [5, 6]. They also show how the proper choice of functions from the Fourier basis offers a fair representation of the sensing space for these particular geometries, when the aperture is large with respect to the wavelength. Similar works focus on the ball or disk geometries [7, 8]. However, they cannot be applied universally to LISs of arbitrary geometry, whereas real-world applications may necessitate the use of a conformal antenna that fits the shape of its supporting structure [9].
In this paper, we propose a universal framework to identify the sensing space structure of a continuous LIS. By _universal_ we mean that:
1. the LIS shape is arbitrary, and
2. the identified sensing space is valid provided that the propagation medium is homogeneous and isotropic in a star-shaped volume that contains the LIS, but the propagation conditions are arbitrary outside that volume.
We focus on deterministic scalar electric fields. Starting from the homogeneous Helmholtz equation, the approach is founded on the fact that any Helmholtz solution can be described by an equivalent decomposition into plane waves [10, 11]. This plays a central role in this paper: it implies that sensed electric fields have a spectral support, in the wavevector domain, limited to a sphere with a radius equal to the wavenumber. Considering also that the sampling operation by the LIS is also volume-limited in the spatial domain, we show how these properties can be reformulated into an eigenvalue problem of Slepian concentration in integral form [12, 13]. Numerical simulations of this paper show that
* by extension from the one-dimensional case [14], the non-degenerate eigenvalues of this problem also provide the maximal DoFs that can be achieved,
* the space spanned by their related eigenfunctions (also named Slepian functions2) offers an accurate approximation of the sampled signals. Footnote 2: Slepian functions are also referred to as prolate spheroidal functions; in this paper we opt for the former name.
The rest of the paper is organized as follows. Sec. II describes the scalar electric field by its propagation model, and its general approximation by plane waves decomposition from the literature. Sec. III exposes the eigenvalue problem of Slepian concentration to identify the structure of the universal sensing space. Finally, sec. IV provides a numerical analysis: first we verify that the numerical DoFs and sensing space corroborate with well-known cases from [5, 8], and then the asset of the proposed approach is illustrated with a paraboloid conformal antenna.
## II Propagation Model and Its Approximated Solutions
### _Propagation of Electric Field in Homogeneous Media_
We consider the propagation of the scalar electric field \(u\) in a bounded domain \(\Omega\) of the three-dimensional space. The waves propagate at the speed \(c\) in this domain, assumed to be homogeneous and isotropic. The propagation is described at time \(t\) and space \(\mathbf{r}=[r_{1},r_{2},r_{3}]\in\Omega\) by the wave equation
\[\nabla^{2}\,u(\mathbf{r},t)-\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2} }u(\mathbf{r},t)=0\,, \tag{1}\]
where \(\nabla^{2}=\sum_{n=1}^{3}\frac{\partial^{2}}{\partial r_{n}^{2}}\) is the Laplacian operator. The wave equation remains general for fields with arbitrary frequency content, but can be rewritten for the case of a narrowband field at frequency \(\omega\) in radians/second by considering harmonic solutions of the form \(u(\mathbf{r},t)=u(\mathbf{r})e^{\jmath\omega t}\). It leads to the Helmholtz equation
\[\nabla^{2}u(\mathbf{r})+\kappa^{2}u(\mathbf{r})=0\,, \tag{2}\]
where \(\kappa=2\pi/\beta\) is the wavenumber, and \(\beta=2\pi c/\omega\) is the wavelength.
A general solution of the homogeneous equation is, up to a constant,
\[u(\mathbf{r})=e^{\jmath\kappa\,\mathbf{d}(\mathbf{\Theta})\cdot\mathbf{r}} \tag{3}\]
with \(\mathbf{\Theta}=[\theta,\phi]^{\mathsf{T}}\) being the angles from the polar coordinate system, and \(\mathbf{d}(\mathbf{\Theta})=[\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta] ^{\mathsf{T}}\) the unit-norm vector pointing towards \(\mathbf{\Theta}\). Solution (3) describes a plane wave propagating towards \(\mathbf{d}(\mathbf{\Theta})\). By extension, any function derived from a continuous expansion of plane waves from the sphere also satisfies the Eq. (2). Such functions, known as _Herglotz wave functions_, they are of the form:
\[u(\mathbf{r})=\mathcal{H}\sigma=\iint_{4\pi}\sigma(\mathbf{\Theta})e^{\jmath\kappa\,\mathbf{d}(\mathbf{\Theta})\cdot\mathbf{r}}\,\mathrm{d}\mathbf{\Theta} \tag{4}\]
with \(\mathrm{d}\mathbf{\Theta}=\sin(\theta)\mathrm{d}\theta\mathrm{d}\phi\) the differential solid angle, and \(\sigma(\mathbf{\Theta})\) the wave density. The function \(\mathcal{H}\) is defined when \(\sigma(\mathbf{\Theta})\) is square integrable on the sphere, _i.e._ when the wave density has finite energy.
### _Approximation of Helmholtz Solutions by Herglotz Wave Functions_
As Herglotz wave functions form a subset of the solutions to the differential equation (2), a natural question arises: _can any solution be approximated by a Herglotz function with a corresponding wave density \(\sigma(\mathbf{\Theta})\)?_ Earlier studies claim so by invoking the density property of polynomial functions in Sobolev spaces [11, 15], provided that the domain \(\Omega\subset\mathbb{R}^{3}\) is star-shaped [16]. Although functional analysis goes beyond the scope of this paper, some details are provided below to grasp the relation between Eq. (2) and the function \(\mathcal{H}\). Let the \(H_{1}\) norm of \(u\) be defined on \(\Omega\) as:
\[\|u\|_{H_{1}}^{2}=\iiint_{\Omega}|u(\mathbf{r})|^{2}+\sum_{n=1}^{3}\left|\frac {\partial u(\mathbf{r})}{\partial\mathbf{r}_{n}}\right|^{2}\mathrm{d}\mathbf{r}\,. \tag{5}\]
The virtue of such a norm is to establish a metric between two functions in terms of both their scalar values and their gradient values. Colton _et al._ develop the following theorem.
**Theorem 1** (Th. 2.3 [11]).: _Assume \(\sigma\) to be square integrable on the sphere. Then, the set of Herglotz wave functions is dense in the space of solutions to the Helmholtz equation in the volume \(\Omega\), with respect to the \(H_{1}\) norm._
In other words, for any given solution \(u(\mathbf{r})\) with \(\mathbf{r}\in\Omega\), there exists a wave density \(\sigma\) such that the metric \(\|u-\mathcal{H}\sigma\|_{H_{1}}^{2}\) vanishes. The approximation holds for both the electric field and its gradient, although the term \(u\) usually plays the most important role in the context of wireless communication and array processing, as we mostly rely on measurements of \(u\).3
Footnote 3: We assume that the field is measured by conventional scalar sensors other than a vector-sensor array [17].
_Remark_. For sake of simplicity, the paper focuses on scalar electric fields. However, Colton _et al._ extended **Theorem 1** by starting from the system of Maxwell equations [18], and showed that the vectorized form of Herglotz functions approximate electric vector fields. Without loss of generality, the presented results can be generalized to encompass wireless communication scenarios including polarization.
**Theorem 1** is a fundamental milestone to represent any scalar field as an "infinite" sum of plane waves. Nevertheless, the abstract concept of set density does not indicate anything about the approximation error in practice. On the other hand, Moiola _et al._[10] successfully derive an upper bound of this error, even when the number of plane waves is finite. If \(P\) vectors \(\mathbf{d}\) are regularly distributed over the unit sphere, they show that a set of complex coefficients \(\alpha_{1},\alpha_{2},\dots,\alpha_{P}\) exist such that
\[u(\mathbf{r})\approx\sum_{n=1}^{P}\alpha_{n}e^{\jmath\kappa\,\mathbf{r}\cdot\mathbf{d}(\mathbf{\Theta}_{n})}, \tag{6}\]
provided that \(P\) is sufficiently large, and \(\Omega\) is star-shaped. In the three-dimensional case, the approximation error decays exponentially with respect to \(P\)[10, Corollary 5.5]. This statement is stronger than **Theorem 1**, as the bound guarantees that the error vanishes rapidly.
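As a worked illustration of Eq. (6), the sketch below fits the coefficients \(\alpha_{n}\) by least squares for \(P\) directions spread over the sphere; the Fibonacci-lattice direction sampling and the least-squares fit are illustrative choices, not the construction analyzed in [10].

```python
import numpy as np

def plane_wave_directions(P):
    """Quasi-uniformly distributed unit vectors d(Theta_n) (Fibonacci lattice)."""
    i = np.arange(P) + 0.5
    theta = np.arccos(1.0 - 2.0 * i / P)
    phi = np.pi * (1.0 + 5.0 ** 0.5) * i
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=1)

def fit_plane_waves(u_samples, r_samples, kappa, P):
    """Least-squares alpha_n such that u(r) ~ sum_n alpha_n exp(j*kappa*r.d_n)
    at the given sample points r_samples (shape (M, 3))."""
    D = plane_wave_directions(P)                 # (P, 3)
    E = np.exp(1j * kappa * r_samples @ D.T)     # (M, P) design matrix
    alpha, *_ = np.linalg.lstsq(E, u_samples, rcond=None)
    return alpha, D
```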
### _Take-Home Message_
Approximations in Eqs. (4) and (6) provide a generalized way to describe the electric field in a homogeneous and isotropic volume. Unlike the reported works from [19, 20, 21]
dealing with inverse problems, we do not focus on the values that \(\sigma(\mathbf{\Theta})\)'s or \(\alpha_{n}\)'s take to reconstruct the field. In the current context, the central role of these approximations is that the set of plane waves \(\left\{e^{\jmath\kappa\,\mathbf{d}(\mathbf{\Theta})\cdot\mathbf{r}}\,|\,\mathbf{\Theta}\in[0,\pi]\times[0,2\pi],\mathbf{r}\in\Omega\right\}\) spans the space containing all scalar electric fields in \(\Omega\).
Suppose then that a LIS is contained inside \(\Omega\), with no additional restriction on its geometry - see Fig. 1. Then the sensing space in which its noiseless measurements lie also falls into the space spanned by plane waves. Interestingly, the required conditions do not impose any restriction _outside_\(\Omega\). It offers a high flexibility in the possible scenarios to consider: the presence of non-linearities or objects leading to reflection, diffraction, refraction or scattering outside \(\Omega\) does not alter the sensing space structure of the LIS located inside \(\Omega\). Based on these elements, the next section introduces a general approach to obtain the structure of the sensing space of a LIS of finite aperture.
## III Universal Subspace of Deterministic Fields
From a signal processing perspective, this section shows how to identify the space that electrical fields sensed by a given LIS occupy, the LIS volume being defined by \(\mathcal{M}\subset\Omega\) - cf Fig. 1. To do so, the approach consists in finding a set of orthogonal functions \(\psi_{1},\psi_{2},\ldots,\psi_{N}\) such that \(u(\mathbf{r})\) can be regarded as a linear combination of them for \(\mathbf{r}\in\mathcal{M}\), that is:
\[u(\mathbf{r})\approx\sum_{i=1}^{N}\gamma_{i}\psi_{i}(\mathbf{r}),\ \mathbf{r}\in \mathcal{M} \tag{7}\]
with \(\gamma_{i}=\iiint_{\mathcal{M}}u(\mathbf{r})\psi_{i}^{*}(\mathbf{r})\mathrm{d}\mathbf{r}\) complex coefficients. Two properties of the sampled field are highlighted and leveraged in the following.
First, Sec. II reveals the spectral structure of \(u\) in the wavevector domain. With \(\mathbf{k}\in\mathbb{R}^{3}\) being the wavevector, and \(\hat{u}\) being the spectrum of \(u\), the inverse Fourier transform (IFT) \(\mathcal{F}^{-1}\) in the volume
\[u(\mathbf{r})=\mathcal{F}^{-1}\hat{u}=\frac{1}{(2\pi)^{3}}\iiint_{\mathbb{R} ^{3}}\hat{u}(\mathbf{k})e^{j\mathbf{k}\cdot\mathbf{r}}d\mathbf{k} \tag{8}\]
has a direct relation with the Herglotz wave function. Indeed, by changing the integral of Eq. (8) from Cartesian to spherical coordinate system one can evidence that
\[\mathcal{H}\!\!\left[\sigma(\mathbf{\Theta})\right]=\frac{(2\pi)^{3}}{\kappa^{ 2}}\mathcal{F}^{-1}\{\delta(\|\mathbf{k}\|-\kappa)\sigma(\mathbf{\Theta})\} \tag{9}\]
with \(\delta\) being the Delta Dirac function. Therefore, fields are bandlimited in the wavevector domain on the sphere \(\mathcal{S}\) of radius \(\kappa\):
\[\mathcal{S}=\{\mathbf{k}\in\mathbb{R}^{3},\|\mathbf{k}\|=\kappa\}. \tag{10}\]
Second, the sampling operation in the volume \(\mathcal{M}\) can be regarded as capturing a signal whose value equals \(u(\mathbf{r})\) for \(\mathbf{r}\in\mathcal{M}\), and zero elsewhere. The signal is thus also limited in the spatial domain.
### _Slepian Functions of Sampled Fields in the Volume_
The representation of signals that are limited in both spatial and wavenumber domains belongs to a class of problems that were addressed first by Slepian, Landau and Pollak [14] for one-dimensional signals. Here, we leverage the approach in the Cartesian volume to derive a basis of functions \(\psi_{i}\)'s - also named Slepian functions.
Slepian's concentration problem can be formulated as follows: we search the function \(\psi\) which maximizes the concentration of its spectrum \(\hat{\psi}\) in \(\mathcal{S}\), under the constraint that its support is limited in \(\mathcal{M}\):
\[\lambda=\max_{\psi}\ \frac{\iint_{\mathcal{S}}|\hat{\psi}(\mathbf{k})|^{2}\,\mathrm{d}\mathbf{k}}{\iiint_{\mathbb{R}^{3}}|\hat{\psi}(\mathbf{k})|^{2}\,\mathrm{d}\mathbf{k}}\quad\text{subject to}\quad\psi(\mathbf{r})=0\ \text{for}\ \mathbf{r}\notin\mathcal{M}. \tag{11}\]
The maximizers of (11) are the solutions of the eigenvalue problem in integral form
\[\iiint_{\mathcal{M}}K(\mathbf{r},\mathbf{r}^{\prime})\,\psi(\mathbf{r}^{\prime})\,\mathrm{d}\mathbf{r}^{\prime}=\lambda\,\psi(\mathbf{r}),\qquad\mathbf{r}\in\mathcal{M}, \tag{12}\]
whose kernel is the IFT of the indicator function of \(\mathcal{S}\), expressed in spherical coordinates as
\[K(\mathbf{r},\mathbf{r}^{\prime})=\frac{1}{(2\pi)^{3}}\iint_{\mathcal{S}}\mathrm{e}^{\jmath\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}^{\prime})}\,\mathrm{d}\mathbf{k} \tag{13}\]
\[=\frac{\kappa^{2}}{(2\pi)^{3}}\iint_{4\pi}\mathrm{e}^{\jmath\kappa\mathbf{d}(\mathbf{\Theta})\cdot(\mathbf{r}-\mathbf{r}^{\prime})}\,\mathrm{d}\mathbf{\Theta}. \tag{14}\]
To evaluate the remaining surface integral,
we simplify the integral by rotating the coordinate system such that \(\mathbf{r}-\mathbf{r}^{\prime}=[0,0,\|\mathbf{r}-\mathbf{r}^{\prime}\|]\), which yields:
\[\begin{split}\iint_{4\pi}\mathrm{e}^{\jmath\kappa\mathbf{d}(\mathbf{\Theta})\cdot(\mathbf{r}-\mathbf{r}^{\prime})}\,\mathrm{d}\mathbf{\Theta}&=\int_{0}^{2\pi}\mathrm{d}\phi\int_{0}^{\pi}\mathrm{e}^{\jmath\kappa\|\mathbf{r}-\mathbf{r}^{\prime}\|\cos\theta}\sin(\theta)\,\mathrm{d}\theta\\ &=2\pi\left[-\frac{\mathrm{e}^{\jmath\kappa\|\mathbf{r}-\mathbf{r}^{\prime}\|\cos\theta}}{\jmath\kappa\|\mathbf{r}-\mathbf{r}^{\prime}\|}\right]_{0}^{\pi}\\ &=2\pi\cdot 2\,\frac{\sin(\kappa\|\mathbf{r}-\mathbf{r}^{\prime}\|)}{\kappa\|\mathbf{r}-\mathbf{r}^{\prime}\|}.\end{split} \tag{15}\]
Combining the integral terms together finally gives
\[\begin{split} K(\mathbf{r},\mathbf{r}^{\prime})&=( 2\pi)^{-3}\kappa^{2}4\pi\operatorname{sinc}(\kappa\|\mathbf{r}-\mathbf{r}^{ \prime}\|)\\ &=\frac{2}{\beta^{2}}\operatorname{sinc}(\kappa\|\mathbf{r}- \mathbf{r}^{\prime}\|).\end{split} \tag{16}\]
where \(\operatorname{sinc}(x)=\sin(x)/x\).
### _Connection with Bandlimited Signals on the Line, and Degrees of Freedom_
With the kernel (16), Eq. (12) describes a spatial-domain convolutional integral. Interestingly, this problem for three-dimensional scalar fields is closely related to the one for one-dimensional signals \(g(t)\)'s that are bandlimited on the interval \([-W,W]\) and sampled on the segment \([-T/2,T/2]\), i.e., with \(\hat{g}\) the spectrum of \(g\),
\[g(t)=\frac{1}{2\pi}\int_{-W}^{W}\hat{g}(\omega)\,\mathrm{e}^{\jmath\omega t}\,\mathrm{d}\omega\text{ and }\hat{g}(\omega)=\int_{-\frac{T}{2}}^{\frac{T}{2}}g(t)\,\mathrm{e}^{-\jmath\omega t}\,\mathrm{d}t. \tag{17}\]
The related Fredholm integral becomes in that case the time-domain convolution4[14]:
Footnote 4: Eigenvalue is noted here \(\lambda^{\prime}\) to differentiate it from \(\lambda\) in Eq. (12).
\[\int_{-T/2}^{T/2}\frac{W}{\pi}\operatorname{sinc}(W(t-t^{\prime}))g(t^{ \prime})\mathrm{d}t^{\prime}=\lambda^{\prime}g(t), \tag{18}\]
the kernel being derived by the one-dimensional IFT of the segment of length \(2W\) centered at the origin. It turns out to be equal to the kernel (16) - up to a constant - when \(W=\kappa\). This comes from the general fact that the IFT of a ball in dimension \(n\) equals the IFT of the sphere with the same radius in dimension \(n+2\) [22] - still up to a constant.
Sorted in decreasing order, the eigenvalues \(\lambda_{i}^{\prime}\) from Eq. (18) are known to decay at a super-exponential rate [23] beyond the threshold \(i=\lfloor WT/\pi\rfloor\), revealing that signals \(g(t)\) roughly lie in a subspace of finite dimension. Eigenproblems (16) and (18) coincide when the field is sampled by a linear LIS. The straightforward consequence is that the signals roughly lie in a space of dimension \(\lfloor\kappa L/\pi\rfloor\), where \(L\) is the aperture of the linear LIS. In other words, the decomposition Eq. (7) holds when \(N\) equals - at least - this value.
In the context of holographic MIMO, the sensing space dimension indicates the maximal number of independent data streams that the LIS can receive, i.e. its maximal DoFs. By using different approaches, deriving the DoFs analytically remains possible as long as the geometry remains convenient for calculus [5, 7, 8]. However, the generalization is not possible for complex geometries, e.g. for conformal antennas. In that case, solving Eq. (12) becomes helpful to obtain the DoFs numerically, by identifying the index \(i\) from which \(\lambda_{i}\) decays rapidly.
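A minimal numerical sketch of this procedure for a linear LIS is given below: the kernel (16) is discretized on a uniform grid over \(\mathcal{M}\), the resulting symmetric eigenvalue problem is solved, and the eigenvalue decay can be compared against the threshold \(\lfloor\kappa L/\pi\rfloor\); the uniform grid and simple quadrature weighting are illustrative discretization choices.

```python
import numpy as np

def linear_lis_eigenvalues(L, beta, M=400):
    """Eigenvalues of the sinc concentration kernel (16) for a linear LIS of
    aperture L at wavelength beta, discretized on M quadrature points."""
    kappa = 2 * np.pi / beta
    r = np.linspace(0.0, L, M)
    w = np.full(M, L / M)                                  # quadrature weights
    diff = np.abs(r[:, None] - r[None, :])
    K = (2.0 / beta ** 2) * np.sinc(kappa * diff / np.pi)  # np.sinc(x)=sin(pi x)/(pi x)
    A = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]      # symmetrized operator
    return np.linalg.eigvalsh(A)[::-1]                     # decreasing order

lam = linear_lis_eigenvalues(L=5.0, beta=1.0)
print("floor(kappa*L/pi) =", int((2 * np.pi / 1.0) * 5.0 / np.pi))  # = 10
print("largest scaled eigenvalues:", np.round(lam[:12] / lam[0], 3))
```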
## IV Numerical Analysis of the Sensing Space Structure
### _Eigenvalue Analysis and Numerical DoFs_
This section provides numerical results for four LIS geometries characterized by an equivalent aperture \(L\). First, we verify that numerical DoFs coincide with their analytical counterparts, denoted by \(\mathrm{DoF}_{\mathrm{th}}\). To do so, we choose the linear (L-LIS), circular (C-LIS) and square (S-LIS) cases. Second, we demonstrate the flexibility of solving the Slepian problem with the more complex geometry of a conformal antenna. We choose the paraboloid of revolution (P-LIS), adapted to be mounted on an aircraft nose [9, chap. 3.3.1]. To the best of our knowledge, \(\mathrm{DoF}_{\mathrm{th}}\) is unknown for the P-LIS. Table I summarizes the geometries and the \(\mathrm{DoF}_{\mathrm{th}}\) of the four LISs.
The vector \(\mathbf{r}\) is parameterized with \(r_{1}\) only for the L-LIS, so the kernel \(K(\mathbf{r},\mathbf{r}^{\prime})\) - reduced to \(K(r_{1},r_{1}^{\prime})\) - can be regarded as a continuous analogue of a matrix with a real-valued row and column indexes \(r_{1}\) and \(r_{1}^{\prime}\)[4]. The same conclusion holds in the case of the C-LIS, with \(K\) indexed by angular parameters. Eigenvalue decomposition of continuous-like matrices are solved up to 15-digit accuracy with the toolbox _chebfun2_[4]. For the S-LIS (resp. P-LIS), the integral is discretized with a mesh of 4096 (resp. 4500) cells. The scaled eigenvalues \((\lambda_{i}/\lambda_{1})\)'s are plotted for different parameters \(\kappa L\) in Fig. 1(a). We define \(\mathrm{DoF}_{90\%}\) (resp. \(\mathrm{DoF}_{99\%}\)) as the minimal number of eigenvalues whose sum accounts for at least \(90\%\) (resp. \(99\%\)) of the full sum. Plots confirm that analytical and numerical DoFs coincide: \(\mathrm{DoF}_{\mathrm{th}}\) is close to \(\mathrm{DoF}_{90\%}\) for the S-LIS, and close to \(\mathrm{DoF}_{99\%}\) for the L- and C-LISs. Note that the decay of \(\lambda_{i}\) is slower for surfaces (P- and S-LIS), especially when \(\kappa L\) is larger. Because P-LIS is a surface, plots of \(\mathrm{DoF}_{90\%}\) and \(\mathrm{DoF}_{99\%}\) exhibit a quadratic trend with respect to \(\kappa L\).
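The DoF\({}_{90\%}\) and DoF\({}_{99\%}\) thresholds defined above can be read off any eigenvalue spectrum as sketched below (the eigenvalues may come, for instance, from a discretization such as the one given earlier).

```python
import numpy as np

def dof_threshold(eigenvalues, fraction=0.90):
    """Smallest number of leading eigenvalues whose sum reaches `fraction`
    of the total sum (eigenvalues are sorted in decreasing order internally)."""
    lam = np.sort(np.asarray(eigenvalues))[::-1]
    cumulative = np.cumsum(lam) / np.sum(lam)
    return int(np.searchsorted(cumulative, fraction) + 1)
```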
### _Slepian Functions, and Their Impact on the Wave Density Measurement_
The nine first Slepian functions \(\psi_{i}\)'s and their Fourier transform magnitudes \(|\hat{\psi}_{i}|\)'s on \(\mathcal{S}\) are illustrated in Fig. 1(b) for
the four geometries, with \(\kappa L=4\pi\). On the first hand, \(u\) can be decomposed as a sum of \(\psi_{i}\) as in Eq. (7). The smooth aspect of \(\psi_{i}\) scales with the physical parameter \(\kappa\), since \(u\) results from the physical-based propagation. On the other hand, \(|\hat{\psi}_{i}(\mathbf{k})|\) restricted to \(\mathbf{k}\in\mathcal{S}\) can be interpreted as the radiation pattern of \(\psi_{i}\), and reveals the plane wave "content" of each Slepian function in the wavevector domain. As \(\psi_{i}\)'s are orthogonal, the Plancherel theorem states that \(\hat{\psi}_{i}\)'s are also orthogonal, so the sensed field decomposition in Eq. (7) relates to the equivalent decomposition into the wavenumber domain:
\[\hat{u}(\mathbf{k})\approx\sum_{i=1}^{N}\gamma_{i}\hat{\psi}_{i}(\mathbf{k}). \tag{19}\]
In the case where the field is exclusively excited at the pulsation \(\omega\), the plots of \(|\hat{\psi}_{i}|\) indicate how the LIS geometry (i.e. \(\mathcal{M}\)) impacts the sampling resolution of the wave density \(\sigma\) for \(N=\lfloor\mathrm{DoF}_{\mathrm{th}}\rfloor\). In Fig. 2 (b), the smooth aspect of \(\psi_{i}\) scales with both \(\kappa\) and the geometry aperture.
### _Slepian Functions for Describing Holographic MIMO Channel Propagation_
Finally, this section opens a perspective about the potential of Slepian functions to model holographic MIMO channels. Recent works proved that channels involving a receiving L-LIS are well described by the Fourier plane-wave series expansion [6, Eq. (43)] as
\[u(r_{1})\approx\sum_{i=-\lfloor N/2\rfloor}^{\lceil N/2\rceil-1}\gamma_{i}^{\prime}e^{j2\pi i\frac{r_{1}}{L}},\ r_{1}\in[0,L], \tag{20}\]
when \(N=\lfloor\kappa L/\pi\rfloor\), and \(\gamma_{i}^{\prime}=\int_{0}^{L}u(r_{1})e^{-j2\pi i\frac{r_{1}}{L}}\mathrm{d}r_{1}\). Here, we investigate how the expansion given by Slepian functions in
Fig. 3: Comparing the expansion of \(u(\mathbf{r})\) in the scenario of a LoS channel propagation between two L-LISs, with Fourier [6] and Slepian basis. Plots are average (dots), minimal and maximal (whiskers) values of normalized error. (a) parallel LISs (\(\theta_{\mathrm{Tx}}=\theta_{\mathrm{Rx}}=0\)), (b) randomly rotated LISs, (\(0\leq\theta_{\mathrm{Tx}_{\mathrm{x}}},\theta_{\mathrm{Rx}}<2\pi\)). For (a) and (b): distance \(d\) is uniformly sampled between \(5\) and \(25\) cm.
Fig. 2: Signal subspace eigenstructure of four LIS shapes, from left to right: the L-LIS, C-LIS, S-LIS and P-LIS.
Eq. (7) is capable of describing channels, in comparison with the model (20). To do so, a line of sight (LoS) propagation is simulated between two L-LISs of aperture \(L=5\beta\) in the plane, tilted by angles \(\theta_{\mathrm{Tx}}\) and \(\theta_{\mathrm{Rx}}\), and whose centers are separated by a distance \(d\). The model describing this propagation can be found e.g. in [3, Eq. (8)]. In each experiment, the current distribution of the transmitting L-LIS is randomly generated as a smooth polynomial function. With a Rayleigh distance of \(12.5\) cm for \(\beta=1\) cm, the distance \(d\) is uniformly sampled between \(5\) and \(25\) cm to cover both near and far field LoS propagation.
Results are provided for 2 scenarios: when the LISs remain parallel (\(\theta_{\mathrm{Tx}}=\theta_{\mathrm{Rx}}=0\), cf. Fig. 3 (a)), or are both randomly tilted (\(0\leq\theta_{\mathrm{Tx}},\theta_{\mathrm{Rx}}<2\pi\), cf. Fig. 3 (b)) at each experiment. We plot average (dots), minimal and maximal (whiskers) values of normalized errors over 1000 experiments. The normalized error is computed as
\[\frac{\int_{0}^{L}|u(r_{1})-\sum_{i=1}^{N}\gamma_{i}\psi_{i}(r_{1})|^{2}\mathrm{ d}r_{1}}{\int_{0}^{L}|u(r_{1})|^{2}\mathrm{d}r_{1}} \tag{21}\]
with the Slepian basis, and as
\[\frac{\int_{0}^{L}|u(r_{1})-\sum_{i=-\lfloor N/2\rfloor}^{\lceil N/2\rceil-1}\gamma_{i}^{\prime}e^{j2\pi i\frac{r_{1}}{L}}|^{2}\mathrm{d}r_{1}}{\int_{0}^{L}|u(r_{1})|^{2}\mathrm{d}r_{1}} \tag{22}\]
with the Fourier basis. Trends show that the Slepian and Fourier bases have equivalent accuracy for \(N=\lfloor\mathrm{DoF}_{\mathrm{th}}\rfloor\). Including more functions (i.e. choosing \(N>\lfloor\mathrm{DoF}_{\mathrm{th}}\rfloor\)) provides a better description of \(u(r_{1})\). The error decays faster with the Slepian basis (\(\approx 10^{-3}\) for \(N=15\)) than with the Fourier basis (\(\approx 10^{-1}\) for \(N=15\)). As expected from the theory [6], increasing \(N>\lfloor\mathrm{DoF}_{\mathrm{th}}\rfloor\) brings only a marginal improvement when modeling channels with the Fourier plane-wave series expansion, whereas Slepian functions appear to be a promising candidate for higher accuracy.
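The normalized errors (21) and (22) are simple projection residuals. The sketch below is a minimal illustration assuming a uniform grid on \([0,L]\) and basis functions that are orthonormal for the \(L^{2}\) inner product; it does not reproduce the LoS channel model of [3].

```python
import numpy as np

def normalized_error(u, basis, dr, N):
    """Normalized residual of the projection of u on the first N basis functions.

    u      : complex samples of the field on a uniform grid of step dr
    basis  : array of shape (num_functions, len(u)), assumed orthonormal in L^2
    """
    rec = np.zeros_like(u, dtype=complex)
    for i in range(N):
        gamma = np.sum(u * np.conj(basis[i])) * dr   # L^2 inner product <u, psi_i>
        rec += gamma * basis[i]
    num = np.sum(np.abs(u - rec) ** 2) * dr
    den = np.sum(np.abs(u) ** 2) * dr
    return num / den
```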
## V Conclusion
We have derived the general Slepian concentration problem for scalar electric fields sampled in a volume. Solving this problem captures the sensing space of a continuous LIS: the analysis of the eigenvalues gives a fair approximation of the maximal reachable DoFs, and the related Slepian functions provide an accurate basis to represent measured fields. The proposed approach is particularly appealing for the performance analysis of complex geometries (e.g. conformal antennas) in the context of holographic MIMO.
|
2309.04955 | Resolvents of Bochner Laplacians in the semiclassical limit | We introduce a new class of pseudodifferential operators, called Heisenberg
semiclassical pseudodifferential operators, to study the space of sections of a
power of a line bundle on a compact manifold, in the limit where the power is
large. This class contains the Bochner Laplacian associated with a connection
of the line bundle, and when the curvature is nondegenerate, its resolvent and
some associated spectral projections, including generalized Bergman kernels. | Laurent Charles | 2023-09-10T08:02:14Z | http://arxiv.org/abs/2309.04955v1 | # Resolvents of Bochner Laplacians in the semiclassical limit
###### Abstract
We introduce a new class of pseudodifferential operators, called Heisenberg semiclassical pseudodifferential operators, to study the space of sections of a power of a line bundle on a compact manifold, in the limit where the power is large. This class contains the Bochner Laplacian associated to a connection of the line bundle, and when the curvature is nondegenerate, its resolvent and some associated spectral projections, including generalised Bergman kernels.
## 1 Introduction
The spectral analysis of Bochner Laplacians acting on sections of a line bundle with a large curvature has many applications ranging from complex geometry to mathematical physics: holomorphic Morse inequalities and Bergman kernels [20], dynamical systems [13], geometric quantization [2] or the large magnetic field limit of Schrödinger operators [23], [22], [17], to quote just a few references. In this paper we introduce an algebra of pseudodifferential operators, tailored to study the bottom of the spectrum of these Laplacians at small scales.
To understand how the scales matter, let us state two Weyl laws corresponding to two different regimes. Let \((M^{n},g)\) be a closed Riemannian manifold and \(L\to M\) a Hermitian line bundle with a connection \(\nabla\). For any integer \(k\), the Bochner Laplacian \(\Delta_{k}=\frac{1}{2}\nabla^{*}\nabla\) acting on sections of \(L^{k}\) is an elliptic differential operator. Hence \(\Delta_{k}\), with domain \(\mathcal{C}^{\infty}(M,L^{k})\), is essentially self-adjoint. Its spectrum is a subset of \(\mathbb{R}_{\geqslant 0}\) and consists only of eigenvalues with finite multiplicity.
When \(k\) is large, the structure of this spectrum depends essentially on the curvature \(\frac{1}{i}\omega\) of \(\nabla\). Since \(\nabla\) is assumed to preserve the Hermitian structure,
\(\omega\) is a real closed 2-form of \(M\). For the first regime we will need the twisted symplectic form \(\Omega\) of \(T^{*}M\) defined by
\[\Omega=\sum d\xi_{i}\wedge dx_{i}+p^{*}\omega \tag{1}\]
where \(p\) is the projection \(T^{*}M\to M\). For the second one, we will assume that \(\omega\) and \(g\) are compatible in the sense that \(\omega(x,y)=g(jx,y)\) for an (almost) complex structure \(j\). The Weyl laws at the energy scales \(k^{2}\) and \(k\) are respectively:
1. For any \(\lambda>0\), the number \(N_{k}(\lambda)\) of eigenvalues of \(k^{-2}\Delta_{k}\) smaller than \(\lambda\) and counted with multiplicities satisfies \[N_{k}(\lambda)\sim\Big{(}\frac{k}{2\pi}\Big{)}^{n}\operatorname{vol}\{\xi\in T ^{*}M,\;\tfrac{1}{2}|\xi|_{x}^{2}\leqslant\lambda\}\quad\text{ as }k\to\infty\] (2) with the volume computed with respect to the Liouville form \(\tfrac{1}{n!}\Omega^{n}\).
2. if \(\omega\) and \(g\) are compatible, then for any \(M>0\) there exists \(C>0\) such that for any \(k\), \[\operatorname{spec}(k^{-1}\Delta_{k})\cap]-\infty,M]\subset(\tfrac{1}{2}+ \mathbb{N})+Ck^{-\tfrac{1}{2}}]-1,1[,\] (3) and for any \(m\in\mathbb{N}\), \[\sharp\Big{(}\operatorname{spec}(k^{-1}\Delta_{k})\cap\big{(}\tfrac{1}{2}+m+ ]-\tfrac{1}{4},\tfrac{1}{4}[\big{)}\Big{)}\sim\Big{(}\frac{k}{2\pi}\Big{)}^{n/ 2}\binom{m+n-1}{m}\] (4)
The estimate (2) does not appear in the literature, but it is actually a small variation of the Weyl law of semiclassical pseudodifferential operators on a compact manifold with semiclassical parameter \(h=k^{-1}\), as we will explain below. The cluster structure (3) and the estimate (4) of the number of eigenvalues in each cluster have been proved in [13]. This eigenvalue number is actually given by a Riemann-Roch formula when \(k\) is sufficiently large [6], [7].
These results rely on a good description of the resolvents \((k^{-\epsilon}\Delta_{k}-z)^{-1}\), which allows one to study \(f(k^{-\epsilon}\Delta_{k})\) for \(f\) a smooth compactly supported function, where \(\epsilon=2\) or \(1\) according to the regime. In the first case, \(f(k^{-2}\Delta_{k})\) and \((k^{-2}\Delta_{k}-z)^{-1}\) are semiclassical twisted pseudodifferential operators, a class of operators which was introduced in [4, Chapitre 4] and used recently in [14]. The local theory of these operators is exactly the same as the one of the standard semiclassical pseudodifferential operators with
\(h=k^{-1}\), whereas the global theory involves the twisted symplectic form (1). Similar operators have been studied as well on \(\mathbb{R}^{n}\) under the name of magnetic pseudodifferential operators, cf. [19] for an introduction and many references.
For the second regime, the theory is much less developed. The proof of (3) is based on an approximation of the resolvent of \(k^{-1}\Delta_{k}\), which is obtained by gluing local resolvents of some model operators deduced from \(\Delta_{k}\) by freezing the coordinates in an appropriate way. The main difficulty in this construction is that the number of terms we have to glue increases with \(k\). Still these approximations have been used successfully to prove (3) and later to describe the spectral projections onto the various clusters as generalizations of Bergman kernels [6], [7], [18].
In this paper, we introduce a new class of pseudodifferential operators, and we prove that it contains the resolvent of \(k^{-1}\Delta_{k}\) when \(\omega\) is nondegenerate, as well as the spectral projector associated to the cluster (3). This operator class is in a sense a semiclassical version of the class of Heisenberg pseudodifferential operators, which has been developed for the study of the \(\overline{\partial}_{b}\)-complex [1], [12]. For this reason, we call our operators semiclassical Heisenberg pseudodifferential operators.
Just as the usual Heisenberg operators have a symbol of type \((\frac{1}{2},\frac{1}{2})\), the semiclassical ones have an exotic symbol of type \(\frac{1}{2}\): at each derivative, a power of \(h^{\frac{1}{2}}\) is lost. Recall that the semiclassical operators with a symbol of type \(\frac{1}{2}\) form a limit class: they are closed under product, the usual operator norm estimates hold, but the standard expansions in the symbolic calculus do not hold because all the terms in these expansions have the same magnitude, cf. for instance [10, Proposition 7.7, Theorem 7.9 and Theorem 7.11]. As we will see in the sequel [3] of this paper, the semiclassical Heisenberg operators are closed under product. However the composition of their principal symbols, which are functions on \(T^{*}M\), is not the usual product. Instead it is a fiberwise product, whose restriction to each fiber \(T^{*}_{x}M\) depends on the curvature \(\omega_{x}\): when \(\omega_{x}\) is nondegenerate, it is essentially the Weyl product whereas when \(\omega_{x}=0\), this is the usual function product. So in general the principal symbol composition is not commutative.
Our definition of the Heisenberg operators is based on the usual semiclassical pseudodifferential operators, from which we deduce easily many of their properties, except what regards their composition. In this paper, we only address the Heisenberg composition of differential operators with pseudodifferential ones, because this suffices to show that the resolvent and cluster spectral projectors of \(k^{-1}\Delta_{k}\) are Heisenberg operators. We have also
included an exposition of the theory of twisted pseudodifferential operators, because this is not really standard material, and this helps to understand the specificity of the Heisenberg pseudodifferential operators. In the remainder of the introduction, we state our main result.
### Twisted pseudodifferential operators
As the usual Laplace-Beltrami operator, \(\Delta_{k}\) has the following local expression in a coordinate chart \((U,x_{i})\)
\[k^{-2}\Delta_{k}=-\tfrac{1}{2\sqrt{g}}\sum_{j,\ell=1}^{n}\pi_{j}g^{j\ell} \sqrt{g}\,\pi_{\ell}\]
where for any \(j=1,\dots,n\), \(\pi_{j}\) is the dynamical moment \(\pi_{j}:=(ik)^{-1}\nabla_{j}^{L^{k}}\) with \(\nabla_{j}^{L^{k}}\) the covariant derivative of \(L^{k}\) with respect to \(\partial_{x_{j}}\).
Let \(s\in\mathcal{C}^{\infty}(U,L)\) be such that \(|s|=1\) and let \(\beta\in\Omega^{1}(U,\mathbb{R})\) be the corresponding connection 1-form, \(\nabla s=-i\beta\otimes s\). Identifying \(\mathcal{C}^{\infty}(U,L^{k})\) with \(\mathcal{C}^{\infty}(U)\) through the frame \(s^{k}\),
\[\pi_{j}=(ik)^{-1}\partial_{x_{j}}-\beta_{j}\]
where \(\beta_{j}(x)=\beta(x)(\partial_{x_{j}})\). Under these local identifications, the \(\pi_{j}\)'s and consequently \(k^{-2}\Delta_{k}\) are semiclassical differential operators with \(h=k^{-1}\), their symbols being respectively \(\xi_{j}-\beta_{j}\) and \(\tfrac{1}{2}|\xi-\beta(x)|^{2}\).
These symbols become independent of \(s\) if we pull them back by the momentum shift \(T_{\beta}:T^{*}U\to T^{*}U\), \((x,\xi)\to(x,\xi+\beta(x))\). The same method will be used to define the symbol of a twisted pseudodifferential operator, cf. (8). Similarly, the shift \(T_{\beta}\) is used in classical mechanics to write the equations of motion of a particle in a magnetic field in an invariant way: \(T_{\beta}\) sends the Hamiltonian \(\tfrac{1}{2}|\xi-\beta|^{2}\) with the standard symplectic form \(\sum d\xi_{i}\wedge dx_{i}\) to the usual kinetic energy \(\tfrac{1}{2}|\xi|^{2}\) with the twisted symplectic form \(\Omega\).
Let us briefly introduce the twisted pseudodifferential operators of \(L\), the complete definition will be given in Section 2. Let us start with the residual class. An operator family
\[P=(P_{k}:\mathcal{C}^{\infty}(M,L^{k})\to\mathcal{C}^{\infty}(M,L^{k}),\;k\in \mathbb{N}) \tag{5}\]
belongs to \(k^{-\infty}\Psi^{-\infty}(L)\) if for each \(k\), the Schwartz kernel of \(P_{k}\) is smooth, its pointwise norm is in \(\mathcal{O}(k^{-\infty})\) uniformly on \(M\) and the same holds for its successive derivatives. For any local data \(\delta=(U,s,\rho)\) consisting of an
open set \(U\) of \(M\), a frame \(s\in{\cal C}^{\infty}(U,L)\) such that \(|s|=1\) and a function \(\rho\in{\cal C}^{\infty}_{0}(U)\), we define the local form of a family \(P\) as in (5) as:
\[P^{\delta}_{k}:{\cal C}^{\infty}(U)\to{\cal C}^{\infty}(U),\qquad(P^{\delta}_{k }f)s^{k}=\rho P_{k}(\rho fs^{k}),\qquad k\in{\mathbb{N}} \tag{6}\]
A twisted pseudodifferential operator of order \(m\) is a family \(P\) as in (5) such that for any \(\rho_{1}\), \(\rho_{2}\in{\cal C}^{\infty}(M)\) with disjoint supports, \((\rho_{1}P_{k}\rho_{2})\) belongs to \(k^{-\infty}\Psi^{-\infty}(L)\) and for any local data \(\delta=(U,s,\rho)\), \(P^{\delta}_{k}=Q^{\delta}_{k^{-1}}\) where \((Q^{\delta}_{h}:{\cal C}^{\infty}(U)\to{\cal C}^{\infty}(U),\ h\in(0,1])\) is a semiclassical pseudodifferential operator of order \(m\). So in terms of coordinates \((x_{i})\) on \(U\) the Schwartz kernel of \(P^{\delta}_{k}\) has the form
\[P^{\delta}_{k}(x,y)=\left(\frac{k}{2\pi}\right)^{n}\int e^{ik\xi\cdot(x-y)}a( k^{-1},\tfrac{1}{2}(x+y),\xi)\;d\xi \tag{7}\]
for some semiclassical polyhomogeneous symbol \((a(h,\cdot)\in{\cal C}^{\infty}(U\times{\mathbb{R}}^{n}),\ h\in(0,1])\) of order \(m\). The (principal) symbol of \(P\) is the function \(\sigma\in{\cal C}^{\infty}(T^{*}M)\) such that for any local data \(\delta\) as above
\[a(h,x,\xi+\beta(x))=\rho(x)\sigma(x,\xi)+{\cal O}(h), \tag{8}\]
where \(\beta\in\Omega^{1}(U,{\mathbb{R}})\) is the connection one-form of \(s\).
Examples of twisted (pseudo)differential operators of respective order \(0\), \(1\) and \(2\) are the multiplications by the functions of \({\cal C}^{\infty}(M)\), the covariant derivatives \((ik)^{-1}\nabla^{L^{k}}_{X}\) where \(X\in{\cal C}^{\infty}(M,TM)\), the symbol being \((x,\xi)\to\langle\xi,X(x)\rangle\), and the normalised Laplacian \(k^{-2}\Delta_{k}\), its symbol is \((x,\xi)\to\frac{1}{2}|\xi|_{x}^{2}\).
It is not difficult to adapt the standard results [8], [27] on the resolvents of elliptic operators and their functional calculus to this setting. Denoting by \(\Psi^{m}_{\rm tsc}(L)\) the space of twisted pseudodifferential operators of order \(m\), we have
* for any \(z\in{\mathbb{C}}\setminus{\mathbb{R}}_{\geqslant 0}\), the resolvent \((z-k^{-2}\Delta_{k})^{-1}\) belongs to \(\Psi^{-2}_{\rm tsc}(L)\) and its symbol is \((z-\frac{1}{2}|\xi|^{2})^{-1}\).
* for any \(f\in{\cal C}^{\infty}_{0}({\mathbb{R}})\), \(f(k^{-2}\Delta_{k})\) belongs to \(\Psi^{-\infty}_{\rm tsc}(L)\) and its symbol is \(f(\frac{1}{2}|\xi|^{2})\).
In particular \({\rm tr}\,f(k^{-2}\Delta_{k})=\left(\frac{k}{2\pi}\right)^{n}\int_{T^{*}M}f( \frac{1}{2}|\xi|^{2})\frac{1}{n!}\Omega^{n}+{\cal O}(k^{-1+n})\). The Weyl law (2) follows.
### Heisenberg pseudodifferential operators
A (semiclassical) Heisenberg pseudodifferential operator of \((L,\nabla)\) of order \(m\) is by definition a family \(P\) of operators of the form (5) such that for any \(\rho_{1}\), \(\rho_{2}\in{\cal C}^{\infty}(M)\) with disjoint supports, \((\rho_{1}P_{k}\rho_{2})\) belongs to \(k^{-\infty}\Psi^{-\infty}(L)\) and for any local data \(\delta=(U,s,\rho)\) as above with a coordinate set \((x_{i})\) on \(U\), the Schwartz kernel of \(P_{k}^{\delta}\) has the form
\[e^{ik\beta\left(\frac{x+y}{2}\right)\cdot(x-y)}\Big{(}\frac{\sqrt{k}}{2\pi} \Big{)}^{n}\int_{\mathbb{R}^{n}}e^{i\sqrt{k}\;\xi\cdot(x-y)}a(k^{-\frac{1}{2}}, \tfrac{1}{2}(x+y),\xi)\;d\xi \tag{9}\]
where
* \(\beta=\sum\beta_{i}(x)dx_{i}\) is the connection one-form of \(s\) defined as above and viewed as the \(\mathbb{R}^{n}\)-valued function \(x\to(\beta_{1}(x),\dots,\beta_{n}(x))\)
* \((a(h,\cdot)\in{\cal C}^{\infty}(U\times\mathbb{R}^{n}),\;h\in(0,1])\) is a semiclassical polyhomogeneous symbol of order \(m\), so in particular \(\partial_{x}^{\alpha}\partial_{\xi}^{\beta}a={\cal O}_{\alpha,\beta}(\langle \xi\rangle^{m-|\beta|})\) and \(a\sim\sum_{\ell=0}^{\infty}h^{\ell}a_{\ell}(x,\xi)\) with polyhomogeneous coefficients \(a_{\ell}\) of order \(m-\ell\).
As we see, the Schwartz kernel (9) is the product of an oscillatory factor depending on the frame \(s\) with the Schwartz kernel of a semiclassical operator where the semiclassical parameter is \(k^{-\frac{1}{2}}\). We will prove that this formula is consistent with change of frame and that we can define a (principal) symbol \(\sigma\in{\cal C}^{\infty}(T^{*}M)\) such that for any local data as above,
\[a(h,x,\xi)=\rho(x)\sigma(x,\xi)+{\cal O}(h).\]
Let us denote by \(\Psi^{m}_{\rm Heis}(L,\nabla)\) the space of Heisenberg pseudodifferential operators of order \(m\), and by \(\Psi^{-\infty}_{\rm Heis}(L,\nabla)\) the intersection \(\cap_{m}\Psi^{m}_{\rm Heis}(L,\nabla)\).
To state our main result, we need to introduce some symbols \(R_{d,z}\) and \(\pi_{d,E}\). Recall that for any tempered distribution \(a\in{\cal S}^{\prime}(\mathbb{R}^{d}_{s}\times\mathbb{R}^{d}_{\zeta})\), the Weyl quantization of \(a\) is the operator \(a^{w}:{\cal S}(\mathbb{R}^{d})\to{\cal S}^{\prime}(\mathbb{R}^{d})\) with Schwartz kernel at \((s,t)\):
\[(2\pi)^{-d}\int e^{i\varsigma\cdot(s-t)}a(\tfrac{1}{2}(s+t),\varsigma)\;d\varsigma.\]
The (quantum) harmonic oscillator is \(H^{w}\) with \(H(s,\varsigma)=\frac{1}{2}\sum_{i=1}^{d}(s_{i}^{2}+\varsigma_{i}^{2})\). As an operator of \(L^{2}(\mathbb{R}^{d})\) with domain the Schwartz space, \(H^{w}\) is essentially self-adjoint with spectrum \({\rm sp}\,H^{w}=\frac{d}{2}+\mathbb{N}\). Then for any \(z\in\mathbb{C}\setminus{\rm sp}\,H^{w}\) and \(E\in{\rm sp}\,H^{w}\), \(R_{d,z}\) and \(\pi_{d,E}\) are the tempered distributions such that
\[(H^{w}-z)^{-1}=R^{w}_{d,z},\qquad 1_{\{E\}}(H^{w})=\pi^{w}_{d,E} \tag{10}\]
By Weyl calculus, \(R_{d,z}\) belongs to the symbol class \(S^{-2}(\mathbb{R}^{2d})\) and \(R_{d,z}=(H-z)^{-1}\) modulo \(S^{-3}(\mathbb{R}^{2d})\). Moreover \(\pi_{d,E}\) belongs to the Schwartz space, being the Weyl symbol of an orthogonal projector onto a finite dimensional subspace of \(\mathcal{S}(\mathbb{R}^{d})\). The analytic Fredholm theory can be developed in this setting and it says that the function \(z\to R_{d,z}\) with values in \(\mathcal{C}^{\infty}(\mathbb{R}^{2d})\), or better the symbol space \(S^{-2}(\mathbb{R}^{2d})\), is meromorphic on \(\mathbb{C}\) with simple poles at \(\frac{d}{2}+\mathbb{N}\) whose residues are the \(\pi_{d,E}\).
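As a quick numerical illustration (for \(d=1\) only; the discretization parameters below are arbitrary choices), one can discretize \(H^{w}=\frac{1}{2}(-\partial_{s}^{2}+s^{2})\) by finite differences on a large interval and recover the spectrum \(\frac{1}{2}+\mathbb{N}\).

```python
import numpy as np

# d = 1 harmonic oscillator H^w = (-d^2/ds^2 + s^2)/2, second-order finite
# differences with Dirichlet conditions on an (arbitrarily chosen) box.
n, half_width = 1200, 12.0
s = np.linspace(-half_width, half_width, n)
ds = s[1] - s[0]
lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / ds**2
H = -0.5 * lap + np.diag(0.5 * s**2)
print(np.linalg.eigvalsh(H)[:5])   # close to 0.5, 1.5, 2.5, 3.5, 4.5, i.e. 1/2 + N
```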
**Theorem 1.1**.: _Assume \(\omega\) and \(g\) are compatible so that \(n=2d\) with \(d\in\mathbb{N}\). Then_
1. _For any_ \(z\in\mathbb{C}\setminus(\frac{d}{2}+\mathbb{N})\)_, there exists_ \(Q(z)\in\Psi^{-2}_{\rm Heis}(L,\nabla)\) _such that_ * \((k^{-1}\Delta_{k}-z)Q_{k}(z)\equiv{\rm id}\) _and_ \(Q_{k}(z)(k^{-1}\Delta_{k}-z)\equiv{\rm id}\) _mod_ \(k^{-\infty}\Psi^{-\infty}(L)\)_._ * \((k^{-1}\Delta_{k}-z)Q_{k}(z)=Q_{k}(z)(k^{-1}\Delta_{k}-z)={\rm id}\) _when_ \(k\) _is large._ * _the symbol of_ \(Q(z)\) _restricted to_ \(T_{x}^{*}M\) _is the Weyl symbol_ \(R_{d,z}\) _of the resolvent of the harmonic oscillator with symbol_ \(\frac{1}{2}|\xi|^{2}\)_, cf. (_10_)._
2. _For any_ \(E\in\frac{d}{2}+\mathbb{N}\)_, the spectral projector family_ \[(1_{[E-1/2,E+1/2]}(k^{-1}\Delta_{k}),k\in\mathbb{N})\] _belongs to_ \(\Psi^{-\infty}_{\rm Heis}(L,\nabla)\)_. The restriction of its symbol to_ \(T_{x}^{*}M\) _is the Weyl symbol_ \(\pi_{d,E}\) _of the spectral projector on the_ \(E\)_-eigenspace of the harmonic oscillator with symbol_ \(\frac{1}{2}|\xi|^{2}\)_, cf. (_10_)_
In this statement, we view \(H\), \(R_{d,z}\) and \(\pi_{d,E}\) as functions on \(T_{x}^{*}M\) as follows: choose an orthosymplectic basis \((e_{i},f_{i})\) of \(T_{x}M\), that is \((e_{i},f_{i})\) is an orthonormal basis and for any \(i,j\), \(\omega(x)(e_{i},e_{j})=0=\omega(x)(f_{i},f_{j})\), \(\omega(x)(e_{i},f_{j})=\delta_{ij}\). Let \((s_{i},\varsigma_{i})\) be the associated coordinates of \(T_{x}^{*}M\), so \(s_{i}(\xi):=\xi(e_{i})\) and \(\varsigma_{i}(\xi):=\xi(f_{i})\) for any \(\xi\in T_{x}^{*}M\). Then any function \(f:\mathbb{R}^{2d}\to\mathbb{C}\) identifies with the function of \(T_{x}^{*}M\)
\[\xi\in T_{x}^{*}M\to f(s(\xi),\varsigma(\xi)). \tag{11}\]
In particular \(H(s(\xi),\varsigma(\xi))=\frac{1}{2}|\xi|^{2}\) because \((e_{i},f_{i})\) is orthonormal. The fact that \(R_{d,z}(s(\xi),\varsigma(\xi))\) and \(\pi_{d,E}(s(\xi),\varsigma(\xi))\) are independent of the choice of the basis \((e_{i},f_{i})\) follows from the symplectic invariance of the Weyl quantization.
It follows as well from the symplectic invariance of Weyl quantization and the \(O(n)\) invariance of \(H\) that \(R_{d,z}\) and \(\pi_{d,E}\) are radial functions. A
computation based on the Mehler formula leads to [9]
\[R_{d,z}=\int_{0}^{1}(1-\tfrac{1}{2}s)^{\frac{d}{2}-z-1}(1+\tfrac{1}{2}s)^{\frac{d }{2}+z-1}e^{-sH}ds,\quad\text{ if }\operatorname{Re}z<d \tag{12}\]
We can also compute \(\pi_{d,E}\) in terms of Laguerre polynomial [26] :
\[\pi_{d,E}(\xi)=2^{d}(-1)^{m}e^{-|\xi|^{2}}L_{m}^{d-1}(2|\xi|^{2}),\qquad\text{ where }m=E-\tfrac{d}{2} \tag{13}\]
and \(L_{m}^{\alpha}(x)=\frac{1}{m!}e^{x}x^{-\alpha}\partial_{x}^{m}(e^{-x}x^{m+ \alpha})\).
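For instance, for the lowest cluster \(E=\frac{d}{2}\) (i.e. \(m=0\)), \(L_{0}^{d-1}=1\) and (13) gives \(\pi_{d,d/2}(\xi)=2^{d}e^{-|\xi|^{2}}\), the familiar Weyl symbol of the orthogonal projector onto the Gaussian ground state. One also checks that \((2\pi)^{-d}\int_{\mathbb{R}^{2d}}\pi_{d,E}(\xi)\,d\xi=\binom{m+d-1}{m}\), which is the trace of \(1_{\{E\}}(H^{w})\), that is the multiplicity of the eigenvalue \(E\) of the harmonic oscillator.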
**Remark 1.2**.: __
1. The proof of the first part of Theorem 1.1 is an adaptation of the standard parametrix construction for an elliptic pseudodifferential operator, the main change being in the symbolic calculus: if \((P_{k})\) belongs to \(\Psi_{\operatorname{Heis}}^{m}(L,\nabla)\) and has symbol \(\sigma\), then \((k^{-1}\Delta_{k}P_{k})\) belongs to \(\Psi_{\operatorname{Heis}}^{m+2}(L,\nabla)\) and its symbol restricted to \(T_{x}^{*}M\) is the Weyl product of \(\frac{1}{2}|\xi|_{x}^{2}\) and \(\sigma(x,\cdot)\). By Weyl product, we mean the product of symbols in Weyl quantization, and the identification of functions of \(T_{x}^{*}M\) with symbols is done through (11). This explains how the symbols \(R_{d,z}\) and \(\pi_{d,E}\) appear.
2. A remarkable fact is that the proof of the second assertion of Theorem 1.1 is a direct application of Cauchy formula for the spectral projector of an operator with compact resolvent. This part is much simpler than the proof that \(f(k^{-2}\Delta_{k})\in\Psi_{\operatorname{tsc}}^{-\infty}(L)\), even with the modern approach through Helffer-Sjostrand formula.
3. The Schwartz kernel of the cluster spectral projectors was described in [18] and [7] as a generalisation of the Bergman kernel. The advantage of considering these projectors as Heisenberg pseudodifferential operators is merely that it connects them directly to the Laplacian and its resolvent. Moreover, in [3], we will explain how we can use the Heisenberg calculus instead of the algebra \(\mathcal{L}(A)\) of [6] to compute the dimension of each cluster and develop the theory of Toeplitz operators as it was done in [7].
#### Outline of the paper
In Section 2, we introduce notations and basic analytical tools to address the large \(k\) limit of the space of sections of the \(k\)-th power of \(L\), including the theory of semiclassical twisted pseudodifferential operators with their Sobolev spaces. The study of Heisenberg pseudodifferential operators starts in Section 3, from their Schwartz kernel asymptotics to their mapping properties. In Section 4, we introduce the symbol product, which is then used
in Section 5 for the composition of differential operators with pseudodifferential operators. This is applied to resolvents and spectral projections in Section 6. In Section 7, we explain how we can add auxiliary bundles to the theory, which provides some important examples.
#### Acknowledgment
I would like to thank Clotilde Fermanian Kammerer, Colin Guillarmou and Thibault Lefeuvre for useful discussions.
## 2 Twisted pseudodifferential operators
#### Symbols
We will use the class of semiclassical polyhomogeneous symbols introduced in [11, Section E.1.2], cf. also [8, Section 6.1]. Let \(V\) be an open set of \(\mathbb{R}^{p}\) and \(m\in\mathbb{R}\). For any \(\xi\in\mathbb{R}^{n}\), let \(|\xi|\) and \(\langle\xi\rangle\) be the Euclidean norm and Japanese bracket, so \(|\xi|^{2}=\sum\xi_{i}^{2}\), \(\langle\xi\rangle^{2}=1+|\xi|^{2}\). Let \(S^{m}(V,\mathbb{R}^{n})\), \(S^{m}_{\mathrm{ph}}(V,\mathbb{R}^{n})\) and \(S^{m}_{\mathrm{sc}}(V,\mathbb{R}^{n})\) be the spaces of symbols (resp. polyhomogeneous symbols, semiclassical polyhomogeneous symbols) of order \(m\). By definition
* \(S^{m}(V,\mathbb{R}^{n})\) consists of the families \((a(h,\cdot),\ h\in(0,1])\) of \(\mathcal{C}^{\infty}(V\times\mathbb{R}^{n})\) such that for any compact set \(K\) of \(V\), \(\alpha\in\mathbb{N}^{p},\beta\in\mathbb{N}^{n}\), there exists \(C>0\) such that \[|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}a(h,x,\xi)|\leqslant C\langle\xi \rangle^{m-|\beta|},\qquad\forall x\in K,\;\xi\in\mathbb{R}^{n},\;h\in(0,1]\]
* \(b\in S^{m}_{\mathrm{ph}}(V,\mathbb{R}^{n})\) if \(b\in S^{m}(V,\mathbb{R}^{n})\), \(b\) is independent of \(h\) and for every \(N\), \(b=\sum_{j=0}^{N-1}b_{j}\mod S^{m-N}(V,\mathbb{R}^{n})\) with coefficients \(b_{j}\in\mathcal{C}^{\infty}(V\times\mathbb{R}^{n})\) such that \(b_{j}(x,t\xi)=t^{m-j}b_{j}(x,\xi)\) when \(|\xi|\geqslant 1\) and \(t>0\).
* \(a\in S^{m}_{\mathrm{sc}}(V,\mathbb{R}^{n})\) if \(a\in S^{m}(V,\mathbb{R}^{n})\) and \(a-\sum_{\ell=0}^{N-1}h^{\ell}a_{\ell}\in h^{N}S^{m-N}(V,\mathbb{R}^{n})\) for some coefficients \(a_{\ell}\in S^{m-\ell}_{\mathrm{ph}}(V,\mathbb{R}^{n})\).
More generally these definitions make sense for a real vector bundle \(E\to N\) instead of the product \(V\times\mathbb{R}^{n}\). We denote by \(S^{m}_{*}(N,E)\) the corresponding spaces and set \(S^{\infty}_{*}(N,E)=\bigcup_{m}S^{m}_{*}(N,E)\) for \(*=\emptyset,\mathrm{ph},\mathrm{sc}\). An easy remark is that for any section \(u\) of \(E\), the translation \(T_{u}\) of \(\mathcal{C}^{\infty}(E,\mathbb{C})\) given by \(T_{u}f(x,v)=f(x,v-u(x))\) preserves \(S^{m}_{*}(N,E)\). When \(V\) is reduced to a point, we set \(S^{m}_{*}(\mathbb{R}^{n}):=S^{m}_{*}(\{\cdot\},\mathbb{R}^{n})\).
#### Negligible families
We say that a family \((f_{h},h\in(0,1])\) of \(\mathcal{C}^{\infty}(N)\) is negligible, and we write \(f_{h}=\mathcal{O}_{\infty}(h^{\infty})\), if all its \(\mathcal{C}^{\infty}\)-seminorms are in \(\mathcal{O}(h^{\infty})\). This definition is meaningful if \((f_{h})\) is only defined for \(h\in D\) where \(D\) is any subset of \((0,1]\) whose closure contains \(0\).
Let \(L\to M\) be a Hermitian line bundle and \(A\to M\) a complex vector bundle with rank \(r\). A family \((t_{k}\in\mathcal{C}^{\infty}(M,L^{k}\otimes A),\;k\in\mathbb{N})\) is said to be _negligible_ if for any open set \(U\) of \(M\), any \(s\in\mathcal{C}^{\infty}(U,L)\) with pointwise norm \(|s|=1\) and any frame \((a_{i}\in\mathcal{C}^{\infty}(U,A),i=1,\ldots,r)\), \(t_{k}=\sum f_{i,k^{-1}}s^{k}\otimes a_{i}\) on \(U\) where each coefficient \(f_{i,h}\in\mathcal{O}_{\infty}(h^{\infty})\). We denote by \(\mathcal{O}_{\infty}(k^{-\infty})\) the space of negligible families.
Let \(P\) be a family of operators
\[P=(P_{k}:\mathcal{C}^{\infty}(M,L^{k})\to\mathcal{C}^{\infty}(M,L^{k}),\;k\in \mathbb{N}), \tag{14}\]
The Schwartz kernel of each \(P_{k}\) is a section of \((L^{k}\boxtimes\overline{L}^{k})\otimes(\mathbb{C}_{M}\boxtimes|\Lambda|(M))\), where we denote by \(\boxtimes\) the external tensor product of vector bundles, by \(\mathbb{C}_{M}\) the trivial line bundle over \(M\) and by \(|\Lambda|(M)\) the density bundle. Since \(L^{k}\boxtimes\overline{L}^{k}=(L\boxtimes\overline{L})^{k}\), the previous definition of a negligible family applies to the family \((P_{k})\) of Schwartz kernels.
We denote by \(k^{-\infty}\Psi^{-\infty}(L)\) the space consisting of operator families of the form (14) such that each \(P_{k}\) is smoothing with a Schwartz kernel family in \(\mathcal{O}_{\infty}(k^{-\infty})\). As we will see, \(k^{-\infty}\Psi^{-\infty}(L)\) is both the residual space of twisted pseudodifferential operators and of Heisenberg pseudodifferential operators.
#### Semiclassical pseudodifferential operators
Let \(\Psi^{m}_{\mathrm{sc}}(M)\) be the space of semiclassical pseudodifferential operators of order \(m\) acting on smooth functions of \(M\). By definition \(P\in\Psi^{m}_{\mathrm{sc}}(M)\) is a family of operators \((P_{h}:\mathcal{C}^{\infty}(M)\to\mathcal{C}^{\infty}(M),h\in(0,1])\) with a Schwartz kernel \(K_{h}(x,y)\) satisfying for any \(\rho\in\mathcal{C}^{\infty}(M^{2})\),
1. if \(\operatorname{supp}\rho\cap\operatorname{diag}M=\emptyset\), then \(\rho K_{h}\) is smooth and negligible.
2. if \(\operatorname{supp}\rho\subset U^{2}\) where \((U,x_{i})\) is a coordinate chart of \(M\), then on \(U^{2}\) \[(\rho K_{h})(x,y)=(2\pi h)^{-n}\int e^{ih^{-1}\xi\cdot(x-y)}a(h,x,y,\xi)\ d\xi\] (15) with \(a\in S^{m}_{\mathrm{sc}}(U^{2},\mathbb{R}^{n})\).
Here and in the sequel, when the Schwartz kernel is written in a coordinate chart, we implicitly use the density \(|dx_{1}\ldots dx_{n}|\). The principal symbol of \(P\) is the function \(\sigma\in S^{m}_{\mathrm{ph}}(M,T^{*}M)\) such that \(a(h,x,x,\xi)=\rho(x)\sigma(x,\xi)+\mathcal{O}(h)\) on \(U\).
#### Twisted pseudodifferential operators
Let \(L\to M\) be a Hermitian line bundle.
**Definition 2.1**.: _A semiclassical twisted pseudodifferential operator \(P\) of \(L\) is a family having the form (14) such that for any \(\rho\in\mathcal{C}^{\infty}(M^{2})\),_
1. _if_ \(\operatorname{supp}\rho\cap\operatorname{diag}M=\emptyset\)_, then_ \(\rho P_{k}\) _is smooth and negligible._
2. _if_ \(\operatorname{supp}\rho\subset U^{2}\) _where_ \((U,x_{i})\) _is a coordinate chart of_ \(M\) _and_ \(s\in\mathcal{C}^{\infty}(U,L)\) _is such that_ \(|s|=1\)_, then on_ \(U^{2}\)__ \[(\rho P_{k})(x,y)=\left(\frac{k}{2\pi}\right)^{n}\int e^{ik\xi\cdot(x-y)}a(k^{ -1},x,y,\xi)\;d\xi\;s^{k}(x)\otimes\overline{s}^{k}(y)\] _with_ \(a\in S^{m}_{\mathrm{sc}}(U^{2},\mathbb{R}^{n})\)_._
To understand the dependence of the oscillatory integral with respect to the choice of the frame \(s\), consider a new frame \(t=e^{i\varphi}s\) where \(\varphi\in\mathcal{C}^{\infty}(U,\mathbb{R})\). Then \(\varphi(x)-\varphi(y)=\sum_{j}\psi_{j}(x,y)(x_{j}-y_{j})\) where \(\psi_{j}\in\mathcal{C}^{\infty}(U^{2},\mathbb{R})\) is such that \(\psi_{j}(x,x)=\partial_{x_{j}}\varphi(x)\). Using that \(s(x)\otimes\overline{s}(y)=\exp(-i\psi(x,y)\cdot(x-y))\,t(x)\otimes\overline{t}(y)\) and changing the variable \(\xi\) into \(\xi+\psi(x,y)\), we obtain
\[\begin{split}&\int e^{ik\xi\cdot(x-y)}a(k^{-1},x,y,\xi)\;d\xi \;s^{k}(x)\otimes\overline{s}^{k}(y)\\ &=\int e^{ik\xi\cdot(x-y)}a(k^{-1},x,y,\xi+\psi(x,y))\;d\xi\;t^{ k}(x)\otimes\overline{t}^{k}(y)\end{split} \tag{16}\]
So multiplying \(s\) by \(e^{i\varphi}\) amounts to changing the amplitude \(a\) to \(b\) such that \(b(h,x,y,\xi)=a(h,x,y,\xi+\psi(x,y))\). On the diagonal this relation reads
\[b(h,x,x,\xi)=a(h,x,x,\xi+d\varphi(x)).\]
Let \(\nabla\) be a connection on \(L\) preserving the metric. Then \(\nabla s=\frac{1}{i}\beta_{s}\otimes s\), where \(\beta_{s}\) is a real one-form of \(U\). Observe that \(\nabla t=\frac{1}{i}\beta_{t}\otimes t\) where \(\beta_{t}=\beta_{s}-d\varphi\). So we can define the principal symbol as follows.
**Definition 2.2**.: _The principal symbol \(\sigma_{\nabla}(P)\) of \(P\in\Psi^{m}_{\rm tsc}(L)\) is the element of \(S^{m}_{\rm ph}(M,T^{*}M)\) such that for any local data \((\rho,U,s,a)\) as in Definition 2.1, we have_
\[a(h,x,x,\xi+\beta_{s}(x))=\rho(x)\sigma_{\nabla}(P)(x,\xi)+\mathcal{O}(h)\]
If \(\nabla^{\prime}\) is another connection of \(L\) preserving the metric, then \(\nabla^{\prime}=\nabla+\frac{1}{i}\alpha\) with \(\alpha\in\Omega^{1}(M,\mathbb{R})\) and \(\sigma_{\nabla^{\prime}}(P)(x,\xi)=\sigma_{\nabla}(P)(x,\xi+\alpha(x))\). It is easy to extend the basic properties of pseudodifferential operators to our setting:
* If \(P\in\Psi^{m}_{\rm tsc}(L)\), then \(\sigma_{\nabla}(P)=0\) if and only if \(P\in k^{-1}\Psi^{m-1}_{\rm tsc}(L)\).
* \(\bigcap_{m}k^{-m}\Psi^{-m}_{\rm tsc}(L)=k^{-\infty}\Psi^{-\infty}(L)\).
* if \(P\in\Psi^{m}_{\rm tsc}(L)\) and \(Q\in\Psi^{p}_{\rm tsc}(L)\), then 1. \((P_{k}Q_{k})\) belongs to \(\Psi^{m+p}_{\rm tsc}(L)\) and its principal symbol is the product of the principal symbols of \(P\) and \(Q\). 2. \(ik[P_{k},Q_{k}]\) belongs to \(\Psi^{m+p-1}_{\rm tsc}(L)\) and its symbol is the Poisson bracket for the twisted symplectic form (1) where \(\frac{1}{i}\omega\) is the curvature of \(\nabla\).
It is possible to define the twisted pseudodifferential operators without using local frames. Recall that the Schwartz kernel of an operator acting on \(\mathcal{C}^{\infty}(M,L^{k})\) is a section of \((L\boxtimes\overline{L})^{k}\otimes(\mathbb{C}_{M}\boxtimes|\Lambda|^{1}(M))\). Introduce an open neighborhood \(V\) of the diagonal of \(M^{2}\) and a section \(F\in\mathcal{C}^{\infty}(V,L\boxtimes\overline{L})\) such that \(|F|=1\) on \(V\) and \(F(x,x)=1\) for any \(x\in M\), in the sense that \(F(x,x)=u\otimes\overline{u}\) for any \(u\in L_{x}\) with norm 1. We claim that \(P\in\Psi^{m}_{\rm tsc}(L)\) if and only if its Schwartz kernel has the form
\[F^{k}(x,y)\phi(x,y)K_{k^{-1}}(x,y)+\mathcal{O}_{\infty}(k^{-\infty}) \tag{17}\]
where \(\phi\in\mathcal{C}^{\infty}_{0}(V)\) is equal to 1 on a neighborhood of the diagonal and \((K_{h},\ h\in(0,1])\) is the Schwartz kernel family of a semiclassical pseudodifferential operator \(Q\in\Psi^{m}_{\rm sc}(M)\). If furthermore \(\nabla\) is a connection of \(L\) such that the corresponding covariant derivative of \(F\) is zero on the diagonal, then \(\sigma_{\nabla}(P)=\sigma(Q)\).
These facts follow from a computation similar to (16), by writing \(F(x,y)=\exp(i\varphi(x,y))s(x)\otimes\overline{s}(y)\) where \(s\in\mathcal{C}^{\infty}(U,L)\) is such that \(|s|=1\) on \(U\) and \(\varphi\in\mathcal{C}^{\infty}(U^{2},\mathbb{R})\) satisfies \(\varphi(x,x)=0\).
#### Semiclassical Sobolev norms
Let \(m\in\mathbb{R}\). Denote by \(H^{m}(M,L^{k})\) the Sobolev space of sections of \(L^{k}\) of order \(m\). Let us give three equivalent definitions of the semiclassical Sobolev norms of a section \(u\) of \(L^{k}\). First the norm of \(H^{0}(M,L^{k})=L^{2}(M,L^{k})\) is defined by
\[\|u\|_{L^{2}(M,L^{k})}^{2}=\int_{M}|u(x)|^{2}d\mu(x) \tag{18}\]
where \(\mu\) is a volume element of \(M\) independent of \(k\).
1. only for integral exponent \(m\in\mathbb{N}\): choose a connection \(\nabla\) of \(L\), vector fields \((X_{i})_{i=1}^{N}\) of \(M\) which generate \(T_{x}M\) at each \(x\), and set \[\|u\|_{m}:=\sum_{|\alpha|\leqslant m}k^{-|\alpha|}\|\nabla_{X}^{\alpha}u\|_{L^{2}(M,L^{k})}\] where for any \(\alpha\in\mathbb{N}^{N}\), \(\nabla_{X}^{\alpha}=\nabla_{X_{1}}^{\alpha(1)}\dots\nabla_{X_{N}}^{\alpha(N)}\).
2. based on local semi-norms: for any chart \((U,\chi)\) of \(M\), frame \(s\in\mathcal{C}^{\infty}(U,L)\) such that \(|s|=1\) and \(\rho\in\mathcal{C}_{0}^{\infty}(U)\) we set \[\|u\|_{m,U,\chi,s,\rho}=\|\langle k^{-1}\xi\rangle^{m}\hat{v}(\xi)\|_{L^{2}(\mathbb{R}^{n})}\qquad\text{where }\rho u=(\chi^{*}v)s^{k}\] and \(\hat{v}\) is the Fourier transform of \(v\). Choose a finite family \((U_{i},\chi_{i},s_{i},\rho_{i})\) of local data such that \(M\) is covered by the \(\{\rho_{i}=1\}\) and set \(\|u\|_{m}:=\sum_{i}\|u\|_{m,U_{i},\chi_{i},s_{i},\rho_{i}}\).
3. based on twisted pseudodifferential operators: choose \(E\in\Psi_{\text{tsc}}^{m}(L)\) which is elliptic and invertible for any \(k\), and set \(\|u\|_{m}=\|Eu\|_{L^{2}(M,L^{k})}\).
The ellipticity condition is as usual that the principal symbol satisfies for some \(C>0\), \(|\sigma_{\nabla}(E)(x,\xi)|\geqslant C^{-1}|\xi|^{m}\) when \(|\xi|\geqslant C\). It does not depend on the choice of \(\nabla\).
We claim that all these norms are equivalent with constants uniform in \(k\). Furthermore for any twisted pseudodifferential operator \(P\in\Psi_{\text{tsc}}^{p}(L)\) and any \(m\in\mathbb{R}\), there exists \(C\) such that for any \(k\),
\[\|P_{k}u\|_{m}\leqslant C\|u\|_{m+p},\qquad\forall\;u\in\mathcal{C}^{\infty}(M,L^{k}). \tag{19}\]
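To make the second definition concrete, here is a minimal numerical sketch of the local semi-norm \(\|\langle k^{-1}\xi\rangle^{m}\hat{v}(\xi)\|_{L^{2}}\) in one space dimension, assuming the local representative \(v\) is sampled on a uniform grid and is numerically supported well inside the sampled box; the FFT conventions below are an implementation choice, not part of the paper.

```python
import numpy as np

def semiclassical_sobolev_seminorm(v, dx, k, m):
    """Approximate || <k^{-1} xi>^m  v_hat(xi) ||_{L^2(R)} from samples of v.

    v  : complex samples on a uniform grid of step dx
    k  : the semiclassical parameter (power of the line bundle)
    m  : Sobolev exponent
    """
    n = len(v)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # Fourier variable
    v_hat = np.fft.fft(v) * dx                 # approximates the integral Fourier transform
    weight = (1.0 + (xi / k) ** 2) ** (m / 2)  # <k^{-1} xi>^m
    dxi = 2 * np.pi / (n * dx)                 # spectral grid spacing
    return np.sqrt(np.sum(np.abs(weight * v_hat) ** 2) * dxi)
```

For \(m=0\) this returns a discretization of \(\|\hat{v}\|_{L^{2}}=\sqrt{2\pi}\,\|v\|_{L^{2}}\), and, as stated above, the three definitions give equivalent norms uniformly in \(k\).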
## 3 Heisenberg semiclassical operators
In the introduction, we defined the Heisenberg pseudodifferential operators by expressing locally their Schwartz kernels as oscillatory integrals. Here we will start with a global definition which has the advantage that we can deduce some basic properties of these operators directly from the ones of the semiclassical pseudodifferential operators.
Let \(L\to M\) be a Hermitian line bundle with a connection \(\nabla\) preserving the metric. The line bundle \(L\boxtimes\overline{L}\) inherits from \(L\) a Hermitian metric and a connection. Its restriction to the diagonal is the flat trivial bundle with a natural trivialisation obtained by sending \(u\otimes\overline{v}\in L_{x}\otimes\overline{L}_{x}\) to the scalar product of \(u\) and \(v\). In the sequel we will use a particular extension of this trivialisation.
**Lemma 3.1**.: _There exist a tubular neighborhood \(V\) of the diagonal of \(M^{2}\) and \(F\in\mathcal{C}^{\infty}(V,L\boxtimes\overline{L})\) such that \(|F|=1\) on \(V\) and_
\[F(x,x)=1,\quad\nabla F(x,x)=0,\quad\nabla_{Y}\nabla_{Y}F(x,x)=0\qquad\forall x\in M\]
_for any vector field \(Y\) of \(M^{2}\) having the form \(Y(x,y)=(X(x),-X(y))\) with \(X\in\mathcal{C}^{\infty}(M,TM)\). If \((V^{\prime},F^{\prime})\) satisfies the same conditions, then \(F=F^{\prime}\exp(i\psi)\) where \(\psi\in\mathcal{C}^{\infty}(V\cap V^{\prime},\mathbb{R})\) vanishes to third order along the diagonal_
Proof.: Consider more generally a closed submanifold \(N\) of \(M\), a flat section \(E\) of \(L|_{N}\), and a subbundle \(\mathcal{D}\) of \(TM|_{N}\) such that \(\mathcal{D}\oplus TN=TM|_{N}\). Then we can extend \(E\) to a neighborhood of \(N\) in such a way that it satisfies on \(N\): \(\nabla E=0\) and \(\nabla_{X}\nabla_{X}E=0\) for any vector field \(X\) of \(M\) such that \(X|_{N}\) is a section of \(\mathcal{D}\). To see this, introduce a coordinate chart \((U,x_{i},y_{j})\) of \(M\) and a unitary frame \(s:U\to L\) such that \(N\cap U=\{x_{1}=\ldots=x_{k}=0\}\), \((\partial_{x_{1}},\ldots,\partial_{x_{k}})\) is a frame of \(\mathcal{D}\) and \(s\) extends \(E\). Then the section we are looking for is \(e^{i\varphi}s\) with
\[\varphi=\sum_{i=1}^{k}\beta_{i}(0,y)x_{i}+\tfrac{1}{2}\sum_{i,j=1}^{k}( \partial_{x_{j}}\beta_{i})(0,y)x_{i}x_{j}+\mathcal{O}(|x|^{3})\]
where the \(\beta_{i}\)'s are the functions in \(\mathcal{C}^{\infty}(U)\) such that \(\nabla_{\partial_{x_{i}}}s=\tfrac{1}{i}\beta_{i}\,s\). Applying this to \(M^{2}\), \(L\boxtimes\overline{L}\) and \(\operatorname{diag}M\) instead of \(M\), \(L\), \(N\) concludes the proof.
**Definition 3.2**.: _A semiclassical Heisenberg pseudodifferential operator of order \(m\in\mathbb{R}\) is a family of operators \((P_{k}:\mathcal{C}^{\infty}(M,L^{k})\to\mathcal{C}^{\infty}(M,L^{k}),\ k\in \mathbb{N})\)_
_whose Schwartz kernels have the form_
\[F^{k}(x,y)\phi(x,y)K_{k^{-\frac{1}{2}}}(x,y)+\mathcal{O}_{\infty}(k^{-\infty}) \tag{20}\]
_where \((V,F)\) satisfies the conditions of Lemma 3.1, \(\phi\in\mathcal{C}_{0}^{\infty}(V)\) is equal to \(1\) on a neighborhood of the diagonal and \((K_{h},\ h\in(0,1])\) is the Schwartz kernel family of a semiclassical pseudodifferential operator \((Q_{h})\in\Psi^{m}_{\mathrm{sc}}(M)\)._
_The principal symbol \(\sigma(P)\) of \((P_{k})\) is defined as the principal symbol of \((Q_{h})\)._
We denote by \(\Psi^{m}_{\mathrm{Heis}}(L,\nabla)\) the space of semiclassical Heisenberg pseudodifferential operators of order \(m\). For any \(P\in\Psi^{m}_{\mathrm{Heis}}(L,\nabla)\) and any fixed \(k\), \(P_{k}\) is a pseudodifferential operator of order \(m\), so \(P_{k}\) acts on \(\mathcal{C}^{\infty}(M,L^{k})\) and on \(\mathcal{C}^{-\infty}(M,L^{k})\). The definition clearly does not depend on the choice of the cutoff function \(\phi\). Nor does it depend on the choice of \(F\), as will be explained below. To compare with the twisted pseudodifferential operators, observe first that the section \(F\) in (17) satisfies a weaker condition than in Definition 3.2, and second that in (17) the Schwartz kernel of \(Q\) is evaluated at \(h=k^{-1}\), whereas in (20) we have \(h=k^{-1/2}\).
By defining the Heisenberg pseudodifferential operators globally in terms of scalar pseudodifferential operators as in Definition 3.2, instead of through the local oscillatory integrals (9), we avoid the usual discussions of coordinate changes and of the principal symbol, and we easily deduce the following three facts:
* If \(P\in\Psi^{m}_{\mathrm{Heis}}(L,\nabla)\), then \(\sigma(P)=0\) if and only if \(P\in k^{-\frac{1}{2}}\Psi^{m-1}_{\mathrm{Heis}}(L,\nabla)\).
* \(\bigcap_{m}k^{-\frac{m}{2}}\Psi^{-m}_{\mathrm{Heis}}(L,\nabla)=k^{-\infty} \Psi^{-\infty}(L)\).
* If \(P\in\Psi^{m}_{\mathrm{Heis}}(L)\) and \(\rho\in\mathcal{C}^{\infty}(M^{2})\) is such that \(\operatorname{supp}\rho\cap\operatorname{diag}M=\emptyset\), then the kernel \((x,y)\to\rho(x,y)P_{k}(x,y)\) is smooth and negligible.
Unfortunately, Definition 3.2 does not allow us to deduce the composition properties of the Heisenberg operators from those of the semiclassical pseudodifferential operators.
By Lemma 3.1, \(F\) is uniquely defined modulo a factor \(e^{i\psi}\) with \(\psi\in\mathcal{C}^{\infty}(U^{2})\) vanishing to third order along the diagonal. Write
\[\psi=\sum_{|\alpha|=3}\psi_{\alpha}(x,y)(x-y)^{\alpha}\]
with smooth coefficients \(\psi_{\alpha}\). For any symbol \(a\in S^{\infty}(U^{2},\mathbb{R}^{n})\), let \(I(a)\) be the oscillatory integral
\[I(a)(h,x,y)=\int e^{ih^{-1}\xi\cdot(x-y)}a(h,x,y,\xi)\ d\xi.\]
**Lemma 3.3**.: _For all \(a\in S^{m}(U^{2},\mathbb{R}^{n})\), \(e^{ih^{-2}\psi(x,y)}I(a)(h,x,y)=I(b)(h,x,y)\) with \(b\in S^{m}(U^{2},\mathbb{R}^{n})\) having the asymptotic expansion_
\[b=\sum_{\ell=0}^{\infty}\frac{h^{\ell}}{\ell!}L^{\ell}(a),\qquad\text{ with }\quad L=\sum_{|\alpha|=3}\psi_{\alpha}(x,y)\partial_{\xi}^{\alpha}. \tag{21}\]
_In particular, if \(a\in S^{m}_{\rm sc}(U^{2},\mathbb{R}^{n})\), then \(b\in S^{m}_{\rm sc}(U^{2},\mathbb{R}^{n})\)._
This proves that Definition 3.2 does not depend on the choice of \(F\). Moreover, since \(b=a+\mathcal{O}(h)\), the principal symbol of \((Q_{k})\) is also independent of \(F\).
Proof.: By integration by parts, \((x_{i}-y_{i})I(a)=ihI(\partial_{\xi_{i}}a)\), so \(ih^{-2}\psi I(a)=hI(L(a))\) with \(L\) given by (21). By the Taylor formula, it follows that
\[e^{ih^{-2}\psi}I(a)=\sum_{\ell=0}^{N}\frac{h^{\ell}}{\ell!}I(L^{\ell}(a))+h^{ N+1}r_{N}I(L^{N+1}(a))\]
with
\[r_{N}(h,x,y)=\frac{1}{N!}\int_{0}^{1}e^{ith^{-2}\psi(x,y)}(1-t)^{N}\;dt.\]
Observe that \(r_{N}\) is smooth and \(h^{2|\alpha|}\partial_{x,y}^{\alpha}r_{N}=\mathcal{O}(1)\). Furthermore, since \(I(a)\) is a genuine integral for \(m<-n\), by differentiating under the integral sign, when \(k\in\mathbb{N}\) satisfies \(k+m<-n\), \(I(a)\in\mathcal{C}^{k}\) and for any \(|\alpha|=k\), \(h^{|\alpha|}\partial_{x,y}^{\alpha}I(a)=O(1)\). Since \(L^{N+1}(a)\) is a symbol of order \(m-3(N+1)\), it follows that for any \(k\), when \(N\) is sufficiently large, \(r_{N}I(L^{N+1}(a))\) is of class \(\mathcal{C}^{k}\) and for any \(|\alpha|=k\), \(h^{2|\alpha|}\partial_{x,y}^{\alpha}(r_{N}I(L^{N+1}(a)))=\mathcal{O}(1)\).
So for any \(b\in S^{m}(U^{2},\mathbb{R}^{n})\) having the asymptotic expansion (21), we have that \(e^{ih^{-2}\psi}I(a)=I(b)+\rho\) with \(\rho\in h^{\infty}\mathcal{C}^{\infty}(U^{2})\), and we can absorb \(\rho\) in \(I(b)\) by modifying \(b\) by a summand in \(h^{\infty}S^{-\infty}(U^{2},\mathbb{R}^{n})\).
Let us explain how we recover the local expression (9) of the introduction. Let \((U,x_{i})\) be a local chart of \(M\) and \(s\in\mathcal{C}^{\infty}(U,L)\) such that \(|s|=1\) on \(U\). Let \(\beta\in\Omega^{1}(U,\mathbb{R})\) be the connection form, \(\nabla s=\frac{1}{i}\beta\otimes s\). Then we easily check that the section \(F\in\mathcal{C}^{\infty}(U^{2},L\boxtimes\overline{L})\) given by
\[F(x,y)=e^{i\beta\big{(}\frac{x+y}{2}\big{)}\cdot(x-y)}s(x)\otimes\overline{s}(y) \tag{22}\]
satisfies the condition of Lemma 3.1. Consequently, the Schwartz kernel of an operator in \(\Psi^{m}_{\rm Heis}(L)\) has the form \(K_{k}s^{k}\boxtimes\overline{s}^{k}\) on \(U^{2}\) with
\[K_{k}(x,y)=e^{ik\beta\big{(}\frac{x+y}{2}\big{)}\cdot(x-y)}\Big{(}\frac{ \sqrt{k}}{2\pi}\Big{)}^{n}\int e^{i\sqrt{k}\;\xi\cdot(x-y)}a(k^{-\frac{1}{2}},x,y,\xi)\;d\xi \tag{23}\]
with \(a\in S^{m}_{\rm sc}(U^{2},\mathbb{R}^{n})\). Of course, we can assume that \(a\) does not depend on \(y\) (resp. \(x\)) or that it is on the Weyl form \(a(h,x,y,\xi)=b(h,\frac{1}{2}(x+y),\xi)\) with \(b\in S^{m}_{\rm sc}(U,\mathbb{R}^{n})\). In this last case, we recover exactly the expression (9).
Another interesting expression is obtained by rescaling the variable \(\xi\) by a square root of \(k\) in (23) and absorbing the \(\beta\) factor into the amplitude:
\[\begin{split} K_{k}(x,y)&=e^{ik\beta\big{(}\frac{ x+y}{2}\big{)}\cdot(x-y)}\Big{(}\frac{k}{2\pi}\Big{)}^{n}\int e^{ik\;\xi\cdot(x-y)}a(k ^{-\frac{1}{2}},x,y,\sqrt{k}\;\xi)\;d\xi\\ &=\Big{(}\frac{k}{2\pi}\Big{)}^{n}\int e^{ik\;\xi\cdot(x-y)}a \big{(}k^{-\frac{1}{2}},x,y,\sqrt{k}\big{(}\xi-\beta\big{(}\frac{x+y}{2}\big{)} \big{)}\big{)}\;d\xi\end{split} \tag{24}\]
Assume that \(a\) is on the Weyl form, \(a(h,x,y,\xi)=b(h,\frac{1}{2}(x+y),\xi)\), then we have that
\[K_{k}(x,y)=\Big{(}\frac{k}{2\pi}\Big{)}^{n}\int e^{ik\;\xi\cdot(x-y)}\tilde{b} \big{(}k^{-1},\tfrac{1}{2}(x+y),\xi\big{)}\;d\xi \tag{25}\]
where \(\tilde{b}(h,x,\xi)=b(\sqrt{h},x,h^{-\frac{1}{2}}(\xi-\beta(x)))\). So we recognise a semiclassical pseudodifferential operator at \(k=h^{-1}\) with a Weyl symbol \(\tilde{b}\).
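As a minimal consistency check with the local expression of Section 1.1, take \(b(h,x,\xi)=\frac{1}{2}|\xi|^{2}\). Then
\[\tilde{b}(h,x,\xi)=\frac{1}{2h}|\xi-\beta(x)|^{2},\]
which is, up to lower order terms, the Weyl symbol of \(k^{-1}\Delta_{k}=k\,(k^{-2}\Delta_{k})\) in the frame \(s^{k}\), viewed as a semiclassical operator with \(h=k^{-1}\). This is consistent with the role played by the fiberwise symbol \(\frac{1}{2}|\xi|_{x}^{2}\) in Theorem 1.1 and Remark 1.2.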
We call \(\tilde{b}\) the _effective_ symbol. As we will see, it satisfies some exotic estimates. Let us introduce the symbol semi-norms of \(S^{m}(U,\mathbb{R}^{n})\),
\[\|a\|_{m,\ell,K}=\max_{|\alpha|+|\beta|\leqslant\ell}\sup_{x\in K,\xi\in \mathbb{R}^{n}}|\partial^{\alpha}_{x}\partial^{\beta}_{\xi}a(x,\xi)|\langle \xi\rangle^{-m+|\beta|}\]
where \(K\) is a compact subset of \(U\).
**Lemma 3.4**.: _For any \(m\in\mathbb{R}\), \(\alpha,\beta\in\mathbb{N}^{n}\) and compact subset \(K\) of \(U\), there exists \(C>0\) such that for any \(a\in\mathcal{C}^{\infty}(U\times\mathbb{R}^{n})\), the function \(\tilde{a}(h,x,\xi)=a(x,h^{-\frac{1}{2}}(\xi-\beta(x)))\) satisfies_
\[|\partial^{\alpha}_{x}\partial^{\beta}_{\xi}\tilde{a}(h,x,\xi)| \leqslant C\|a\|_{m,\ell,K}h^{-\frac{1}{2}(m_{+}+\ell)}\langle\xi\rangle^{m-| \beta|}\]
_for all \(0<h\leqslant 1\), \(x\in K\), \(\xi\in\mathbb{R}^{n}\) with \(\ell=|\alpha|+|\beta|\), \(m_{+}=\max(m,0)\)._
Proof.: For any \(0<\epsilon\leqslant 1\), we have \(\langle\eta\rangle\leqslant\langle\epsilon^{-1}\eta\rangle\leqslant\epsilon^{- 1}\langle\eta\rangle\). Furthermore, if \(x\in K\), \(C^{-1}\langle\xi\rangle\leqslant\langle\xi-\beta(x)\rangle\leqslant C\langle\xi\rangle\). So for any \(m\in\mathbb{R}\),
\[\langle\epsilon^{-1}(\xi-\beta(x))\rangle^{m}\leqslant C_{m}\epsilon^{-m_{+} }\langle\xi\rangle^{m}. \tag{26}\]
The derivatives of \(\tilde{a}_{\epsilon}(x,\xi)=a(x,\epsilon^{-1}(\xi-\beta(x)))\) have the form \(\partial^{\alpha}_{x}\partial^{\beta}_{\xi}\tilde{a}_{\epsilon}=\tilde{b}_{\epsilon}\) with
\[b=\sum_{\alpha^{\prime},\beta^{\prime}}\epsilon^{-|\beta^{\prime}|}f_{\alpha^{ \prime},\beta^{\prime}}\partial^{\alpha^{\prime}}_{x}\partial^{\beta^{\prime}}_ {\xi}a \tag{27}\]
where the coefficients \(f_{\alpha^{\prime},\beta^{\prime}}\) are in \(\mathcal{C}^{\infty}(U)\) and don't depend on \(a\), and we sum over the multi-indices satisfying \(\beta\leqslant\beta^{\prime}\) and \(|\alpha^{\prime}|+|\beta^{\prime}|\leqslant|\alpha|+|\beta|\). So for \(x\in K\),
\[|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}\tilde{a}_{\epsilon}(x,\xi)| \leqslant C\sum_{\alpha^{\prime},\beta^{\prime}}\epsilon^{-|\beta^{\prime}|}\|a\|_{m,\ell,K}\langle\epsilon^{-1}(\xi-\beta(x))\rangle^{m-|\beta^{\prime}|}\quad\text{ by (27)}\] \[\leqslant C\sum_{\alpha^{\prime},\beta^{\prime}}\epsilon^{-|\beta^{\prime}|}\|a\|_{m,\ell,K}\langle\xi\rangle^{m-|\beta^{\prime}|}\epsilon^{-m_{+}}\quad\text{ by (26)}\] \[\leqslant C\epsilon^{-(|\alpha|+|\beta|)}\|a\|_{m,\ell,K}\langle\xi\rangle^{m-|\beta|}\epsilon^{-m_{+}}\]
because \(|\beta|\leqslant|\beta^{\prime}|\leqslant|\alpha|+|\beta|\) and we conclude by setting \(\epsilon=h^{\frac{1}{2}}\).
So \(\tilde{b}\) belongs to the class \(S_{\delta}\) with exponent \(\delta=1/2\), that is, at each derivative we lose a factor \(h^{-\delta}\). Recall that \(\delta=1/2\) is the critical exponent: the space of pseudodifferential operators with symbol in \(S_{\delta}\) is an algebra for \(\delta\in[0,1/2]\), but the standard asymptotic expansions of the symbolic calculus only hold for \(\delta\in[0,1/2[\), cf. for instance [10, Proposition 7.7]. As we will see in Section 5 and in [5], the Heisenberg pseudodifferential operators form an algebra and have an associated symbol calculus, but this cannot be deduced from the usual composition rules of pseudodifferential operators. Nevertheless, Lemma 3.4 has some useful consequences, the first of them being the \(L^{2}\) mapping property. Recall the definition (18) of the \(L^{2}\)-norm with a volume element independent of \(k\).
**Theorem 3.5**.: _For any \(Q\in\Psi^{0}_{\mathrm{Heis}}(L)\), there exists \(C>0\) such that for any \(k\), \(\|Q_{k}\|_{\mathcal{L}(L^{2}(M,L^{k}))}\leqslant C\)._
Proof.: Introduce a finite atlas \((U_{i},\phi_{i})\) of \(M\) with functions \(\varphi_{i},\psi_{i}\in\mathcal{C}^{\infty}_{0}(U_{i})\) such that \(\sum\varphi_{i}=1\) and \(\operatorname{supp}\varphi_{i}\subset\operatorname{int}\{\psi_{i}=1\}\). Write
\[P=\sum\psi_{i}P\varphi_{i}+Q. \tag{28}\]
Since \(\sum\psi_{i}(x)\varphi_{i}(y)=1\) when \(x\) is close to \(y\), \(Q\) is in \(k^{-\infty}\Psi^{-\infty}\). Identifying \(U_{i}\) with \(\phi_{i}(U_{i})\), the Schwartz kernel of \(\psi_{i}P\varphi_{i}\) has the form (25) with a symbol \(\tilde{b}_{i}\) satisfying
\[|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}\tilde{b}_{i}(h,x,\xi)|\leqslant h ^{-\frac{1}{2}(|\alpha|+|\beta|)}C_{\alpha,\beta}.\]
By [10, Theorem 7.11], \(\psi_{i}P\varphi_{i}=\mathcal{O}(1):L^{2}(\mathbb{R}^{n})\to L^{2}( \mathbb{R}^{n})\).
Another consequence of Lemma 3.4 is the following important fact.
**Lemma 3.6**.: \(k^{-\infty}\Psi^{-\infty}(L)\) _is a bilateral ideal of \(\Psi_{\mathrm{Heis}}(L,\nabla)\)._
Proof.: Consider a pseudodifferential operator \(A(h)\) of \(\mathbb{R}^{n}\) with the Schwartz kernel \((2\pi h)^{-n}\int e^{ih^{-1}\xi(x-y)}a(h,x,y,\xi)d\xi\) where the amplitude \(a(h,x,y,\xi)\) is zero if \(|x|+|y|\geqslant C\) and satisfies \(|\partial_{x,y}^{\alpha}a(h,x,y,\xi)|\leqslant h^{-|\alpha|}C_{\alpha}\langle \xi\rangle^{m}\) for any \(\alpha\). Then, with the usual regularisation of oscillatory integrals by integration by parts, one proves that
\[h^{|\alpha|}\sup|\partial^{\alpha}A(h)u|\leqslant C^{\prime}_{\alpha}\max_{| \beta|\leqslant m+n+1+|\alpha|}\sup h^{|\beta|}|\partial^{\beta}u|,\qquad \forall h\in(0,1]\]
So for any family of operators \(B(h):\mathcal{C}^{\infty}(\mathbb{R}^{n})\to\mathcal{C}^{\infty}(\mathbb{R}^{ n})\), \(h\in(0,1]\), if \(B(h)\) has a compactly supported smooth kernel in \(\mathcal{O}_{\infty}(h^{\infty})\) then the same holds for \(A(h)\circ B(h)\). Applying this to the \(\psi_{i}P\varphi_{i}\) of (28) proves the result.
To end this section, let us extend the mapping property to the Sobolev space. We denote by \(\|\cdot\|_{m}\) the \(m\)-th semiclassical Sobolev norm of sections of \(L^{k}\), defined as in Section 2.
**Theorem 3.7**.: _For any \(m,p\in\mathbb{R}\) and any \(Q\in\Psi^{m}_{\mathrm{Heis}}(L)\), there exists \(C>0\) such that for any \(k\),_
\[\|Q_{k}u\|_{p}\leqslant Ck^{\frac{1}{2}m_{+}}\|u\|_{p+m},\qquad\forall u\in \mathcal{C}^{\infty}(M,L^{k}).\]
Since for any \(k\), \(Q_{k}\) is a pseudodifferential operator of order \(m\) of \(L^{k}\), we already know that \(Q_{k}\) is continuous \(H^{p+m}(M,L^{k})\to H^{p}(M,L^{k})\). Theorem 3.7 gives a uniform estimate with respect to \(k\).
Proof.: It suffices to prove that for any \(E\in\Psi^{p-m}_{\mathrm{tsc}}(L)\) and \(E^{\prime}\in\Psi^{-p}_{\mathrm{tsc}}(L)\) one has
\[E^{\prime}_{k}P_{k}E_{k}=\mathcal{O}(k^{\frac{1}{2}m_{+}}):L^{2}(M,L^{k})\to L ^{2}(M,L^{k}). \tag{29}\]
For this it suffices to prove that for any chart domain \(U\) of \(M\) and functions \(\rho_{j}\in\mathcal{C}^{\infty}_{0}(U)\), \(j=1,...,4\), one has
\[\rho_{1}E^{\prime}_{k}\,\rho_{2}P_{k}\,\rho_{3}E_{k}\,\rho_{4}=\mathcal{O}(k^ {\frac{1}{2}m_{+}}):L^{2}(M,L^{k})\to L^{2}(M,L^{k}). \tag{30}\]
To show that (30) implies (29), write \(P\) in the form (28), \(E^{\prime}\psi_{i}=\tilde{\psi}_{i}E^{\prime}\psi_{i}+(1-\tilde{\psi}_{i})E^{\prime}\psi_{i}\) with \(\tilde{\psi}_{i}\in\mathcal{C}^{\infty}_{0}(U_{i})\) such that \(\mathrm{supp}\,\psi_{i}\subset\mathrm{int}\{\tilde{\psi}_{i}=1\}\) and similarly for \(E\), and use that \(k^{-\infty}\Psi^{-\infty}(L)\) is an ideal of both \(\Psi^{\infty}_{\mathrm{Heis}}(L,\nabla)\) and \(\Psi^{\infty}_{\mathrm{tsc}}(L)\).
As in [10, Definition 7.5], for \(\delta\in[0,1]\) and \(m:\mathbb{R}^{n}\to[0,\infty)\) an order function, let \(S_{\delta}(m)\) be the space of families \((a(h),\ h\in(0,1])\) of \(\mathcal{C}^{\infty}(\mathbb{R}^{n})\) such that \(|\partial^{\alpha}a(h,x)|\leqslant C_{\alpha}h^{-\delta|\alpha|}m(x)\). Identify \(U\) with \(\phi(U)\) and denote by \(\mathrm{Op}_{k}(\tilde{b})\) the operator with kernel (25).
Then for any \(\rho,\rho^{\prime}\in\mathcal{C}_{0}^{\infty}(U)\), \(\rho E_{k}^{\prime}\,\rho^{\prime}\), \(k^{-\frac{1}{2}m_{+}}\rho P_{k}\,\rho^{\prime}\) and \(\rho E_{k}\,\rho^{\prime}\) are equal to \(\operatorname{Op}_{k}(\tilde{b})\) with \(\tilde{b}\) in \(S_{0}(\langle\xi\rangle^{-p})\), \(S_{1/2}(\langle\xi\rangle^{m})\) and \(S_{0}(\langle\xi\rangle^{p-m})\) respectively. By [10, Proposition 7.7, Theorem 7.9], their product is equal to \(\operatorname{Op}_{k}(c)\) with \(c\in S_{1/2}(1)\), which proves (30) by [10, Theorem 7.11].
Actually, Theorem 3.7 can be improved if we use Sobolev norms associated to the covariant derivative \(\nabla\) instead of the semiclassical Sobolev norms. For instance, for any \(m\in\mathbb{N}\), any \(Q\in\Psi_{\operatorname{Heis}}^{-m}(L,\nabla)\) and any vector fields \(X_{1}\),..., \(X_{m}\) of \(M\), we will see in Proposition 5.1 that \(P=(k^{-m/2}\nabla_{X_{1}}\dots\nabla_{X_{m}}Q_{k})\) belongs to \(\Psi_{\operatorname{Heis}}^{0}(L,\nabla)\), so by Theorem 3.5,
\[k^{-m/2}\nabla_{X_{1}}\dots\nabla_{X_{m}}Q_{k}=\mathcal{O}(1):L^{2}(M,L^{k}) \to L^{2}(M,L^{k}) \tag{31}\]
To compare, Theorem 3.7 only implies that the norm of \(P_{k}\) in \(\mathcal{L}(L^{2}(M,L^{k}))\) is in \(\mathcal{O}(k^{m/2})\). The generalisation of (31) to fractional exponents \(m\) not necessarily nonnegative will be given in [5].
## 4 A product associated to an antisymmetric bilinear form
Let \(E\) be an \(n\)-dimensional real vector space and \(A\in\wedge^{2}E^{*}\). Later, we will choose \(E=T_{x}M\) with \(A=\omega(x)\). Introduce the covariant derivative of \(E\)
\[\nabla^{A}=d+\tfrac{1}{i}\beta,\qquad\text{ where }\beta\in\Omega^{1}(E, \mathbb{R}),\quad\beta(x)(Y)=\tfrac{1}{2}A(x,Y). \tag{32}\]
The curvature of \(\nabla^{A}\) is \(\frac{1}{i}A\), that is \([\nabla^{A}_{X},\nabla^{A}_{Y}]=\frac{1}{i}A(X,Y)\) for any \(X,Y\in E\). We will define for any tempered distribution \(g\in\mathcal{S}^{\prime}(E^{*})\) an operator \(g(\frac{1}{i}\nabla^{A})\).
We assume first that \(E=\mathbb{R}^{n}\). For any \(g\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\), we denote by \(\widehat{g}\) and \(g^{\vee}\) its Fourier transform and inverse Fourier transform, with the normalisation
\[\widehat{g}(\xi)=\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}g(x)\;dx,\qquad g^{\vee} (x)=(2\pi)^{-n}\,\widehat{g}(-x).\]
Let \(g(\frac{1}{i}\partial)\) be the operator from \(\mathcal{S}(\mathbb{R}^{n})\) to \(\mathcal{S}^{\prime}(\mathbb{R}^{n})\) such that \(g(\frac{1}{i}\partial)u=v\) if and only if \(g(\xi)\widehat{u}(\xi)=\widehat{v}(\xi)\). The Schwartz kernel of \(g(\frac{1}{i}\partial)\) is \(g^{\vee}(x-y)\).
Then for any antisymmetric bilinear form \(A\) of \(\mathbb{R}^{n}\), define \(g(\frac{1}{i}\nabla^{A})\) as the operator with Schwartz kernel
\[K_{g}(x,y)=e^{-\frac{i}{2}A(x,y)}g^{\vee}(x-y). \tag{33}\]
Since \(g^{\vee}(x-y)\) is a tempered distribution of \(\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{y}\), the same holds for \(K_{g}\), so \(g(\frac{1}{i}\nabla^{A})\) is continuous from \(\mathcal{S}(\mathbb{R}^{n})\) to \(\mathcal{S}^{\prime}(\mathbb{R}^{n})\). We claim that this definition
has an intrinsic meaning for \(A\in\wedge^{2}E^{*}\) if we consider that \(g\in\mathcal{S}^{\prime}(E^{*})\) and \(g(\frac{1}{i}\nabla^{A})\) is an operator \(\mathcal{S}(E)\to\mathcal{S}^{\prime}(E)\). One way to see this is to write for \(g\) and \(u\) in \(\mathcal{S}(\mathbb{R}^{n})\)
\[(g(\tfrac{1}{i}\nabla^{A})u)(x)=(2\pi)^{-n}\int_{\mathbb{R}^{n}\times\mathbb{ R}^{n}}e^{-\tfrac{i}{2}A(x,y)+i\xi\cdot(x-y)}g(\xi)u(y)\;dy\,d\xi \tag{34}\]
and to notice that the product \(\xi\cdot(x-y)\) is well-defined for \(\xi\in E^{*}\), \(x,y\in E\), and the measure \(dy\,d\xi\) can be interpreted as the canonical volume form of \(E\times E^{*}\simeq\mathbb{R}^{n}_{y}\times\mathbb{R}^{n}_{\xi}\).
Assume again that \(E\simeq\mathbb{R}^{n}_{x}\), \(E^{*}\simeq\mathbb{R}^{n}_{\xi}\) and let
\[\nabla^{A}_{j}:=\nabla^{A}_{\partial_{x_{j}}}=\partial_{x_{j}}+\tfrac{1}{2i}{ \sum_{k}}x_{k}A_{kj} \tag{35}\]
where \((A_{ij})\) is the matrix of \(A\), so \(A_{ij}=A(e_{i},e_{j})\) with \((e_{i})\) the canonical basis of \(\mathbb{R}^{n}\).
**Lemma 4.1**.: _For any \(f\in\mathcal{S}^{\prime}(E^{*})\), we have \(\tfrac{1}{i}\nabla^{A}_{j}\circ f(\tfrac{1}{i}\nabla^{A})=(\xi_{j}\sharp_{A}f)( \tfrac{1}{i}\nabla^{A})\) where_
\[\xi_{j}\sharp_{A}f=(\xi_{j}+\tfrac{i}{2}{\sum_{k}}A_{jk}\partial_{\xi_{k}})f. \tag{36}\]
Proof.: Simply use the identity
\(\tfrac{1}{i}(\partial_{x_{j}}+\tfrac{1}{2i}{\sum_{k}}x_{k}A_{kj})e^{-\tfrac{i }{2}A(x,y)+i\xi\cdot(x-y)}=(\xi_{j}+\tfrac{1}{2i}\sum_{k}A_{jk}\partial_{\xi_{ k}})e^{-\tfrac{i}{2}A(x,y)+i\xi\cdot(x-y)}\) in (34) and integrate by parts with respect to the variables \(\xi_{k}\).
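For instance, taking \(f=\xi_{l}\) in (36) and using (40) gives

\[\xi_{j}\sharp_{A}\xi_{l}=\xi_{j}\xi_{l}+\tfrac{i}{2}A_{jl},\qquad\xi_{j}\sharp_{A}\xi_{l}-\xi_{l}\sharp_{A}\xi_{j}=iA_{jl},\]

so that Lemma 4.1 recovers the commutation relation \([\tfrac{1}{i}\nabla^{A}_{j},\tfrac{1}{i}\nabla^{A}_{l}]=iA_{jl}\), in agreement with the curvature identity \([\nabla^{A}_{X},\nabla^{A}_{Y}]=\tfrac{1}{i}A(X,Y)\).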
The reason for the notation \(g(\tfrac{1}{i}\nabla^{A})\) is that when \(g\) is a monomial, \(g(\tfrac{1}{i}\nabla^{A})\) is merely a symmetrization of covariant derivatives. The precise result is the following proposition which is not really needed in the sequel. Notice first that for \(g\equiv 1\), \(g(\tfrac{1}{i}\nabla^{A})=\operatorname{id}\) as a direct consequence of the definition.
**Proposition 4.2**.: _For any \(N\geqslant 1\) and \(X_{1},\dots X_{N}\in E\), if \(g=\prod_{i=1}^{N}f_{i}\) with \(f_{i}(\xi)=\xi\cdot X_{i}\), then_
\[g(\tfrac{1}{i}\nabla^{A})=\frac{(-i)^{N}}{N!}\sum_{\sigma\in\mathfrak{S}_{N}} \nabla^{A}_{X_{\sigma(1)}}\dots\nabla^{A}_{X_{\sigma(N)}},\]
_where \(\mathfrak{S}_{N}\) is the group of permutations of \(1,\dots,N\)._
Proof.: For any \(X\in E\) we have by (36) with \(f(\xi)=\xi\cdot X\) that
\[\tfrac{1}{i}\nabla^{A}_{X}\circ g(\tfrac{1}{i}\nabla^{A})=(f\sharp_{A}\,g)( \tfrac{1}{i}\nabla^{A}) \tag{37}\]
where \(f\sharp_{A}\,g=(f+\tfrac{i}{2}A(X,\partial_{\xi}))g\). Choosing \(g=1\), we obtain the result for \(N=1\). We now proceed by induction over \(N\) and assume the result holds for \(N-1\) with \(N\geqslant 2\). Thus
\[\frac{(-i)^{N}}{N!}\sum_{\sigma\in\mathfrak{S}_{N}}\nabla^{A}_{X_{\sigma(1)}} \dots\nabla^{A}_{X_{\sigma(N)}}=\frac{1}{N}\sum_{j=1}^{N}\tfrac{1}{i}\nabla^{ A}_{X_{j}}\circ g_{j}(\tfrac{1}{i}\nabla^{A}_{X})\]
with \(g_{j}=g/f_{j}\). By (37), \(f_{j}\sharp_{A}g_{j}=f_{j}g_{j}+\tfrac{i}{2}\sum_{\ell\neq j}A(X_{j},X_{\ell})g_{j\ell}\) where \(g_{j\ell}=g/(f_{j}f_{\ell})\). So we have

\[\frac{(-i)^{N}}{N!}\sum_{\sigma\in\mathfrak{S}_{N}}\nabla^{A}_{X_{\sigma(1)}}\dots \nabla^{A}_{X_{\sigma(N)}}=g(\tfrac{1}{i}\nabla^{A})+\frac{i}{2N}\sum_{j\neq \ell}A(X_{j},X_{\ell})g_{j\ell}(\tfrac{1}{i}\nabla^{A})\]
and the sum in the right-hand side is zero because \(A\) is antisymmetric whereas \(g_{j\ell}=g_{\ell j}\).
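For instance, for \(N=2\) and \(g=(\xi\cdot X)(\xi\cdot Y)\), Proposition 4.2 reads

\[g(\tfrac{1}{i}\nabla^{A})=-\tfrac{1}{2}\big(\nabla^{A}_{X}\nabla^{A}_{Y}+\nabla^{A}_{Y}\nabla^{A}_{X}\big),\]

the symmetrization compensating for the fact that \(\nabla^{A}_{X}\) and \(\nabla^{A}_{Y}\) do not commute when \(A(X,Y)\neq 0\).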
Let \(\mathcal{D}^{\infty}_{\mathrm{is}}(A)\) be the filtered algebra generated by the covariant derivatives \(\nabla^{A}_{X}\) where \(X\in E\). More explicitly, \(\mathcal{D}^{\infty}_{\mathrm{is}}(A)=\cup_{m\in\mathbb{N}}\mathcal{D}^{m}_{ \mathrm{is}}(A)\) with
\[\mathcal{D}^{m}_{\mathrm{is}}(A)=\mathrm{Span}(\nabla^{A}_{X_{1}}\dots\nabla^ {A}_{X_{\ell}}/\ 0\leqslant\ell\leqslant m,X_{1},\dots X_{\ell}\in E).\]
Let \(\mathbb{C}_{\leqslant m}[E^{*}]\) be the space of complex polynomial functions of \(E^{*}\) with degree less than or equal to \(m\). By Lemma 4.1 and Proposition 4.2,
\[\mathcal{D}^{m}_{\mathrm{is}}(A)=\big{\{}f(\tfrac{1}{i}\nabla^{A}),\ f\in \mathbb{C}_{\leqslant m}[E^{*}]\big{\}}. \tag{38}\]
By Lemma 4.1 again, the left composition by any element of \(\mathcal{D}^{\infty}_{\mathrm{is}}(A)\) preserves \(\{g(\tfrac{1}{i}\nabla),\ g\in\mathcal{S}^{\prime}(E^{*})\}\). This defines the product
\[\sharp_{A}:\mathbb{C}[E^{*}]\times\mathcal{S}^{\prime}(E^{*})\to\mathcal{S}^{ \prime}(E^{*}),\qquad(f\sharp_{A}g)(\tfrac{1}{i}\nabla)=f(\tfrac{1}{i}\nabla) \circ g(\tfrac{1}{i}\nabla) \tag{39}\]
In the sequel we will use the basis \((\nabla^{\alpha},|\alpha|\leqslant m)\) of \(\mathcal{D}^{m}_{\mathrm{is}}(A)\), defined by \(\nabla^{\alpha}:=(\nabla^{A}_{1})^{\alpha(1)}\dots(\nabla^{A}_{n})^{\alpha(n)}\), \(\alpha\in\mathbb{N}^{n}\). Clearly
\[i^{-|\alpha|}\nabla^{\alpha}=f(\tfrac{1}{i}\nabla)\qquad\text{ with }f=\xi^{\sharp\alpha}:=\xi^{\sharp\alpha(1)}_{1}\sharp\dots\sharp\xi^{ \sharp\alpha(n)}_{n}, \tag{40}\]
where we have not written the \(A\) dependence to lighten the notations. Furthermore, if \(|\gamma|=m\),
\[\xi^{\sharp\gamma}\sharp_{A}f=\xi^{\gamma}f+\sum_{|\alpha|+|\beta|\leqslant m,\ |\alpha|\leqslant m-1}a_{\alpha,\beta,\gamma}\xi^{\alpha}\partial^{\beta}_{\xi}f \tag{41}\]
where the coefficients \(a_{\alpha,\beta,\gamma}\in\mathbb{C}\) depend smoothly (even polynomially) on \(A\), which follows from Lemma 4.1 again. Actually there is a closed formula for \(\sharp_{A}\), cf. (42), but (41) is enough for our purpose.
Introduce the space \(\Psi^{m}_{\mathrm{is}}(A):=\{f(\frac{1}{i}\nabla),\;f\in S^{m}_{\mathrm{ph}}(E ^{*})\}\). We have
\[\mathcal{D}^{m}_{\mathrm{is}}(A)\subset\Psi^{m}_{\mathrm{is}}(A),\qquad \mathcal{D}^{m}_{\mathrm{is}}(A)\circ\Psi^{p}_{\mathrm{is}}(A)\subset\Psi^{m+p }_{\mathrm{is}}(A),\]
the second assertion being a consequence of (41). This is all we need to define in the next section the symbolic calculus corresponding to the composition of differential Heisenberg operators with Heisenberg pseudodifferential operators. In the case where \(A=0\), \(\sharp_{A}\) is the usual pointwise product of functions. In Lemma 6.1, we will see that when \(A\) is nondegenerate so that \(n=2d\), \(\Psi^{\infty}_{\mathrm{is}}(A)\) is an algebra isomorphic to the Weyl algebra of \(\mathbb{R}^{2d}\).
In the companion paper [5], we will prove that for any \(A\), \(\Psi^{\infty}_{\mathrm{is}}(A)\) is a filtered algebra, that is \(\Psi^{m}_{\mathrm{is}}(A)\circ\Psi^{p}_{\mathrm{is}}(A)\subset\Psi^{m+p}_{ \mathrm{is}}(A)\). Moreover
\[(f\sharp_{A}\,g)(\xi)=\left[e^{\frac{i}{2}A(\partial_{\xi},\partial_{\eta})}f( \xi)g(\eta)\right]_{\xi=\eta} \tag{42}\]
So \(\Psi^{\infty}_{\mathrm{is}}(A)\) is isomorphic with the algebra called the \(A\)-isotropic algebra in [12, Chapter 4, section 2].
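For instance, formally expanding the exponential in (42) to first order gives

\[f\sharp_{A}\,g=fg+\tfrac{i}{2}\sum_{k,l}A_{kl}\,\partial_{\xi_{k}}f\,\partial_{\xi_{l}}g+\dots,\]

which for \(f=\xi_{j}\) reduces to (36); the series terminates when \(f\) is a polynomial.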
Recall the standard and Weyl quantization maps which associate to any \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{2n})\) the operators \(f(x,\frac{1}{i}\partial)\) and \(f^{w}(x,\frac{1}{i}\partial)\) with Schwartz kernels
\[(2\pi)^{-n}\int e^{i\xi\cdot(x-y)}f(x,\xi)\;d\xi\quad\text{ and }\quad(2\pi)^{-n} \int e^{i\xi\cdot(x-y)}f(\tfrac{1}{2}(x+y),\xi)\;d\xi\]
respectively.
**Lemma 4.3**.: _For any \(g\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\), we have_
\[g(\tfrac{1}{i}\nabla^{A})=f(x,\tfrac{1}{i}\partial)=f^{w}(x,\tfrac{1}{i}\partial)\]
_where \(f(x,\xi)=g(\xi-\beta(x))\) and \(\beta(x)\) is defined in (32), equivalently \(f(x,\xi)=g(\xi_{1}-\tfrac{1}{2}A(x,e_{1}),\dots,\xi_{n}-\tfrac{1}{2}A(x,e_{n}))\)._
Proof.: By the same computation as in (24),
\[\int e^{i\xi\cdot(x-y)}g(\xi-\beta(x))\;d\xi=e^{i\beta(x)(x-y)}\int e^{i\xi \cdot(x-y)}g(\xi)\;d\xi.\]
\(A\) being antisymmetric, \(\beta(x)(x-y)=-\tfrac{1}{2}A(x,y)\), which proves that \(g(\tfrac{1}{i}\nabla^{A})=f(x,\tfrac{1}{i}\partial)\). The same proof by using this time that \(\beta(\tfrac{1}{2}(x+y))(x-y)=-\tfrac{1}{2}A(x,y)\) shows that \(g(\tfrac{1}{i}\nabla^{A})=f^{w}(x,\tfrac{1}{i}\partial)\).
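For instance, for \(g(\xi)=\xi_{j}\), Lemma 4.3 gives \(f(x,\xi)=\xi_{j}-\tfrac{1}{2}\sum_{k}x_{k}A_{kj}\), whose standard quantization \(\tfrac{1}{i}\partial_{x_{j}}-\tfrac{1}{2}\sum_{k}x_{k}A_{kj}\) is precisely \(\tfrac{1}{i}\nabla^{A}_{j}\) by (35), in accordance with (40).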
## 5 Heisenberg differential operators
The algebra \(\mathcal{D}^{\infty}_{\mathrm{Heis}}(L,\nabla)\) of Heisenberg differential operators consists of families of differential operators
\[P=(P_{k}:\mathcal{C}^{\infty}(M,L^{k})\to\mathcal{C}^{\infty}(M,L^{k}),\ k\in \mathbb{N}), \tag{43}\]
satisfying some conditions given below. It includes the multiplications by any \(f\) in \(\mathcal{C}^{\infty}(M)\), the normalised covariant derivatives \(k^{-1/2}\nabla_{X}\) where \(X\) is any vector field of \(M\) and the multiplication by \(k^{-1/2}\). It is actually generated by these operators but it will be easier to use the following definition.
For any \(m\in\mathbb{N}\), \(\mathcal{D}^{m}_{\mathrm{Heis}}(L,\nabla)\) consists of the families \(P\) of differential operators of the form (43) such that for any coordinate chart \((U,x_{i})\) and frame \(s\in\mathcal{C}^{\infty}(U,L)\) with \(|s|=1\), we have on \(U\),
\[P_{k}=\sum_{\begin{subarray}{c}\ell\in\mathbb{N},\ \alpha\in\mathbb{N}^{n}\\ \ell+|\alpha|\leqslant m\end{subarray}}k^{-\frac{\ell}{2}}f_{\ell,\alpha} \tilde{\pi}^{\alpha} \tag{44}\]
where \(f_{\ell,\alpha}\in\mathcal{C}^{\infty}(U)\), \(\tilde{\pi}^{\alpha}=\tilde{\pi}^{\alpha(1)}_{1}\ldots\tilde{\pi}^{\alpha(n)}_ {n}\) and
\[\tilde{\pi}_{i}=\tfrac{1}{i\sqrt{k}}\nabla_{i}=\tfrac{1}{i\sqrt{k}}\partial_{i }-\sqrt{k}\beta_{i}\qquad\text{ with }\quad\nabla s=\tfrac{1}{i}\sum\beta_{i}dx_{i}\otimes s \tag{45}\]
Set
\[\mathcal{D}^{\infty}_{\mathrm{Heis}}(L,\nabla)=\bigcup_{m\in\mathbb{N}} \mathcal{D}^{m}_{\mathrm{Heis}}(L,\nabla).\]
In the sequel to lighten the notations, we omit \((L,\nabla)\) and write \(\mathcal{D}^{m}_{\mathrm{Heis}}\), \(\mathcal{D}^{\infty}_{\mathrm{Heis}}\). Since \([\tilde{\pi}_{i},\tilde{\pi}_{j}]=\tfrac{1}{i}(\partial_{i}\beta_{j}-\partial _{j}\beta_{i})\) and \([\tilde{\pi}_{i},f]=\tfrac{1}{i\sqrt{k}}\partial_{i}f\), we see that
\[\mathcal{D}^{m}_{\mathrm{Heis}}\circ\mathcal{D}^{p}_{\mathrm{Heis}}\subset \mathcal{D}^{m+p}_{\mathrm{Heis}}.\]
Notice that \(\mathcal{D}^{\infty}_{\mathrm{Heis}}\) has two filtrations: one ascending \(\mathcal{D}^{m}_{\mathrm{Heis}}\subset\mathcal{D}^{m+1}_{\mathrm{Heis}}\) and the other descending \(k^{-\ell/2}\mathcal{D}_{\mathrm{Heis}}\), \(\ell\in\mathbb{N}\). The generators \(f\), \(k^{-1/2}\nabla_{X}\) and \(k^{-1/2}\) have orders \(0\), \(1\), \(1\) for the former and \(0\), \(0\), \(1\) for the latter.
By the next proposition, \(\mathcal{D}^{\infty}_{\mathrm{Heis}}\) is contained in \(\Psi^{\infty}_{\mathrm{Heis}}(L,\nabla)\) and acts on it. Being Heisenberg pseudodifferential operators, the elements of \(\mathcal{D}^{\infty}_{\mathrm{Heis}}\) have a principal symbol, cf Definition 3.2. As we will see, the product of symbols is the fiberwise product \(\sharp\) of \(T^{*}M\) defined from \(\omega\). Precisely, we denote by \(\sharp_{x}\) the product
\[\mathbb{C}_{\leqslant m}[T^{*}_{x}M]\times S^{p}(T^{*}_{x}M)\to S^{m+p}(T^{*}_ {x}M) \tag{46}\]
associated to \(\omega(x)\in\wedge^{2}T^{*}_{x}M\) defined in (39). We will need as well the polynomials \(\xi^{\sharp_{x}\alpha}\) defined in (40).
**Proposition 5.1**.:
* _for any_ \(m\in\mathbb{N}\)_,_ \(\mathcal{D}^{m}_{\mathrm{Heis}}\subset\Psi^{m}_{\mathrm{Heis}}\)__
* _the principal symbols of the operators of_ \(\mathcal{D}^{m}_{\mathrm{Heis}}\) _are the functions_ \(f\in\mathcal{C}^{\infty}(T^{*}M)\) _such that_ \(f(x,\cdot)\in\mathbb{C}_{\leqslant m}[T^{*}_{x}M]\) _for any_ \(x\)_. If (_44_) holds on_ \(U\)_, then_ \(\sigma(P)(x,\xi)=\sum_{|\alpha|\leqslant m}f_{0,\alpha}(x)\xi^{\sharp_{x} \alpha}\)_._
* _for any_ \(P\in\mathcal{D}^{m}_{\mathrm{Heis}}\)_,_ \(\sigma(P)=0\) _if and only if_ \(P\in k^{-\frac{1}{2}}\mathcal{D}^{m-1}_{\mathrm{Heis}}\)_._
* _for any_ \(m\in\mathbb{N}\) _and_ \(p\in\mathbb{R}\)_,_ \(\mathcal{D}^{m}_{\mathrm{Heis}}\circ\Psi^{p}_{\mathrm{Heis}}\subset\Psi^{m+p}_ {\mathrm{Heis}}\)_. Furthermore_ \[\sigma(P\circ Q)(x,\cdot)=\sigma(P)(x,\cdot)\,\sharp_{x}\,\sigma(Q)(x,\cdot)\] _for any_ \(P\in\mathcal{D}^{m}_{\mathrm{Heis}}\)_,_ \(Q\in\Psi^{p}_{\mathrm{Heis}}\)_._
Proof.: We start with the computation of \(\tilde{\pi}_{i}\circ\mathrm{Op}_{\mathrm{Heis}}(a)\) where \(\mathrm{Op}_{\mathrm{Heis}}(a)\) is the operator with Schwartz kernel (24). Using (45), we get first that \(\tilde{\pi}_{i}\circ\mathrm{Op}_{\mathrm{Heis}}(a)=\mathrm{Op}_{\mathrm{Heis}} (b)\) with
\[b(h,x,y,\xi)=\left(h^{-1}\psi_{i}(x,y)+\xi_{i}+\tfrac{h}{i}\partial_{x_{i}} \right)a(h,x,y,\xi)\]
where \(\psi_{i}(x,y)=(\partial_{i}\beta)(\frac{1}{2}(x+y))\cdot(x-y)+\beta_{i}(\frac {1}{2}(x+y))-\beta_{i}(x)\). Taylor expanding along \(x=y\), we get
\[\psi_{i}(x,y)=\tfrac{1}{2}\sum_{j}\omega_{ij}(x)(x_{j}-y_{j})+\sum_{i,j}r_{ij }(x,y)(x_{i}-y_{i})(x_{j}-y_{j})\]
with \(\omega_{ij}=\partial_{x_{i}}\beta_{j}-\partial_{x_{j}}\beta_{i}\). Integrating by part, \(\tilde{\pi}_{i}\circ\mathrm{Op}_{\mathrm{Heis}}(a)=\mathrm{Op}_{\mathrm{Heis }}(c)\) with
\[c(h,x,y,\xi)=(\xi_{i}+\tfrac{i}{2}\sum_{j}\omega_{ij}(x)\partial_{\xi_{j}}+ \tfrac{h}{i}\partial_{x_{i}}-h\sum_{ij}r_{ij}(x,y)\partial_{\xi_{i}}\partial_{ \xi_{j}})a(h,x,y,\xi).\]
Notice that if \(a\in S^{m}_{\mathrm{sc}}(U^{2},\mathbb{R}^{n})\), then the same holds for \(c\). Furthermore, if \(a(h,x,x,\xi)=\sigma(x,\xi)+\mathcal{O}(h)\), then \(c(h,x,x,\xi)=(\xi_{i}\sharp\sigma)(x,\xi)+\mathcal{O}(h)\).
We claim that everything can be deduced easily from these preliminary observations. Starting from the fact that \(\mathrm{Op}_{\mathrm{Heis}}(1)\) is the identity, we deduce by induction on \(|\alpha|\) that \(\tilde{\pi}^{\alpha}=\mathrm{Op}_{\mathrm{Heis}}(a_{\alpha})\) with \(a_{\alpha}(h,x,x,\xi)=\xi^{\sharp\alpha}+\mathcal{O}(h)\). The first two assertions follow. The third assertion is a consequence of the fact that the \(\xi^{\sharp\alpha}|_{x}\), \(\alpha\in\mathbb{N}^{n}\) are linearly independent so that \(\sum f_{0,\alpha}\xi^{\sharp\alpha}=0\) implies that \(f_{0,\alpha}=0\). The last assertion follows again from the preliminaries by induction on \(m\).
## 6 Resolvent
Let \((F,\lambda)\) be a real symplectic vector space with dimension \(2d\). The Weyl product of the Schwartz space \(\mathcal{S}(F)\) is defined by
\[(a\circ_{\lambda}b)(\xi)=(\pi)^{-2d}\int e^{-2i\lambda(\eta,\zeta)}a(\xi+\eta)b (\xi+\zeta)\;d\mu_{F}(\eta)\;d\mu_{F}(\zeta)\]
where \(\mu_{F}=\lambda^{\wedge d}/d!\) is the Liouville measure of \(F\). For \(F=\mathbb{R}_{t}^{d}\times\mathbb{R}_{\tau}^{d}\) with \(\lambda(t,\tau;s,\varsigma)=\tau\cdot s-\varsigma\cdot t\), it is the composition law of the Weyl symbols of pseudodifferential operators of \(\mathbb{R}^{d}\), cf. for instance [16, page 152].
This product extends continuously from \(S^{m}(F)\times S^{p}(F)\) to \(S^{m+p}(F)\), preserving the subspaces of polyhomogeneous symbols. So the corresponding pseudodifferential operators, \(f^{w}(x,\frac{1}{i}\partial)\), with \(f\in S^{\infty}(\mathbb{R}^{2d})\), form an algebra, sometimes called the Shubin class or isotropic class. This algebra is one of the most studied in microlocal analysis, cf. [24, Chapter IV], [15], [21, Chapter 4], [12, Chapter 4], [25, Appendix A] for lecture note references.
The Weyl product appears naturally in our context as the product of the operators \(f(\frac{1}{i}\nabla^{A})\) defined in Section 4 when \(A\) is nondegenerate.
**Lemma 6.1**.: _If \(A\in\wedge^{2}E^{*}\) is nondegenerate, then for any \(f\), \(g\) in \(S^{\infty}(E^{*})\),_
\[f(\tfrac{1}{i}\nabla^{A})\circ g(\tfrac{1}{i}\nabla^{A})=(f\circ_{\lambda}g)( \tfrac{1}{i}\nabla^{A})\]
_where \(\lambda\) is the symplectic form of \(E^{*}\) dual to \(A\)._
Proof.: Introduce a symplectic basis \((e_{i},f_{i})\) of \((E,A)\) and denote by \((x_{i},y_{i})\) the associated linear coordinates, so that \(E=\mathbb{R}_{x}^{d}\times\mathbb{R}_{y}^{d}\). Then the operators
\[\tfrac{1}{i}\nabla_{e_{i}}=\tfrac{1}{i}\partial_{x_{i}}+\tfrac{1}{2}y_{i}, \quad\tfrac{1}{i}\nabla_{f_{i}}=\tfrac{1}{i}\partial_{y_{i}}-\tfrac{1}{2}x_{i },\quad\tfrac{1}{i}\partial_{y_{i}}+\tfrac{1}{2}x_{i},\quad\tfrac{1}{i} \partial_{x_{i}}-\tfrac{1}{2}y_{i}\]
satisfy the same commutation relations as the operators \(s_{i}\), \(\tfrac{1}{i}\partial_{s_{i}}\), \(t_{i}\), \(\tfrac{1}{i}\partial_{t_{i}}\) of \(\mathbb{R}_{s}^{d}\times\mathbb{R}_{t}^{d}\). So the linear isomorphism \(\Phi:\mathbb{R}^{4d}\to\mathbb{R}^{4d}\),
\[\Phi(x,\xi,y,\eta)=(\xi+\tfrac{1}{2}y,\eta-\tfrac{1}{2}x,\eta+\tfrac{1}{2}x, \xi-\tfrac{1}{2}y).\]
is a symplectomorphism. Its metaplectic representation \(U:L^{2}(E)\to L^{2}(\mathbb{R}^{2d})\) satisfies \(f^{w}=U(f\circ\Phi)^{w}U^{*}\) for any \(f\in\mathcal{S}^{\prime}(\mathbb{R}^{4d})\), cf. [16, Theorem 18.5.9]. Applying this to \(f(x,\xi,y,\eta)=g(\xi+\tfrac{1}{2}y,\eta-\tfrac{1}{2}x)=((g\boxtimes 1) \circ\Phi)(x,\xi,y,\eta)\), we obtain
\[f^{w}(x,\tfrac{1}{i}\partial_{x},y,\tfrac{1}{i}\partial_{y})=U^{*}(g^{w}(s, \tfrac{1}{i}\partial_{s})\otimes\operatorname{id}_{\mathbb{R}_{t}^{d}})U\]
and by Lemma 4.3, \(f^{w}(x,\tfrac{1}{i}\partial_{x},y,\tfrac{1}{i}\partial_{y})=g(\tfrac{1}{i} \nabla^{A})\). The result follows.
From now on, we assume that \(\omega\) is nondegenerate. By Lemma 6.1, at any \(x\in M\), the product \(\sharp_{x}\) defined in (46) extends continuously
\[S^{m}(T^{*}_{x}M)\times S^{p}(T^{*}_{x}M)\to S^{m+p}(T^{*}_{x}M).\]
Recall that \(f\in S^{m}(M,T^{*}M)\) is elliptic if \(|f(x,\xi)|\geqslant C^{-1}|\xi|^{m}\) when \(|\xi|\geqslant C\) for some positive \(C\). We say that \(f\) is invertible if at any \(x\in M\), \(f(x,\cdot)\) is invertible in \((S^{\infty}(T^{*}_{x}M),\sharp_{x})\).
**Lemma 6.2**.:
1. \(S^{\infty}_{\rm ph}(M,T^{*}M)\) _endowed with the fibered product_ \((f\sharp g)(x)=f(x)\sharp_{x}g(x)\) _is a filtered algebra._
2. _For any_ \(f\in S^{m}_{\rm ph}(M,T^{*}M)\) _which is both elliptic and invertible, the pointwise inverse of_ \(f\) _belongs to_ \(S^{-m}_{\rm ph}(M,T^{*}M)\)_._
Proof.: This holds more generally for \(S^{m}_{\rm ph}(N,E)\) where \(E\) is any symplectic vector bundle with base \(N\). When \(N\) is a point, \(S^{\infty}_{\rm ph}(N,E)\) is isomorphic with the Weyl algebra \(S^{\infty}_{\rm ph}(\mathbb{R}^{2d})\), and the result is well-known as we already mentioned it. In general, we can assume that \(E\) is the trivial symplectic bundle \(\mathbb{R}^{2d}\) over an open subset \(U\) of \(\mathbb{R}^{d}\). Viewing symbols in \(S^{m}(U,\mathbb{R}^{2d})\) as smooth maps from \(U\) to the Frechet space \(S^{m}(\mathbb{R}^{2d})\), and using that the Weyl product is continuous \(S^{m}(\mathbb{R}^{2d})\times S^{p}(\mathbb{R}^{2d})\to S^{m+p}(\mathbb{R}^{2d})\), we deduce with a little work that the fibered Weyl product \(\sharp\) is continuous
\[S^{m}(U,\mathbb{R}^{2d})\times S^{p}(U,\mathbb{R}^{2d})\to S^{m+p}(U, \mathbb{R}^{2d}). \tag{47}\]
which proves the first assertion.
Let \(f\in S^{m}_{\rm ph}(U,\mathbb{R}^{2d})\) be elliptic and invertible. Let us prove that its pointwise inverse \(g\) is in \(S^{-m}_{\rm ph}(U,\mathbb{R}^{2d})\). Multiplying \(f\) by \(f(x_{0})^{-1}\), we may assume that \(m=0\). Since \(S^{\infty}_{\rm ph}(U,\mathbb{R}^{2d})\) is a filtered algebra, cf. (47), and by the Borel lemma, \(f\) has a parametrix \(h\in S^{0}_{\rm ph}(U,\mathbb{R}^{2d})\). Let us prove that \(g=h+S^{-\infty}(U,\mathbb{R}^{2d})\). We have \(h\sharp f=1+r\), \(f\sharp h=1+s\) with \(r\) and \(s\) in \(S^{-\infty}(U,\mathbb{R}^{2d})\). So \(g=h-r\sharp h+r\sharp g\sharp s\). By (47) again, \(r\sharp h\in S^{-\infty}(U,\mathbb{R}^{2d})\). It remains to prove that \(r\sharp g\sharp s\in S^{-\infty}(U,\mathbb{R}^{2d})\).
By the Calderón-Vaillancourt theorem, the Weyl quantization \({\rm Op}:S^{0}(\mathbb{R}^{2d})\to\mathcal{L}(L^{2}(\mathbb{R}^{d}))\) is continuous. \({\rm Op}(g(x))\) being the inverse of \({\rm Op}(f(x))\) for any \(x\), \({\rm Op}(g)\in\mathcal{C}^{\infty}(U,\mathcal{L}(L^{2}(\mathbb{R}^{d})))\). Moreover, the multilinear map
\[M:S^{-\infty}(\mathbb{R}^{2d})\times\mathcal{L}(L^{2}(\mathbb{R}^{d}))\times S ^{-\infty}(\mathbb{R}^{2d})\to S^{-\infty}(\mathbb{R}^{2d}),\]
defined by \({\rm Op}(M(\sigma,A,\tau))={\rm Op}(\sigma)\circ A\circ{\rm Op}(\tau)\), being continuous, we obtain with a little work that \(r\sharp g\sharp s=M(r,{\rm Op}(g),s)\) belongs to \(S^{-\infty}(U,\mathbb{R}^{2d})\).
Consider now \(P\in\mathcal{D}^{m}_{\mathrm{Heis}}(L,\nabla)\) having an elliptic symbol. Then for any fixed \(k\), \(P_{k}\) is an elliptic differential operator of \(\mathcal{C}^{\infty}(M,L^{k})\), so for any \(s\in\mathbb{R}\), \(P_{k}\) extends to a Fredholm operator of \(\mathcal{L}(H^{s}(M,L^{k}),H^{s-m}(M,L^{k}))\). If we assume that the symbol of \(P\) is invertible, then by the following Theorem, \(P_{k}\) is invertible when \(k\) is large, and its inverse is a Heisenberg pseudodifferential operator.
**Theorem 6.3**.: _Assume that \(\omega\) is nondegenerate. Let \(P\in\mathcal{D}^{m}_{\mathrm{Heis}}(L,\nabla)\) have an elliptic and invertible symbol \(\sigma\in\mathcal{C}^{\infty}(T^{*}M)\). Then there exists \(Q\in\Psi^{-m}_{\mathrm{Heis}}(L,\nabla)\) such that_
* \(PQ-\mathrm{id}\) _and_ \(QP-\mathrm{id}\) _are in_ \(k^{-\infty}\Psi^{-\infty}(L)\)__
* _when_ \(k\) _is sufficiently large,_ \(Q_{k}P_{k}=P_{k}Q_{k}=\mathrm{id}\)__
* _the symbol of_ \(Q\) _is the inverse of_ \(\sigma\) _for the product_ \(\sharp\)_._
Proof.: This follows merely from the previous results, by the standard techniques for elliptic operators. First, using Lemma 6.2, we construct a parametrix \(Q\in\Psi^{-m}_{\mathrm{Heis}}(L,\nabla)\) of \(P\), so \(PQ=\mathrm{id}+R\) and \(QP=\mathrm{id}+S\) with \(R,S\) in the residual algebra \(k^{-\infty}\Psi^{-\infty}(L)\). Then, by the Sobolev continuity (19), \(R_{k}\) and \(S_{k}\) belongs to \(\mathcal{L}(L^{2}(M,L^{k}))\) and their operator norms are in \(\mathcal{O}(k^{-\infty})\). So when \(k\geqslant k_{0}\), \(P_{k}\) is invertible from \(H^{m}(M,L^{k})\) to \(H^{0}(M,L^{k})\), which implies by the Fredholm properties of elliptic operators [24, Theorem 8.1], that \(P_{k}\) is an invertible operator of the distribution space \(\mathcal{D}^{\prime}(M,L^{k})\).
Its inverse satisfies
\[P_{k}^{-1}=Q_{k}-R_{k}Q_{k}+R_{k}(Q_{k}P_{k})^{-1}Q_{k}S_{k} \tag{48}\]
By Lemma 3.6, \((R_{k}Q_{k})\) and \((Q_{k}S_{k})\) are in \(k^{-\infty}\Psi^{-\infty}(L)\). It is a classical fact that if \((A_{k})\), \((B_{k})\) are in \(k^{-\infty}\Psi^{-\infty}(L)\) and \(C_{k}=\mathcal{O}(1):L^{2}(M,L^{k})\to L^{2}(M,L^{k})\), then \((A_{k}C_{k}B_{k})\) is in \(k^{-\infty}\Psi^{-\infty}(L)\). So the last term in (48) belongs to \(k^{-\infty}\Psi^{-\infty}(L)\). So by adding to \(Q_{k}\) an element of the residual algebra, we have that \(Q_{k}=P_{k}^{-1}\) when \(k\) is large.
Assume \(m>0\) and consider \(P\in\mathcal{D}^{m}_{\mathrm{Heis}}(L,\nabla)\) having an elliptic symbol \(\sigma\) such that for some \(z_{0}\in\mathbb{C}\), \(\sigma-z_{0}\) is invertible. Then by Theorem 6.3, when \(k\) is sufficiently large, \(P_{k}-z_{0}\) has an inverse, which is continuous \(L^{2}(M,L^{k})\to H^{m}(M,L^{k})\). So the restriction of \(P_{k}\) to \(H^{m}(M,L^{k})\) is a closed unbounded operator of \(L^{2}(M,L^{k})\) having a compact resolvent. So its spectrum is a discrete subset of \(\mathbb{C}\) and it consists only of eigenvalues with finite multiplicity, the generalised eigenvectors being smooth [24, Theorem 8.4].
To state the next theorem, we need some spectral properties of the symbols themselves. Later we will explain these properties in terms of Weyl quantization, but since this quantization is only auxiliary in what we do, we prefer first to discuss everything intrinsically in terms of the algebra \((S^{\infty}(F),\circ_{\lambda})\) where \((F,\lambda)\) is a symplectic vector space as above.
The spectrum of \(a\in S^{\infty}(F)\) is defined by: \(z\notin\operatorname{sp}(a)\) if and only if \(z-a\) is invertible in \((S^{\infty}(F),\circ_{\lambda})\). A family \((b(z),z\in\Omega)\) of \(S^{m}(F)\) is holomorphic if \(\Omega\) is an open set of \(\mathbb{C}\), \(b\in S^{m}(\Omega,F)\) and \(\partial_{\overline{z}}b=0\). By the analytic Fredholm theory for the isotropic algebra exposed in [21, Chapter 3], for any elliptic \(a\in S^{m}_{\mathrm{ph}}(F)\) with \(m>0\), the spectrum of \(a\) is \(\mathbb{C}\) or a discrete subset of \(\mathbb{C}\). In the latter case, the resolvent \(((a-z)^{(-1)_{\circ_{\lambda}}},\;z\in\mathbb{C}\setminus\operatorname{sp}(a))\) is a holomorphic family of \(S^{-m}(F)\) and for any \(z_{0}\in\operatorname{sp}(a)\), we have on a neighborhood of \(z_{0}\) for some \(N\in\mathbb{N}\)
\[(a-z)^{(-1)_{\circ_{\lambda}}}=h(z)+\frac{r_{1}}{z-z_{0}}+\ldots+\frac{r_{N}}{ (z-z_{0})^{N}} \tag{49}\]
where \((h(z))\) is a holomorphic family of \(S^{-m}_{\mathrm{ph}}(F)\) and \(r_{1}\),..., \(r_{N}\) are in \(S^{-\infty}(F)\).
**Theorem 6.4**.: _Assume that \(\omega\) is nondegenerate. Let \(P\in\mathcal{D}^{m}_{\mathrm{Heis}}(L,\nabla)\) be elliptic with \(m\geqslant 1\) and symbol \(\sigma\). Let \(\Sigma\) be the closed set \(\bigcup_{x\in M}\operatorname{sp}(\sigma(x))\). Then_
1. _if_ \(K\) _is a compact subset of_ \(\mathbb{C}\) _disjoint from_ \(\Sigma\)_, then the spectrum of_ \(P_{k}\) _does not intersect_ \(K\) _when_ \(k\) _is large enough._
2. _if_ \(\Omega\) _is an open bounded subset of_ \(\mathbb{C}\) _with a smooth boundary disjoint from_ \(\Sigma\)_, then there exists_ \(\Pi\in\Psi^{-m}_{\mathrm{Heis}}(L,\nabla)\) _such that_ \(\Pi_{k}=(1_{\Omega}(P_{k}))\) _when_ \(k\) _is large. Furthermore the principal symbol of_ \(\Pi\) _is given at each_ \(x\in M\) _by_ \[\pi(x)=(2i\pi)^{-1}\int_{\partial\Omega}(\sigma(x)-z)^{(-1)_{\sharp x}}\,dz.\] (50)
3. _if for any_ \(k\)_,_ \(P_{k}\) _is formally self-adjoint for some volume element of_ \(M\)_, then for any_ \(E_{-},E_{+}\in\mathbb{R}\setminus\Sigma\) _with_ \(E_{-}<E_{+}\)_,_ \((1_{[E_{-},E_{+}]}(P_{k}))\) _belongs to_ \(\Psi^{-\infty}_{\mathrm{Heis}}(L,\nabla)\)_._
Observe that the symbol \(\pi(x)\) is the sum of the residues of the poles in \(\Omega\) of the resolvent of \(\sigma(x)\). As we will see in the proof, the third assertion is a particular case of the second one, the symbol being the sum of the residues of the poles in \([E_{-},E_{+}]\).
In [5], we will prove that \(\Psi^{m}_{\mathrm{Heis}}\circ\Psi^{p}_{\mathrm{Heis}}\subset\Psi^{m+p}_{ \mathrm{Heis}}\). So in the second assertion, \(\Pi\) being idempotent, it belongs to \(\Psi^{-\infty}_{\mathrm{Heis}}(L,\nabla)\).
Proof.: First, \(\Sigma\) is closed because the Weyl quantization is continuous from \(S^{m}(\mathbb{R}^{2d})\) to \(\mathcal{L}(H^{m}_{\mathrm{iso}}(\mathbb{R}^{d}),H^{0}_{\mathrm{iso}}(\mathbb{R}^ {d}))\), so that the characterization of the spectrum given below implies that if \(z_{0}\notin\mathrm{sp}(\sigma(x_{0}))\) then \(z\notin\mathrm{sp}(\sigma(x))\) when \((z,x)\) is sufficiently close to \((z_{0},x_{0})\).
Assume that \(K\) is a compact subset of \(\mathbb{C}\) disjoint from \(\Sigma\). When \(z\in K\), \((P_{k}-z)\) satisfies the assumptions of Theorem 6.3, so there exists \(Q(z)\in\Psi^{-m}_{\mathrm{Heis}}(L,\nabla)\) such that \(Q_{k}(z)=(z-P_{k})^{-1}\) when \(k\geqslant k_{0}(z)\). This proves at least that \(P_{k}\) has a compact resolvent as explained above when \(k\) is large. Moreover we claim that everything in the proof of Theorem 6.3 can be done continuously with respect to \(z\in K\) (even holomorphically with respect to \(z\) in a neighborhood of \(K\)). More precisely, the Schwartz kernel of \(Q_{k}(z)\) is locally of the form (25) where the dependence on \(z\) is only in the symbol \(b\), which is continuous in \(z\). This proves first that we can choose \(k_{0}(z)\) independent of \(z\), which shows the first assertion. Second, if \(\Omega\) satisfies the assumptions of the second assertion of the Theorem, we can apply the previous consideration to \(K=\partial\Omega\) and it follows that \(\Pi_{k}:=(2i\pi)^{-1}\int_{\partial\Omega}Q_{k}(z)\ dz\) belongs to \(\Psi^{-m}_{\mathrm{Heis}}(L,\nabla)\) with a symbol given by (50). When \(k\) is large enough, \(Q_{k}(z)\) is the resolvent, so by the Cauchy formula, \(\Pi_{k}=1_{\Omega}(P_{k})\). This concludes the proof of the second assertion.
For the last assertion, by assumption, for any fixed \(k\), \(P_{k}\) is a formally self-adjoint elliptic differential operator on a compact manifold, so its spectrum is a discrete subset of \(\mathbb{R}\) and \(1_{[E_{-},E_{+}]}(P_{k})\) is a finite rank projector onto a subspace of \(\mathcal{C}^{\infty}(M,L^{k})\), [24, Theorem 8.3]. Moreover \(\sigma\) is real valued so \(\Sigma\subset\mathbb{R}\). So there exists \(\Omega\) satisfying the previous assumptions and such that \(\Omega\cap\mathbb{R}=[E_{-},E_{+}]\). So \(\Pi=(1_{[E_{-},E_{+}]}(P_{k}),\;k\in\mathbb{N})\) belongs to \(\Psi^{-m}_{\mathrm{Heis}}(L,\nabla)\).
For any odd \(N\in\mathbb{N}\), \(\Pi=1_{[E_{-}^{N},E_{+}^{N}]}(P^{N})\) and \(P^{N}\in\mathcal{D}^{mN}_{\mathrm{Heis}}(L,\nabla)\), which implies by the previous argument that \(\Pi\) belongs to \(\Psi^{-Nm}_{\mathrm{Heis}}(L,\nabla)\), so \(\Pi\in\Psi^{-\infty}_{\mathrm{Heis}}(L,\nabla)\).
Let us discuss briefly the invertibility and resolvent of elliptic elements of \((S^{\infty}(F),\circ_{\lambda})\) from the point of view of Weyl quantization. Let \(\Psi^{\infty}_{\mathrm{iso}}(\mathbb{R}^{d})\) be the space of pseudodifferential operators of \(\mathbb{R}^{d}\) with a symbol in \(S^{\infty}_{\mathrm{ph}}(\mathbb{R}^{2d})\). Any \(A\in\Psi^{m}_{\mathrm{iso}}(\mathbb{R}^{d})\) acts continuously \(\mathcal{S}(\mathbb{R}^{d})\to\mathcal{S}(\mathbb{R}^{d})\), \(\mathcal{S}^{\prime}(\mathbb{R}^{d})\to\mathcal{S}^{\prime}(\mathbb{R}^{d})\) and \(H^{s}_{\mathrm{iso}}(\mathbb{R}^{d})\to H^{s-m}_{\mathrm{iso}}(\mathbb{R}^{d})\), where \(H^{s}_{\mathrm{iso}}(\mathbb{R}^{d})\) is the isotropic Sobolev space
\[H^{s}_{\mathrm{iso}}(\mathbb{R}^{d})=\{u\in\mathcal{S}^{\prime}(\mathbb{R}^{d }),\;Au\in L^{2}(\mathbb{R}^{d}),\;\forall A\in\Psi^{s}_{\mathrm{iso}}(\mathbb{ R}^{d})\},\quad s\in\mathbb{R}.\]
When \(A\) is elliptic, the following Fredholm property holds: \(\ker A\) and \(\ker A^{*}\) are finite dimensional subspaces of \(\mathcal{S}(\mathbb{R}^{d})\),
\[\mathcal{S}^{\prime}(\mathbb{R}^{d})=A(\mathcal{S}^{\prime}(\mathbb{R}^{d})) \oplus\ker A^{*}=A^{*}(\mathcal{S}^{\prime}(\mathbb{R}^{d}))\oplus\ker A\]
and the generalised inverse \(B:\mathcal{S}^{\prime}(\mathbb{R}^{d})\to\mathcal{S}^{\prime}(\mathbb{R}^{d})\) such that \(BA-\mathrm{id}\) and \(AB-\mathrm{id}\) are the orthogonal projectors onto \(\ker A\) and \(\ker A^{*}\) respectively, belongs to \(\Psi_{\mathrm{iso}}^{-m}(\mathbb{R}^{d})\). So \(A\) is invertible in the algebra \(\Psi_{\mathrm{iso}}^{\infty}(\mathbb{R}^{d})\) if and only if \(\ker A=\ker A^{*}=0\) if and only if \(A\) is invertible as an operator in \(\mathcal{S}^{\prime}\) if and only if \(A\) is invertible in \(\mathcal{L}(H_{\mathrm{iso}}^{s}(\mathbb{R}^{d}),H_{\mathrm{iso}}^{s-m}( \mathbb{R}^{d}))\).
When \(m>0\), any elliptic \(A\in\Psi_{\mathrm{iso}}^{m}(\mathbb{R}^{d})\) defines by restriction a closed unbounded operator of \(L^{2}(\mathbb{R}^{d})\) with domain \(H_{\mathrm{iso}}^{m}(\mathbb{R}^{d})\). By the previous characterization of invertibility, the spectrum of \(A\) is the same as the spectrum of its symbol \(a\) defined above. Assume it is not the whole of \(\mathbb{C}\); then \(A\) has a compact resolvent, and as it was already explained, \(\mathrm{sp}(A)\) is a discrete subset of \(\mathbb{C}\) and the resolvent \((A-z)^{-1}\) is a holomorphic family of \(\Psi_{\mathrm{iso}}^{-m}(\mathbb{R}^{d})\). Furthermore, for \(A=a^{w}(x,\frac{1}{i}\partial_{x})\), the residues \(r_{\ell}^{w}(x,\frac{1}{i}\partial_{x})\) defined in (49) have finite rank and \(r_{1}^{w}(x,\frac{1}{i}\partial_{x})\) is a projector onto the space of generalised eigenvectors of \(A\) for the eigenvalue \(z_{0}\), which is a subspace of \(\mathcal{S}(\mathbb{R}^{d})\).
## 7 Auxiliary bundles
Let us first define symbols taking values in an auxiliary bundle. Recall the spaces \(S_{*}^{m}(N,E)\) introduced in Section 2 for a real vector bundle \(p:E\to N\) and \(*=\emptyset\), \(\mathrm{ph}\), \(\mathrm{sc}\). Let \(B\) be a complex vector bundle over \(N\). By definition \(S^{m}(N,E;B)\) is the space of sections \(s\in\mathcal{C}^{\infty}(E,p^{*}B)\) such that for any frame \((u_{\alpha})\) of \(B\) over an open set \(U\) of \(N\), we have over \(p^{-1}(U)\),
\[s(x,\xi)=\sum f_{\alpha}(x,\xi)u_{\alpha}(x),\qquad x\in N,\;\xi\in E_{x}\]
with coefficients \(f_{\alpha}\) in \(S^{m}(U,E)\). Since \(S^{m}(U,E)\) is a \(\mathcal{C}^{\infty}(U)\)-submodule of \(\mathcal{C}^{\infty}(U,E)\), this definition is compatible with the frame changes. Similarly, we define \(S_{*}^{m}(N,E;B)\) for \(*=\mathrm{ph}\) or \(\mathrm{sc}\) by requiring that the coefficients \(f_{\alpha}\) belong to \(S_{*}^{m}(U,E)\). More precisely, in the case of semiclassical symbols where the section \(s\) and its local coefficients depend on \(h\), we only choose frames \((u_{\alpha})\) independent of \(h\).
Let \(A_{1}\) and \(A_{2}\) be two complex vector bundles over \(M\) and let us define the pseudodifferential operator spaces \(\Psi_{\mathrm{sc}}^{m}(M;A_{1},A_{2})\), \(\Psi_{\mathrm{tsc}}^{m}(L;A_{1},A_{2})\) and \(\Psi_{\mathrm{Heis}}^{m}(L,\nabla;A_{1},A_{2})\). For \(A_{1}\), \(A_{2}\) being both the trivial line bundle, these are the spaces we introduced previously. In general, set \(B=A_{2}\boxtimes A_{1}^{*}\). Then
* \(\Psi_{\mathrm{sc}}^{m}(M;A_{1},A_{2})\) consists of the families \((P_{h}:\mathcal{C}^{\infty}(M,A_{1})\to\mathcal{C}^{\infty}(M,A_{2})\), \(h\in(0,1])\) satisfying the same conditions as before except that the amplitude \(a\) appearing in (15) belongs to \(S_{\mathrm{sc}}^{m}(U^{2},\mathbb{R}^{n};B)\).
* \(\Psi^{m}_{\rm tsc}(L;A_{1},A_{2})\) consists of the families \[P=(P_{k}:\mathcal{C}^{\infty}(M,L^{k}\otimes A_{1})\to\mathcal{C}^{\infty}(M,L^{k }\otimes A_{2}),\;k\in\mathbb{N})\] (51) satisfying the conditions of Definition 2.1 with \(a\in S^{m}_{\rm sc}(U^{2},\mathbb{R}^{n};B)\)
* \(\Psi^{m}_{\rm Heis}(L;A_{1},A_{2})\) consists of the families \(P\) of the form (51) satisfying the conditions of Definition 3.2 with \(Q_{h}\) an operator of \(S^{m}_{\rm sc}(M;A_{1},A_{2})\)
The symbol of \(P\) is defined as before. Since the restriction of \(B\) to the diagonal is isomorphic with \(\operatorname{Hom}(A_{1},A_{2})\), in the three cases, the symbol identifies with an element of \(S^{m}_{\rm ph}(M,T^{*}M;\operatorname{Hom}(A_{1},A_{2}))\).
The space \(\mathcal{D}^{m}_{\rm Heis}(L,\nabla;A_{1},A_{2})\) of Heisenberg differential operators consists of the families (51) of differential operators such that for any coordinate chart \((U,x_{i})\) of \(M\), we have on \(U\)
\[P_{k}=\sum_{\ell\in\mathbb{N},\;\alpha\in\mathbb{N}^{n},\;\ell+|\alpha|\leqslant m }k^{-\frac{\ell}{2}}f_{\ell,\alpha}\tilde{\pi}^{\alpha} \tag{52}\]
where \(f_{\ell,\alpha}\in\mathcal{C}^{\infty}(U,\operatorname{Hom}(A_{1},A_{2}))\), \(\tilde{\pi}^{\alpha}=\tilde{\pi}_{1}^{\alpha(1)}\ldots\tilde{\pi}_{n}^{\alpha (n)}\) with \(\tilde{\pi}_{i}=\frac{1}{i\sqrt{k}}\nabla_{\partial_{x_{i}}}^{L^{k}\otimes A_{ 2}}\). Here we use a connection of \(A_{2}\), which induces with the connection of \(L\) a covariant derivative of \(A_{2}\otimes L^{k}\). Proposition 5.1 still holds: Heisenberg differential operators are Heisenberg pseudodifferential operators, the symbol of (52) is \(\sum_{|\alpha|\leqslant m}f_{0,\alpha}(x)\xi^{\sharp_{x}\alpha}\),
\[\mathcal{D}^{m}_{\rm Heis}(L,\nabla;A_{2},A_{3})\circ\Psi^{p}_{\rm Heis}(L, \nabla;A_{1},A_{2})\subset\Psi^{m+p}_{\rm Heis}(L,\nabla;A_{1},A_{3}),\]
and the product of symbols is the fiberwise product \(\sharp_{x}\) tensored by the composition \(\operatorname{Hom}(A_{2,x},A_{3,x})\times\operatorname{Hom}(A_{1,x},A_{2,x}) \to\operatorname{Hom}(A_{1,x},A_{3,x}).\) It is easy to see that the definition of the Heisenberg differential operators and of their symbols does not depend on the choice of the connection of \(A_{2}\).
In the sequel we assume that \(A_{1}=A_{2}=A\) and is equipped with a Hermitian metric. We use the notation \(\mathcal{D}^{m}_{\rm Heis}(L,\nabla;A)\) instead of \(\mathcal{D}^{m}_{\rm Heis}(L,\nabla;A,A)\) and similarly for the other operator spaces. Our goal is to generalize Theorem 6.4 for \(P\in\mathcal{D}^{2}_{\rm Heis}(L,\nabla;A)\) having a symbol \(\sigma\) of the form
\[\sigma(x,\xi)=\tfrac{1}{2}|\xi|_{x}^{2}+V(x) \tag{53}\]
where \(|\cdot|\) is the norm of \(T^{*}M\) for a Riemannian metric of \(M\) not necessarily compatible with \(\omega\) and \(V\in\mathcal{C}^{\infty}(M,\operatorname{End}A)\) is Hermitian at each point. Examples of such operators include Schrödinger operators with magnetic field and electric potential, holomorphic Laplacians or semiclassical Dirac operators, cf. [7, Section 3]. Besides the numerous examples, the interest of these operators is that we can compute explicitly the spectrum of the symbols \(\sigma(x,\cdot)\)

\[\operatorname{sp}(\sigma(x,\cdot))=\Big{\{}\sum_{i=1}^{d}B_{i}(x)(\alpha(i)+ \tfrac{1}{2})+V_{j}(x)/\ \alpha\in\mathbb{N}^{d},j=1,\ldots,r\Big{\}}\]
where \(0<B_{1}(x)\leqslant\ldots\leqslant B_{d}(x)\) are the eigenvalues of \(\omega(x)\) with respect to \(g_{x}\) and \(V_{1}(x)\leqslant\ldots\leqslant V_{r}(x)\) are the eigenvalues of \(V(x)\). Moreover, we have
\[\tfrac{1}{2}|\xi|_{x}^{2}=\sum_{i=1}^{d}B_{i}(x)h(s_{i},\sigma_{i}),\qquad h(y,\eta)=\tfrac{1}{2}(y^{2}+\eta^{2})\]
where \(s_{i}\) and \(\sigma_{i}\) are the linear coordinates of \(T_{x}^{*}M\) associated to a symplectic basis. So the analysis of \(\sigma(x,\cdot)\) boils down to the standard quantum harmonic oscillator \(h^{w}\) or the Landau Hamiltonian \(h(\tfrac{1}{i}\nabla)\).
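For instance, when \(d=1\) and \(V=0\), the spectrum of \(\sigma(x,\cdot)\) reduces to the Landau levels

\[\operatorname{sp}(\sigma(x,\cdot))=\big\{B_{1}(x)(\alpha+\tfrac{1}{2}),\ \alpha\in\mathbb{N}\big\},\]

so that \(\Sigma=\bigcup_{x\in M}B_{1}(x)(\mathbb{N}+\tfrac{1}{2})\); in particular, when \(B_{1}\) is constant on \(M\), \(\Sigma\) is the discrete set \(B_{1}(\mathbb{N}+\tfrac{1}{2})\).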
**Theorem 7.1**.: _Let \(P\in\mathcal{D}^{2}_{\operatorname{Heis}}(L,\nabla;A)\) have a symbol \(\sigma\) of the form (53) and be such that for each \(k\), \(P_{k}\) is formally selfadjoint for a volume element of \(M\). Assume \(\omega\) is nondegenerate and let \(\Sigma=\bigcup_{x\in M}\operatorname{sp}(\sigma(x,\cdot))\). Then_
* _For any_ \(z\in\mathbb{C}\setminus\Sigma\)_, there exists_ \(Q(z)\in\Psi^{-2}_{\operatorname{Heis}}(L,\nabla;A)\) _such that_ \((P_{k}-z)Q_{k}(z)=\operatorname{id}\) _and_ \(Q_{k}(z)(P_{k}-z)=\operatorname{id}\) _when_ \(k\) _is large._
* _For any_ \(E\in\mathbb{R}\setminus\Sigma\)_,_ \((1_{(-\infty,E]}(P_{k}))\) _belongs to_ \(\Psi^{-\infty}_{\operatorname{Heis}}(L,\nabla;A)\)_._
The proof is the same as the one of Theorem 6.4. The symbols \(\tau(z)\) and \(p_{E}\) of \(Q(z)\) and \(1_{(-\infty,E]}\) respectively are such that for any \(x\in M\),
\[\tau(z)(x,\cdot)^{w}=(\sigma(x,\cdot)^{w}-z)^{-1},\qquad p_{E}(x,\cdot)^{w}=1 _{(-\infty,E]}(\sigma(x,\cdot)^{w}).\]
In the case where \(B_{i}=1\) and \(V=0\), they have been studied for themselves in [9], [26], and given by the formulas (12) and (13) respectively.
|
2309.04799 | MOStream: A Modular and Self-Optimizing Data Stream Clustering Algorithm | Data stream clustering is a critical operation in various real-world
applications, ranging from the Internet of Things (IoT) to social media and
financial systems. Existing data stream clustering algorithms, while effective
to varying extents, often lack the flexibility and self-optimization
capabilities needed to adapt to diverse workload characteristics such as
outlier, cluster evolution and changing dimensions in data points. These
limitations manifest in suboptimal clustering accuracy and computational
inefficiency. In this paper, we introduce MOStream, a modular and
self-optimizing data stream clustering algorithm designed to dynamically
balance clustering accuracy and computational efficiency at runtime. MOStream
distinguishes itself by its adaptivity, clearly demarcating four pivotal design
dimensions: the summarizing data structure, the window model for handling data
temporality, the outlier detection mechanism, and the refinement strategy for
improving cluster quality. This clear separation facilitates flexible
adaptation to varying design choices and enhances its adaptability to a wide
array of application contexts. We conduct a rigorous performance evaluation of
MOStream, employing diverse configurations and benchmarking it against 9
representative data stream clustering algorithms on 4 real-world datasets and 3
synthetic datasets. Our empirical results demonstrate that MOStream
consistently surpasses competing algorithms in terms of clustering accuracy,
processing throughput, and adaptability to varying data stream characteristics. | Zhengru Wang, Xin Wang, Shuhao Zhang | 2023-09-09T13:50:45Z | http://arxiv.org/abs/2309.04799v2 | # Benne: A Modular and Self-Optimizing Algorithm for Data Stream Clustering
###### Abstract
In various real-world applications, ranging from the Internet of Things (IoT) to social media and financial systems, data stream clustering is a critical operation. This paper introduces _Benne_, a modular and highly configurable data stream clustering algorithm designed to offer a nuanced balance between clustering accuracy and computational efficiency. _Benne_ distinguishes itself by clearly demarcating four pivotal design dimensions: the summarizing data structure, the window model for handling data temporality, the outlier detection mechanism, and the refinement strategy for improving cluster quality. This clear separation not only facilitates a granular understanding of the impact of each design choice on the algorithm's performance but also enhances the algorithm's adaptability to a wide array of application contexts. We provide a comprehensive analysis of these design dimensions, elucidating the challenges and opportunities inherent to each. Furthermore, we conduct a rigorous performance evaluation of _Benne_, employing diverse configurations and benchmarking it against existing state-of-the-art data stream clustering algorithms. Our empirical results substantiate that _Benne_ either matches or surpasses competing algorithms in terms of clustering accuracy, processing throughput, and adaptability to varying data stream characteristics. This establishes _Benne_ as a valuable asset for both practitioners and researchers in the field of data stream mining.
## 1 Introduction
_Data Stream Clustering_ (_DSC_) serves as a cornerstone in the field of data stream mining, finding extensive applications across diverse real-world contexts including network intrusion detection [1], social network analysis [2], weather forecasting [3], and financial market analysis [4]. Unlike traditional batch clustering algorithms such as KMeans[5, 6] and DBSCAN[7], _DSC_ algorithms dynamically group incoming data tuples based on attribute similarities. These algorithms are specifically designed to manage unique data stream challenges, notably _cluster evolution_ and _outlier evolution_[2, 8, 9, 10, 11]. These terms refer to the shifting nature of data distributions and the emergence of new outliers over time.
In addition, _DSC_ algorithms must grapple with the imperative of processing efficiency [12, 13]. They are often deployed in environments where time is of the essence, data streams at high velocities, and real-time decision-making is crucial. To balance the need for high clustering accuracy with constraints on computational resources, memory, and latency, various strategies have been investigated. These include incremental updates, data summarization, sketching techniques, and both online and offline processing methods. The relentless growth in the volume, velocity, and variety of data streams continues to pose both challenges and opportunities, keeping the design, optimization, and evaluation of _DSC_ algorithms an area of active and pertinent research [14, 15, 2, 12, 13].
The multifaceted nature of use cases and the diversity in performance metrics have spurred the creation of a wide array of _DSC_ algorithms [14, 16, 2, 17, 18, 15, 19, 20, 21, 17, 22]. These algorithms are built on varying principles, methods, and heuristics, each tailored to meet specific needs in terms of application requirements, data characteristics, or performance objectives. For example, some algorithms are optimized for processing speed [15], making them ideal for high-velocity data streams. Others prioritize clustering accuracy [20, 14], a critical factor in applications requiring precise cluster assignments. Consequently, the task of selecting an appropriate _DSC_ algorithm for real-world, diverse workloads becomes a complex endeavour. This complexity is exacerbated by the interdependent nature of several fundamental design choices in _DSC_ algorithms, each with its own set of trade-offs and performance implications.
In a prior study [24], we rigorously examined four cornerstone design elements in data stream clustering (_DSC_) algorithms: data structure summarization, window modelling, outlier detection mechanisms, and refinement strategies. This comprehensive analysis led to the creation of _Benne_, a modular _DSC_ algorithm. The modularity of _Benne_ allows for effortless customization, making it adaptable to a variety of application domains. Whether the focus is on clustering accuracy, processing efficiency, or resilience to noise and outliers, _Benne_ can be tailored to meet these specific performance goals.
In this work, we extend our investigation into _Benne_, elaborating on its modular architecture. We introduce three key enhancements to its design:
* First, we implement a 'Regular Stream Characteristics Detection' mechanism that routinely identifies dynamic changes in stream characteristics, such as cluster and outlier evolution, as well as shifts in workload dimensions.
* Second, we add an 'Automatic Design Choice Selection' feature, enabling _Benne_ to adapt its component choices autonomously in response to changes in stream
characteristics.
* Lastly, we introduce 'Flexible Algorithm Migration,' a feature designed to minimize clustering information loss during modular adjustments. This is achieved by transferring existing clustering results to the new configuration.
These enhancements, working in concert, equip _Benne_ with the robustness needed to maintain stable performance across diverse optimization targets, even in the face of evolving stream characteristics.
We undertake a rigorous evaluation of _Benne_'s performance, comparing it with leading _DSC_ algorithms across a variety of configurations. Our evaluation encompasses a broad spectrum of real-world and synthetic workloads, each with its unique set of characteristics. This comprehensive approach ensures the generalizability of our findings. Notably, _Benne_ demonstrates superior performance in both purity and throughput metrics, excelling especially in scenarios characterized by high dimensionality and frequent cluster evolution. To make it readily accessible for both academic research and practical applications, we have also encapsulated _Benne_ into a Python library. The source code for _Benne_ is publicly available at [https://github.com/intellistream/Sesame](https://github.com/intellistream/Sesame).
The structure of the remainder of this paper is as follows: Section 2 provides an overview of data stream characteristics, clustering objectives, and essential components of _DSC_ algorithms. Section 3 details the modular design of _DSC_ and its key components. Section 4 discusses the complete algorithmic design of _Benne_, including its automatic option selection mechanism. Section 5 presents our empirical evaluation of _Benne_ and its automatic option selection mechanisms. Finally, Section 6 reviews additional related work, and Section 7 concludes the paper.
## 2 Preliminaries and Background
In this section, we discuss the data stream characteristics, clustering objectives, and essential components of _DSC_ algorithms.
### _Foundational Concepts and Notations_
In this subsection, we introduce key terms, notations, and essential components foundational to the understanding of _DSC_ algorithms.
In mathematical terms, a **data stream** is represented as a sequence of tuples, denoted as \(S=(x_{1},x_{2},\ldots,x_{t},\ldots)\), where \(x_{t}\) denotes the \(t\)-th data point arriving at time \(t\). Let \(C_{t}\) denote the set of clusters maintained by the algorithm at time \(t\), and let \(D(x_{t},C_{t})\) be the distance from \(x_{t}\) to its closest cluster in \(C_{t}\). If \(D(x_{t},C_{t})>\delta\), where \(\delta\) is a threshold, \(x_{t}\) can be considered an **outlier**. _DSC_ algorithms efficiently update \(C_{t}\) to \(C_{t+1}\) in response to changes in data distribution.
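To make the outlier rule above concrete, the following is a minimal Python sketch of the distance-based test \(D(x_{t},C_{t})>\delta\); the function and variable names are illustrative and do not correspond to _Benne_'s actual API.

```python
import math

def is_outlier(x, centroids, delta):
    """Distance-based outlier test: x is an outlier if its distance to the
    closest cluster centroid exceeds the threshold delta."""
    if not centroids:
        return True  # no cluster exists yet; the caller decides how to seed one
    return min(math.dist(x, c) for c in centroids) > delta

centroids = [(0.0, 0.0), (5.0, 5.0)]
print(is_outlier((0.4, -0.3), centroids, delta=1.0))  # False: close to (0, 0)
print(is_outlier((9.0, 9.0), centroids, delta=1.0))   # True: far from both centroids
```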
To measure the clustering quality of _DSC_ algorithms, in this paper, we apply the most widely used metric, purity [25], which assesses how well the data points within each cluster belong to the same class. We also use CMM [26], which is specifically designed for measuring a _DSC_ algorithm's ability to handle the evolving activities in the stream. To evaluate the performance of the _DSC_ algorithms, we introduce the throughput metric, which represents the amount of data that the algorithm can process within a certain period of time.
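As an illustration of how purity is computed, the snippet below follows the standard definition (for each predicted cluster, count its most frequent ground-truth label, sum these counts, and divide by the number of points); it is a generic sketch, not the evaluation code used in our experiments.

```python
from collections import Counter, defaultdict

def purity(assignments, labels):
    """Purity: for each predicted cluster, count its most frequent true label;
    sum these counts and divide by the total number of points."""
    clusters = defaultdict(list)
    for cid, lab in zip(assignments, labels):
        clusters[cid].append(lab)
    majority = sum(Counter(labs).most_common(1)[0][1] for labs in clusters.values())
    return majority / len(labels)

# toy example: 6 points, 2 predicted clusters, 2 true classes
print(purity([0, 0, 0, 1, 1, 1], ["a", "a", "b", "b", "b", "a"]))  # 0.666...
```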
_DSC_ algorithms typically consist of several key components to address the challenges posed by data streams: 1) **Summarizing Data Structure** provides a compact representation of data points, capturing essential information while minimizing memory usage, as storing the entire data stream is impractical. 2) **Window Model** handles the temporal aspect of data streams, focusing on recent data points and discarding outdated information, improving the algorithm's clustering capability. 3) **Outlier Detection Mechanism** identifies and handles noise and outliers in the data stream, preventing them from affecting the clustering quality. Detecting outliers in data streams is a challenging task for _DSC_ algorithms, particularly when outlier evolution occurs. 4) **Refinement Strategy** updates and refines the clustering model as new data points arrive, adapting to changes in the data distribution. This strategy usually applies once before getting the final clustering result, not significantly influencing efficiency but potentially improving accuracy.
### _Challenges and Objectives_
Data streams introduce unique challenges for data mining algorithms due to their continuous, high-speed, and potentially infinite nature. As such, designing effective algorithms to cluster data streams requires a deep understanding of these challenges and clear objectives to guide the development.
#### 2.2.1 Challenges
The challenges inherent to data stream clustering can be encapsulated in the four \(V\)'s:
_Volume_: Data streams can generate volumes of data that may surpass the system's storage and processing limits. Efficient data summarization techniques are crucial for managing this challenge [27, 28]. Specifically, methods like sketching [29] have been incorporated into _DSC_ algorithms to capture essential information while omitting less important details [19]. The objective is to maintain a concise yet informative representation of the data stream for efficient querying and analysis.
_Velocity_: The rapid arrival rate of data in streams necessitates real-time processing capabilities in algorithms [30, 28]. To address this, _DSC_ algorithms often employ efficient, incremental processing techniques that adapt to new data without the need for reprocessing existing data. The sliding window model serves as a notable example of such a technique, maintaining a fixed-size window over recent data points and incrementally updating clustering results as data evolves [31].
_Veracity_: The presence of noise and outliers in data streams can compromise the integrity of clustering results [27, 32]. To mitigate this, robust outlier detection mechanisms are indispensable. Various methods such as distance-based [33], density-based [34], and angle-based [35] approaches have been incorporated into _DSC_ algorithms to identify and manage such anomalous data points effectively.
_Variability:_ Data streams are subject to continuous evolution, affecting the distribution of clusters and necessitating real-time adjustments such as merging, emerging, splitting, deleting, and adjusting clusters [36, 30, 2]. To manage this dynamism, techniques like adaptive forgetting factors [37], change detection [38], and incremental updating [2] are frequently employed. Beyond cluster evolution, data streams also exhibit variabilities in workload dimensionality, referring to fluctuations in the dimensions of incoming data points, and in outlier evolution, which involves the dynamic interchange of roles between outlier clusters and temporal clusters.
#### 2.2.2 Clustering Objectives
In addressing the challenges unique to data streams, it is crucial for _DSC_ algorithms to be guided by well-defined objectives. These objectives not only shape the algorithmic design but also influence their effectiveness in clustering data streams.
_Accuracy:_ The foremost objective of any _DSC_ algorithm is to faithfully represent the underlying data distribution. This involves capturing genuine similarities among data points. The algorithm's accuracy can be assessed through internal metrics such as _cohesion_ and _separation_, or external metrics like _purity_ and _normalized mutual information_[25]. An algorithm excels in accuracy when it forms clusters that are internally cohesive and externally distinct.
_Efficiency:_ In the context of data streams, where both volume and velocity are substantial, the efficiency of a clustering algorithm is paramount. The algorithm must process data points in real-time while optimizing computational and memory resources. Efficiency is typically quantified through time and space complexity metrics. _DSC_ algorithms are specifically engineered to incrementally update clustering results as new data points arrive, obviating the need for reprocessing the entire data set, thus enhancing computational efficiency [2, 19].
_Adaptability:_ An effective clustering algorithm must demonstrate adaptability to the inherent variabilities in data streams, such as cluster evolution, outlier evolution, and fluctuating data dimensions. The algorithm should incorporate mechanisms like change detection, adaptive forgetting factors, and incremental updating to dynamically adjust its clustering model in response to these variabilities [38, 37, 2].
## 3 Design Components of DSC Algorithms
_Benne_ modularizes the implementation of _DSC_ algorithms into the aforementioned four pivotal components: Summarizing Data Structure, Window Model, Outlier Detection Mechanism, and Refinement Strategy. These components collectively address the challenges and objectives outlined in the previous section. To assist in the selection of optimal design choices tailored to specific data characteristics and requirements, Table I enumerates the advantages and disadvantages of each design aspect.
### _Summarizing Data Structure_
In _Benne_, the summarizing data structure plays a critical role in providing a compact representation of the data points within the stream. This abstraction helps to enhance computational efficiency while maintaining a low memory footprint. Depending on the clustering needs, _Benne_ offers two categories of summarizing data structures: the hierarchical category and the partitional category.
#### 3.1.1 Hierarchical Summarizing Data Structure
Hierarchical data structures are primarily used in _Benne_ when the data streams display inherent hierarchies or when the data can be logically grouped into a tree structure. As shown in Figure 1(a), three types of hierarchical data structures are supported in _Benne_: Clustering Feature Tree (CFT), Coreset Tree (CoreT), and Dependency Tree (DPT).
**Clustering Feature Tree (CFT).** CFT [14] represents a classic yet efficient choice for hierarchical data representation in _Benne_. Its structure supports a broad range of basic operations required in stream clustering, such as distance calculation and cluster updating, thus providing a solid foundation for further complex manipulations.
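For illustration, the sketch below shows the clustering feature \((N,LS,SS)\) that a CFT leaf typically maintains, in the spirit of BIRCH; it is a simplified, hypothetical rendering that omits the tree bookkeeping (node splitting, branching factor) of the actual structure.

```python
class ClusteringFeature:
    """Summary of a set of d-dimensional points: count N, linear sum LS,
    and sum of squared norms SS. Centroid and radius follow from these."""
    def __init__(self, dim):
        self.n = 0
        self.ls = [0.0] * dim  # linear sum of absorbed points
        self.ss = 0.0          # sum of squared Euclidean norms

    def absorb(self, x):
        self.n += 1
        self.ls = [a + b for a, b in zip(self.ls, x)]
        self.ss += sum(v * v for v in x)

    def centroid(self):
        return [v / self.n for v in self.ls]

    def radius(self):
        # square root of the mean squared deviation from the centroid
        c2 = sum(v * v for v in self.centroid())
        return max(self.ss / self.n - c2, 0.0) ** 0.5

cf = ClusteringFeature(dim=2)
for p in [(1.0, 2.0), (1.2, 1.8), (0.8, 2.2)]:
    cf.absorb(p)
print(cf.centroid(), cf.radius())
```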
**Coreset Tree (CoreT).** When the data stream is characterized by high volume and density variations, _Benne_ employs CoreT [20] to extract a core subset for processing. Despite the necessity of full tree rebuilding during clustering, which could affect efficiency, the application of CoreT in _Benne_ helps in obtaining high-quality clusters from dense data streams.
**Dependency Tree (DPT).** DPT [2] is _Benne_'s choice for handling cluster evolution effectively. The tree structure of DPT, specifically designed to adapt to evolving activities in the stream, ensures that _Benne_ delivers high clustering accuracy even in dynamic environments.
#### 3.1.2 Partitional Summarizing Data Structure
Partitional data structures, as shown in Figure 1(b), are beneficial when the data streams are best represented by flat, non-overlapping clusters. In _Benne_, these structures are used to effectively handle and categorize incoming data points into appropriate clusters. The algorithm supports three types of partitional data structures: Micro Clusters (MCs), Grids (Grids), and Augmented Meyerson Sketch (AMS).
**Micro Clusters (MCs).** MCs [16] serve as a means to reduce the computational load by representing a group of closely related data points as a single entity. With the additional elements for summarizing timestamps for clusters' updates, MCs enable _Benne_ to accurately track real-time clustering activities, especially under cluster evolution.
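A micro cluster can be viewed as a CF-style summary augmented with temporal bookkeeping; the sketch below (illustrative field names, not _Benne_'s code) shows the extra timestamp that lets the algorithm track how recently a cluster has been updated.

```python
import numpy as np

class MicroCluster:
    """CF-style summary extended with the timestamp of its last update."""
    def __init__(self, dim, t0):
        self.n, self.ls, self.ss = 0.0, np.zeros(dim), 0.0
        self.last_update = t0          # used to judge whether the cluster is still active

    def insert(self, x, t):
        x = np.asarray(x, dtype=float)
        self.n += 1.0
        self.ls += x
        self.ss += float(x @ x)
        self.last_update = t

    def center(self):
        return self.ls / self.n
```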
**Grids (Grids).**_Benne_ uses Grids [15] when efficiency is paramount. The structure eliminates the need for frequent distance calculations between new data and the grid, boosting performance. Additionally, the periodic removal of sparse grids optimizes resource usage, while its fixed position may limit accuracy in cases of frequent evolving activities.

Fig. 1: Six Types of Summarizing Data Structure in _Benne_.
**Augmented Meyerson Sketch (AMS).** The AMS [19] is utilized for summarizing clustering information with a limited number of data points. Though it can be cost-intensive due to the need for sketch reconstruction to adapt to evolving data streams, it is applicable in scenarios where the exact number of clusters is known a priori. This flexibility helps _Benne_ to accommodate diverse clustering needs.
### _Window Model_
In _Benne_, the window model determines the scope of the data stream that is under consideration for clustering at any given time. The selected window model influences how _Benne_ can adapt to evolving data distributions and concept drifts. As illustrated in Figure 2, the three primary types of window models supported in _Benne_ are the Landmark Window Model (LandmarkWM), the Sliding Window Model (SlidingWM), and the Damped Window Model (DampedWM).
#### 3.2.1 Landmark Window Model
In the LandmarkWM[39], the data stream is divided into fixed-size windows starting from a predefined landmark point. This model aids in detecting concept drifts and periodic patterns in the data by facilitating comparisons of clustering results across different windows. The primary challenge with the LandmarkWM lies in deciding the spacing between landmarks, which has an impact on clustering accuracy and efficiency.
#### 3.2.2 Sliding Window Model
The SlidingWM[40, 19] focuses on a fixed-size window encompassing the most recent data points for clustering. As new data points are processed, the oldest ones are discarded, ensuring that the algorithm remains responsive to the latest trends in the data. This model may, however, compromise clustering accuracy, particularly when the window size is small.
#### 3.2.3 Damped Window Model
The DampedWM[41, 15] assigns exponentially decaying weights to data points based on their age, giving precedence to recent data points while still considering older ones. This approach enables smoother adaptation to evolving data distributions. The DampedWM model is susceptible to certain stream characteristics, such as outlier effects, due to its predefined decay function parameters.
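A common realization of the DampedWM weighting, assumed here for illustration, is the exponential decay \(w = 2^{-\lambda\cdot\Delta t}\); the actual decay function and parameters used by _Benne_ may differ.

```python
def damped_weight(age, lam=0.01):
    """Exponential decay weight 2^(-lam * age); recent points (small age) weigh ~1."""
    return 2.0 ** (-lam * age)

def reweight_clusters(clusters, now, lam=0.01):
    """Scale each cluster's weight by the decay accrued since its last update."""
    for c in clusters:
        dt = now - c["last_update"]
        c["weight"] *= damped_weight(dt, lam)
        c["last_update"] = now
    return clusters

clusters = [{"weight": 5.0, "last_update": 0}, {"weight": 5.0, "last_update": 90}]
print(reweight_clusters(clusters, now=100, lam=0.01))
# the older cluster decays more: weight ~2.5 versus ~4.66
```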
### _Outlier Detection Mechanism_
Outlier detection mechanisms are optional design aspects in _Benne_ that contribute to ensuring the quality of clustering results by identifying data points that deviate significantly from the overall data distribution. The two independent enhancements for this mechanism are the use of a buffer and a timer. We provide five variations, including not using any outlier detection mechanism (NoOutlierD). The other four variations are illustrated in Figure 3.

| **Aspect** | **Option** | **Pros** | **Cons** |
| --- | --- | --- | --- |
| Data Structure | CFT | Supports a range of operations | - |
| Data Structure | CoreT | Can handle dense data streams | Involves tree rebuild |
| Data Structure | DPT | Can handle evolving data streams | - |
| Data Structure | MCs | Reduces computation load | - |
| Data Structure | Grids | Computationally efficient | Limited accuracy with changing data streams |
| Data Structure | AMS | Ideal for known number of clusters | Involves sketch reconstruction |
| Window Model | LandmarkWM | Capable of detecting drifts | Sensitive to landmark spacing |
| Window Model | SlidingWM | Responsive to trends | Accuracy loss with small windows |
| Window Model | DampedWM | Smooth adaptation to data evolution | Sensitive to outlier effects |
| Outlier Detection | NoOutlierD | - | May reduce clustering quality |
| Outlier Detection | OutlierD | Helps improve accuracy | - |
| Outlier Detection | OutlierD-B | Prevents immediate incorporation of outliers | - |
| Outlier Detection | OutlierD-T | Robust to outliers | Algorithm complexity |
| Outlier Detection | OutlierD-BT | High accuracy | Requires buffer time |
| Refinement Strategy | NoRefine | Saves computational resources | Less accurate with evolving data |
| Refinement Strategy | One-shotRefine | Balances efficiency and accuracy | - |
| Refinement Strategy | IncrementalRefine | Adapts to new data | Computation overhead |

TABLE I: Design Options in _Benne_

Fig. 2: Three Types of Window Model.

Fig. 3: Four Types of Outlier Detection Mechanisms.
#### 3.3.1 Basic Outlier Detection
As depicted in Figure 3(a), the Basic Outlier Detection (OutlierD) periodically identifies and removes low-density temporal clusters, improving clustering accuracy without significantly impacting efficiency [16, 15]. Outlier detection methods can be further categorized into distance-based, density-based, and grid-based approaches, each suited to particular data stream characteristics.
#### 3.3.2 Outlier Detection with Buffer
The Buffer variant (OutlierD-B) retains potential outliers in a buffer for possible future clustering, as shown in Figure 3(b) [17, 2]. This approach improves the overall clustering performance by keeping the temporal clusters representative of the underlying data distribution and preventing the immediate incorporation of potential outliers.
#### 3.3.3 Outlier Detection with Timer
The Timer variant (OutlierD-T) uses a timer mechanism, as illustrated in Figure 3(c), to evaluate the activity of temporal clusters before transforming them into outlier clusters. This method enhances the robustness of the outlier detection process and the overall accuracy of the clustering results. The trade-off is an increase in algorithmic complexity and the risk of preserving noisy clusters.
#### 3.3.4 Outlier Detection with Buffer and Timer
As depicted in Figure 3(d), the combined Buffer and Timer variant (OutlierD-BT) retains potential outliers in a buffer and uses a timer to evaluate the activity of temporal clusters before transforming them into outlier clusters. Although this method improves the clustering accuracy, it increases the time required for buffer maintenance.
### _Refinement Strategy_
The refinement strategy, an optional design aspect, helps to maintain the adaptability of _DSC_ algorithms to evolving data distributions and concept drifts by updating the clustering model with new data points. Depending on computational efficiency, clustering accuracy, and adaptability to data distribution changes, there are three strategies available: NoRefine, One-shotRefine, and IncrementalRefine.
#### 3.4.1 No Refinement
The NoRefine strategy, as depicted in Figure 4 (a), outputs the temporal clusters \(C\) as the final results without further refinement. This strategy saves computational resources but may lower the clustering accuracy when handling evolving data distributions or concept drifts.
#### 3.4.2 One-shot Refinement
The One-shotRefine strategy (Figure 4 (b)) performs model refinement less frequently to balance computational efficiency and clustering accuracy. This strategy is effective when computational resources are limited or the data stream changes gradually, allowing infrequent model updates.
#### 3.4.3 Incremental Refinement
The IncrementalRefine strategy (Figure 4 (c)) updates the clustering model continuously as new data points arrive. This keeps the model current and adaptable but may increase computational overhead. The trade-off is improved clustering accuracy at the cost of computational complexity.
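The two refinement strategies differ mainly in when a batch clusterer is run over the current summaries. The sketch below uses scikit-learn's k-means as that batch step, which is an assumption for illustration; _Benne_'s library may employ other batch algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans

def one_shot_refine(summary_centers, k):
    """Run a single batch clustering pass over the summaries at the end of the stream."""
    return KMeans(n_clusters=k, n_init=10).fit(np.asarray(summary_centers))

def incremental_refine(model, current_centers, k):
    """Re-fit whenever summaries change; warm-start from the previous macro-clusters
    to keep the model current at the cost of repeated batch work."""
    X = np.asarray(current_centers)
    if model is None:
        return KMeans(n_clusters=k, n_init=10).fit(X)
    return KMeans(n_clusters=k, init=model.cluster_centers_, n_init=1).fit(X)
```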
## 4 Design and Adaptability of _Benne_
This section elucidates the comprehensive algorithmic design of _Benne_. The algorithm is engineered for flexible adaptability, allowing it to self-adjust its design components in response to dynamically detected workload characteristics of the data stream and user-defined optimization objectives. This adaptability enables _Benne_ to capitalize on the strengths of various design choices, thereby ensuring enhanced and stable clustering performance tailored to specific optimization goals.
### _General Workflow_
_Benne_ operates according to a bifurcated execution strategy, comprising an online phase and an offline phase. The algorithm's adaptability is rooted in the flexible configuration of its core components: the summarizing data structure (\(struc.\)), window model (\(win.\)), outlier detection mechanism (\(out.\)), and refinement strategy (\(ref.\)). Guided by real-time stream characteristics, which are detected from a sample queue (\(queue\)), as well as predefined threshold values (\(thresholds\)) and a performance objective (\(o\)), _Benne_ dynamically tailors its components to meet the specialized requirements of diverse applications and data streams. Algorithm 1 outlines the high-level execution flow of _Benne_. In the online phase, the algorithm sequentially processes incoming data points, adhering to the following steps:
_(1) Automatic Design Choices Adaptation:_ Given the dynamic characteristics of data streams, _Benne_ is designed to adapt its configuration to optimize performance continually. The algorithm accumulates incoming data points in a queue (denoted as \(queue\) in Line 3 of Algorithm 1). It then performs real-time analysis of the stream's characteristics using the _Regular-detection Fun_. (executed at Line 5). Based on this analysis, _Benne_ dynamically selects the most suitable components for the current data stream using the _Auto-selection Fun_. (executed at Line 6). Detailed discussions of these functions are available in Sections 4.2 and 4.3. If a change in the summarizing data structure is warranted, _Benne_ transfers the temporal clusters from the old structure (\(struc\_old\)) to the new one (\(struc.\)) using the _Flexible-migration Fun_. (executed at Line 7), as elaborated in Section 4.4.

Fig. 4: Refinement Strategies.
_(2) Window Function:_ The _Window Fun_. (executed at Line 11 in Algorithm 1) manages the data points in the summarizing data structure (\(s\)), in accordance with the selected window model (\(win\).). This function ensures that the algorithm's focus remains on the most recent and relevant data points, thereby allowing it to adapt to changes in the data distribution effectively.
_(3) Outlier Detection:_ When the outlier detection mechanism (\(out\).) is activated, the algorithm invokes the Outlier Fun. to assess if the current input point (\(p\)) qualifies as an outlier. This function is executed between Lines 12-15 in Algorithm 1. If the point is determined not to be an outlier, it is subsequently inserted into the summarizing data structure (\(s\)) and the structure is updated accordingly (executed at Line 15). Conversely, if the outlier detection mechanism is deactivated, the input point is directly inserted into the summarizing data structure, as indicated at Line 17 in Algorithm 1.
_(4) Incremental Refinement:_ Should the refinement strategy (\(ref\).) be configured to IncrementalRefine, the algorithm invokes the _Refine Fun_. each time new data points are processed. This is executed between Lines 18-19 in Algorithm 1. This strategy ensures that the clustering model is continuously updated and remains responsive to dynamic changes in the data distribution. However, this approach may necessitate more frequent updates, thereby potentially increasing computational overhead.
_(5) One-shot Refinement:_ Upon completion of the input data stream processing, the algorithm transitions to the offline phase. If the refinement strategy (\(ref\).) is configured as One-shotRefine, the _Refine Fun_. is executed in a single pass at the conclusion of the stream processing, as indicated between Lines 20-21 in Algorithm 1. This strategy minimizes computational overhead by limiting the frequency of refinement operations. However, it may compromise the adaptability of the clustering model if it fails to capture timely changes in the data distribution.
```
Data: p           // Input point
Data: C           // Temporal clusters
Data: struc.      // Type of summarizing data structure
Data: win.        // Type of window model
Data: out.        // Type of outlier detection mechanism
Data: ref.        // Type of refinement strategy
Data: queue       // Batch of the stream used for detection
Data: thresholds  // Thresholds for identifying characteristics' changes
Data: obj.        // User's primary performance objective
// Online Phase
1   Initialize parameters and design choices;
2   while not end of input stream do
3       if queue is full then
4           struc_old. = struc.;
5           characteristics = Regular-detection Fun.(queue, thresholds);
6           struc., win., out., ref. = Auto-selection Fun.(obj., characteristics);
7           Flexible-migration Fun.(struc_old., struc., obj.);
8           Empty queue;
9       else
10          queue.push(p);
11      Window Fun.(struc., win.);
12      if out. != NoOutlierD then
13          b <- Outlier Fun.(p, struc., out.);
14          if b == false then
15              Insert Fun.(p, struc.);    // insert p into struc. and update it
16      else
17          Insert Fun.(p, struc.);        // insert p into struc. and update it
18      if ref. == IncrementalRefine then
19          Refine Fun.(ref., struc.);
// Offline Phase
20  if ref. == One-shotRefine then
21      Refine Fun.(ref., struc.);
```
**Algorithm 1** Execution flow of _Benne_.
### _Regular Stream Characteristics Detection_
As elaborated in Section 2.2, various stream characteristics such as data dimensionality, cluster evolution, and the number of outliers are subject to change in evolving data streams. To make informed decisions about the most appropriate design choices for real-time clustering, _Benne_ must first ascertain the current characteristics of the data stream. Algorithm 2 delineates the procedure _Benne_ employs to automatically detect key attributes like dimensionality, cluster evolution, and the number of outliers in the current data stream.
Specifically, the algorithm initializes various counters and variables to store stream characteristics at Lines 3-5 of Algorithm 2. For each new data point in the \(queue\) (Line 6), _Benne_ evaluates its dimensionality. If the dimension exceeds the threshold \(T_{d}\), the counter \(high\_dim\_data\) is incremented (Line 7). The variance of the data stream is updated at Line 8. The algorithm also checks whether each data point is an outlier based on its distance to the current clustering centers (Line 9-10).
After processing all the data points in the \(queue\), _Benne_ sets the \(characteristics.high\_dimension\) attribute to "true" or "false" based on the value of \(high\_dim\_data\) (Lines 11-14). Similarly, the \(characteristics.frequent\_evolution\) attribute is determined based on the calculated variance (Lines 15-19), and the \(characteristics.many\_outliers\) attribute is set based on the number of outliers detected (Lines 20-23).
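The detection procedure of Algorithm 2 can be approximated by the following Python sketch. The threshold names follow the prose above, but the aggregation rules (treating a majority of high-dimensional points as "high dimension", and measuring variance over point norms) are simplifying assumptions made for illustration.

```python
import numpy as np

def regular_detection(queue, centers, T_d=30, T_var=1000.0, T_out=50, delta=10.0):
    """Infer stream characteristics from a buffered sample (cf. Algorithm 2).

    queue: list of points; centers: list of current cluster centers (np arrays).
    Thresholds T_d, T_var, T_out and delta are illustrative values."""
    pts = [np.asarray(p, dtype=float) for p in queue]
    high_dim_data, outliers = 0, 0
    for p in pts:
        if p.shape[0] > T_d:                       # dimensionality check
            high_dim_data += 1
        if centers and min(np.linalg.norm(p - c) for c in centers) > delta:
            outliers += 1                          # far from every current center
    # variance of point norms as a cheap, dimension-agnostic evolution signal (assumption)
    variance = float(np.var([np.linalg.norm(p) for p in pts])) if pts else 0.0
    return {
        "high_dimension": high_dim_data > len(pts) // 2,   # assumed majority rule
        "frequent_evolution": variance > T_var,
        "many_outliers": outliers > T_out,
    }
```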
### _Automatic Design Choice Selection_
Upon receiving the stream characteristics from the _Regular-detection Function_, _Benne_ proceeds to select the most appropriate design choices based on these characteristics and the user-defined optimization objective. The detailed selection process is delineated in Algorithm 3.
```
/* objective: user's primary performance objective (Accuracy, Efficiency, or Balance);
   data: input data stream;  characteristics: detected stream characteristics */
1   Function Auto-selection Fun.(objective, data, characteristics):
2       struc., win., out., ref. = null;   /* initialize the four modular design components */
3       if objective == Accuracy then
            /* Select design choices with high accuracy based on the stream characteristics */
4           if characteristics.frequent_evolution == true then
5               struc. = MCs;
6           else
7               struc. = CFT;
8           if characteristics.many_outliers == true then
9               win. = LandmarkWM;
10              out. = OutlierD-BT;
11          else
12              win. = DampedWM;
13              if characteristics.high_dimension == true then
14                  out. = OutlierD-BT;
15              else
16                  ref. = IncrementalRefine;
17      else
18          /* Select design choices with high efficiency based on the stream characteristics */
19          if characteristics.frequent_evolution == true then
20              struc. = DPT;
21              win. = LandmarkWM;
22          else
23              struc. = Grids;
24              win. = SlidingWM;
25          out. = NoOutlierD;
26          ref. = NoRefine;
27
28      Return struc., win., out., ref.;
```
**Algorithm 3** Auto-selection Fun. of _Benne_.
If the optimization objective is set to Efficiency (Line 20), _Benne_ selects either DPT or Grids as the summarizing data structure based on the frequency of cluster evolution (Lines 22-25). The window model is also selected accordingly, with LandmarkWM chosen for frequent cluster evolution and SlidingWM otherwise (Lines 24-25). The outlier detection mechanism is set to NoOutlierD (Line 26), and the refinement strategy is set to NoRefine (Line 27).
### _Flexible Algorithm Migration_
Due to the significant structural differences among various summarizing data structures, as elaborated in Section 3.1, it is not feasible to directly transfer clustering information from the old summarizing data structure to the new one. To address this, _Benne_ employs a migration function, delineated in Algorithm 4.
Specifically, if the newly selected summarizing data structure (\(struc.\)) differs from the previously selected one (\(struc\_old.\)) (Line 2), the algorithm proceeds as follows:
* **Accuracy Objective** (Lines 4-7): If the optimization objective is Accuracy, _Benne_ extracts the clustering centers (\(c\)) from the old summarizing data structure (Line 3). Additionally, if the old outlier detection mechanism (\(out.\)) is either OutlierD-B or OutlierD-BT, outliers (\(o\)) are also extracted (Line 6). These centers and outliers are then used to initialize the new summarizing data structure (\(struc.\)) (Line 7).
* **Efficiency Objective** (Lines 8-10): If the optimization objective is Efficiency, _Benne_ extracts the clustering centers (\(c\)) from the old summarizing data structure and sinks them into the output to avoid computational overhead (Line 9). A new, empty object is created to initialize the new summarizing data structure (\(struc.\)) (Line 10).
This approach ensures that _Benne_ can smoothly transition between different summarizing data structures while optimizing for the user-defined objective.
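The following sketch captures the migration logic just described; the method names (`extract_centers`, `from_seeds`, etc.) are placeholders introduced for illustration and are not _Benne_'s actual API.

```python
def flexible_migration(struc_old, NewStructure, objective, out_mech, sink):
    """Move clustering state from the old summarizing structure to a new one."""
    if objective == "Accuracy":
        seeds = list(struc_old.extract_centers())        # keep the old clustering centers
        if out_mech in ("OutlierD-B", "OutlierD-BT"):
            seeds += struc_old.extract_outliers()         # keep buffered outliers as well
        return NewStructure.from_seeds(seeds)             # initialize the new structure
    else:  # Efficiency: avoid the cost of re-inserting old state
        sink(struc_old.extract_centers())                 # emit old centers as results
        return NewStructure()                             # start from an empty structure
```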
### _Modular Composition of Clustering Functions_
The clustering process, as delineated from Lines 11 to 21 in Algorithm 1, comprises four primary functions, each responsible for a specific aspect of the algorithm's operation.
#### 4.5.1 Window Function
The _Window Fun_. (Algorithm 5) employs distinct computational logic contingent upon the selected window model. The function maintains a counter \(c\) to monitor the number of processed data points and executes specific actions based on the chosen window model.
**Landmark Window** (Lines 4-8): When the landmark window model is activated, the function assesses whether the counter \(c\) has exceeded the current landmark \(m\). If affirmative, the clustering results in the summarizing data structure \(s\) are sunk, either stored or output, and all existing clustering information is purged. The function then updates the current landmark to \(m_{\text{net}}\) for the subsequent window. This model is particularly beneficial in contexts where the data distribution experiences significant fluctuations, requiring the algorithm to adapt by periodically resetting its clustering information.
**Damped Window** (Lines 9-10): In the damped window model, the function adjusts the weights of data points and clusters in the summarizing data structure \(s\) using decay parameters \(dc=(\alpha,\lambda)\). This model is tailored for situations where older data points should exert less influence on the clustering results. By implementing a decay function, the algorithm can incrementally discard outdated information and adapt to the current data distribution.
**Sliding Window** (Lines 11-13): If the sliding window model is selected, the function verifies whether the counter \(c\) has surpassed the sliding window size \(ws\). If so, the earliest data point in the window is removed from the summarizing data structure \(s\). This model maintains a fixed-size window of the most recent data points, ensuring that the algorithm concentrates on the current data distribution while disregarding older, potentially irrelevant, data points.
```
/* c: counter (initialized with 0), m: landmark, ws: sliding window size,
   dc = (α, λ): decay parameters */
1   Function Window Fun.(s, m, ws, dc):
2       c++;
3       if win. == LandmarkWM then
4           if c > m then
5               Sink the clustering results from s;
6               Clear all of the clustering information;
7               m <- m_next;    /* update current landmark */
8       else
9           if win. == DampedWM then
10              Update the weights of points and clusters in s with dc;
11          else    /* win. == SlidingWM */
12              if c > ws then
13                  Remove the earliest point from the window;
```
**Algorithm 5** Window Fun. of _Benne_.
#### 4.5.2 Outlier Function
When outlier detection is activated, the _Outlier Fun_. (Algorithm 6) comes into play. This function is pivotal in the outlier detection mechanism of the _Benne_ algorithm, managing outliers through various strategies depending on the selected outlier detection type (\(out\).). Below is an in-depth discussion of the steps involved in _Outlier Fun_.:
**Buffer Optimization** (Lines 2-8): When either buffer or buffertimer is the chosen outlier detection mechanism, the function evaluates whether the input point \(p\) is an outlier. If \(p\) qualifies as an outlier, it is inserted into the outlier buffer (\(Buffer\)) and allocated to the nearest cluster (\(cl\)). Subsequently, the cluster within the buffer is updated. The function then ascertains if the cluster has sufficient density, exceeding the predefined density threshold \(d\). If the cluster meets the density criteria, it is transferred from the outlier buffer to the summarizing data structure \(s\).
**Regular Check** (Lines 9-24): This check is initiated at predetermined intervals to scrutinize the clusters in both the summarizing data structure and the outlier buffer. For each cluster in the summarizing data structure, the function evaluates its density against the threshold \(d\) and, if a time-based mechanism (timer or buffertimer) is in use, its activity against the timer threshold \(t\). If a cluster is neither sufficiently dense nor active, it is either relocated to the outlier buffer (if enabled) or excised from the summarizing data structure.
**Buffer Timer Check** (Lines 20-24): In cases where buffertimer is the selected outlier detection mechanism, the function assesses the activity level of each cluster in the outlier buffer using the timer threshold \(t\). Clusters deemed inactive are purged from the buffer.
**Outlier Determination** (Line 25): The function concludes by returning a boolean value indicating whether the input point \(p\) is an outlier, based on the outcomes of the preceding steps.
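The buffer branch of the outlier mechanism can be sketched as below; the distance rule, running-mean update, and field names are assumptions made for illustration and are consistent with, but not identical to, Algorithm 6.

```python
import numpy as np

def outlier_check_with_buffer(p, clusters, buffer, delta, density_threshold):
    """Return True if p is treated as an outlier (absorbed by the buffer).

    clusters / buffer: lists of dicts with 'center' (np.ndarray) and 'weight'."""
    p = np.asarray(p, dtype=float)
    dist = min((np.linalg.norm(p - c["center"]) for c in clusters), default=np.inf)
    if dist <= delta:
        return False                                   # close to an existing cluster
    # place p in the nearest buffered cluster, or open a new one
    near = min(buffer, key=lambda c: np.linalg.norm(p - c["center"]), default=None)
    if near is None or np.linalg.norm(p - near["center"]) > delta:
        near = {"center": p.copy(), "weight": 0.0}
        buffer.append(near)
    near["weight"] += 1.0
    near["center"] += (p - near["center"]) / near["weight"]   # running mean of members
    if near["weight"] >= density_threshold:            # dense enough: promote to a cluster
        buffer.remove(near)
        clusters.append(near)
    return True
```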
#### 4.5.3 Insert Function
The _Insert Fun._ (Algorithm 7) is invoked to insert the input point \(p\) into the designated summarizing data structure \(struc\).. This function is designed to be versatile, accommodating both hierarchical and partitional types of summarizing data structures. This adaptability ensures that _Benne_ can efficiently manage a range of summarizing data structures, thereby meeting diverse application needs.
When the selected data structure is of the hierarchical category, the function incorporates the input point \(p\) into the appropriate cluster within the hierarchical structure and subsequently updates the hierarchy (Lines 2-3). Conversely, if the data structure is partitional, the function initially identifies the closest cluster to the input point \(p\), inserts \(p\) into this cluster, and then updates the partitional structure accordingly (Lines 4-5).
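A minimal sketch of this dual dispatch is shown below; the structure categories and method names are placeholders for whatever concrete summarizing data structure is active.

```python
def insert(p, struc):
    """Dispatch insertion on the category of the summarizing data structure."""
    if struc.category == "hierarchical":      # CFT / CoreT / DPT
        node = struc.find_leaf(p)             # descend the tree to the closest leaf
        node.absorb(p)
        struc.update_path(node)               # propagate summaries up the hierarchy
    else:                                     # MCs / Grids / AMS
        cluster = struc.nearest_cluster(p)    # flat search over the current summaries
        cluster.absorb(p)
        struc.update(cluster)
```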
#### 4.5.4 Refine Function
The _Refine Fun._ (Algorithm 8) is designed to refine the clustering results based on the chosen refinement strategy, denoted as \(ref\).. The algorithm offers two refinement strategies: IncrementalRefine and One-shotRefine, each catering to specific application requirements and data stream dynamics.
In the case of IncrementalRefine, the function invokes the ExtractModel and UpdateModel steps (see Algorithm 8). It first extracts the current clustering model from the summarizing data structure \(s\) and then applies a suitable batch clustering algorithm to update it. Following this, the UpdateStructure and CleanStructure steps are executed to update \(s\) with the new model and to remove any outdated or redundant data points or clusters, respectively. This ensures that the model is continually updated to adapt to evolving data distributions.
On the other hand, when One-shotRefine is selected, the function performs the ExtractAll step to retrieve all clusters and data points from \(s\). A batch clustering algorithm is then applied during the RefineModel step to improve the clustering results. Finally, the UpdateStructure step updates \(s\) with the refined clustering model, offering a more accurate snapshot of the current data distribution.
```
/* ref.: Refinement strategy, s: Summarizing data structure */
1   Function Refine Fun.(ref., s):
2       if ref. == IncrementalRefine then
3           ExtractModel(s);      /* get the current clustering model */
4           UpdateModel(s);       /* batch clustering */
5           UpdateStructure(s);   /* update s with the new model */
6           CleanStructure(s);    /* remove outdated data */
7       else
8           ExtractAll(s);        /* get all clusters and points */
9           RefineModel(s);       /* batch clustering */
10          UpdateStructure(s);   /* update s with the refined model */
```
**Algorithm 8** Refine Fun. of _Benne_.
## 5 Experimental Analysis
In this section, we present the evaluation results. All experiments are carried out on an Intel Xeon processor. Table II summarizes the detailed specification of the hardware and software used in our experiments.
### _Implementation Details_
_Benne_ is architected as a three-threaded pipeline to process data streams, thereby approximating a realistic computational environment. Inter-thread communication is facilitated through a shared-memory queue, mitigating the latency associated with network transmissions.
The first thread, termed as the _Data Producer Thread_, is responsible for loading benchmark workloads into memory. It then sequentially enqueues each data point into a shared queue. To simulate a high-throughput scenario, the input arrival rate is configured to be immediate, thereby eliminating idle time for the algorithm.
The second thread, known as the _Data Consumer Thread_, executes a _DSC_ algorithm to process the incoming data stream. It dequeues input tuples from the shared queue for processing and subsequently generates temporal clustering results. These results are then forwarded to the next thread in the pipeline. Notably, all efficiency metrics are captured in this thread to ensure a consistent basis for comparing various _DSC_ algorithms.
Finally, the _Result Collector Thread_ serves as the repository for the temporal clustering results generated by the Data Consumer Thread. Accuracy metrics are computed in this thread to minimize any interference with efficiency measurements. The quality of clustering is evaluated using purity [25], and the capability of the design aspects to handle cluster evolution is assessed using CMM [26].
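For illustration, the three-threaded pipeline can be mimicked in a few lines of Python using in-memory queues; this is only a sketch of the data flow, not the evaluated (compiled) implementation, and the toy `cluster_fn` stands in for the actual _DSC_ algorithm.

```python
import queue, threading

def producer(points, q):
    for p in points:
        q.put(p)                      # enqueue as fast as possible (no idle time)
    q.put(None)                       # end-of-stream marker

def consumer(q, results, cluster_fn):
    while (p := q.get()) is not None:
        results.put(cluster_fn(p))    # temporal clustering result for this point
    results.put(None)

def collector(results, sink):
    while (r := results.get()) is not None:
        sink.append(r)                # accuracy metrics would be computed here

q, results, sink = queue.Queue(), queue.Queue(), []
threads = [threading.Thread(target=producer, args=(range(10), q)),
           threading.Thread(target=consumer, args=(q, results, lambda p: p % 3)),
           threading.Thread(target=collector, args=(results, sink))]
for t in threads: t.start()
for t in threads: t.join()
print(sink)    # e.g. [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
```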
### _Algorithm Selection_
In our comparative analysis, _Benne_ is benchmarked against eight established _DSC_ algorithms, summarized in Table III. The selection of these algorithms is guided by two primary criteria: 1) they collectively represent a broad spectrum of design decisions across all four design aspects, as detailed in Table III; 2) they span a historical range in the field, from foundational algorithms like BIRCH [14] to contemporary contributions such as SL-KMeans [19].

| Component | Description |
| --- | --- |
| Processor | Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz |
| L3 Cache Size | 48MiB |
| Memory | 256GB DDR4 RAM |
| OS | Ubuntu 20.04.4 LTS |
| Kernel | Linux 5.4.0-122-generic |
| Compiler | GCC 11.3.0 with -O3 |

TABLE II: Specification of our evaluation platform
The BIRCH algorithm [14], a seminal work in this domain, introduced the concept of _Clustering Feature (CF)_ for data summarization. This idea was later extended to _Micro Cluster (MC)_ in CluStream [16], which pioneered the online-offline strategy for efficient stream clustering. DenStream [17] builds upon CF and incorporates _outlier detection_ to mitigate the influence of noise.
DStream [15] employs a unique approach by partitioning the feature space into density cells and mapping data objects into these cells. In contrast, StreamKM++ [20] utilizes a two-step _"merge and split"_ mechanism centered around a _Coreset Tree_ data structure. DBStream [18] addresses the fragmentation of dense areas in micro-cluster-based algorithms, while EDMStream[2] employs a damped window model to allow cluster density to decay over time. Lastly, SL-KMeans [19] introduces algorithms for \(k\)-clustering on sliding windows, demonstrating superior performance over analytical bounds.
### _Dataset Selection_
Table IV provides a summary of the datasets selected for evaluation. Our dataset selection is governed by two primary criteria. First, we aim for a fair and comprehensive evaluation by including the three most frequently used datasets across the algorithms summarized in Table III. Specifically, _FCT_ (Forest CoverType) is employed by algorithms such as SL-KMeans, StreamKM++, EDMStream, and DBStream. The _KDD99_ dataset is utilized by StreamKM++, DStream, EDMStream, DenStream, CluStream, and DBStream, while _Sensor_ is specifically used by DBStream. In addition to these three classical datasets, we incorporate a more recent dataset, _Insects_, which was proposed in 2020 [43].
Second, although some previous studies have proposed synthetic datasets, these datasets are not publicly available. To address this limitation and to evaluate the algorithm under varying workload characteristics, as delineated in Table IV, we design three synthetic datasets: _EDS_, _ODS_, and _Dim_. The _EDS_ dataset contains varying frequencies of the occurrence of cluster evolution, _ODS_ features a time-varying number of outliers at different stages, and _Dim_ comprises data points with extremely high dimensions.
A more detailed account of each dataset is as follows:
* _FCT_ (Forest CoverType) [42] consists of tree observations from four areas of the Roosevelt National Forest in Colorado. It is a high-dimensional dataset with 54 attributes, and each data point has a cluster label indicating its tree type. The dataset contains no outliers.
* _KDD99_[1] is a large dataset of network intrusion detection stream data collected by the MIT Lincoln Laboratory. It is also high-dimensional and contains a significant number of outliers, making it suitable for testing outlier detection capabilities.
* _Insects_[43] is the most recent dataset, generated by an optical sensor that measures insect flight characteristics. It is specifically designed for testing the clustering of evolving data streams.
* _Sensor_[44] contains environmental data such as temperature, humidity, light, and voltage, collected from sensors deployed in the Intel Berkeley Research Lab. It is a low-dimensional dataset with only five attributes but has a high frequency of cluster evolution.
* _EDS_ is a synthetic dataset used in previous works [17] to study cluster evolution. It is divided into five stages according to evolving frequency, allowing for a comparative analysis of algorithmic performance across these stages.
* _ODS_ is another synthetic dataset, distinct from _EDS_ in that its second half is composed entirely of outliers, enabling an analysis of algorithmic performance under varying numbers of outliers.
* _Dim_ is generated using the RandomTreeGenerator from the MOA framework [45]. It features data points with dimensions ranging from 20 to 100 and 50 classes, with other specific configurations set to default.
| **Workload** | **Length** | **Dim.** | **Cluster Num.** | **Outliers** | **Evolving Freq.** |
| --- | --- | --- | --- | --- | --- |
| _FCT_ [42] | 581012 | 54 | 7 | False | Low |
| _KDD99_ [1] | 4898431 | 41 | 23 | True | Low |
| _Insects_ [43] | 905145 | 33 | 24 | False | Low |
| _Sensor_ [44] | 2219803 | 5 | 55 | False | High |
| _EDS_ [17] | 245270 | 2 | 363 | False | Varying |
| _ODS_ [17] | 100000 | 2 | 90 | Varying | High |
| _ES_ [17] | 345270 | 2 | 453 | Varying | Varying |
| _Dim_ [45] | 500000 | 20~100 | 50 | Low | Low |

TABLE IV: Characteristics differences of selected workloads. Note that the outliers column refers to whether there are outliers in the final clustering results.
| **Algorithm** | **Year** | **Data Structure (Name)** | **Data Structure (Catalog)** | **Window Model** | **Outlier Detection** | **Offline Refinement** |
| --- | --- | --- | --- | --- | --- | --- |
| BIRCH [14] | 1996 | CFT | Hierarchical | LandmarkWM | OutlierD | IncrementalRefine |
| CluStream [16] | 2003 | MCs | Partitional | LandmarkWM | OutlierD-T | IncrementalRefine |
| DenStream [41] | 2006 | MCs | Partitional | DampedWM | OutlierD-BT | One-shotRefine |
| DStream [15] | 2007 | Grids | Partitional | DampedWM | OutlierD-T | One-shotRefine |
| StreamKM++ [20] | 2012 | CoreT | Hierarchical | LandmarkWM | NoOutlierD | One-shotRefine |
| DBStream [18] | 2016 | MCs | Partitional | DampedWM | OutlierD-T | One-shotRefine |
| EDMStream [2] | 2017 | DPT | Hierarchical | DampedWM | OutlierD-BT | IncrementalRefine |
| SL-KMeans [19] | 2020 | AMS | Partitional | SlidingWM | NoOutlierD | IncrementalRefine |

TABLE III: A summary of representative _DSC_ algorithms and their design decisions. The year attribute for each algorithm is when it was first published.
### _General Evaluation of Clustering Behavior_
The versatility of _Benne_ allows it to be configured into two primary variants: _Benne (Accuracy)_ and _Benne (Efficiency)_. These variants are tailored to different optimization objectives and are derived from distinct algorithmic design decisions. We initiate our evaluation by comparing the clustering behavior of these _Benne_ variants with eight existing _DSC_ algorithms. The evaluation is conducted on four real-world datasets--_FCT_, _KDD99_, _Insects_, and _Sensor_ --as well as two evolving datasets--_EDS_ and _ODS_. The outcomes are illustrated in Figure 5, Figure 6, and Figure 7, yielding two key insights.
First, Figure 5 reveals that _Benne (Accuracy)_ attains state-of-the-art purity across all four real-world datasets. Conversely, _Benne (Efficiency)_ exhibits purity levels comparable to existing algorithms but surpasses them in throughput. These observations confirm that by judiciously selecting and integrating different design elements, as elaborated in Section 4.5, both _Benne_ variants can either optimize for accuracy or efficiency. However, achieving both optimal accuracy and efficiency simultaneously remains elusive, corroborating our initial analysis regarding the trade-off between these two metrics.
Second, Figures 6 and 7 demonstrate the robustness of both _Benne_ variants in evolving scenarios. Specifically, _Benne (Accuracy)_ and _Benne (Efficiency)_ maintain their respective optimization targets even under frequent cluster or outlier evolution. This stability contrasts with the deteriorating performance observed in several existing _DSC_ algorithms, such as DStream and SL-KMeans, which struggle to adapt to evolving conditions. We attribute this resilience to _Benne_'s dynamic composition capability, as discussed in Section 4. Unlike most existing algorithms, _Benne_ continuously monitors stream characteristics to detect any changes, thereby enabling timely and accurate adaptations to the evolving data stream. This dynamic adaptability ensures superior clustering performance under varying conditions.
### _Analysis of Dynamic Composition Ability_
We then conduct a detailed analysis to show the effectiveness of _Benne_'s critical ability for dynamic composition, comprising regular stream characteristics detection (Section 4.2), automatic choice selection (Section 4.3), and flexible algorithm migration (Section 4.4), under evolving workload characteristics.
Specifically, to measure the role of the regular stream characteristics detection module, we estimate the general location where the stream characteristics evolve in the workload and check whether the evolution has been detected in a timely and accurate manner based on the output of the regular detection function of _Benne_, as discussed in Algorithm 2. To show the effectiveness of the last two modules, we add two further variants: _Benne (Accuracy) without migration_ and _Benne (Efficiency) without selection_. By comparing the changes in both clustering purity and throughput across the four variants of _Benne_, we can identify the actual contribution of the last two modules of dynamic composition.
#### 5.5.1 Composition Effectiveness Study
We commence by evaluating the performance of the four _Benne_ variants on the real-world _KDD99_ workload, characterized by a high frequency of outliers but low cluster evolution, as detailed in Table IV. Measurements are taken at intervals of 40,000 data points, capturing both clustering outcomes and workload characteristic evolutions, as illustrated in Figure 8. Notably, _Benne_ (Accuracy) excels in purity, while _Benne_ (Efficiency) demonstrates superior throughput, each aligning with their respective optimization objectives. Three key observations emerge from this analysis.
First, a synchronized examination of _Benne_'s clustering behavior and workload evolution (indicated by grey lines in the figure) reveals that while workload changes adversely affect clustering performance (evident in Phases 1 and 3), both _Benne_ (Accuracy) and _Benne_ (Efficiency) swiftly recover (Phases 2 and 4).
For _Benne_ (Accuracy), the initial use of the DampedWM window model leads to a rapid decline in both purity and throughput during Phase 1 due to its inability to effectively manage increasing outlier frequencies. However, the algorithm's regular stream characteristics detection module, as outlined in Algorithm 2, identifies this issue in a timely manner at the close of Phase 1, transitioning to the LandmarkWM window model to better cope with outlier evolution. The subsequent improvement in purity during Phase 2 attests to the efficacy of this module. A similar recovery is observed in Phase 4, where the algorithm switches from CFT to MCs to adapt to increasing cluster evolution in Phase 3.
For _Benne_ (Efficiency), the algorithm opts to forgo outlier detection to minimize overhead, in line with its optimization target. Consequently, its performance deteriorates in Phases 1 and 2. However, upon entering Phase 3, characterized by high cluster evolution, the algorithm promptly detects this change and switches from Grids to DPT, resulting in improved purity and throughput in Phase 4, as depicted in Figure 8.

Fig. 5: Performance Comparison on Four Real-world Workloads.
Second, both purity and throughput drop significantly when the automatic design choice selection and switching is disabled, as shown by _Benne (Efficiency) without selection_. This indicates the limitation of a fixed, individual composition, as discussed in Section 3. On the contrary, applying automatic design choice selection allows the algorithm to make full use of the strengths of each design choice, leading to both better clustering accuracy and efficiency, as shown by _Benne_ (Efficiency).
Third, the inclusion of migration in the algorithm improves accuracy at the expense of clustering speed. A comparative analysis of _Benne_ (Accuracy) and _Benne (Accuracy) without migration_ reveals that while the former consistently outperforms the latter in purity, it lags in throughput. A detailed assessment of the overhead incurred by the three dynamic composition modules will be presented in the subsequent section.
#### 5.5.2 Analysis of Composition-Related Overhead
We proceed to examine the computational overhead associated with _Benne_'s two pivotal composition procedures: detection and migration. This is juxtaposed against the time expended on clustering, contingent on the selected composition. As illustrated in Figure 9, the time allocation for both detection and migration is relatively minimal for _Benne_ (Accuracy) in comparison to the primary clustering task. This underscores the efficiency of these composition procedures.
In the case of _Benne_ (Efficiency), the scenario is somewhat different. Given that this variant is optimized for speed, the clustering operation itself is more time-efficient. Consequently, the proportion of time spent on the detection procedure appears to be larger relative to _Benne_ (Accuracy). However, it's crucial to note that _Benne_ (Efficiency) omits the migration procedure altogether, as elaborated in Section 4.4. This strategic omission further enhances its efficiency, aligning it closely with its optimization objectives.

Fig. 6: Performance Comparison on _EDS_ workload with varying cluster evolution frequency.

Fig. 7: Performance Comparison on _ODS_ workload with varying outlier evolution frequency.

Fig. 8: Detailed performance analysis of _Benne_'s variants on _KDD99_.
This analysis confirms that the overheads associated with dynamic composition in _Benne_ are well-contained, thereby not compromising the algorithm's primary objectives of either accuracy or efficiency.
### _Scalability Across Dimensions_
We turn our attention to evaluating the scalability of _Benne_ in the context of varying dimensions, utilizing the _Dim_ workload for this purpose. This workload comprises datasets with dimensions ranging from 20 to 100, as detailed in Table IV. The outcomes of this evaluation are graphically represented in Figure 10.
Remarkably, _Benne_ maintains a stable purity level of approximately 0.4 across datasets with diverse dimensions. While this purity level may not be high, its consistency is noteworthy, particularly when contending with the _Curse of Dimensionality_. Operations integral to clustering--such as updating the summarizing data structure and pinpointing the appropriate cluster for data insertion--are intrinsically reliant on distance calculations involving the original high-dimensional data. The efficacy of these distance metrics tends to wane as the dimensionality escalates, thereby exacerbating the challenge of distinguishing between data points in high-dimensional spaces.
Additionally, we discern a decrement in _Benne_'s efficiency concomitant with an increase in the dataset's dimensionality. Our analysis ascertains that the computational complexity of several pivotal operations--including but not limited to the updating of the summarizing data structure and the selection of suitable clusters for data insertion--is directly influenced by the dimensionality of the workload. Consequently, a surge in dimensionality incurs a proportional rise in the computational time required for these operations, thereby attenuating the overall efficiency of the clustering process.
### _Sensitivity Analysis of Parameters_
We conducted an exhaustive sensitivity analysis on two specialized variants of _Benne_ --_Benne_ (Accuracy) and _Benne_ (Efficiency)--utilizing the _FCT_ workload. As previously delineated, _Benne_ (Accuracy) aims for elevated purity levels, while _Benne_ (Efficiency) targets higher throughput rates. Both variants fulfill their respective optimization criteria. We scrutinize the following three parameters:
1) **Outlier Distance Threshold (\(\delta\))**: For each data point \(x_{t}\) and its nearest cluster \(C_{t}\), we compute the distance \(D(x_{t},C_{t})\). If this distance surpasses the threshold \(\delta\), \(x_{t}\) is deemed an outlier. We experimented with \(\delta\) values from 50 to 950 for _Benne_ (Accuracy) and from 10 to 100 for _Benne_ (Efficiency). Both variants maintain stable purity and throughput levels across this range. Notably, _Benne_ (Accuracy) undergoes a window model transition from 'landmark' to 'damped' when the outlier distance threshold increases within a specific range, causing a sudden alteration in performance metrics. Experimental data indicate an increase in cluster size and purity. However, the throughput does not change, because both LandmarkWM and DampedWM are similarly time-consuming for clustering. Conversely, _Benne_ (Efficiency) remains unaffected as it does not consider the number of outliers as a performance metric.
2) **Queue Size Threshold**: We varied the queue size from 20,000 to 100,000. Both variants exhibit a decline in purity as the queue size increases, attributed to less frequent algorithmic adjustments. While the throughput for _Benne_ (Accuracy) diminishes with an increasing queue size due to the growing complexity of its summarizing data structure, _Benne_ (Efficiency) experiences a throughput increase. This is attributed to fewer algorithmic migrations and a transition to a more efficient data structure (Grids) as the queue size enlarges.
3) **Variance Threshold**: _Benne_ activates the _characteristics/requent evolution_ flag when the calculated variance of sampled data exceeds a predefined threshold. We tested variance thresholds from 400 to 4,000. As the threshold rises, both purity and throughput for _Benne_ (Accuracy) decline. This is due to the algorithm's assumption of infrequent evaluations at higher variance thresholds, leading to less frequent algorithmic migrations and increased computational overhead. _Benne_ (Efficiency) remains relatively stable across varying variance thresholds. At a high variance threshold of 4,000, both variants achieve similar purity levels, but _Benne_ (Efficiency) outperforms _Benne_ (Accuracy) in throughput due to the absence of algorithmic migrations.
## 6 Related Work
This section delineates research contributions pertinent to the four cardinal design aspects of _DSC_ algorithms. These aspects serve as the bedrock for the development of _Benne_ and its automated selection methodology.

Fig. 9: Execution Time Break Down Analysis on Real-world Workloads.

Fig. 10: Performance Comparison on _Dim_ workload with varying dimensionality.
**Summarizing Data Structure.** The quest for efficient data structures for summarizing data streams has been a focal point in research. Zhang et al.[46] introduced the Clustering Feature Tree (CFT), characterized by its incrementality and additivity, making it well-suited for streaming workloads. Aggarwal et al.[16] extended the CF structure into microclusters (MCs), incorporating additional summary information such as timestamps and weights to synergize with window models. Conversely, Chen et al.[15] advocated for a grid-based data structure for efficiency. Gong et al.[2] presented the Dependency Tree (DPT), which strikes a balance between efficiency and accuracy. However, DPT may yield sub-optimal results in handling cluster evolution effectively. These contributions inform _Benne_'s automated selection of appropriate summarizing data structures.
**Window Model.** Various window models have been proposed to specify the subset of data streams to be processed. Metwally et al.[39] proposed the landmark window model (LandmarkWM), while Zhou et al.[40] and Borassi et al.[19] introduced the sliding window model (SlidingWM). Cao et al.[41] and Chen et al. [15] presented the damped window model (DampedWM), which retains all data but prioritizes the most recent information by associating varying weights. _Benne_ leverages these foundational works to automatically select the most fitting window model, contingent on user-defined objectives and data stream characteristics.
**Outlier Detection Mechanism.** Outlier detection is a pivotal design aspect in _DSC_ algorithms. Early contributions like BIRCH [14] included an optional phase for identifying outlier candidates based on object density thresholds. Aggarwal et al.[16] introduced the _outlier timer_ (OutlierD-T) for enhanced outlier identification. Wan et al.[17] further conceptualized the _outlier buffer_ (OutlierD-B) to facilitate the transition between outliers and clustered points. _Benne_ incorporates these mechanisms, enabling automated selection of the most suitable outlier detection strategy based on user objectives and data stream characteristics.
**Refinement Strategy.** Refinement strategies have been integral to _DSC_ algorithms since Aggarwal et al.[16] introduced the online-offline clustering paradigm. Commonly employed offline clustering algorithms include KMeans [5] and its variants such as Scalable k-means [47] and Singlepass k-means [48]. However, our empirical evaluations suggest that refinement strategies often introduce unnecessary computational overhead. _Benne_ takes these observations into account when automatically selecting the most appropriate refinement strategy, aligned with user objectives and data stream characteristics.
## 7 Conclusion
This paper has introduced _Benne_, an innovative _DSC_ algorithm that autonomously selects optimal configurations across four pivotal design aspects, contingent on user-defined objectives and the characteristics of the input data stream. We have conducted a meticulous analysis of these design aspects, encompassing the summarizing data structure, window model, outlier detection mechanism, and refinement strategy. Our empirical evaluations substantiate that _Benne_ surpasses existing state-of-the-art algorithms in both accuracy and efficiency by judiciously selecting the most advantageous combinations of these design aspects. Furthermore, our exhaustive experimental investigations have yielded invaluable insights into the trade-offs inherent in various design aspects of _DSC_ algorithms. These insights serve dual purposes: they assist practitioners in making well-informed choices in the design or selection of _DSC_ algorithms and also lay the groundwork for future scholarly endeavors in this domain.
In addition to our theoretical and experimental contributions, we have encapsulated _Benne_ into a Python library, making it readily accessible for both academic research and practical applications. This library serves as a tool for the community to easily implement, test, and extend our algorithm, thereby fostering further advancements in the field of _DSC_. As avenues for future research, we intend to extend _Benne_ to accommodate high-dimensional data streams and to investigate more sophisticated techniques for the automated selection of optimal configurations based on data stream attributes. We also plan to explore the integration of advanced machine learning methodologies, such as deep learning and reinforcement learning, to further enhance the accuracy and efficiency of _DSC_ algorithms.
## Acknowledgement
This work is supported by the National Research Foundation, Singapore and Infocomm Media Development Authority under its Future Communications Research & Development Programme FCP-SUTD-RG-2022-006, and a MoE AcRF Tier 2 grant (MOE-T2EP20122-0010). Zhengru Wang and Xin Wang are co-first authors. Shuhao Zhang is the corresponding author.
Fig. 11: Parameter Analysis of _Benne_ variants on _FCT_.

2309.07290 | Influence Phase of a dS Observer I : Scalar Exchange | Inspired by real-time computations in AdS black holes, we propose a method to obtain the influence phase of a cosmological observer by calculating the on-shell action on a doubled spacetime geometry. The influence phase is the effective action for an open system: for a dS static patch observer coupled to a scalar field it incorporates the radiation reaction due to the bulk fields and their dS Hawking radiation. For a general extended source in dS, we describe how to account for finite size effects. In the long-time limit, we get a Markovian open quantum system susceptible to cosmological fluctuations, whereas the short-time limit reproduces the worldline theory of flat-space radiation reaction. We also present a fully covariantised form for the cubic corrections to the radiation reaction in even spacetime dimensions, including Hubble contributions, and find an intriguing recursive structure across dimensions. | R. Loganayagam, Omkar Shetye | 2023-09-13T20:17:13Z
###### Abstract
Inspired by real-time computations in AdS black holes, we propose a method to obtain the influence phase of a cosmological observer by calculating the on-shell action on a doubled spacetime geometry. The influence phase is the effective action for an open system: for a dS static patch observer coupled to a scalar field it incorporates the radiation reaction due to the bulk fields and their dS Hawking radiation. For a general extended source in dS, we describe how to account for finite size effects. In the long-time limit, we get a Markovian open quantum system susceptible to cosmological fluctuations, whereas the short-time limit reproduces the worldline theory of flat-space radiation reaction. We also present a fully covariantised form for the cubic corrections to the radiation reaction in even spacetime dimensions, including Hubble contributions, and find an intriguing recursive structure across dimensions.
###### Contents
* 1 Introduction
* 2 The cosmological influence phase \(S_{\text{CIP}}\)
* 3 Computing \(S_{\text{CIP}}\) from on-shell effective action
* 4 Radiation reaction and flat space limit
* 5 Interactions
* 6 Summary and Discussion
* A STF tensors and multipole expansion
* B Designer scalar in dS : Green functions, regularisation and renormalisation
* C SK Green functions and the cosmological influence phase
* D Radiation reaction due to light scalar fields
## 1 Introduction
Over the last few decades, many independent lines of evidence have converged on the fact that our universe has a positive cosmological constant[1; 2; 3; 4]. This has presented a difficult conundrum for those who want to think about the relation between gravity and quantum mechanics[5; 6]. Among the most fruitful ideas coming out of research in quantum gravity has been holography, i.e. the statement that a gravitational theory is equivalent to a quantum system living on its boundary. However, spacetimes with positive cosmological constant do not have any time-like boundaries for a dual quantum system to live in. Thus, it seems, that gravity in such spacetimes cannot have a holographic dual theory (or at least we cannot have a dual which is a conventional quantum dynamical system).
One attempt to overcome this obstacle is as follows[7; 8]: imagine a lone observer probing such a spacetime. The worldline of such an observer can then be thought of as a time-like boundary where a possible holographic description might reside. This is the idea of _solipsistic holography_, which posits that a quantum system1 living on such a worldline encodes the quantum theory of gravity that describes the universe. To claim that the information about the entire universe can be gleaned from a single worldline within it might seem speculative: but it is pertinent to remember that all existing knowledge about our universe can be traced ultimately to measurements around the earth. Thus, we might want to assess the viability of such a proposal by examining it further.2
Any object in a dynamical spacetime influences and is influenced by its surroundings. In this sense, any gravitational observer should be thought of as an open quantum system constantly interacting with the rest of the universe. On the quantum mechanical side, the emergence of the open system is due to integrating out the observer's internal degrees of freedom. This is the cosmological analogue of the fluid-gravity correspondence[24]. In the AdS/CFT context, on the gravity side, fluid dynamics emerges by integrating out the physics in the radial direction, whereas on the gauge theory side, it is a consequence of coarse-graining quarks and gluons. There is, by now, non-trivial evidence supporting this statement, including precise matching of anomalous effects on both sides[25; 26]. In a similar vein, one might ask how we could go about checking the cosmological version of this statement.
The central challenge in answering this question is twofold: first, to derive an open quantum system on the worldline from the ambient dynamics. As we shall see, a precise definition of this first step already involves some work.3 More precisely, what we need is a cosmological analogue of GKPW prescription in AdS/CFT that will allow us to derive the open system for an observer. This note is aimed at addressing this issue.
Footnote 3: Systematic description of observers in the middle of a spacetime (as opposed to asymptotic observers) is well-known to be a hard problem. Some of the approaches to the AdS version of this question, starting from the CFT side, can be found in [27; 28; 29; 30]. It would be interesting to extend these ideas to take into account the open nature of the observer, as we do here.
The second step would be to construct a dual unitary quantum system that, after integrating out appropriate degrees of freedom, leads to the same open theory as gravity. This might be a hard undertaking: after all, even in the fluid-gravity correspondence, to derive the fluid dynamics from a strongly coupled gauge theory is practically impossible. But, since we are dealing with a quantum mechanical system here, there is reason for hope. One immediate goal would be to check whether the putative open quantum system derived in the first step shows the right structural features to admit a solipsistic interpretation. We will postpone further thoughts on this issue to the discussion section.
Let us return now to the issue of constructing the open system on the gravitational side. Imagine a universe described by a dynamical spacetime along with a variety of fields living on it. A local observer in such a theory may be modelled as a source for these fields: a source that emits/radiates as well as a source that absorbs/detects. Any autonomous motion of the observer is then accompanied by outgoing radiation and an associated radiation reaction. This results in a dissipation of the observer's energy, and we seek an open quantum system that describes this physics. The open quantum theory on the worldline should also describe the influence of the incoming radiation from the rest of the universe. As we shall elaborate on later, this incoming radiation also includes the Hawking radiation from the Hubble horizon.4
Footnote 4: See [31] for an analysis of dS observer from the point of view of von Neumann algebras. It would be interesting to link such an analysis to the ideas discussed in this note, e.g., one may ask how the physics of radiation reaction is encoded within von Neumann algebras. Another algebraic statement of potential interest is the ‘time-like tube theorem’[32; 33; 34; 35], but, again, it is unclear to us how such formal statements relate to the description of dS observer as an open system.
Quite independently of such holographic quests, the worldline open quantum theory in question shows up in a variety of concrete physical questions. As an example, worldline EFT has emerged as a useful way to organise the post-Newtonian expansion of a binary system radiating gravitational waves[36; 37; 38; 39; 40]. The basic idea in such approaches is to systematically integrate out the short-distance gravitational physics that binds the binaries to get an effective theory that describes the inspiral process. Due to the radiation of gravitational waves, ultimately such a binary is also an open system of the type described above. These ideas can be generalised into a cosmological setting where, for example, a worldline EFT which also takes the expansion of the universe into account might be useful
in studying the dynamics of galactic formation, cooling and mergers.5 A motivation of this note is to describe an approach that might help us systematically derive such an EFT.
Footnote 5: See [41] for the role played by worldline methods in the effective field theory(EFT) of large scale structure(LSS).
We will conclude this preamble with a broad outline of what follows: in §2, we begin by describing the basic geometric set-up used in deriving the open quantum mechanics associated with the cosmological observer. The prescription we propose is inspired by the recent developments in real-time AdS/CFT[42; 43; 44; 45; 46; 47; 48; 49; 50; 51] that have led to a systematic derivation of open quantum systems by integrating out a thermal holographic CFT bath. The essential idea here is a real-time version of the Gibbons-Hawking procedure[52]: one proposes an appropriate semi-classical geometry only containing the relevant region (BH exterior for Gibbons-Hawking, dS static patch in the current problem), and computes the path integral in a saddle-point approximation by evaluating the on-shell action.
The problem of the cosmological observer exhibits broad structural similarities to the AdS case, which we exploit. But we also find significant differences: for one, much of the standard holographic machinery (e.g. GKPW prescription, counter-term procedure) available on the AdS side is simply absent. We outline a regularisation procedure that gives finite answers in §2, relegating the details to appendices.
In §3 we use our prescription to derive the open effective action/influence phase for an observer coupled to a class of generalised free scalar fields. Next, we examine the flat space limit of our influence phase in §4, demonstrating how the already known expressions of the flat space radiation reaction are reproduced in this limit. We also compute the leading cosmological corrections to the Abraham-Lorentz-Dirac radiation-reaction force. In the penultimate section, §5, we sketch how interactions can be incorporated into our formalism. We conclude with a summary and a discussion of further directions in §6.
To enable readability, we confine ourselves to describing the basic physical ideas as well as the central results in the main sections. Most of the relevant technical details are presented in appendices. The first appendix, §A, is a review of the multipole expansion in flat spacetime, along with a description of the multipole expansion in terms of symmetric trace-free (STF) tensors. Much of this is standard material just cast into a notation convenient for our purposes. In the next appendix, §B, we review the outgoing scalar solutions in dS and describe a counterterm procedure to deal with point-like sources placed at the centre of the static patch. In appendix §C, we show that the counterterm procedure extends to the most general scalar configurations and describe how to deal with extended sources. The discussion in these two appendices culminates in an effective action describing arbitrary scalar sources in the dS static patch. The next appendix, §D, specialises to point-like observers in arbitrary motion in \(dS_{d+1}\) with \(d\) odd: we show that our effective action evaluated for such sources re-assembles into a generally covariant radiation-reaction force with Hubble corrections.
## 2 The cosmological influence phase \(\mathbf{S_{\text{CIP}}}\)
Our goal is to describe the experience of an observer in an expanding spacetime. This, in turn, will help us in understanding the spacetime itself. In particular, we want to ask how to construct the open quantum system that describes the cosmological observer. In its full generality, this is a difficult problem, but we can start with a simple model for the observer. We can think of the observer as a single worldline undergoing absorption and emission processes. So the observer is privy to 3 kinds of data:
* Outgoing radiation: Emission data along with the outgoing propagator tells us the field values at a later time in the spacetime.
* Incoming radiation: The fields in the past can be reconstructed by using an incoming propagator given the absorption data.
* Fluctuations: The observer will also be sensitive to _cosmic noise_, which shows up in the absorption data.
This is reminiscent of the motion of a Brownian particle in Langevin theory. A pollen grain in water is sensitive not only to coarse-grained currents in the water (analogous to the incoming radiation) but also to fluctuations arising from the motion of water molecules. Finally, the motion of the Brownian particle can influence the dynamics of water as well (analogous to outgoing radiation).
The dynamics of such open quantum systems can be derived by the path integral prescription of Feynman and Vernon[53] describing the density matrix evolution. According to the authors of [53], the effective description of the open system can be derived starting from two non-interacting copies of each of the system as well as the environment (describing the combined density matrix). Integrating out two copies of the environment then induces new interactions between the copies of the system, resulting in a non-unitary evolution of the system state. These terms constitute the _influence phase_, which encodes completely the effect of the environment on the system. Applying this insight to the question at hand, we conclude that all cosmological effects on an observer(the system) are succinctly summarised in a _cosmological influence phase_\(S_{\rm CIP}\).
What does \(S_{\rm CIP}\) depend on? It should depend on how effective the observer is at emitting/absorbing radiation of a given frequency \(\omega\) and a given multipole type \(\mathbb{L}\). Say we have two sets of functions \(\mathcal{J}_{A}(\omega,\mathbb{L})\) and \(\mathcal{J}_{D}(\omega,\mathbb{L})\) characterising the emission/absorption efficiency of the observer. From the Feynman-Vernon viewpoint, \(\mathcal{J}_{A}(\omega,\mathbb{L})\) and \(\mathcal{J}_{D}(\omega,\mathbb{L})\) have the following interpretation: to begin with, we have two copies of the observer (left/right), each probing their copy of the universe via their respective multipole moments \(\mathcal{J}_{L}(\omega,\mathbb{L})\) and \(\mathcal{J}_{R}(\omega,\mathbb{L})\) respectively. The influence phase, which results from integrating out the universe, then depends on the average
\[\mathcal{J}_{A}(\omega,\mathbb{L})\equiv\frac{1}{2}[\mathcal{J}_{R}(\omega, \mathbb{L})+\mathcal{J}_{L}(\omega,\mathbb{L})]\,\]
as well as the difference
\[\mathcal{J}_{D}(\omega,\mathbb{L})\equiv\mathcal{J}_{R}(\omega,\mathbb{L})- \mathcal{J}_{L}(\omega,\mathbb{L})\]
Figure 1: A cosmological observer can access 3 kinds of data: radiation due to its own emissions, incoming radiation from sources in the environment and noise.
of these two multipole moments. The fact that the average/difference sources characterise its emissive/absorptive properties is a well-known feature of the Feynman-Vernon formalism[54; 55; 56; 57]: this fact can ultimately be traced to the past/future boundary conditions on the two copies imposed within this formalism. To conclude, the cosmology as seen by an observer with multipole moments \(\mathcal{J}_{A}(\omega,\mathbb{L})\) and \(\mathcal{J}_{D}(\omega,\mathbb{L})\) is encoded in a single influence functional \(S_{\rm CIP}\left[\mathcal{J}_{A}(\omega,\mathbb{L}),\mathcal{J}_{D}(\omega, \mathbb{L})\right]\). In terms of the Schwinger-Keldysh path integral of quantum gravity, we can write
\[e^{iS_{\rm CIP}}\equiv\int[d\varphi_{R}][d\varphi_{L}]\ e^{iS_{g}[\varphi_{R}, \mathcal{J}_{R}]-iS_{g}[\varphi_{L},\mathcal{J}_{L}]}\, \tag{1}\]
where \(\varphi_{L,R}\) denote the bra/ket copy of the bulk quantum fields in cosmology (including the spacetime metric) and \(S_{g}[\varphi,\mathcal{J}]\) is the full gravitational action in the background of an observer with multipole moments \(\mathcal{J}\). The above path integral should then be interpreted in a wilsonian sense: we want to integrate out the fast modes of quantum gravitational theory, while freezing the slow degrees of freedom of the observer, and obtain an effective action which describes the open dynamics of such an observer.
The cosmological influence phase \(S_{\rm CIP}\) is a direct observable. Given an expanding universe, assuming we have a sufficiently long-lived observer with arbitrary multipole moments in some region, the force on an observer due to radiation reaction as well as radiation reception can directly be measured. This force serves to determine all terms in the 'effective action' \(S_{\rm CIP}\) that encodes the influence of the ambient universe. All the _real_ observables of astrophysics and cosmology, e.g. the sky maps at different frequencies, can be incorporated this way into the absorptive part of \(S_{\rm CIP}\).
From this viewpoint, all cosmological calculations should, in principle, be recast in terms of \(S_{\rm CIP}\) to connect them with observations. This is already implicit in the existing approaches to cosmology: for example, the final step in CMB power spectrum computation is to expand it in spherical harmonics centred around us. Phrasing observables in terms of \(S_{\rm CIP}\) makes explicit this observer-dependence (which is probably essential for defining observables _within_ a quantum spacetime). Talking in terms of a single functional \(S_{\rm CIP}\) may also be convenient for effective field theory (EFT) based approaches to cosmology based on direct observables (e.g. those based on classifying sources in the red-shift space[58; 59; 60]). More ambitiously, one may conceive of a bootstrap program based on the cosmological influence phase that complements existing proposals for cosmological bootstrap[17; 19; 20; 21; 23; 61].
What are the general principles that constrain \(S_{\rm CIP}\)? First of all, when \(\mathcal{J}_{D}(\omega,\mathbb{L})\) is set to zero, \(S_{\rm CIP}\) should vanish. This statement arises from the microscopic unitarity of the environment: if the two copies of the observer in Feynman-Vernon formalism introduce identical perturbations into the environment, their effect cancels out of all correlators[56]. From the viewpoint of the observer, the above condition is equivalent to the conservation of the observer density matrix's trace. Apart from this, there are also constraints on \(S_{\rm CIP}\) coming from causality. For example, causality implies that the coefficient of \(\mathcal{J}_{D}^{*}(\omega,\mathbb{L})\mathcal{J}_{A}(\omega,\mathbb{L})\) is analytic in the upper half plane of complex \(\omega\) : this coefficient is the retarded correlator on the worldline of the observer[62; 63; 54; 55; 56; 57]. A similar statement holds for the coefficients of any term of the form \(\mathcal{J}_{D}^{*}(\omega,\mathbb{L})\prod_{k}\mathcal{J}_{A}(\omega_{k}, \mathbb{L}_{k})\).
Evaluation of the influence phase requires us to know the real-time or Schwinger-Keldysh(SK) propagators of the environment. It is unclear how to perform such computations in generic cosmological spacetimes, especially if gravity is also to be quantised. We will show that for an observer in dS, this computation can be geometrised roughly akin to recent implementations of SK path integrals in case of AdS black holes[42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. The hope then is that one can later generalise it beyond dS to incorporate full FRW cosmology.
Specifically, in the case of dS, we conjecture that the computation of cosmological influence phase \(S_{\rm CIP}\) is dominated by a geometric saddle point built out of two copies of the static patch stitched
together at their horizons. We will call this doubled geometry, the dS Schwinger-Keldysh(dS-SK) spacetime. In the rest of this subsection, we will describe this geometry in more detail before moving to the evidence for our conjecture in the subsequent sections.
Let us begin by setting up the basic notation required: consider a \((d+1)\)-dimensional dS spacetime dS\({}_{d+1}\) whose Penrose diagram is shown in Fig.2. A horizontal slice (i.e., a constant time slice) in this diagram denotes the prime-meridian on a spatial sphere \(S^{d}\), with the two ends denoting the poles. Each point in the horizontal slice corresponds to a sphere \(S^{d-1}\), which shrinks to a point near the poles. The first example we will consider is a co-moving dS observer whom we place at the south pole. Our focus will be on the static patch of such an observer, i.e., the patch between the past and future cosmological horizons of the observer. We will later describe a more general class of observers spread arbitrarily over this static patch, modelled as a sequence of spherical shells around south pole (Fig.2).
We will find it convenient to work with _outgoing_ Eddington-Finkelstein coordinates on the static patch. The metric in this coordinate system takes the form
\[ds^{2}=-(1-r^{2}H^{2})\ du^{2}-2dudr+r^{2}d\Omega_{d-1}^{2}. \tag{2}\]
Here \(H\) is the Hubble constant of dS spacetime, \(r\) is the radial distance from the observer, \(u\) denotes the outgoing time labelling the outgoing waves and \(d\Omega_{d-1}^{2}\) is the line element on a unit \(S^{d-1}\) sphere. The south-pole observer sitting at \(r=0\) sees a future horizon at \(r=1/H\) where the outgoing coordinates are well-behaved. In most of what follows, we will set \(H=1\) for convenience and restore it later when we examine the flat space (i.e. \(H\to 0\)) limit.
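For orientation, a standard rewriting (spelled out here for convenience) relates Eq.(2) to the familiar static-patch coordinates: with
\[ds^{2}=-(1-r^{2}H^{2})\,dt^{2}+\frac{dr^{2}}{1-r^{2}H^{2}}+r^{2}d\Omega_{d-1}^{2}\,,\qquad u=t-r_{*}\,,\qquad r_{*}\equiv\int_{0}^{r}\frac{dr^{\prime}}{1-r^{\prime 2}H^{2}}=\frac{1}{2H}\ln\left(\frac{1+Hr}{1-Hr}\right)\,,\]
one has \(du=dt-\frac{dr}{1-r^{2}H^{2}}\), and substituting this back reproduces Eq.(2). For \(H=1\) this is the same relation \(u=t+\frac{1}{2}\ln\left(\frac{1-r}{1+r}\right)\) used below in Eq.(3.26).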
We now turn to the model of the observer: Conceptually, the simplest model is that of a point particle with specified multipole moments sitting at \(r=0\). However, such a model needs to be regulated with appropriate counter-terms to allow the computation of radiation reaction effects. To this end, we will take the observer to be a small sphere of radius \(r_{c}\) and thicken its worldline into a time-like 'world-tube'. The point particle limit then corresponds to taking \(r_{c}\to 0\) limit _after_ the addition of counter-terms: both the Green functions and required counter-terms can be determined exactly for a dS observer coupled to generalised free scalar fields. The radius \(r_{c}\) then acts like a UV regulator for the problem.
Figure 2: Penrose diagrams of dS with the static patch of the south pole observer shown in green. Constant \(u\) slices are shown in blue. **Left :** a localised observer at the south pole whose worldline is thickened to a world-tube(orange) of radius \(r_{c}\). **Right :** an extended observer modelled as a sequence of spherical shells of radius \(r_{i}\) with \(i=1,\dots,N\).
Apart from the formal requirements of regularisation, we are also interested in the actual problem of an extended observer of a finite size. In such a case, there are no divergences. Nevertheless, finite counter-terms are needed to renormalise the bare parameters into the physically measured properties of the observer. As mentioned before, a simple model of the extended observer is a sequence of spherical shells of radius \(r_{i}\) with \(i=1,\ldots,N\): their complement in the static patch is then the rest of the universe to be integrated out. We can define multipole moments for such extended observers and still write down a cosmological influence phase as a function of those multipole moments. The locality in radial direction gets obscured in such a description: this is however natural in the solipsistic viewpoint where radial locality is an approximate/emergent property of the dual quantum mechanics.
We now turn to our conjecture for the dS-SK geometry, i.e., the semi-classical saddle point that dominates the quantum gravity path integral for \(S_{\rm CIP}\). What we seek is a real-time analogue of the Gibbons-Hawking construction[64] as well as the gr-SK construction in AdS[42; 43; 44; 45; 46; 49], which would compute for us the cosmological influence phase. Here is the geometry we propose: take two copies of the static patch and stitch them together smoothly at the future horizon (Fig.3). To parametrise this geometry, we complexify the radial coordinate and think of dS-SK as a co-dimension one contour in the complex \(r\) plane (Fig.4). To make this precise, let us define a _mock tortoise coordinate_ \(\zeta\) as
Figure 4: The branch cut structure of \(\zeta(r)\) in the complex \(r\) plane at fixed \(u\): branch-cut shown as a wiggly line. We also show the _clockwise_ dS-SK radial contour running from \(\zeta=1\) to \(\zeta=0\) (the blue curve in this figure and in Fig.3). The Im \(r>0\) branch is the time-ordered/right branch, whereas the Im \(r<0\) branch is the anti-time-ordered/left branch.
Figure 3: The two sheeted complex dS-SK geometry can be thought of as two static patches smoothly connected at the future horizon. The radial contour along an outgoing Eddington-Finkelstein slice (i.e., a constant \(u\) slice) is shown in blue. The radial contour has an outgoing R branch and an incoming L branch.
follows:
\[\zeta(r)=\frac{1}{i\pi}\,\int\limits_{r}^{0-i\epsilon}\frac{dr^{\prime}}{1-r^{ \prime 2}}=\frac{1}{2\pi i}\ln\left(\frac{1-r}{1+r}\right) \tag{3}\]
This integral has logarithmic branch points at \(r=\pm 1\) and we choose its branch-cut to be over the interval \(r\in[-1,1]\) on the real line. As shown in Fig.4, our normalisation is such that, if we begin from \(0+i\epsilon\) (i.e., just above the midpoint of the branch-cut) and then go clockwise around the branch cut to \(0-i\epsilon\) (i.e., just below the midpoint of the branch-cut), we pick up a discontinuity in \(\zeta\) equal to negative unity. The choice of the overall constant in (3) is such that the real part is \(1\) on the \(R\) static patch (the \(r+i\epsilon\) contour), and the real part falls to zero as we move clockwise and turn to traverse the \(L\) boundary (the \(r-i\epsilon\) contour).6
Footnote 6: The reader should note the use of clockwise contours in the complex \(r\) plane for dS, in contrast to the counter-clockwise contours used in the AdS black-brane case. This fact means that we need to be careful to add appropriate minus signs whenever we use the residue theorem, but this inconvenience seems unavoidable given the standard time orientations of the Schwinger-Keldysh contour.
We are now ready to state our prescription:
\[\textbf{Cosmological influence phase}=\textbf{On-shell gravitational action of the dS-SK geometry}\,\,. \tag{4}\]
To be clear, on both sides of this equality, we treat the observer(s) as prescribed sources, viz., we take them off-shell by freezing their dynamics. Both sides can then be thought of as functionals of the observer multipole moments that emit/detect fields. In the dual quantum mechanics, these multipole moments should be thought of as the 'slow macroscopic degrees of freedom' whose influence phase is computed by integrating out the 'fast microscopic degrees of freedom'. Solipsistic holography would then imply that we can replace the LHS in the above equality with such an influence phase computed in the dual quantum mechanics. The above statement can then be thought of as giving a GKPW-like prescription[65; 66] for solipsistic holography. The primary aim of this note is to exhibit simple example systems where we can show that the above prescription yields sensible answers.
Before we turn to examples, we would like to comment on an interesting philosophical point: In this geometric picture, the cosmology reduces entirely to the static patch accessible to the observer, bypassing questions about the rest of the universe (or multi-verse as the case may be). We think of this focus on actual observables as a desirable feature of our proposal, in contrast to traditional descriptions of quantum gravity in dS spacetime phrased in terms of global questions. In the AdS black-brane case, gravitational Schwinger-Keldysh geometry (and its Gibbons-Hawking predecessor) divorces the phenomenology of the exterior from speculations about singularity and BH interior. In a similar vein, our geometric proposal aims at isolating the physics of the static patch from speculations about superhorizon modes, side-stepping the measure problem in cosmology. Our saddle point geometry can be thought of as a way to implement the causal-diamond-based cosmological measures ala Bousso[67; 68].
## 3 Computing \(\boldsymbol{S_{\text{CIP}}}\) from on-shell effective action
In the following sections, we will evaluate the on-shell action on the dS-SK geometry described above and show that we get meaningful semi-classical results for the cosmological influence phase \(S_{\text{CIP}}\). We will do this in three parts: First, in this section, we will describe a class of systems where observers act as sources for scalar fields. We will describe how the on-shell action can be computed for these systems to yield \(S_{\text{CIP}}\). Next, in §4, we will argue how \(S_{\text{CIP}}\) does indeed capture the physics of radiation reaction for a moving dS observer. Finally, in §5, we will describe how field interactions could be taken into account.
Let us begin by examining the mode decomposition in dS-SK geometry. Outgoing modes of frequency \(\omega\) in the static patch have the form:
\[\mathfrak{f}(r,\omega,\ell)\ \mathscr{Y}_{\mathbb{L}}(\Omega)\ e^{-i\omega u}. \tag{3.1}\]
where \(\mathfrak{f}(r,\omega,\ell)\) is an analytic function of \(r\) in the region \(0<r\leq 1\): since we are working in outgoing Eddington-Finkelstein coordinates, analyticity near \(r=1\) is equivalent to the outgoing boundary condition. Here, the field is decomposed into spherical harmonics \(\mathscr{Y}_{\mathbb{L}}(\Omega)\) on \(\mathbb{S}^{d-1}\) with labels \(\mathbb{L}\equiv\{\ell,\vec{m}\}\). The spherical harmonics with label \(\ell\) are eigenfunctions of the sphere Laplacian with eigenvalue \(-\ell(\ell+d-2)\). Given the analyticity of \(\mathfrak{f}(r,\omega,\ell)\), the outgoing modes can be analytically continued to the complex \(r\) plane without any branch cuts. Consequently, on the dS-SK geometry, the outgoing modes become modes which are _identical_ in the right/left branches of the static patch.
The incoming modes are readily found by time reversing the outgoing mode. Time-reversal isometry of dS-SK geometry is implemented by taking \(u\to 2\pi i\zeta-u\) and \(\omega\to-\omega\). We then get an incoming mode of the form
\[\mathfrak{f}(r,-\omega,\ell)\ \mathscr{Y}_{\mathbb{L}}(\Omega)\ e^{-2\pi \omega\zeta-i\omega u}=\mathfrak{f}^{*}(r,\omega,\ell)\ \mathscr{Y}_{\mathbb{L}}(\Omega)\ e^{-2\pi\omega\zeta-i\omega u}.\]
To get the last equality, we have assumed \(\mathfrak{f}(r,\omega,\ell)\) to be a Fourier transform of a real function. The reader should note here the presence of the non-analytic factor \(e^{-2\pi\omega\zeta}\), thus resulting in a branch cut for the incoming mode. The incoming mode hence picks up a factor of \(e^{2\pi\omega}\) if the argument crosses the branch cut from above (\(\zeta=1\)) to below (\(\zeta=0\)), i.e., as we move from right to the left static patch. As we will see below, this is indeed the appropriate Boltzmann factor for the static patch, encoded automatically in the incoming modes.
Consider a free scalar field theory on dS\({}_{d+1}\). Let \(G_{\mathcal{N}}^{\text{Out}}(r,\omega,\ell)\) denote the radial part of the _outgoing_ boundary-to-bulk Green function, i.e., the outgoing field created by a unit point source placed at the south pole. Here, and in what follows, we use the subscript \(\mathcal{N}\) to denote the exponent that characterises the near origin behaviour of the scalar field. More precisely, we define \(G_{\mathcal{N}}^{\text{Out}}(r,\omega,\ell)\) as the solution of an appropriate radial ODE that obeys the following boundary conditions: at the worldline, we impose a Dirichlet condition
\[\lim_{r\to 0}r^{\nu+\frac{\mathcal{N}-1}{2}}G_{\mathcal{N}}^{\text{Out}}(r,\omega,\ell)=1\, \tag{3.2}\]
where we have defined \(\nu\equiv\ell+\frac{d}{2}-1\) and taken the behaviour of the Green function to be \(r^{-\nu-\frac{\mathcal{N}-1}{2}}\) near the source. As an example, for a massless minimal scalar field, we have the fall-off \(r^{-(\ell+d-2)}\) corresponding to \(\mathcal{N}=d-1\).
Apart from the above condition imposed at the origin, we impose analyticity/outgoing boundary conditions at the dS horizon (\(r=1\)). Note however that this is _not_ the appropriate solution on the dS-SK geometry: its boundaries are not the worldline + dS horizon but rather the right/left worldlines. It is then more natural to impose a _double_ Dirichlet boundary condition. To this end, we begin with the most general linear combination of outgoing/incoming modes for the radial part
\[\varphi_{{}_{N}}(\zeta,\omega,\mathbb{L})=-G_{\mathcal{N}}^{\text{Out}}(r,\omega,\ell)\mathscr{J}_{F}(\omega,\mathbb{L})+e^{2\pi\omega(1-\zeta)}G_{\mathcal{N}}^{\text{Out}*}(r,\omega,\ell)\mathscr{J}_{P}(\omega,\mathbb{L}). \tag{3.3}\]
Here the subscripts \(F\) and \(P\) denote the sources that radiate to the future and detectors that absorb from the past respectively. We use \(\zeta\) to indicate the radial argument of \(\varphi_{{}_{N}}\) to emphasise that this general linear combination takes two different values in the two branches of dS-SK geometry.
The coefficients \(\mathcal{J}_{\bar{F}},\mathcal{J}_{\bar{P}}\) appearing above can be linked to the left/right sources via the double Dirichlet condition, i.e., at the left/right copy of the worldlines, we impose
\[\begin{split}\mathcal{J}_{L}(\omega,\mathbb{L})&\equiv\lim_{\zeta\to 0}r^{\nu+\frac{\mathcal{N}-1}{2}}\varphi_{{}_{N}}=-\mathcal{J}_{\bar{F}}(\omega,\mathbb{L})+e^{2\pi\omega}\mathcal{J}_{\bar{P}}(\omega,\mathbb{L})\,,\\ \mathcal{J}_{R}(\omega,\mathbb{L})&\equiv\lim_{\zeta\to 1}r^{\nu+\frac{\mathcal{N}-1}{2}}\varphi_{{}_{N}}=-\mathcal{J}_{\bar{F}}(\omega,\mathbb{L})+\mathcal{J}_{\bar{P}}(\omega,\mathbb{L})\,.\end{split} \tag{3.4}\]
Using this, we can then rewrite Eq.(3.3) as \(\varphi_{{}_{N}}(r,\omega,\mathbb{L})=g_{R}\mathcal{J}_{R}-g_{L}\mathcal{J}_{L}\), where \(g_{R,L}(\zeta,\omega,\mathbb{L})\) denote the right/left boundary-to-bulk propagators on the dS-SK geometry (see Figure 5). Our use of the symbol \(\mathcal{J}\) here is a deliberate allusion to the observer's multipole moments. Inverting the above relations, we obtain
\[\mathcal{J}_{\bar{F}}(\omega,\mathbb{L}) \equiv-\Big{\{}(1+n_{\omega})\mathcal{J}_{R}(\omega,\mathbb{L})- n_{\omega}\mathcal{J}_{L}(\omega,\mathbb{L})\Big{\}}=-\mathcal{J}_{A}(\omega, \mathbb{L})-\left(n_{\omega}+\frac{1}{2}\right)\mathcal{J}_{D}(\omega, \mathbb{L}) \tag{3.5}\] \[\mathcal{J}_{\bar{P}}(\omega,\mathbb{L}) \equiv-n_{\omega}\Big{\{}\mathcal{J}_{R}(\omega,\mathbb{L})- \mathcal{J}_{L}(\omega,\mathbb{L})\Big{\}}=-n_{\omega}\ \mathcal{J}_{D}(\omega,\mathbb{L})\.\]
Here we have introduced the average/difference sources \(\mathcal{J}_{A}\equiv\frac{1}{2}\mathcal{J}_{R}+\frac{1}{2}\mathcal{J}_{L}\) and \(\mathcal{J}_{D}\equiv\mathcal{J}_{R}-\mathcal{J}_{L}\). We note here the natural appearance of the Bose-Einstein factor
\[n_{\omega}\equiv\frac{1}{e^{2\pi\omega}-1}. \tag{3.6}\]
Such a factor arises naturally by solving the detailed-balance constraint \(1+n_{\omega}=e^{2\pi\omega}n_{\omega}\) which equates the probability of spontaneous/stimulated emission by the source to the absorption probability. The appearance of such a factor is evidence that the dS-SK contour naturally incorporates the thermality of Hawking radiation emitted from the dS horizon[64].
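For convenience, we spell out the elementary algebra behind Eq.(3.5): subtracting the two relations in Eq.(3.4) gives
\[\mathcal{J}_{R}-\mathcal{J}_{L}=\left(1-e^{2\pi\omega}\right)\mathcal{J}_{\bar{P}}\quad\Longrightarrow\quad\mathcal{J}_{\bar{P}}=-\frac{\mathcal{J}_{R}-\mathcal{J}_{L}}{e^{2\pi\omega}-1}=-n_{\omega}\,\mathcal{J}_{D}\,,\]
and then \(\mathcal{J}_{\bar{F}}=\mathcal{J}_{\bar{P}}-\mathcal{J}_{R}=-\left[(1+n_{\omega})\mathcal{J}_{R}-n_{\omega}\mathcal{J}_{L}\right]\). The detailed-balance identity quoted above follows from \(1+n_{\omega}=\frac{e^{2\pi\omega}}{e^{2\pi\omega}-1}=e^{2\pi\omega}n_{\omega}\).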
Given the solution determined in terms of the multipole moments, we can compute the on-shell action. A scalar system with a requisite exponent is given by an action
\[S=-\frac{1}{2}\int d^{d+1}x\sqrt{-g}\ r^{\mathcal{N}+1-d}\left\{(\partial \Phi_{\mathcal{N}})^{2}+\frac{\Phi_{\mathcal{N}}^{2}}{4r^{2}}\left[(d+ \mathcal{N}-3)(d-\mathcal{N}-1)-r^{2}\left(4\mu^{2}-(\mathcal{N}+1)^{2}\right) \right]\right\}. \tag{3.7}\]
Figure 5: Propagators in dS-SK geometry: the boundary to bulk propagators are denoted in red and the bulk to bulk propagator is denoted in brown.
We will refer to this as a _designer_ scalar system with a radially varying dilaton, an appropriate centrifugal potential term, and a mass term. Our motivation to consider this class of actions is that, at specific values of \(\mathcal{N}\) and \(\mu\), the above action captures the physics of different field theories. For example, a massive KG scalar of mass \(m\) corresponds to setting \(\mathcal{N}=d-1\) and \(4m^{2}=(\mathcal{N}+1)^{2}-4\mu^{2}=d^{2}-4\mu^{2}\) in the above action. Another example is the KG scalar field with a conformal mass: this corresponds to setting \(\mathcal{N}=d-1\) and \(\mu=\frac{1}{2}\).
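As a small cross-check (not spelled out in the text), the conformal-mass value implied by \(\mu=\frac{1}{2}\) is the expected one: restoring \(H\),
\[4m^{2}=\left[(\mathcal{N}+1)^{2}-4\mu^{2}\right]H^{2}\Big{|}_{\mathcal{N}=d-1,\,\mu=\frac{1}{2}}=(d^{2}-1)H^{2}\quad\Longrightarrow\quad m^{2}=\frac{d^{2}-1}{4}H^{2}=\frac{d-1}{4d}\times d(d+1)H^{2}=\xi_{c}R\,,\]
i.e., the conformal coupling \(\xi_{c}=\frac{d-1}{4d}\) times the dS\({}_{d+1}\) Ricci scalar \(R=d(d+1)H^{2}\).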
Further, such actions with different values of \(\mathcal{N}\) and \(\mu\) arise naturally when considering scalar-vector-tensor spherical harmonic decompositions of Maxwell as well as linearised Einstein equations about dS background[69; 70; 71]. More precisely, the radial ODEs in all these sectors coincide with the radial ODE obtained from the above action for some value of \(\mathcal{N}\) and \(\mu\) (See table 1).7 For these reasons, we consider it worthwhile to study the influence phase obtained by integrating out such designer scalars.
Footnote 7: We note that, for such massless fields, the exponent \(\mathcal{N}\) and the parameter \(\mu\) in all sectors are related by the condition \(4\mu^{2}=(\mathcal{N}+1)^{2}\), i.e., they do not have the last term given in Eq.(3.7).
The radial ODE for designer scalar systems can be solved exactly in terms of hypergeometric functions. The outgoing boundary-to-bulk Green function satisfying the boundary condition in Eq.(3.2) is given by[72; 73; 7]
\[\begin{split} G^{\text{Out}}_{\mathcal{N}}(r,\omega,\mathbb{L})&=r^{\nu-\frac{\mathcal{N}}{2}}(1+r)^{-i\omega}\\ &\quad\times\frac{\Gamma\left(\frac{1+\nu-\mu-i\omega}{2}\right)\Gamma\left(\frac{1+\nu+\mu-i\omega}{2}\right)}{\Gamma(1-i\omega)\Gamma\left(1+\nu\right)}\ _{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1-i\omega;1-r^{2}\right]\,\end{split} \tag{3.8}\]
where we have used \(\nu\equiv\ell+\frac{d}{2}-1\). This solution is manifestly analytic near \(r=1\) and thus this is an outgoing solution. The solution on the dS-SK geometry can then be written as
\[\Phi_{\mathcal{N}}(\zeta,u,\Omega)=\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\varphi_{{}_{\mathcal{N}}}(\zeta,\omega,\mathbb{L})\mathscr{Y}_{\mathbb{L}}(\Omega)\ e^{-i\omega u} \tag{3.9}\]
with the radial part \(\varphi_{{}_{\mathcal{N}}}(\zeta,\omega,\mathbb{L})\) being given by Eq.(3.3). This expression can then be substituted back into the designer scalar action given in Eq.(3.7). The resultant on-shell action itself is formally divergent but can be rendered finite with counterterms localised at the worldlines. We will refer the reader to appendices B and C for the technical details of how this is done. The end result of this evaluation can be cast into the form
\[\begin{split} S_{\text{CIP}}&=-\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}K_{\text{Out}}(\omega,\ell)\ [\mathscr{J}_{R}-\mathscr{J}_{L}]^{*}\ [(1+n_{\omega})\mathscr{J}_{R}-n_{\omega}\mathscr{J}_{L}]\\ &=\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\frac{K_{\text{Out}}(\omega,\ell)}{1+n_{\omega}}\ \mathscr{J}_{\bar{P}}^{*}\mathscr{J}_{\bar{F}}=-\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}K_{\text{Out}}(\omega,\ell)\ \mathscr{J}_{D}^{*}\ \left[\mathscr{J}_{A}+\left(n_{\omega}+\frac{1}{2}\right)\mathscr{J}_{D}\right]\,,\end{split} \tag{3.10}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & KG Scalar & EM Vector & EM Scalar & Gravity Tensor & Gravity Vector & Gravity Scalar \\ \hline \(\mathcal{N}\) & \(d-1\) & \(d-3\) & \(3-d\) & \(d-1\) & \(1-d\) & \(3-d\) \\ \hline \(\mu\) & \(\frac{d}{2}\) & \(\frac{d}{2}-1\) & \(\frac{d}{2}-2\) & \(\frac{d}{2}\) & \(\frac{d}{2}-1\) & \(\frac{d}{2}-2\) \\ \hline \end{tabular}
\end{table}
Table 1: \(\mathcal{N},\mu\) values for different massless fields
where we have written the answer in right/left, past/future as well as the average/difference basis. Here, \(K_{\text{Out}}\) is the boundary 2-point function encoding the effects of radiation reaction (explicit expressions are provided below). Notable features of this action are as follows:
1. The absence of \(\mathfrak{J}_{A}^{*}(\omega,\mathbb{L})\mathfrak{J}_{A}(\omega,\mathbb{L})\) term is an expected consequence of the collapse rule which demands that the influence phase go to zero when the sources on the two sides are the same.
2. The \(\mathfrak{J}_{D}^{*}(\omega,\mathbb{L})\mathfrak{J}_{A}(\omega,\mathbb{L})\) coefficient is imaginary, implying that this term is purely dissipative. We will show in the next section that this term captures the radiation reaction experienced by the observer.
3. The noise term captured by the \(\mathfrak{J}_{D}^{*}(\omega,\mathbb{L})\mathfrak{J}_{D}(\omega,\mathbb{L})\) coefficient is proportional to the dissipative term, with a relative factor of \(\left(n_{\omega}+\frac{1}{2}\right)\). This factor is correctly picked out by our geometry so as to satisfy the Kubo-Martin-Schwinger (KMS) condition for dS.
Hence, we claim that this action correctly captures the effect of the environment on the observer. In fact, this is exactly the form expected out of two-point functions of thermal systems[74] and matches with analogous expressions in holographic open systems[46; 47].
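Concretely, the relative factor \(\left(n_{\omega}+\frac{1}{2}\right)\) noted in item 3 is, with \(H\) temporarily restored, the standard thermal (fluctuation-dissipation) factor at the Gibbons-Hawking temperature:
\[n_{\omega}+\frac{1}{2}=\frac{1}{e^{2\pi\omega/H}-1}+\frac{1}{2}=\frac{1}{2}\coth\left(\frac{\pi\omega}{H}\right)=\frac{1}{2}\coth\left(\frac{\omega}{2T_{dS}}\right)\,,\qquad T_{dS}=\frac{H}{2\pi}\,.\]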
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(\mu=\frac{d}{2}\) & \(\ell=1\) & \(\ell=2\) & \(\ell=3\) & \(\ell=4\) & \(\ell=5\) \\ \hline \(d=3\) & 4 & 1 & \(\frac{64}{225}\) & \(\frac{4}{49}\) & \(\frac{256}{11025}\) \\ \(d=4\) & \(\frac{9\pi^{2}}{16}\) & 1 & \(\frac{25\pi^{2}}{1024}\) & \(\frac{1}{16}\) & \(\frac{441\pi^{2}}{262144}\) \\ \(d=5\) & \(\frac{64}{9}\) & 1 & \(\frac{256}{1225}\) & \(\frac{4}{81}\) & \(16384\) \\ \(d=6\) & \(\frac{225\pi^{2}}{256}\) & 1 & \(\frac{1225\pi^{2}}{65536}\) & \(\frac{1}{25}\) & \(\frac{3969\pi^{2}}{4194304}\) \\ \(d=7\) & \(\frac{256}{25}\) & 1 & \(\frac{16384}{99225}\) & \(\frac{4}{121}\) & \(\frac{65536}{9018009}\) \\ \(d=8\) & \(\frac{1225\pi^{2}}{1024}\) & 1 & \(\frac{3969\pi^{2}}{262144}\) & \(\frac{1}{36}\) & \(\frac{9801\pi^{2}}{16777216}\) \\ \(d=9\) & \(\frac{16384}{1225}\) & 1 & \(\frac{480249}{80249}\) & \(\frac{4}{169}\) & \(\frac{1048576}{22545025}\) \\ \(d=10\) & \(\frac{99225\pi^{2}}{65536}\) & 1 & \(\frac{53361\pi^{2}}{4194304}\) & \(\frac{1}{49}\) & \(\frac{1656369\pi^{2}}{4294967296}\) \\ \(d=11\) & \(\frac{65536}{3969}\) & 1 & \(\frac{1048576}{9018009}\) & \(\frac{4}{225}\) & \(\frac{4194304}{1329696225}\) \\ \hline \end{tabular}
\end{table}
Table 2: \(\tau_{dS}\) for \(\mu=\frac{d}{2}\) (Massless KG scalar, Gravity tensor sector)
The exact expressions for \(K_{\rm Out}\) depend on whether the number of spatial dimensions \(d\) is odd or even. For \(d\) odd, we have [7; 73]
\[K_{\rm Out}|_{\rm Odd\ d}=-e^{i\nu\pi}\frac{2\pi i}{\Gamma(\nu)^{2}}\frac{\Gamma \left(\frac{1+\nu-\mu-i\omega}{2}\right)\Gamma\left(\frac{1+\nu+\mu-i\omega}{2 }\right)}{\Gamma\left(\frac{1-\nu+\mu-i\omega}{2}\right)\Gamma\left(\frac{1- \nu-\mu-i\omega}{2}\right)}\, \tag{3.11}\]
and for \(d\) even, we get
\[\begin{split} K_{\rm Out}|_{\rm Even\ d}&=\Delta_{ N}(\nu,\mu,\omega)\left[\psi^{(0)}\left(\frac{1+\nu-\mu-i\omega}{2}\right)+\psi^{(0)} \left(\frac{1+\nu+\mu-i\omega}{2}\right)\right.\\ &\left.+\psi^{(0)}\left(\frac{1-\nu-\mu-i\omega}{2}\right)+\psi^{ (0)}\left(\frac{1-\nu+\mu-i\omega}{2}\right)-4\psi^{(0)}(\nu)\right]\,\end{split} \tag{3.12}\]
where \(\psi^{(0)}(z)\equiv\frac{d}{dz}\ln\Gamma(z)\) is the di-gamma function and the function \(\Delta_{\mathcal{N}}\) is defined via
\[\begin{split}\Delta_{\mathcal{N}}(n,\mu,\omega)&\equiv \frac{(-)^{n}}{\Gamma(n)^{2}}\frac{\Gamma\left(\frac{1+n-\mu-i\omega}{2} \right)\Gamma\left(\frac{1+n+\mu-i\omega}{2}\right)}{\Gamma\left(\frac{1-n+\mu -i\omega}{2}\right)\Gamma\left(\frac{1-n-\mu-i\omega}{2}\right)}=\frac{1}{ \Gamma(n)^{2}}\prod_{k=1}^{n}\left[\frac{\omega^{2}}{4}+\frac{1}{4}(\mu-n+2k- 1)^{2}\right]\\ &=\Delta_{\mathcal{N}}^{*}(n,\mu,\omega)\.\end{split} \tag{3.13}\]
The important fact to note about these expressions is that, for all values of \(\mu\) appearing in table 1 except \(\mu=\frac{d}{2}\), we get a nice small \(\omega\) expansion. For \(\mu=\frac{d}{2}\), we still get a small \(\omega\) expansion for all \(\ell>0\): only the \(\ell=0\) term has a \(1/\omega\) behaviour at small \(\omega\). The physical interpretation of these statements is this: in all these cases except \(\ell=0,\mu=\frac{d}{2}\), one obtains a Markovian open system at small \(\omega\), i.e., a cosmically old observer in dS does not retain any memory of its past.8 This is an interesting observation, especially in even \(d\) where the corresponding flat spacetime problem has memory terms[76]. This suggests that _the radiation reaction problem in an expanding spacetime is perhaps better behaved than the one in flat spacetime_. In dual quantum mechanics, this predicts that a clean separation of slow/fast degrees of freedom should be possible, at least in the leading large \(N\) approximation.
Footnote 8: The mild breakdown of small \(\omega\) expansion in \(\mu=\frac{d}{2}\) gives a tail term in the radiation reaction. This has been previously noted in [75]. This tail term can be avoided either by turning off the monopole moment or by giving the scalar a small mass.
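As an illustrative numerical check (not part of the original text; a minimal sketch using standard NumPy/SciPy functions), one can differentiate the odd-\(d\) kernel of Eq.(3.11) at \(\omega=0\) and compare the resulting linear-in-\(\omega\) coefficient (the decay constant \(\tau_{dS}\) introduced below in Eq.(3.15)) against the massless-scalar (\(\mu=\frac{d}{2}\)) entries of Table 2:

```python
import numpy as np
from scipy.special import gamma

def k_out_odd(omega, d, ell, mu):
    """Retarded worldline two-point function K_Out for odd d, Eq. (3.11)."""
    nu = ell + d / 2 - 1
    pref = -np.exp(1j * np.pi * nu) * 2j * np.pi / gamma(nu) ** 2
    num = gamma((1 + nu - mu - 1j * omega) / 2) * gamma((1 + nu + mu - 1j * omega) / 2)
    den = gamma((1 - nu + mu - 1j * omega) / 2) * gamma((1 - nu - mu - 1j * omega) / 2)
    return pref * num / den

def tau_ds(d, ell, h=1e-5):
    """tau_dS = i * dK_Out/domega at omega = 0 (cf. Eq. (3.15)), massless scalar mu = d/2."""
    mu = d / 2
    deriv = (k_out_odd(h, d, ell, mu) - k_out_odd(-h, d, ell, mu)) / (2 * h)
    return (1j * deriv).real

for d in (3, 5):
    print(d, [round(tau_ds(d, ell), 4) for ell in (1, 2, 3)])
# Table 2 lists: d=3 -> 4, 1, 64/225 ~ 0.2844 ;  d=5 -> 64/9 ~ 7.1111, 1, 256/1225 ~ 0.2090
```

The central finite difference suffices here because \(K_{\rm Out}\) is analytic in \(\omega\) near the origin for \(\ell>0\).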
We will now argue that the fluctuations also admit a small \(\omega\) expansion. To this end, we use \(1+n_{\omega}+n_{-\omega}=0\) to rewrite the cosmological influence phase as
\[S_{\text{CIP}}=-\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\Big{[}K_{\text{Out }}(\omega,\ell)\ \mathcal{J}_{D}^{*}\mathcal{J}_{A}+\frac{1}{2}\left(n_{\omega}+\frac{1}{2} \right)\left[K_{\text{Out}}(\omega,\ell)-K_{\text{Out}}(-\omega,\ell)\right] \mathcal{J}_{D}^{*}\mathcal{J}_{D}\Big{]}. \tag{3.14}\]
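The step from Eq.(3.10) to this form can be made explicit: since the sources are real in the time domain, \(\sum_{\mathbb{L}}\mathcal{J}_{D}^{*}\mathcal{J}_{D}\) is even under \(\omega\to-\omega\), while \(n_{-\omega}+\frac{1}{2}=-\left(n_{\omega}+\frac{1}{2}\right)\); hence only the part of the kernel that is odd under \(\omega\to-\omega\) contributes to the \(\mathcal{J}_{D}^{*}\mathcal{J}_{D}\) term,
\[\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\left(n_{\omega}+\frac{1}{2}\right)K_{\text{Out}}(\omega,\ell)\,\mathcal{J}_{D}^{*}\mathcal{J}_{D}=\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\left(n_{\omega}+\frac{1}{2}\right)\frac{K_{\text{Out}}(\omega,\ell)-K_{\text{Out}}(-\omega,\ell)}{2}\,\mathcal{J}_{D}^{*}\mathcal{J}_{D}\,.\]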
Since \(\omega\ n_{\omega}\) has a regular small \(\omega\) expansion, we conclude from the above expression that \(S_{\text{CIP}}\) has a regular small frequency expansion provided \(K_{\text{Out}}\) has such an expansion. Up to 1st order in \(\omega\), we have
\[K_{\text{Out}}=K_{\text{Out}}|_{\omega=0}-i\ \omega\ \tau_{dS}+\ldots \tag{3.15}\]
where \(\tau_{dS}\) can be interpreted as the cosmological decay time-scale for slowly varying multipole moments in dS.9 Due to the dS version of the fluctuation-dissipation theorem, this is also proportional to the variance of the Hubble Hawking noise. This fact can be gleaned from the leading \(\mathcal{J}_{D}^{*}\mathcal{J}_{D}\) term in the cosmological influence phase:
Footnote 9: We tabulate \(\tau_{dS}\) for various cases of interest in tables 2, 3 and 4.
\[S_{\text{CIP}}\supset i\sum_{\mathbb{L}}\frac{\tau_{dS}}{2\pi}\int\frac{d \omega}{2\pi}\mathcal{J}_{D}^{*}\mathcal{J}_{D}. \tag{3.16}\]
Using the Hubbard-Stratonovich transformation, we can think of this term arising from integrating out a noise field with a time-domain action:
\[\sum_{\mathbb{L}}\int du\left[\frac{i}{2}\frac{\pi}{\tau_{dS}} \mathcal{N}^{2}(u)+\mathcal{J}_{D}(u)\mathcal{N}(u)\right]. \tag{3.17}\]
The first term here then shows that \(\mathcal{N}(u)\) behaves like a Gaussian noise field with variance \(\frac{\tau_{dS}}{\pi}\).
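As a quick check of this variance (a one-line Gaussian integral, spelled out here), completing the square in Eq.(3.17) reproduces the noise term in Eq.(3.16):
\[\int[d\mathcal{N}]\,\exp\left(i\sum_{\mathbb{L}}\int du\left[\frac{i}{2}\frac{\pi}{\tau_{dS}}\mathcal{N}^{2}+\mathcal{J}_{D}\mathcal{N}\right]\right)\propto\exp\left(-\sum_{\mathbb{L}}\frac{\tau_{dS}}{2\pi}\int du\,\mathcal{J}_{D}^{2}\right)\,,\]
which, via Parseval's theorem, is precisely the contribution \(e^{iS_{\text{CIP}}}\supset\exp\left(i\cdot i\sum_{\mathbb{L}}\frac{\tau_{dS}}{2\pi}\int\frac{d\omega}{2\pi}\,\mathcal{J}_{D}^{*}\mathcal{J}_{D}\right)\) of Eq.(3.16), while the same Gaussian weight gives \(\langle\mathcal{N}(u)\mathcal{N}(u^{\prime})\rangle=\frac{\tau_{dS}}{\pi}\,\delta(u-u^{\prime})\).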
We will conclude this section by describing how the above analysis can be readily generalised to extended sources in dS, modelled as a sequence of spherical shells. The main technical novelty is that
we need the dS bulk-to-bulk propagator to compute the radial part of the field. The expression in Eq.(3.3) is then replaced by a radial contour integral
\[\varphi_{{}_{N}}(\zeta,\omega,\mathbb{L})=\oint r_{0}^{N}dr_{0}\ \mathbb{G}( \zeta|\zeta_{0},\omega,\mathbb{L})\varrho_{{}_{N}}(\zeta_{0},\omega,\mathbb{L})\, \tag{3.18}\]
where \(\varrho_{{}_{N}}(\zeta_{0},\omega,\mathbb{L})\) is a scalar source spread out over dS-SK geometry and \(\mathbb{G}(\zeta|\zeta_{0},\omega,\mathbb{L})\) is the contour-ordered bulk-to-bulk propagator. It is regular everywhere except at \(\zeta=\zeta_{0}\) where its radial derivative has a prescribed discontinuity. Further, we require regularity at the center of the right/left static patches, viz.,
\[\lim_{\zeta\to 0}r^{\nu+\frac{N-1}{2}}\mathbb{G}=\lim_{\zeta \to 1}r^{\nu+\frac{N-1}{2}}\mathbb{G}=0. \tag{3.19}\]
These conditions uniquely determine the bulk-to-bulk propagator as specific combinations of the outgoing/incoming waves on either side of the source point \(\zeta_{0}\). An explicit expression in terms of the right/left boundary-to-bulk propagators is (See appendix C)
\[\mathbb{G}(\zeta|\zeta_{0},\omega,\mathbb{L}) =\frac{1}{W_{LR}(\zeta_{0},\omega,\mathbb{L})}g_{R}(\zeta_{>}, \omega,\mathbb{L})g_{L}(\zeta_{\prec},\omega,\mathbb{L}) \tag{3.20}\] \[\equiv\frac{1}{W_{LR}(\zeta_{0},\omega,\mathbb{L})}\begin{cases}g _{R}(\zeta,\omega,\mathbb{L})g_{L}(\zeta_{0},\omega,\mathbb{L})&\text{if }\zeta \succ\zeta_{0}\\ g_{L}(\zeta,\omega,\mathbb{L})g_{R}(\zeta_{0},\omega,\mathbb{L})&\text{if } \zeta\prec\zeta_{0}\end{cases}\.\]
Here the symbols \(\succ\) and \(\prec\) denote comparison using the radial contour ordering of dS-SK contour. The construction here is analogous to the one in vacuum AdS[66], as well as the contour-ordered bulk-to-bulk Green function in the SK contour corresponding to planar AdS black holes[77, 51].
Once we have the bulk-to-bulk Green function, the on-shell effective action in terms of the extended sources can be computed to be
\[S|_{\mathbf{On-shell}}=\frac{1}{2}\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\oint r^{\mathcal{N}}dr\ \oint r_{0}^{\mathcal{N}}dr_{0}\ [\varrho_{{}_{N}}(\zeta,\omega,\mathbb{L})]^{*}\,\mathbb{G}(\zeta|\zeta_{0},\omega,\mathbb{L})\,\varrho_{{}_{N}}(\zeta_{0},\omega,\mathbb{L}). \tag{3.21}\]
This is the familiar statement that, in free theories, the on-shell action reduces to a double integral over the sources, with an appropriate Green function serving as the kernel. The above expression can then be evaluated for a sequence of shell sources by performing the radial contour integrals.
We find that the end result of this computation can be written as
\[S|_{\mathbf{On-shell}}=S^{\text{Pt}}_{\text{CIP}}+S_{\text{Int}}\, \tag{3.22}\]
where \(S^{\text{Pt}}_{\text{CIP}}\) is the cosmological influence phase of Eq.(3.10), computed for the point-like source. To get this form, we should define the multipole moments of the extended source via
\[\mathcal{J}_{R}(\omega,\mathbb{L})\equiv\int_{R}dr\ r^{\mathcal{ N}}\Xi_{n}(r,\omega,\mathbb{L})\ \left(\frac{1-r}{1+r}\right)^{-\frac{\text{i}\omega}{2}}\varrho_{{}_{N}}(\zeta,\omega,\mathbb{L})\, \tag{3.23}\] \[\mathcal{J}_{L}(\omega,\mathbb{L})\equiv-\int_{L}dr\ r^{\mathcal{ N}}\Xi_{n}(r,\omega,\mathbb{L})\ \left(\frac{1-r}{1+r}\right)^{-\frac{\text{i}\omega}{2}}\varrho_{{}_{N}}(\zeta,\omega,\mathbb{L})\.\]
Here the integrals are over the right/left open static patches and \(\Xi_{n}(r,\omega,\mathbb{L})\) is a smearing function given in Eq.(C.37). The remaining terms in the on-shell action (denoted by \(S_{\text{Int}}\)) encode the conservative self-interactions of the extended source:
\[S_{\text{Int}}=\frac{1}{2}\sum_{\mathbb{L}}\int\frac{d\omega}{2 \pi}[\mathcal{J}_{R}^{*}\overline{\varphi}_{R,\text{Int}}-\mathcal{J}_{L}^{*} \overline{\varphi}_{L,\text{Int}}]. \tag{3.24}\]
Here \(\overline{\varphi}_{R/L,\rm Int}\) denote appropriately radially-averaged mean fields in the right/left static patch which couple to the multipole moments defined in Eq.(3.23). Their explicit form is
\[\begin{split}&\overline{\varphi}_{R,\rm Int}(\omega,\mathbb{L}) \equiv\int_{R}dr\ r^{\mathcal{N}}\Xi_{nn}(r,\omega,\mathbb{L})\ \left(\frac{1-r}{1+r}\right)^{-\frac{i\omega}{2}}\varrho_{ {}_{\mathcal{N}}}(\zeta,\omega,\mathbb{L})\,\\ &\overline{\varphi}_{L,\rm Int}(\omega,\mathbb{L})\equiv-\int_{L} dr\ r^{\mathcal{N}}\Xi_{nn}(r,\omega,\mathbb{L})\ \left(\frac{1-r}{1+r}\right)^{-\frac{i\omega}{2}}\varrho_{ {}_{\mathcal{N}}}(\zeta,\omega,\mathbb{L})\.\end{split} \tag{3.25}\]
Here \(\Xi_{nn}(r,\omega,\mathbb{L})\) is a time-reversal invariant Green function given in Eq.(C.38).
In the next section, we will describe how these results for extended sources can be used to compute the radiation reaction force felt by a dS observer in arbitrary motion. To that end, it is convenient to shift back to the standard time domain: we remind the reader that, till now, we have been working in the frequency domain dual to the outgoing EF time \(u\). This is related to the source data on standard time slices via
\[\varrho_{{}_{\mathcal{N}}}(\zeta,\omega,\mathbb{L})=\int du\ e^{i\omega u} \widetilde{\varrho}_{{}_{\mathcal{N}}}(\zeta,t,\mathbb{L})=\int dt\ e^{i\omega t }\left(\frac{1-r}{1+r}\right)^{\frac{i\omega}{2}}\widetilde{\varrho}_{{}_{ \mathcal{N}}}(\zeta,t,\mathbb{L})\, \tag{3.26}\]
where we have used \(u=t+\frac{1}{2}\ln\left(\frac{1-r}{1+r}\right)\). In other words, the combination \(\left(\frac{1-r}{1+r}\right)^{-\frac{i\omega}{2}}\varrho_{{}_{\mathcal{N}}}( \zeta,\omega,\mathbb{L})\) appearing in the above definitions is just the Fourier transform with respect to standard time. Thus, the radiative multipole moments defined in Eq.(3.23) can equivalently well be thought of as being computed via standard time slices, viz.,
\[\begin{split}&\mathcal{J}_{R}(\omega,\mathbb{L})\equiv\int dt\ e^{i\omega t}\int_{R}dr\ r^{\mathcal{N}}\Xi_{n}(r,i\partial_{t},\mathbb{L})\ \widetilde{\varrho}_{{}_{\mathcal{N}}}(\zeta,t,\mathbb{L})\,\\ &\mathcal{J}_{L}(\omega,\mathbb{L})\equiv-\int dt\ e^{i\omega t} \int_{L}dr\ r^{\mathcal{N}}\Xi_{n}(r,i\partial_{t},\mathbb{L})\ \widetilde{\varrho}_{{}_{\mathcal{N}}}(\zeta,t,\mathbb{L})\.\end{split} \tag{3.27}\]
Similar statements hold for the radially-averaged mean fields \(\overline{\varphi}_{R/L,\rm Int}\).
## 4 Radiation reaction and flat space limit
We will turn to the physics of dS radiation reaction (RR), as encoded in the cosmological influence phase \(S_{\rm CIP}\). For simplicity, we will consider an arbitrarily moving point-like source of a KG scalar field (i.e., the \(\mathcal{N}=d-1\) case). In particular, this means that we will no longer consider the cases of scalars coming from the harmonic decomposition of EM/linearised gravity: the dS RR forces for such cases will be dealt with elsewhere[78]. The reason for this restriction is as follows: the analysis of RR force for EM/gravity requires extending the dS multipole expansion to vector/tensor symmetric-trace-free tensors, as well as keeping track of additional velocity dependences in the multipole moments, a task better done elsewhere. Further, we will confine ourselves to \(dS_{d+1}\) with odd values of \(d\), where the flat spacetime RR force is known to be time-local[79]: in these cases, we can compute the RR force as a local expression in a low curvature (or small \(H\)) expansion.10 To ensure clarity in the near-flat limit, we will restore the Hubble constant \(H\) explicitly in what follows.
Footnote 10: We review the derivation of flat spacetime RR force in appendix A of this work.
Scalar self-force calculations in curved spacetime are a simple setting to understand the intricacies of how RR is affected by curvature back-scattering[80; 81; 82; 83; 84]. It is a simple version of the more observationally relevant problem of gravitational RR force in BH backgrounds, especially in the context of extreme mass ratio inspirals[40].
Before discussing the results, let us first understand the different components of an action that describe RR. RR is a dissipative term that is found in the \(\mathcal{J}_{D}^{*}(\omega,\mathbb{L})\mathcal{J}_{A}(\omega,\mathbb{L})\) term of the action. This term is a product of two radiative multipole moments and a two-point function connecting them, as shown in Figure 6. In Equation (3.10), this retarded two-point function is defined as \(K_{\rm Out}\). The radiative multipole moment \(\mathcal{J}\) is obtained by smearing the source distribution with an appropriate function \(\Xi_{n}\) (See Eq.(3.27)).
Now, let us consider the RR on a single particle. For this, we assume that the particle's trajectory is close to the south pole (\(rH\ll 1\)), the velocity is small (\(v\ll c\)), and the wavelength of the radiation is much smaller than the cosmological scale (\(\omega\gg H\)), but much larger than the observer's length scales (\(r\omega\ll 1\)). Together with the multipole expansion, these approximations lead to the post-Newtonian expansion of the RR, as shown in Figure 7. In the doubled geometry, this corresponds to two charges moving along their trajectories near their respective south poles. The influence phase on a point charge is then written in terms of the average and difference of positions of the two charges, as well as their time derivatives.
To obtain the correct flat space answer, we need to check if both the two-point function and the
Figure 6: RR computation has two main ingredients: radiative multipole moments and a two-point function describing how one multipole moment affects another. The black solid line denotes the trajectory of the source.
Figure 7: The RR is given in a post-Newtonian expansion: the velocity is taken to be small (\(v\ll 1\)) and the trajectory is centred about the south pole (\(\omega r\ll 1\)) while the near-flat expansion requires that the curvature effects are small(i.e. \(H\ll\omega\) and \(rH\ll 1\)).
smearing function reduce to their flat space analogues as \(H\to 0\). This can be checked from the \(H\) expansions of those quantities. We can also obtain further curvature corrections and establish a near-flat expansion. To illustrate how to obtain this expansion easily for any desired order in the Hubble constant, we provide detailed calculations of these expansions in Appendix D.1. For the reader's convenience, we also give a review of flat space RR in Appendix A.3.
Given the near-flat expansion of the action, we can, in a controlled fashion, calculate the curvature corrections to the RR force. This force is given by the variation of the Lagrangian with respect to \(x_{D}\). The leading term in the PN expansion of the flat space RR is a scalar version of the Abraham-Lorentz-Dirac force and stems from the dipole moment of the particle. In arbitrary \(d\), and including the first Hubble correction, we find this term to be:
\[F^{i}_{\rm ALD}=\frac{(-1)^{\frac{d+1}{2}}}{|\mathbb{S}^{d-1}|d!!(d-2)!!}\Bigg{\{} \partial_{t}^{d}x^{i}-H^{2}\frac{d}{6}\left(d^{2}-1\right)\partial_{t}^{d-2}x ^{i}\Bigg{\}}. \tag{4.1}\]
This expression gives an equation of motion that is third order in time derivatives for \(dS_{4}\). Even higher time derivatives show up if we include higher-order post-Newtonian corrections. This is a known effect in flat space calculations. In appendix D, we give a detailed calculation of second-order post-Newtonian corrections along with Hubble corrections.
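For instance, specialising Eq.(4.1) to \(d=3\) (i.e., dS\({}_{4}\), with \(|\mathbb{S}^{2}|=4\pi\), \(3!!=3\) and \(1!!=1\)) gives
\[F^{i}_{\rm ALD}\Big{|}_{d=3}=\frac{1}{12\pi}\left\{\partial_{t}^{3}x^{i}-4H^{2}\,\partial_{t}x^{i}\right\}\,,\]
whose first term is the familiar third-derivative self-force and whose second term is the leading Hubble correction.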
The overall sign of the leading term in the force agrees with the fact that this force is dissipative rather than anti-dissipative. To understand this, consider a 1d oscillator with an RR force of the form
\[\frac{d^{2}x}{dt^{2}}+\omega_{0}^{2}x=\lambda(-1)^{\frac{d+1}{2}}\frac{d^{d}x }{dt^{d}}\, \tag{4.2}\]
where we will assume that \(d\) is odd and \(\lambda>0\). We would now like to argue that the RR force is dissipative. To see this, we note that the above equation is equivalent to a dispersion relation \(\omega^{2}=\omega_{0}^{2}-i\lambda\omega^{d}\), which can be solved approximately (for small \(\lambda\)) to give \(\omega\approx\omega_{0}-\frac{i}{2}\lambda\omega_{0}^{d-1}\). Since the imaginary part of \(\omega\) is negative, we conclude that the above force is indeed dissipative.
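A quick numerical illustration of this estimate (not from the original text; a sketch in plain NumPy with arbitrary illustrative parameter values):

```python
import numpy as np

# Toy dispersion relation omega**2 = omega0**2 - 1j*lam*omega**d  (d odd, lam small),
# solved by fixed-point iteration; the imaginary part of the root should be negative.
omega0, lam, d = 1.0, 1e-3, 3
omega = omega0 + 0j
for _ in range(200):
    omega = np.sqrt(omega0**2 - 1j * lam * omega**d)

print(omega)                                   # converged root of the dispersion relation
print(omega0 - 0.5j * lam * omega0**(d - 1))   # leading small-lambda estimate quoted above
```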
As noted in [76], the terms in the flat space PN expansion of the RR force add up to give a Poincaré covariant expression. This is a non-trivial check for the accuracy of the result as both the structure of the multipole PN expansion and the requirement that it sums up to a covariant result leave little room for error. We similarly find that the curvature corrections obtained along with the flat space results are also tightly constrained: the contributions from our influence phase non-trivially sum up to expressions covariant with respect to the dS metric.
We refer the reader to appendix D for a detailed enumeration of the terms in the PN expansion. The final RR force then takes the form
\[F^{\mu}_{\rm RR}\equiv\frac{(-)^{\frac{d-1}{2}}}{|\mathbb{S}^{d-1}|(d-2)!!}f ^{\mu}\]
where \(f^{\mu}\) has an expansion of the form
\[f^{\mu}_{d}={}^{0}f^{\mu}_{d}-\frac{H^{2}}{4\times 3!}c_{h}\ ^{0}f^{\mu}_{d-2}+ \frac{H^{4}}{8\times 6!}[5c_{h}^{2}-40(d+2)c_{h}+32(d+2)(d^{2}-1)]\ ^{0}f^{\mu}_{d-4}+O(H^{6}). \tag{4.3}\]
Here \(c_{h}\equiv 12\mu^{2}+d^{2}-4\) contains the information about the mass of the scalar, and the combinations \({}^{0}f^{\mu}_{d}\) for odd values of \(d\) are (we have listed the expressions up to \(d=11\) in appendix D):
\[\begin{split}{}^{0}f^{\mu}_{1}&\equiv-v_{\nu}\,\\ {}^{0}f^{\mu}_{3}&\equiv\frac{P^{\mu\nu}}{3!}\left\{- a^{(1)}_{\nu}\right\}-\frac{H^{2}}{3!}\left\{v_{\nu}\right\}\,\\ {}^{0}f^{\mu}_{5}&\equiv\frac{P^{\mu\nu}}{5!}\left\{- a^{(3)}_{\nu}+5\ (a\cdot a)\ a^{(1)}_{\nu}+10\ (a\cdot a^{(1)})\ a_{\nu}\right\}-H^{2}\frac{P^{\mu\nu}}{5!!}\left\{a^{(1)}_{ \nu}\right\}+\frac{H^{4}}{5!}\left\{-v_{\nu}\right\}\,\\ {}^{0}f^{\mu}_{7}&\equiv\frac{P^{\mu\nu}}{7!} \left\{-a^{(5)}_{\nu}+14\ (a\cdot a)\ a^{(3)}_{\nu}+70\ (a\cdot a^{(1)})\ a^{(2)}_{\nu}+84\ (a\cdot a^{(2)})\ a^{(1)}_{\nu}+42\ (a\cdot a^{(3)})\ a_{\nu} \right.\\ &\qquad\qquad\left.+\frac{224}{3}\ (a^{(1)}\cdot a^{(1)})\ a^{(1)}_{\nu}+105\ (a^{(1)}\cdot a^{(2)})\ a_{\nu}+O(a^{5})\right\}\\ &\qquad-H^{2}\frac{P^{\mu\nu}}{7!}\left\{a^{(3)}_{\nu}+15\ (a\cdot a)\ a^{(1)}_{\nu}+37\ (a\cdot a^{(1)})\ a_{\nu}\right\}+H^{4}\frac{P^{\mu\nu}}{7!}\left\{-a^{(1)}_{ \nu}\right\}-\frac{H^{6}}{7!}\left\{v_{\nu}\right\}\.\end{split} \tag{42}\]
Here \(v^{\mu}=\frac{dx^{\mu}}{d\tau}\) is the proper velocity of the particle computed using the dS metric, \(a^{\mu}\equiv\frac{D^{2}x^{\mu}}{D\tau^{2}}\) is its proper acceleration and \(P^{\mu\nu}\equiv g^{\mu\nu}+v^{\mu}v^{\nu}\) is the transverse projector to the worldline. We use \(a^{(k)}_{\mu}\equiv\frac{D^{k}a_{\mu}}{D\tau^{k}}\) to denote the proper-time derivatives of the acceleration. All the spacetime dot products are computed using the dS metric.
One remarkable feature of the above formula for radiation reaction is the recursive nature of the Hubble corrections. One can see that the \(O(H^{2k})\) correction to the force in \(d\) dimensions is related to the RR force in \(d-2k\) dimensions. It would be interesting to see whether there are specific quantum mechanical models which can reproduce such a recursive structure.
One of the consequences of this recurrence is that the \(H^{d-1}\) terms in \(dS_{d+1}\) resemble the RR effects in \(d=1\) flat space. The flat space \(d=1\) massless scalar RR was explored in [85]. However, as noted there, it is inconsistent to assume a constant coupling for a particle coupled to a massless scalar in 2D flat space. Similar issues emerge at \(O(H^{2})\) in \(d=3\) dS [75] and, in general, in any \(d\) at \(O(H^{d-1})\) due to the aforementioned recurrence relation. This is, in turn, related to the breakdown of the small \(\omega\) expansion of the \(K_{\rm Out}\) noted in footnote 8: an issue that can be cured by turning on a small mass for the scalar.
We have checked that the flat limit of the RR force coincides with the covariant expressions derived in [76]. However, there are sign mismatches with the expressions of [86] (see footnote 11). The expressions at order \(H^{2}\) do not match the general curved space force in [86] restricted to dS. Since our methods differ significantly from [86], we are unable to comment further on the specific source of disagreement.
Footnote 11: This sign mismatch was noted by [76] as well.
## 5 Interactions
In this section, we will describe how the computation of the on-shell action can be extended beyond the free-field examples. In particular, we would like to check that our prescription in Eq.(4), equating the cosmological influence phase to the on-shell action, still works after we include interactions. We will check this in a simple example: \(\varphi^{3}_{{}_{N}}\) theory in dS\({}_{4}\). However, as will be clear below, our arguments can be easily adapted to set up perturbative diagrammatics for arbitrary interactions.
For \(\varphi^{3}_{{}_{N}}\) theory in dS\({}_{4}\), we should simply evaluate the bulk on-shell action with the cubic interaction term. At leading order in perturbation theory, the cubic contribution to \(S_{\rm CIP}\) is obtained by substituting the free field solutions into interaction terms of the action.12 The interaction term should then be integrated over the full dS-SK geometry: this is the dS version of a Witten diagram vertex.
We saw in Sec. 3 that the \(S_{\rm CIP}\) we derived satisfies constraints due to SK collapse and KMS conditions. That this should be true for interacting theories as well is not clear a priori, but we will now show that these constraints are still satisfied, at least at the level of contact diagrams. This is most easily seen in terms of the \(P-F\) basis multipole moments defined in Eq.(10). In terms of these multipole moments, SK collapse and KMS conditions are equivalent to showing that there are no terms in the action with only \(\mathcal{J}_{\bar{F}}\) or only \(\mathcal{J}_{\bar{P}}\) [74].
To check these conditions, we use Eq.(11) and write the vertex contribution to the on-shell action as
\[-\frac{\lambda_{3}}{3!}\int d^{3+1}x \varphi_{{}_{\!N}}^{3}=\sum_{\ell_{i},m_{i}}\text{Gaunt}(\ell_{i},m_{i})\int_{\omega_{1},\omega_{2},\omega_{3}}\delta(\omega_{1}+\omega_{2}+ \omega_{3}) \tag{12}\] \[\times\left[\mathcal{I}_{FFF}(\omega_{i},\ell_{i},m_{i})+ \mathcal{I}_{FFP}(\omega_{i},\ell_{i},m_{i})+\mathcal{I}_{FPP}(\omega_{i},\ell _{i},m_{i})+\mathcal{I}_{PPP}(\omega_{i},\ell_{i},m_{i})\right]\.\]
Here the index \(i\) runs over \(\{1,2,3\}\) and \(\text{Gaunt}(\ell_{i},m_{i})\) are the Gaunt coefficients coming from the integral of 3 spherical harmonics over the sphere (see equation 34.3.22 of [87]). Time-translation invariance implies that the three frequencies \(\omega_{1}\), \(\omega_{2}\) and \(\omega_{3}\) are constrained by an energy-conserving \(\delta\) function. The contributions to the cubic influence phase are given by radial contour integrals, viz.,
\[\begin{split}\mathcal{I}_{FFF}(\omega_{i},\ell_{i},m_{i})&\equiv\frac{\lambda_{3}}{3!}\oint_{\zeta}G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{1},\ell_{1})\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{2},\ell_{2})\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{3},\ell_{3})\\&\qquad\times\mathcal{J}_{\bar{F}}(\omega_{1},\mathbb{L}_{1})\mathcal{J}_{\bar{F}}(\omega_{2},\mathbb{L}_{2})\mathcal{J}_{\bar{F}}(\omega_{3},\mathbb{L}_{3})\,\\ \mathcal{I}_{FFP}(\omega_{i},\ell_{i},m_{i})&\equiv-\frac{\lambda_{3}}{2!}\oint_{\zeta}e^{2\pi\omega_{3}(1-\zeta)}\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{1},\ell_{1})\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{2},\ell_{2})\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{3},\ell_{3})\\&\qquad\times\mathcal{J}_{\bar{F}}(\omega_{1},\mathbb{L}_{1})\mathcal{J}_{\bar{F}}(\omega_{2},\mathbb{L}_{2})\mathcal{J}_{\bar{P}}(\omega_{3},\mathbb{L}_{3})\,\\ \mathcal{I}_{FPP}(\omega_{i},\ell_{i},m_{i})&\equiv\frac{\lambda_{3}}{2!}\oint_{\zeta}e^{2\pi(\omega_{2}+\omega_{3})(1-\zeta)}\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{1},\ell_{1})\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{2},\ell_{2})\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{3},\ell_{3})\\&\qquad\times\mathcal{J}_{\bar{F}}(\omega_{1},\mathbb{L}_{1})\mathcal{J}_{\bar{P}}(\omega_{2},\mathbb{L}_{2})\mathcal{J}_{\bar{P}}(\omega_{3},\mathbb{L}_{3})\,\\ \mathcal{I}_{PPP}(\omega_{i},\ell_{i},m_{i})&\equiv-\frac{\lambda_{3}}{3!}\oint_{\zeta}e^{2\pi(\omega_{1}+\omega_{2}+\omega_{3})(1-\zeta)}\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{1},\ell_{1})\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{2},\ell_{2})\ G_{\mathcal{N}}^{\rm Out}(\zeta,\omega_{3},\ell_{3})\\&\qquad\times\mathcal{J}_{\bar{P}}(\omega_{1},\mathbb{L}_{1})\mathcal{J}_{\bar{P}}(\omega_{2},\mathbb{L}_{2})\mathcal{J}_{\bar{P}}(\omega_{3},\mathbb{L}_{3})\.\end{split} \tag{13}\]
We now note that, since \(G_{\mathcal{N}}^{\rm Out}\) is analytic, the integrands in \(\mathcal{I}_{FFF}\) and \(\mathcal{I}_{PPP}\) are analytic (to see the latter, we use energy conservation). This, in turn, implies that these integrals evaluate to zero by Cauchy's theorem. It is now evident that this argument generalises to all contact diagrams of \(\varphi_{{}_{\!N}}^{n}\) type, thus demonstrating our claim about SK collapse and KMS conditions. A similar argument has also been checked for exchange diagrams in the AdS black hole case [77, 51], and it would be interesting to check whether a similar claim holds here. Further, it would also be interesting to study the correction to the radiation reaction due to such non-linear interactions [88].
## 6 Summary and Discussion
In this work, we have proposed a de Sitter-Schwinger-Keldysh (dS-SK) geometry formed by two copies of the static patch stitched together at their future horizons. We then showed how the influence phase of a dS observer could be obtained by evaluating the on-shell action on this geometry. Our proposal yields results that pass a variety of checks: first, from a broad structural point of view, it satisfies
the constraints imposed on it from bulk unitarity (SK collapse) and the dS version of Kubo-Martin-Schwinger (KMS) conditions.
Another check is the flat space limit, where we showed that, for point-like sources, the dissipative part of the action correctly produces the flat space radiation reaction. This also allows us to calculate Hubble corrections to the radiation reaction in odd spatial dimensions, and show that they combine into generally covariant expressions on the dS background, which serves as another non-trivial check on our computation. As a technical aside, we have also shown how we can counter-term the influence phase for localised sources with multipole moments by using a Dirac-Detweiler-Whiting type decomposition of the dS Green functions.
Our analysis can readily be extended to interactions, following techniques invented in the AdS context [47; 51; 77; 89], as we sketched in the main text. This aspect will be explored in detail elsewhere [88]. It would also be interesting to explore whether the familiar tools of conformal invariance, e.g., conformal block decomposition, can shed more light on the structure of radiation reaction at a non-linear level. On the face of it, the presence of the observer breaks the dS isometries to just rotations/time-translations around the observer's worldline. But the re-emergence of the full dS isometry in the effective action that we described above suggests that conformal techniques could be fruitfully exploited to understand the structure of Hubble corrections to the radiation reaction. There is also the question of extending our analysis to gauge theories and linearised gravity, which we shall pursue in a subsequent work [78].
In this work, we have advocated a point of view that the real-world cosmology is fruitfully framed in terms of a cosmological influence phase \(S_{\rm CIP}\) for an observer's worldline. It is interesting to ask whether realistic FLRW cosmology from \(\Lambda\)-CDM and the CMB phenomenology can indeed be rewritten in these terms. To this end, it would be interesting to extend our analysis to time-varying cosmological spacetimes: perhaps, one should begin by extending our framework to simpler time-dependent extensions involving sudden/adiabatic approximations. More broadly, we can enquire of the role played by radiation reaction in cosmology. Understanding the gravitational radiation reaction at the galactic/extra-galactic scales might be crucial to predict the stochastic gravitational wave background[90; 91; 92].
Much of what we say about dS radiation reaction can readily be adapted to the AdS case, with a change of signs. This statement is expected to be true at short times, where the cosmological constant can be treated perturbatively, and its sign does not result in any qualitatively new features. Thus, at short times, we expect generally covariant expressions for the radiation reaction felt by an AdS observer, very similar to the ones we derive in this work. However, we expect qualitative differences at long time-scales due to reflection at AdS asymptotia, resulting in long-time tails in radiation reaction. Further, we do not expect an analogue of dS Hawking radiation in AdS. It might be worthwhile to make these intuitions more precise and understand the dual CFT interpretation of these statements. This would be a good test of the existing proposals describing bulk observers within AdS/CFT[28; 29; 30].
We began this note by motivating our work in the context of solipsistic holography. We see the results here as a first step towards constructing an open system whose details can be compared against proposed dual quantum mechanical models. Following examples like the BFSS matrix model [9], it is natural to expect some sort of a large \(N\) matrix quantum mechanics to give rise to the same influence phase as what we derive here. To check this, it would be good to construct a formalism for computing the influence phase of slow macroscopic observables in a large \(N\) matrix model: our computations suggest that a clean separation of slow/fast modes is possible at least when there is a dual gravity
description. These slow observables describing the dS observer should not be entirely gauge-invariant but rather have the structure of partially gauge-fixed probes[95; 96; 97]. Whether this is so is yet to be seen.
## Acknowledgements
We would like to thank Ofek Birnholtz, Tuneer Chakraborty, Chandramouli Chowdhury, Victor Godet, Chandan Jana, Godwin Martin, Shiraz Minwalla, Priyadarshi Paul, Suvrat Raju, Mukund Rangamani, Joseph Samuel, Ashoke Sen, Shivam Sharma, Akhil Sivakumar, Sandip Trivedi and Spenta Wadia for valuable discussions. RL would like to thank the organisers of All Lambdas Holography @ Prague 2021 online workshop for discussions related to this work. We acknowledge support of the Department of Atomic Energy, Government of India, under project no. RTI4001, and would also like to acknowledge our debt to the people of India for their steady and generous support to research in the basic sciences.
## Appendix A STF tensors and multipole expansion
We will begin by reviewing the notion of symmetric trace-free tensors, which are the appropriate tools to discuss multipole expansion. The \(d=3\) version of this story is discussed in a variety of places.14 The generalisation to arbitrary dimensions is straightforward, if somewhat involved. In the course of this work, we had to use a variety of identities involving STF tensors in arbitrary dimensions scattered across these references. The goal of this section is to review this theory for the reader's benefit.
Footnote 14: See [98] for a textbook discussion. We will refer the reader to [76; 79; 99; 100; 101; 102; 103; 104] for a discussion of STF tensors in general dimensions.
We will begin with a more traditional account of electrostatic multipole expansion in \(\mathbb{R}^{d}\) via orthonormal spherical harmonics on \(\mathbb{S}^{d-1}\). This is the generalisation of familiar multipole expansion in \(d=3\), and we will use it to set the stage for a more modern account of multipole expansion using symmetric, trace-free (STF) tensors in the later subsections. We conclude this appendix with a discussion of radiation reaction in flat spacetime using these tools.
### Orthonormal Spherical harmonics on \(\mathbb{S}^{d-1}\)
Let us begin by considering the problem of electrostatics in \(\mathbb{R}^{d}\). Our goal in this subsection would be to describe the multipole expansion in this case, given an orthonormal basis of spherical harmonics on \(\mathbb{S}^{d-1}\). Later in this subsection, we will give an explicit construction of such an orthonormal basis, which can, in principle, be used in explicit computations.
Given a charge distribution \(\rho(\vec{r})\), the electric potential produced by such a distribution is given in terms of the Newton-Coulomb integral
\[\phi(\vec{r})=\int d^{d}r_{0}\frac{\rho(\vec{r}_{0})}{(d-2)|\mathbb{S}^{d-1}|| \vec{r}-\vec{r}_{0}|^{d-2}}. \tag{101}\]
Here, we have denoted the volume of a unit sphere \(\mathbb{S}^{d-1}\) via
\[|\mathbb{S}^{d-1}|\equiv\frac{2\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2})}\, \tag{102}\]
and have fixed our normalisations such that the Poisson equation takes the form \(\nabla^{2}\phi=-\rho\). While the above integral is indeed correct, a more useful answer is obtained by performing a multipole
expansion of the Newton-Coulomb potential in terms of Legendre polynomials. In \(d=3\), this is a well-known statement from undergraduate physics courses, and we will now describe a quick way to generalise this statement to arbitrary dimensions.
To this end, consider a simple problem where the answer due to multipole expansion is straightforward: we imagine a spherical shell of radius \(R\) in \(\mathbb{R}^{d}\) carrying a surface charge density \(\sigma_{\ell\vec{m}}(\hat{r})\) proportional to a spherical harmonic \(\mathscr{Y}_{\ell\vec{m}}(\hat{r})\), i.e., a spherical harmonic which under the sphere laplacian has an eigenvalue \(-\ell(\ell+d-2)\) and we use \(\vec{m}\) to denote the additional labels required to furnish an orthonormal basis within this eigenspace. The above eigenvalue follows from demanding that \(r^{\ell}\mathscr{Y}_{\ell\vec{m}}(\hat{r})\) be a harmonic function annihilated by the Laplacian operator
\[\nabla^{2}_{\mathbb{R}^{d}}\equiv\frac{1}{r^{d-1}}\frac{\partial}{\partial r}r ^{d-1}\frac{\partial}{\partial r}+\frac{1}{r^{2}}\nabla^{2}_{\mathbb{S}^{d-1}}. \tag{111}\]
By symmetry, the potential due to such a problem should also be proportional to the same spherical harmonic as the charge distribution. The potential should be a harmonic function for \(r\neq R\), regular at the origin and vanishing at infinity; it should be continuous at \(r=R\) but have a derivative discontinuity at the shell equal to the charge density. These requirements uniquely determine the solution to be
\[\frac{R\sigma_{\ell\vec{m}}(\hat{r})}{(2\ell+d-2)}\Big{\{}\Theta(r<R)\frac{r^ {\ell}}{R^{\ell}}+\Theta(r>R)\frac{R^{\ell+d-2}}{r^{\ell+d-2}}\Big{\}}. \tag{112}\]
This answer can be generalised to an arbitrary charge distribution, once it is realised that any distribution can be built shell by shell and \(\ell\) by \(\ell\). Using an orthonormal basis of spherical harmonics to do the projection to every \(\ell\), we can then write the potential for an arbitrary charge distribution as
\[\int d^{d}r_{0}\ \rho(\vec{r}_{0})\ \sum_{\ell\vec{m}}\frac{\mathscr{Y}_{\ell\vec{m}}(\hat{r})\mathscr{Y}^{*}_{\ell\vec{m}}(\hat{r}_{0})}{(2\ell+d-2)r_{0}^{d-2}}\Big{\{}\Theta(r<r_{0})\frac{r^{\ell}}{r_{0}^{\ell}}+\Theta(r>r_{0})\frac{r_{0}^{\ell+d-2}}{r^{\ell+d-2}}\Big{\}}. \tag{113}\]
Comparing this against the Newton-Coulomb integral, we obtain the multipole expansion formula in \(\mathbb{R}^{d}\) :
\[\begin{split}&\frac{1}{(d-2)|\mathbb{S}^{d-1}||\vec{r}-\vec{r}_{0}|^{d-2}}\\ &\quad=\sum_{\ell\vec{m}}\frac{\mathscr{Y}_{\ell\vec{m}}(\hat{r})\mathscr{Y}^{*}_{\ell\vec{m}}(\hat{r}_{0})}{(2\ell+d-2)r_{0}^{d-2}}\Big{\{}\Theta(r<r_{0})\frac{r^{\ell}}{r_{0}^{\ell}}+\Theta(r>r_{0})\frac{r_{0}^{\ell+d-2}}{r^{\ell+d-2}}\Big{\}}\.\end{split} \tag{114}\]
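For orientation, in \(d=3\) the above formula reduces to the familiar textbook expansion (this is just the \(d=3\) specialisation, with \(|\mathbb{S}^{2}|=4\pi\)):
\[\frac{1}{4\pi|\vec{r}-\vec{r}_{0}|}=\sum_{\ell\vec{m}}\frac{\mathscr{Y}_{\ell\vec{m}}(\hat{r})\mathscr{Y}^{*}_{\ell\vec{m}}(\hat{r}_{0})}{2\ell+1}\frac{r_{<}^{\ell}}{r_{>}^{\ell+1}}\,\]
where \(r_{<}\) (\(r_{>}\)) denotes the smaller (larger) of \(r\) and \(r_{0}\).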
If we define the spherical multipole moments of the charge distribution \(\rho(\vec{r})\) by
\[q_{\ell\vec{m}}\equiv\frac{1}{2\ell+d-2}\int d^{d}r_{0}\ \rho(\vec{r}_{0})\ r_{0}^{ \ell}\mathscr{Y}^{*}_{\ell\vec{m}}(\hat{r}_{0})\, \tag{115}\]
we can write the potential far-outside the charge distribution as
\[\sum_{\ell\vec{m}}\frac{1}{r^{\ell+d-2}}\ q_{\ell\vec{m}}\mathscr{Y}_{\ell \vec{m}}(\hat{r}). \tag{116}\]
This is the basic content of multipole expansion in electrostatics. However, to actually compute these multipole moments for a given charge distribution \(\rho(\vec{r})\), we will need the explicit form of the spherical harmonics \(\mathscr{Y}_{\ell\vec{m}}(\hat{r})\) on \(\mathbb{S}^{d-1}\): we will now proceed to address this in the rest of this subsection.
The first step in constructing the spherical harmonics is to derive the most symmetric among them: the Legendre polynomials. We will do this by recasting the above expansion in terms of the
Legendre polynomial. In the formula above, the sum over orthonormal spherical harmonics of a given \(\ell\) can be performed through a higher dimensional generalisation of the addition theorem for spherical harmonics, viz.,
\[\sum_{\vec{m}}\mathscr{Y}_{\ell\vec{m}}(\hat{r})\mathscr{Y}_{\ell \vec{m}}^{*}(\hat{r}_{0})=\frac{N_{HH}(d,\ell)}{|\mathbb{S}^{d-1}|}P_{\ell}(d, \hat{r}\cdot\hat{r}_{0}). \tag{114}\]
Here \(N_{HH}(d,\ell)\) is the number of orthonormal spherical harmonics of degree \(\ell\), with the notation here inspired by the fact that it is also the number of linearly independent, homogeneous, harmonic polynomials (HHPs) of degree \(\ell\) in \(\mathbb{R}^{d}\). We will elaborate on this and get an explicit expression for \(N_{HH}(d,\ell)\) below. For now, we move on to note that \(P_{\ell}(d,x)\) is the generalisation of the Legendre polynomial to \(\mathbb{R}^{d}\): it is the unique spherical harmonic invariant under \(SO(d-1)\) rotations which keep two poles of \(\mathbb{S}^{d-1}\) fixed and is normalised to unity at the north pole, i.e., \(P_{\ell}(d,x=1)\equiv 1\).
With the above definitions, we can argue for the above addition theorem as follows: first of all, the sum over orthonormal spherical harmonics of a given \(\ell\) should be a spherical harmonic which only depends on the relative orientation of \(\hat{r}\) and \(\hat{r}_{0}\) and hence, the above sum should be proportional to \(P_{\ell}(d,\hat{r}\cdot\hat{r}_{0})\). The constant of proportionality can then be fixed by setting \(\hat{r}=\hat{r}_{0}\) and integrating over the sphere \(\mathbb{S}^{d-1}\) using orthonormality.
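In \(d=3\), this is the familiar addition theorem for the usual spherical harmonics; a quick numerical check (ours, using scipy's conventions) of that special case:

```python
# Numerical check (ours) of the d = 3 case of the addition theorem above:
# sum_m Y_lm(n1) Y_lm*(n2) = (2l+1)/(4 pi) P_l(n1 . n2).
import numpy as np
from scipy.special import sph_harm, eval_legendre

l = 3
th1, ph1 = 0.7, 1.1    # polar, azimuthal angles of the first direction
th2, ph2 = 2.1, 0.3    # ... and of the second direction
n1 = np.array([np.sin(th1)*np.cos(ph1), np.sin(th1)*np.sin(ph1), np.cos(th1)])
n2 = np.array([np.sin(th2)*np.cos(ph2), np.sin(th2)*np.sin(ph2), np.cos(th2)])

lhs = sum(sph_harm(m, l, ph1, th1) * np.conj(sph_harm(m, l, ph2, th2))
          for m in range(-l, l + 1))
rhs = (2 * l + 1) / (4 * np.pi) * eval_legendre(l, n1 @ n2)
assert np.isclose(lhs, rhs)
```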
As a corollary of the above addition theorem, we note the following formula for the inner product between Legendre harmonics of two different orientations:
\[\int_{\mathbb{S}^{d-1}}P_{\ell}(d,\hat{r}\cdot\hat{r}_{0})P_{\ell^ {\prime}}(d,\hat{r}\cdot\hat{r^{\prime}_{0}})=\delta_{\ell\ell^{\prime}}\frac {|\mathbb{S}^{d-1}|}{N_{HH}(d,\ell)}P_{\ell}(d,\hat{r}_{0}\cdot\hat{r^{\prime} _{0}}). \tag{115}\]
This statement follows directly by the use of the addition theorem followed by the fact that \(\mathscr{Y}_{\ell\vec{m}}(\hat{r})\) are assumed to be orthonormal. For \(\hat{r}_{0}=\hat{r}^{\prime}_{0}\), we get the Legendre orthogonality relation
\[\int_{0}^{\pi}d\vartheta\ \sin^{d-2}\vartheta\ P_{\ell}(d,\cos\vartheta)P_{\ell^ {\prime}}(d,\cos\vartheta)=\delta_{\ell\ell^{\prime}}\frac{|\mathbb{S}^{d-1}| }{|\mathbb{S}^{d-2}|N_{HH}(d,\ell)}. \tag{116}\]
With the addition theorem, we can recast the multipole expansion in terms of the Legendre polynomial as15
Footnote 15: This expansion is often used to define Gegenbauer polynomials \(C^{\mu}_{\ell}(z)\), which differ from the generalised Legendre polynomials introduced here merely by an overall normalisation. These polynomials are also proportional to the associated Legendre functions. The explicit relations are given by
\[C^{\frac{d}{2}-1}_{\ell}(z)\equiv(d-2)\frac{N_{HH}(d,\ell)}{2\ell+d-2}P_{\ell} (d,z)\,\quad P^{-\mu}_{\lambda}(z)\equiv\frac{\left(\sqrt{1-z^{2}}\right)^{\mu}}{2^ {\mu}\mu!}P_{\lambda-\mu}(2\mu+3;z). \tag{117}\]
\[\begin{split}&\frac{1}{(d-2)|\vec{r}-\vec{r}_{0}|^{d-2}}\\ &\quad=\sum_{\ell}\frac{N_{HH}(d,\ell)P_{\ell}(d,\hat{r}\cdot\hat{r}_{0})}{(2\ell+d-2)r_{0}^{d-2}}\Big{\{}\Theta(r<r_{0})\frac{r^{\ell}}{r_{0}^{\ell}}+\Theta(r>r_{0})\frac{r_{0}^{\ell+d-2}}{r^{\ell+d-2}}\Big{\}}\.\end{split} \tag{118}\]
As is well known in the \(d=3\) case, this series expansion can be used to derive an explicit expression for \(P_{\ell}(d,x)\).
The steps involved are as follows: we take the case \(r_{0}<r\), set \(t=\frac{r_{0}}{r}<1\) and \(x=\hat{r}\cdot\hat{r}_{0}\) to write
\[N_{HH}(d,\ell)P_{\ell}(d,x)=(2\ell+d-2)\times\text{Coefficient of $t^{\ell}$ in $\frac{1}{(d-2)(1-2xt+t^{2})^{\frac{d}{2}-1}}$}. \tag{119}\]
To extract the \(t^{\ell}\) coefficient, we use
\[\frac{(2\ell+d-2)}{(d-2)(1-2xt+t^{2})^{\frac{d}{2}-1}}=\frac{\ell+\frac{d}{2}-1}{ \Gamma(\frac{d}{2})}\int_{0}^{\infty}ds\ s^{\frac{d}{2}-2}\ e^{-s+2xst-st^{2}}\, \tag{101}\]
expand the exponentials involving \(t\) and integrate to obtain
\[\begin{split} N_{HH}(d,\ell)P_{\ell}(d,x)&=\frac{ \ell+\frac{d}{2}-1}{\Gamma(\frac{d}{2})}\sum_{k}\int_{0}^{\infty}ds\ s^{\frac{d} {2}-2}\ e^{-s}\frac{(2xs)^{\ell-2k}}{(\ell-2k)!}\frac{(-s)^{k}}{k!}\\ &=\frac{2^{\ell}\Gamma\left(\ell+\frac{d}{2}\right)}{\Gamma( \frac{d}{2})}\sum_{k}\frac{\Gamma\left(\ell+\frac{d}{2}-1-k\right)}{\Gamma( \ell+\frac{d}{2}-1)}\frac{(-)^{k}}{2^{2k}k!}\frac{x^{\ell-2k}}{(\ell-2k)!}\.\end{split} \tag{102}\]
Here, the sum over \(k\) starts at \(k=0\) and runs as long as the combination \(\ell-2k\) remains non-negative. Defining the normalisation factor16
Footnote 16: The interpretation of this ubiquitous normalisation factor will become clearer when we describe STF tensors in the next subsection. For now, we will note that \(\mathcal{N}_{d,\ell}\) is an inverse integer which has the following alternate forms
\[\mathcal{N}_{d,\ell}\equiv\frac{|\mathbb{S}^{d+2\ell-1}|}{|\mathbb{S}^{1}|^{ \ell}|\mathbb{S}^{d-1}|}=\frac{(d-2)!!}{(d+2\ell-2)!!}. \tag{103}\]
\[\mathcal{N}_{d,\ell}\equiv\frac{\Gamma(\frac{d}{2})}{2^{\ell}\Gamma\left( \ell+\frac{d}{2}\right)}\,\quad\nu\equiv\frac{d}{2}+\ell-1\, \tag{104}\]
we finally obtain an explicit expression for the generalised Legendre polynomial as
\[\mathcal{N}_{d,\ell}N_{HH}(d,\ell)P_{\ell}(d,x)=\sum_{k}\frac{\Gamma\left(\nu -k\right)}{2^{2k}k!\Gamma(\nu)}\frac{(-)^{k}x^{\ell-2k}}{(\ell-2k)!}. \tag{105}\]
Incidentally, the same expansion at \(x=1\) also gives the number of orthonormal spherical harmonics of degree \(\ell\) as
\[N_{HH}(d,\ell)=\frac{2\ell+d-2}{d-2}\times\text{Coefficient of $t^{\ell}$ in $\frac{1}{(1-t)^{d-2}}=\frac{2\ell+d-2}{d-2}\binom{\ell+d-3}{\ell}$}. \tag{106}\]
This finishes our construction of the Legendre harmonic on \(\mathbb{S}^{d-1}\).
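Since the explicit formulae above are somewhat involved, a short numerical cross-check can be reassuring. The following sketch (ours, not part of the paper) compares the series for \(P_{\ell}(d,x)\) against the Gegenbauer polynomial \(C^{d/2-1}_{\ell}(x)\) normalised to unity at \(x=1\), which is the defining normalisation of \(P_{\ell}(d,x)\) used above:

```python
# Cross-check (ours): the explicit series for P_l(d, x) derived above should
# agree with the Gegenbauer polynomial C^{d/2-1}_l(x) normalised to one at x = 1.
import numpy as np
from math import gamma, factorial, comb
from scipy.special import gegenbauer

def N_HH(d, l):
    # number of orthonormal spherical harmonics of degree l on S^{d-1}
    return (2 * l + d - 2) * comb(l + d - 3, l) // (d - 2)

def P(d, l, x):
    # explicit series for the generalised Legendre polynomial
    nu = l + d / 2 - 1
    norm = gamma(d / 2) / (2**l * gamma(l + d / 2))     # the factor N_{d,l}
    s = sum((-1)**k * gamma(nu - k) / (4**k * factorial(k) * gamma(nu))
            * x**(l - 2 * k) / factorial(l - 2 * k)
            for k in range(l // 2 + 1))
    return s / (norm * N_HH(d, l))

d, l = 5, 4
C = gegenbauer(l, d / 2 - 1)
xs = np.linspace(-1.0, 1.0, 11)
assert np.allclose([P(d, l, x) for x in xs], C(xs) / C(1.0))
```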
Next, we will give a recursive construction of a complete orthonormal basis of spherical harmonics on \(\mathbb{S}^{d-1}\), just using the Legendre polynomials constructed above. We begin with an explicit spherical coordinate system in \(\mathbb{R}^{d}\) given by
\[\begin{split} x_{1}&=r\ \sin\vartheta_{d-2}\ \sin\vartheta_{d-3}\ \dots\ \sin\vartheta_{2}\ \sin\vartheta_{1}\ \cos\varphi\,\\ x_{2}&=r\ \sin\vartheta_{d-2}\ \sin\vartheta_{d-3}\ \dots\ \sin\vartheta_{2}\ \sin\vartheta_{1}\ \sin\varphi\,\\ x_{3}&=r\ \sin\vartheta_{d-2}\ \sin\vartheta_{d-3}\ \dots\ \sin\vartheta_{2}\ \cos\vartheta_{1}\,\\ x_{4}&=r\ \sin\vartheta_{d-2}\ \sin\vartheta_{d-3}\ \dots\ \cos\vartheta_{2}\,\\ &\dots\,\\ x_{d-2}&=r\ \sin\vartheta_{d-2}\ \sin\vartheta_{d-3}\ \cos \vartheta_{d-4}\,\\ x_{d-1}&=r\ \sin\vartheta_{d-2}\ \cos\vartheta_{d-3}\,\\ x_{d}&=r\ \cos\vartheta_{d-2}\.\end{split} \tag{107}\]
Here the radius \(r\) varies from \(0\) to \(\infty\), whereas the allowed ranges of the angles are \(\vartheta_{i}\in[0,\pi]\) and \(\varphi\in[0,2\pi)\). In these coordinates, we can write the metric of \(\mathbb{S}^{d-1}\) as
\[\begin{split} d\Omega_{d-1}^{2}&=d\vartheta_{d-2}^{ 2}+\sin^{2}\vartheta_{d-2}d\Omega_{d-2}^{2}\\ &=d\vartheta_{d-2}^{2}+\sin^{2}\vartheta_{d-2}d\vartheta_{d-3}^{ 2}+\sin^{2}\vartheta_{d-2}\sin^{2}\vartheta_{d-3}d\vartheta_{d-4}^{2}+\ldots \\ &\ +\prod_{k=j+1}^{d-2}\sin^{2}\vartheta_{k}\ d\vartheta_{j}^{2}+ \ldots+\prod_{k=1}^{d-2}\sin^{2}\vartheta_{k}\ d\varphi^{2}.\end{split}\] (A.22)
The volume form on the sphere is then defined via
\[\int_{\mathbb{S}^{d-1}}(\ldots)\equiv\int d\vartheta_{1}\wedge d\vartheta_{2} \ldots d\vartheta_{d-2}\wedge d\varphi\ \prod_{k=1}^{d-2}\sin^{k}\vartheta_{k}\ (\ldots)\.\] (A.23)
We are interested in constructing an orthonormal basis of spherical harmonics in these coordinates. As we described above, the simplest spherical harmonic is the Legendre harmonic \(P_{\ell}(d,\cos\vartheta_{d-2})\) which depends only on \(\vartheta_{d-2}\). It obeys the second-order ODE
\[\left[\frac{1}{\sin^{d-2}\vartheta}\frac{d}{d\vartheta}\sin^{d-2}\vartheta\frac {d}{d\vartheta}+\ell(\ell+d-2)\right]P_{\ell}(d,\cos\vartheta)=0\.\] (A.24)
The function \(P_{\ell}(d,\cos\vartheta)\) is the unique \(\ell^{th}\) degree polynomial in \(\cos\vartheta\) that solves the above ODE and is normalised to \(P_{\ell}(d,\cos\vartheta=1)=1\). In general, spherical harmonics of degree \(\ell\) obey the eigenvalue equation \(\left[\nabla^{2}_{\mathbb{S}^{d-1}}+\ell(\ell+d-2)\right]\mathcal{S}_{\ell}( \Omega_{d-1})=0\), or in more detail
\[\left[\frac{1}{\sin^{d-2}\vartheta_{d-2}}\frac{\partial}{\partial\vartheta_{d- 2}}\sin^{d-2}\vartheta_{d-2}\frac{\partial}{\partial\vartheta_{d-2}}+\frac{1}{ \sin^{2}\vartheta_{d-2}}\nabla^{2}_{\mathbb{S}^{d-2}}+\ell(\ell+d-2)\right] \mathcal{S}_{\ell}(\Omega_{d-1})=0\.\] (A.25)
This equation can be solved via a separation of variables ansatz
\[\mathcal{S}_{\ell}=(\sin\vartheta_{d-2})^{m}P_{\ell-m}(d+2m,\cos\vartheta_{d- 2})\widehat{\mathcal{S}}_{m}(\Omega_{d-2})\,\] (A.26)
for a non-negative integer \(0\leq m\leq\ell\). Substituting this ansatz into the equation above yields the eigenvalue equation \(\left[\nabla^{2}_{\mathbb{S}^{d-2}}+m(m+d-3)\right]\widehat{\mathcal{S}}_{m}( \Omega_{d-2})=0\) in the lower dimensional sphere, i.e., the function \(\widehat{\mathcal{S}}_{m}(\Omega_{d-2})\) is actually a spherical harmonic of degree \(m\) on \(\mathbb{S}^{d-2}\). This gives rise to
\[\sum_{m=0}^{\ell}N_{HH}(d-1,m)=N_{HH}(d,\ell)\] (A.27)
number of spherical harmonics of degree \(\ell\) on \(\mathbb{S}^{d-1}\) (to get the above equality, we have used Eq.(A.20)). Recursing this construction, we get a set of spherical harmonics of the form
\[\mathcal{C}_{\ell\vec{m}}\ e^{\pm im_{1}\varphi}\left[\prod_{k=1}^{d-2}(\sin \vartheta_{k})^{m_{k}}P_{m_{k+1}-m_{k}}(k+2+2m_{k},\cos\vartheta_{k})\right]_ {m_{d-1}=\ell}\,\] (A.28)
one for every non-decreasing sequence of non-negative integers
\[0\leq m_{{}_{1}}\leq m_{2}\ldots\leq m_{d-2}\leq m_{d-1}=\ell\.\] (A.29)
Here \(\mathcal{C}_{\ell\vec{m}}\) is a normalisation constant which we shall determine below.
We will now argue that these spherical harmonics form an orthonormal set: any two harmonics with distinct \(e^{i\varphi}\) factors are evidently orthogonal. Thus, we need to address only the case where \(e^{i\varphi}\) factors are the same. Without loss of generality, let us assume that the dependences on \(\vartheta_{k}\) for all \(k<i\) are also the same between the two spherical harmonics for some \(i<d-1\), and they differ first on their \(\vartheta_{i}\) dependence, i.e., we consider two spherical harmonics in the above set with \(m_{k}=m_{k}^{\prime}\) for all \(k\leq i\), but have \(m_{i+1}\neq m_{i+1}^{\prime}\). The inner product between these two spherical harmonics then has a factor
\[\int_{0}^{\pi}d\vartheta_{i}(\sin\vartheta_{i})^{i+2m_{i}}P_{m_{i+1}-m_{i}}(i +2+2m_{i},\cos\vartheta_{i})P_{m_{i+1}^{\prime}-m_{i}}(i+2+2m_{i},\cos \vartheta_{i})\,\] (A.30)
which then vanishes using Legendre orthogonality (see Eq.(A.11)) on \(\mathbb{S}^{i+2m_{i}+1}\). The mutual orthogonality along with the counting in Eq.(A.27) proves then that we have indeed constructed a complete set of spherical harmonics of degree \(\ell\) on \(\mathbb{S}^{d-1}\).
We will conclude this discussion by normalising the spherical harmonics constructed above. The norm computation reduces to a product integral like the one above, which can then be evaluated using Eq.(A.11). Thus, the normalisation of the spherical harmonic given in Eq.(A.28) is given by
\[\begin{split}|\mathcal{C}_{\ell\vec{m}}|^{-2}& \equiv 2\pi\prod_{i=1}^{d-2}\int_{0}^{\pi}d\vartheta_{i}(\sin \vartheta_{i})^{i+2m_{i}}P_{m_{i+1}-m_{i}}^{2}(i+2+2m_{i},\cos\vartheta_{i}) \\ &=2\pi\prod_{i=1}^{d-2}\frac{|\mathbb{S}^{i+2m_{i}+1}|}{|\mathbb{ S}^{i+2m_{i}}|N_{HH}(i+2m_{i}+2,m_{i+1}-m_{i})}\,\end{split}\] (A.31)
With this, we have a concrete realisation of the orthonormal spherical harmonics \(\mathscr{Y}_{\ell\vec{m}}(\hat{r})\), using which multipole moments could be computed for a given charge distribution.
### STF tensors in \(\mathbb{R}^{d}\) and cartesian multipole moments
Till now, we have described the multipole expansion in terms of an orthonormal basis of spherical harmonics \(\mathscr{Y}_{\ell\vec{m}}(\hat{r})\) and the corresponding spherical multipole moments \(q_{{}_{\ell\vec{m}}}\). We will now describe an alternate formalism based on a more symmetric, but over-complete basis of spherical harmonics made of Legendre polynomials about arbitrary directions (we will call this basis an STF basis). A general spherical harmonic in STF basis is naturally described by symmetric trace-free (STF) tensors with constant cartesian components.
For definiteness, we consider spherical harmonics of the form
\[\begin{split}\mathcal{N}_{d,\ell}N_{HH}(d,\ell)P_{\ell}(d,\hat {\kappa}\cdot\hat{r})&=\sum_{k}\frac{\Gamma\left(\nu-k\right)}{2 ^{2k}k!\Gamma(\nu)}\frac{(-)^{k}(\hat{\kappa}\cdot\hat{r})^{\ell-2k}}{(\ell-2k )!}\\ &=\frac{1}{\ell!}\hat{\kappa}_{i_{1}}\hat{\kappa}_{i_{2}}\ldots \hat{\kappa}_{i_{\ell}}\hat{r}^{<i_{1}}\hat{r}^{i_{2}}\ldots\hat{r}^{i_{\ell} >}\\ &=\frac{1}{\ell!}\hat{\kappa}_{i_{1}}\hat{\kappa}_{i_{2}}\ldots \hat{\kappa}_{i_{\ell}}\hat{r}^{j_{1}}\hat{r}^{j_{2}}\ldots\hat{r}^{j_{\ell}} \Pi_{<j_{1}j_{2}\ldots j_{\ell}>}^{<i_{1}i_{2}\ldots i_{\ell}>}\,\end{split}\] (A.32)
where \(\hat{\kappa}\) is an arbitrary unit vector, and in the last line we have written the spherical harmonic as a projected contraction of two tensors. The angular bracket here denotes the symmetric trace-free (STF) projection and \(\Pi\) is the STF-projector. An explicit expression that follows from the above
definition is
\[\begin{split}\hat{r}^{<i_{1}}\hat{r}^{i_{2}}\dots\hat{r}^{i_{\ell}>}& =\sum_{k}\frac{(-)^{k}\Gamma\left(\nu-k\right)}{2^{k}\Gamma(\nu)}\\ &\times\left\{\hat{r}^{i_{1}}\hat{r}^{i_{2}}\dots\hat{r}^{i_{\ell -2k}}\delta^{i_{\ell+1-2k}i_{\ell+2-2k}}\dots\delta^{i_{\ell-1}i_{\ell}}+ \text{distinct index permutations}\right\}\,.\end{split}\] (A.33)
Here the sum within the curly braces runs over all index permutations of the set \(\{i_{1},\dots,i_{\ell}\}\) which give distinct answers. The number of such distinct permutations can be counted as follows: there are \(\binom{\ell}{2k}\) ways of choosing the subset of indices that go into Kronecker deltas, and \(\frac{(2k)!}{2^{k}k!}=(2k-1)!!\) distinct ways of pairing a given subset.17 Thus, the total number of distinct permutations is \((2k-1)!!\binom{\ell}{2k}=\frac{\ell!}{2^{k}k!(\ell-2k)!}\). With this counting of distinct permutations, it is then easy to check that contracting \(\hat{r}^{<i_{1}}\hat{r}^{i_{2}}\dots\hat{r}^{i_{\ell}>}\) with \(\frac{1}{\ell!}\hat{\kappa}_{i_{1}}\hat{\kappa}_{i_{2}}\dots\hat{\kappa}_{i_{\ell}}\) does give \(\mathcal{N}_{d,\ell}N_{HH}(d,\ell)P_{\ell}(d,\hat{\kappa}\cdot\hat{r})\). The STF projector can also be given a closed-form expression as:
Footnote 17: The number of pairings can be counted as follows: the \((2k)!\) ways to permute the subset of indices on Kronecker deltas. Exchanging an index within a pair, as well as permuting the pair as a whole does not change the final resultant pairings, i.e., there is a \((\mathbb{Z}_{2})^{k}\times\mathbb{S}_{k}\) automorphism group which acts freely and transitively on the equivalence class of permutations which result in a given pairing. We hence obtain the number of distinct pairings by dividing out the cardinality of the automorphism group.
\[\begin{split}\Pi^{<i_{1}i_{2}\dots i_{\ell}>}_{<j_{1}j_{2}\dots j _{\ell}>}&=\sum_{k}\frac{(-)^{k}\Gamma\left(\nu-k\right)}{2^{k} k!(\ell-2k)!\Gamma(\nu)}\\ &\times\delta^{(i_{1}}_{j_{1}}\delta^{i_{2}}_{j_{2}}\dots\delta^{ i_{\ell-2k}}_{j_{\ell-2k}}\delta^{i_{\ell+1-2k}i_{\ell+2-2k}}\dots\delta^{i_{\ell-1 }i_{\ell}})\delta_{j_{\ell+1-2k}j_{\ell+2-2k}}\dots\delta_{j_{\ell-1}j_{\ell} }\,\,,\end{split}\] (A.34)
where the \((i_{1}\dots i_{\ell})\) denotes a symmetric projection. To elucidate the arguments above, we will now write down the explicit expressions of \(\hat{r}^{<i_{1}}\hat{r}^{i_{2}}\dots\hat{r}^{i_{\ell}>}\) for \(\ell\leq 5\). We have
\[\begin{split}\hat{r}^{<i_{1}>}\equiv\hat{r}^{i_{1}}\,\,,\,\,\hat{r}^{<i_{1}}\hat{r}^{i_{2}>}\equiv\hat{r}^{i_{1}}\hat{r}^{i_{2}}-\frac{1}{d}\delta^{i_{1}i_{2}}\,\\ \hat{r}^{<i_{1}}\hat{r}^{i_{2}}\hat{r}^{i_{3}>}\equiv\hat{r}^{i_{1}}\hat{r}^{i_{2}}\hat{r}^{i_{3}}-\frac{1}{d+2}\left(\hat{r}^{i_{1}}\delta^{i_{2}i_{3}}+\hat{r}^{i_{2}}\delta^{i_{1}i_{3}}+\hat{r}^{i_{3}}\delta^{i_{1}i_{2}}\right)\,\,,\end{split}\] (A.35)
for \(\ell\leq 3\). For \(\ell=4\), we have
\[\begin{split}\hat{r}^{<i_{1}}&\hat{r}^{i_{2}}\hat{r }^{i_{3}}\hat{r}^{i_{4}>}\equiv\hat{r}^{i_{1}}\hat{r}^{i_{2}}\hat{r}^{i_{3}} \hat{r}^{i_{4}}\\ &-\frac{1}{d+4}\left(\hat{r}^{i_{1}}\hat{r}^{i_{2}}\delta^{i_{3}i_ {4}}+\hat{r}^{i_{1}}\hat{r}^{i_{3}}\delta^{i_{2}i_{4}}+\hat{r}^{i_{1}}\hat{r}^ {i_{4}}\delta^{i_{2}i_{3}}+\hat{r}^{i_{2}}\hat{r}^{i_{3}}\delta^{i_{1}i_{4}}+ \hat{r}^{i_{2}}\hat{r}^{i_{4}}\delta^{i_{1}i_{3}}+\hat{r}^{i_{3}}\hat{r}^{i_{4} }\delta^{i_{1}i_{2}}\right)\\ &+\frac{1}{(d+4)(d+2)}\left(\delta^{i_{1}i_{2}}\delta^{i_{3}i_{4} }+\delta^{i_{1}i_{3}}\delta^{i_{2}i_{4}}+\delta^{i_{1}i_{4}}\delta^{i_{2}i_{3} }\right)\,\,,\end{split}\] (A.36)
and for \(\ell=5\), we get
\[\begin{split}\hat{r}^{<i_{1}}&\hat{r}^{i_{2}}\hat{r}^{i_{3}}\hat{r}^{i_{4}}\hat{r}^{i_{5}>}\equiv\hat{r}^{i_{1}}\hat{r}^{i_{2}}\hat{r}^{i_{3}}\hat{r}^{i_{4}}\hat{r}^{i_{5}}\\ &-\frac{1}{d+6}\Big{(}\hat{r}^{i_{1}}\hat{r}^{i_{2}}\hat{r}^{i_{3}}\delta^{i_{4}i_{5}}+\hat{r}^{i_{1}}\hat{r}^{i_{2}}\hat{r}^{i_{4}}\delta^{i_{3}i_{5}}+\hat{r}^{i_{1}}\hat{r}^{i_{3}}\hat{r}^{i_{4}}\delta^{i_{2}i_{5}}+\hat{r}^{i_{2}}\hat{r}^{i_{3}}\hat{r}^{i_{4}}\delta^{i_{1}i_{5}}\\ &\qquad+\hat{r}^{i_{5}}\hat{r}^{i_{1}}\hat{r}^{i_{2}}\delta^{i_{3}i_{4}}+\hat{r}^{i_{5}}\hat{r}^{i_{1}}\hat{r}^{i_{3}}\delta^{i_{2}i_{4}}+\hat{r}^{i_{5}}\hat{r}^{i_{1}}\hat{r}^{i_{4}}\delta^{i_{2}i_{3}}+\hat{r}^{i_{5}}\hat{r}^{i_{2}}\hat{r}^{i_{3}}\delta^{i_{1}i_{4}}+\hat{r}^{i_{5}}\hat{r}^{i_{2}}\hat{r}^{i_{4}}\delta^{i_{1}i_{3}}+\hat{r}^{i_{5}}\hat{r}^{i_{3}}\hat{r}^{i_{4}}\delta^{i_{1}i_{2}}\Big{)}\\ &+\frac{1}{(d+6)(d+4)}\Big{(}\hat{r}^{i_{1}}\delta^{i_{2}i_{3}}\delta^{i_{4}i_{5}}+\hat{r}^{i_{1}}\delta^{i_{2}i_{4}}\delta^{i_{3}i_{5}}+\hat{r}^{i_{1}}\delta^{i_{2}i_{5}}\delta^{i_{3}i_{4}}\\ &\qquad+\hat{r}^{i_{2}}\delta^{i_{1}i_{3}}\delta^{i_{4}i_{5}}+\hat{r}^{i_{2}}\delta^{i_{1}i_{4}}\delta^{i_{3}i_{5}}+\hat{r}^{i_{2}}\delta^{i_{1}i_{5}}\delta^{i_{3}i_{4}}+\hat{r}^{i_{3}}\delta^{i_{1}i_{2}}\delta^{i_{4}i_{5}}+\hat{r}^{i_{3}}\delta^{i_{1}i_{4}}\delta^{i_{2}i_{5}}+\hat{r}^{i_{3}}\delta^{i_{1}i_{5}}\delta^{i_{2}i_{4}}\\ &\qquad+\hat{r}^{i_{4}}\delta^{i_{1}i_{2}}\delta^{i_{3}i_{5}}+\hat{r}^{i_{4}}\delta^{i_{1}i_{3}}\delta^{i_{2}i_{5}}+\hat{r}^{i_{4}}\delta^{i_{1}i_{5}}\delta^{i_{2}i_{3}}+\hat{r}^{i_{5}}\delta^{i_{1}i_{2}}\delta^{i_{3}i_{4}}+\hat{r}^{i_{5}}\delta^{i_{1}i_{3}}\delta^{i_{2}i_{4}}+\hat{r}^{i_{5}}\delta^{i_{1}i_{4}}\delta^{i_{2}i_{3}}\Big{)}\,.\end{split}\] (A.37)
The reader can check that the expressions on the RHS are completely symmetric under permutations of indices, and vanish if we take a trace over any two indices. Further, our counting of distinct permutations can also be checked for every term written above.
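A quick numerical spot-check (ours) of the \(\ell=3\) expression: for a random unit vector in \(d=6\), the tensor built above is symmetric and trace-free on every index pair.

```python
# Numerical spot-check (ours): the l = 3 STF combination quoted above is
# symmetric and trace-free on any index pair, here for a random unit vector in d = 6.
import numpy as np

rng = np.random.default_rng(0)
d = 6
n = rng.normal(size=d)
n /= np.linalg.norm(n)
I = np.eye(d)

T = np.einsum('i,j,k->ijk', n, n, n) - (
        np.einsum('i,jk->ijk', n, I) +
        np.einsum('j,ik->ijk', n, I) +
        np.einsum('k,ij->ijk', n, I)) / (d + 2)

assert np.allclose(T, np.transpose(T, (1, 0, 2)))   # symmetric in (i, j)
assert np.allclose(np.einsum('iik->k', T), 0.0)     # trace on (i, j) vanishes
assert np.allclose(np.einsum('ijj->i', T), 0.0)     # trace on (j, k) vanishes
```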
A more succinct way to summarise the permutations/symmetrisations described above is to work instead with the homogeneous harmonic polynomials (HHPs) in cartesian coordinates
\[x^{<i_{1}}x^{i_{2}}\ldots x^{i_{\ell}>}\equiv r^{\ell}\ \hat{r}^{<i_{1}}\hat{r}^{i_{2}}\ldots\hat{r}^{i_{\ell}>}=\left[\sum_{k=0}^{\left\lfloor\frac{\ell}{2}\right\rfloor}\ \frac{\Gamma\left(\nu-k\right)}{k!\ \Gamma\left(\nu\right)}\left(\frac{r}{2}\right)^{2k}\left(-\nabla^{2}\right)^{k}\right]_{\nu=\frac{d}{2}+\ell-1}x^{i_{1}}x^{i_{2}}\ldots x^{i_{\ell}}. \tag{100}\]
The relation to generalised Legendre polynomials then follows from
\[\begin{split}\frac{1}{\ell!}\kappa_{i_{1}}\kappa_{i_{2}}\ldots\kappa_{i_{\ell}}x^{<i_{1}}x^{i_{2}}\ldots x^{i_{\ell}>}&\equiv\left[\sum_{k=0}^{\left\lfloor\frac{\ell}{2}\right\rfloor}\ \frac{\Gamma\left(\nu-k\right)}{k!\ \Gamma\left(\nu\right)}\left(\frac{r}{2}\right)^{2k}\left(-\nabla^{2}\right)^{k}\right]_{\nu=\frac{d}{2}+\ell-1}\frac{(\vec{\kappa}\cdot\vec{r})^{\ell}}{\ell!}\\ &=\left[\sum_{k=0}^{\left\lfloor\frac{\ell}{2}\right\rfloor}\ \frac{\Gamma\left(\nu-k\right)}{k!\ \Gamma\left(\nu\right)}\left(-\frac{\kappa^{2}r^{2}}{4}\right)^{k}\frac{(\vec{\kappa}\cdot\vec{r})^{\ell-2k}}{(\ell-2k)!}\right]_{\nu=\frac{d}{2}+\ell-1}\\ &=\mathcal{N}_{d,\ell}N_{HH}(d,\ell)(\kappa r)^{\ell}P_{\ell}\left(d,\hat{\kappa}\cdot\hat{r}\right)\,\end{split} \tag{101}\]
where, in the last step, we have used Eq.(100). The STF basis for multipole expansion in flat spacetime is often introduced in terms of these cartesian HHPs (See e.g.[98]). In dS spacetime (and more generally in cosmology), the absence of global cartesian coordinates limits their scope. The STF basis for spherical harmonics is, however, a useful tool for multipole expansion in such spacetimes, since isotropy is still a true symmetry.
We will now describe how the STF basis relates to the description of spherical harmonics given before. For any given \(\ell\), we can form
\[N_{H}(d,\ell)\equiv\binom{\ell+d-1}{\ell} \tag{102}\]
number of STF harmonics of the form \(\hat{r}^{<i_{1}}\hat{r}^{i_{2}}\ldots\hat{r}^{i_{\ell}>}\). The above binomial coefficient counts the number of ways \(d\) directions can be filled into \(\ell\) indices. The combinatorics here is identical to the bose-counting problem familiar from elementary statistical mechanics, where one counts the ways in which \(d\) bosons could be filled into \(\ell\) degenerate energy levels. All the STF harmonics are not however linearly independent, they obey \(N_{H}(d,\ell-2)\) number of conditions of the form
\[\delta_{i_{1}i_{2}}\hat{r}^{<i_{1}}\hat{r}^{i_{2}}\ldots\hat{r}^{i_{\ell}>}=0. \tag{103}\]
They hence span a vector space of spherical harmonics of dimension
\[N_{H}(d,\ell)-N_{H}(d,\ell-2)=N_{HH}(d,\ell)\, \tag{104}\]
where the equality follows by using the explicit forms in Eqs.(101) and (102). This shows that the harmonics \(\hat{r}^{<i_{1}}\hat{r}^{i_{2}}\ldots\hat{r}^{i_{\ell}>}\) indeed form an overcomplete basis of spherical harmonics of degree \(\ell\).
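The two counting formulae used here are easy to verify numerically; a small sketch (ours):

```python
# Quick combinatorial check (ours) of the counting identities used above:
# N_H(d,l) - N_H(d,l-2) = N_HH(d,l)  and  sum_{m<=l} N_HH(d-1,m) = N_HH(d,l).
from math import comb

def N_H(d, l):
    # number of STF harmonics r^<i1...il> one can write down
    # (= multisets of l indices drawn from d values)
    return comb(l + d - 1, l) if l >= 0 else 0

def N_HH(d, l):
    return N_H(d, l) - N_H(d, l - 2)

for d in range(3, 9):
    for l in range(0, 12):
        assert N_HH(d, l) == (2 * l + d - 2) * comb(l + d - 3, l) // (d - 2)
        assert N_HH(d, l) == sum(N_HH(d - 1, m) for m in range(l + 1))
```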
The completeness means the following: say we are given a spherical harmonic \(\mathscr{Y}_{\ell}(\hat{r})\) of degree \(\ell\) on \(\mathbb{S}^{d-1}\). We can then define a symmetric trace-free (STF) tensor \(\mathscr{Y}_{i_{1}i_{2}\ldots i_{\ell}}\) of rank \(\ell\) in \(\mathbb{R}^{d}\) such that
\[\mathscr{Y}_{\ell}(\hat{r})=\frac{1}{\ell!}\mathscr{Y}_{i_{1}i_{2}\ldots i_{ \ell}}\hat{r}^{<i_{1}}\hat{r}^{i_{2}}\ldots\hat{r}^{i_{\ell}>}. \tag{105}\]
The orthonormal basis of spherical harmonics constructed in the previous subsection then defines an orthonormal set of STF tensors
\[\mathscr{Y}_{\ell\bar{m}}(\hat{r})=\frac{1}{\ell!}\mathscr{Y}_{i_{1}i_{2}\dots i_{\ell}}^{(\ell\bar{m})}\hat{r}^{<i_{1}}\hat{r}^{i_{2}}\dots\hat{r}^{i_{\ell}>}. \tag{110}\]
Further, the inner product on the space of STF tensors is induced from the standard inner product on the space of functions on \(\mathbb{S}^{d-1}\). To get an explicit expression, consider the following integral
\[\begin{split}\int_{\hat{r}\in\mathbb{S}^{d-1}}\frac{1}{\ell!}&\kappa_{i_{1}}\kappa_{i_{2}}\dots\kappa_{i_{\ell}}\hat{r}^{<i_{1}}\hat{r}^{i_{2}}\dots\hat{r}^{i_{\ell}>}\times\frac{1}{\ell!}\bar{\kappa}_{j_{1}}\bar{\kappa}_{j_{2}}\dots\bar{\kappa}_{j_{\ell}}\hat{r}^{<j_{1}}\hat{r}^{j_{2}}\dots\hat{r}^{j_{\ell}>}\\ &=[\mathcal{N}_{d,\ell}N_{HH}(d,\ell)]^{2}(\kappa\bar{\kappa})^{\ell}\int_{\hat{r}\in\mathbb{S}^{d-1}}P_{\ell}(d,\hat{\kappa}\cdot\hat{r})P_{\ell}(d,\hat{\bar{\kappa}}\cdot\hat{r})\\ &=\mathcal{N}_{d,\ell}^{2}N_{HH}(d,\ell)|\mathbb{S}^{d-1}|(\kappa\bar{\kappa})^{\ell}P_{\ell}(d,\hat{\kappa}\cdot\hat{\bar{\kappa}})\\ &=\mathcal{N}_{d,\ell}|\mathbb{S}^{d-1}|\ \frac{1}{\ell!}\kappa_{<i_{1}}\kappa_{i_{2}}\dots\kappa_{i_{\ell}>}\,\bar{\kappa}^{<i_{1}}\bar{\kappa}^{i_{2}}\dots\bar{\kappa}^{i_{\ell}>}\.\end{split} \tag{111}\]
For example, the STF tensors corresponding to the orthonormal spherical harmonics have an inner product given by
\[\frac{\mathcal{N}_{d,\ell}|\mathbb{S}^{d-1}|}{\ell!}\mathscr{Y}_{(\ell\bar{m} ^{\prime})}^{*<i_{1}i_{2}\dots i_{\ell}>}\mathscr{Y}_{<i_{1}i_{2}\dots i_{ \ell}>}^{(\ell\bar{m})}=\delta_{\bar{m}^{\prime}}^{\bar{m}}. \tag{112}\]
We recognise \(\mathcal{N}_{d,\ell}|\mathbb{S}^{d-1}|\) here as the conversion factor between the STF tensor inner product and the standard functional inner product between the spherical harmonics. The same factor also appears in the statement of spherical harmonic addition theorem, stated in terms of STF tensors:
\[\frac{\mathcal{N}_{d,\ell}|\mathbb{S}^{d-1}|}{\ell!}\sum_{\vec{m}}\mathscr{Y} _{(\ell\bar{m})}^{*<i_{1}i_{2}\dots i_{\ell}>}\mathscr{Y}_{<j_{1}j_{2}\dots j _{\ell}>}^{(\ell\bar{m})}=\Pi_{<j_{1}j_{2}\dots j_{\ell}>}^{<i_{1}i_{2}\dots i _{\ell}>}. \tag{113}\]
This important relation can be proved in many ways: one way is to use Eq.(104) to convert the standard addition theorem into STF tensors. Another ab initio derivation is to first argue that LHS should be proportional to RHS for symmetry reasons and then fix the normalisation by using the orthonormality relation Eq.(112).
### Green functions in Minkowski spacetime
We will begin by briefly reviewing the Green functions of the wave operator (i.e., the massless scalar operator) in \(\mathbb{R}^{d,1}\). This theory is standard, although the notations and normalisations for Green functions in \(d\neq 2,3\) are non-standard. Thus, this subsection mainly serves to establish our notation. We will state our results with an eye towards their generalisation to dS Green functions.
We begin with the unique spherically symmetric eigenfunction of the Laplacian in \(\mathbb{R}^{d}\) with eigenvalue \(-\omega^{2}\) :
\[J_{0}(d,\omega r)\equiv{}_{0}F_{1}\left[\frac{d}{2},-\frac{\omega^{2}r^{2}}{4 }\right]. \tag{114}\]
We can construct a whole tower of descendants from this eigenfunction by taking an STF derivative
\[\mathscr{Y}_{\ell}(-\vec{\nabla})J_{0}(d,\omega r)\equiv\omega^{\ell}\ J_{ \ell}(d,\omega r)\ \mathscr{Y}_{\ell}(\vec{n})=\omega^{\nu-\frac{d}{2}+1}\ J_{\ell}(d,\omega r)\ \mathscr{Y}_{\ell}(\vec{n})\, \tag{115}\]
where we have defined (we remind the reader that \(\nu\equiv\ell+\frac{d}{2}-1\))
\[J_{\ell}(d,\omega r)\equiv\Gamma\left(\frac{d}{2}\right)\left(\frac{\omega r}{2} \right)^{1-\frac{d}{2}}\ J_{\nu}(\omega r)\equiv\frac{\Gamma(d/2)}{\Gamma(1+ \nu)}\left(\frac{\omega r}{2}\right)^{\nu-\frac{d}{2}+1}\ \ {}_{0}F_{1}\left[1+\nu,-\frac{ \omega^{2}r^{2}}{4}\right]. \tag{111}\]
The notation is motivated by the fact that the functions that appear here generalise the Bessel J functions in the \(d=2\) version of the above problem. We can also define the functions analogous to Neumann and Hankel functions. We will define the _Neumann Green function_ via
\[\begin{split}& N_{\ell}(d,\omega r)\equiv-\frac{1}{4}\frac{Y_{ \nu}(\omega r)}{(2\pi\omega r)^{\frac{d}{2}-1}}\\ &=\frac{\Gamma(\nu)}{(4\pi)^{d/2}}\left(\frac{\omega r}{2} \right)^{-\nu-\frac{d}{2}+1}\left\{{}_{0}F_{1}\left[1-\nu,-\frac{\omega^{2}r^ {2}}{4}\right]-\frac{\pi\cot\nu\pi}{\Gamma(\nu)\Gamma(1+\nu)}\left(\frac{ \omega r}{2}\right)^{2\nu}\ \ {}_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r^{2}}{4}\right]\right\}\.\end{split} \tag{112}\]
In the above definition, for half-integer \(\nu\) (i.e., for \(d\) odd), we can set \(\cot\nu\pi=0\), whereas for integer \(\nu\), the divergence in the \(\cot\nu\pi\) cancels the divergence in the first term and this formula should be interpreted as a limit. Green functions are normalised such that
\[-(\vec{\nabla}^{2}+\omega^{2})[\omega^{\nu+\frac{d}{2}-1}N_{\ell}(d,\omega r) \mathscr{Y}_{\ell}(\vec{n})]=\mathscr{Y}_{\ell}(-\vec{\nabla})\delta^{d}( \vec{r}). \tag{113}\]
Since the RHS here is a multipole source, the combination \(\omega^{\nu+\frac{d}{2}-1}N_{\ell}(d,\omega r)\mathscr{Y}_{\ell}(\vec{n})\) should then be interpreted as the amplitude of standing wave sourced by such a multipole source. We term this a _standing wave_ since it is an even function of frequency. In contrast, the outgoing/ingoing waves are denoted by \(\omega^{\nu+\frac{d}{2}-1}H_{\ell}^{\pm}(d,\omega r)\) respectively. We will refer to them as _Hankel Green functions._ Given a spherical harmonic \(\mathscr{Y}_{\ell}(\vec{n})\) of degree \(\ell\) on \(\mathbb{S}^{d-1}\), both these Green functions satisfy
\[-(\vec{\nabla}^{2}+\omega^{2})[\omega^{\nu+\frac{d}{2}-1}H_{\ell}^{\pm}(d, \omega r)\mathscr{Y}_{\ell}(\vec{n})]=\mathscr{Y}_{\ell}(-\vec{\nabla}) \delta^{d}(\vec{r}). \tag{114}\]
The outgoing/ingoing conditions are imposed by taking \(H_{\ell}^{\pm}(d,\omega r)\) to be analytic in the upper/lower half plane of complex frequency respectively. The notation here is again motivated by the fact that these functions generalise the Hankel functions in \(d=2\) (up to normalisations). Their explicit forms are given by
\[\begin{split}& H_{\ell}^{\pm}(d,\omega r)\equiv\frac{\pm i}{4} \frac{H_{\nu}^{1,2}(\omega r)}{(2\pi\omega r)^{\frac{d}{2}-1}}\equiv N_{\ell} (d,\omega r)\pm\frac{i\pi}{\Gamma(d/2)(4\pi)^{d/2}}J_{\ell}(d,\omega r)\\ &=\frac{\Gamma(\nu)}{(4\pi)^{d/2}}\left(\frac{\omega r}{2} \right)^{-\nu-\frac{d}{2}+1}\left\{{}_{0}F_{1}\left[1-\nu,-\frac{\omega^{2}r ^{2}}{4}\right]\pm(1\pm i\cot\nu\pi)\frac{2\pi i}{\Gamma(\nu)^{2}}\frac{1}{2 \nu}\left(\frac{\omega r}{2}\right)^{2\nu}\ \ {}_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r^{2}}{4} \right]\right\}\.\end{split} \tag{115}\]
As in the case of Neumann Green functions, for half-integer \(\nu\) (i.e., for \(d\) odd), we can set \(\cot\nu\pi=0\), whereas for integer \(\nu\) the above expression is indeterminate and should be interpreted as a limit.
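As a consistency check (ours), the Bessel-function and \({}_{0}F_{1}\) forms of \(J_{\ell}(d,\omega r)\) quoted above can be compared numerically:

```python
# Consistency check (ours) of the two equivalent forms of J_l(d, z):
# the Bessel-J form and the 0F1 (hypergeometric) form quoted above.
import numpy as np
from math import gamma
from scipy.special import jv, hyp0f1

def J_bessel(d, l, z):
    nu = l + d / 2 - 1
    return gamma(d / 2) * (z / 2) ** (1 - d / 2) * jv(nu, z)

def J_hyp(d, l, z):
    nu = l + d / 2 - 1
    return gamma(d / 2) / gamma(1 + nu) * (z / 2) ** (nu - d / 2 + 1) * hyp0f1(1 + nu, -z**2 / 4)

z = np.linspace(0.1, 10.0, 50)
for d in (3, 5, 7):
    for l in (0, 1, 2, 3):
        assert np.allclose(J_bessel(d, l, z), J_hyp(d, l, z))
```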
As in the case of Bessel J functions, these Green functions could also be obtained by STF-differentiating their corresponding primary eigenfunction at \(\ell=0\), viz.,
\[\begin{split}\mathscr{Y}_{\ell}(-\vec{\nabla})N_{0}(d,\omega r )&=\omega^{\ell}\ N_{\ell}(d,\omega r)\ \mathscr{Y}_{\ell}(\vec{n})=\omega^{\nu-\frac{d}{2}+1}\ N_{\ell}(d,\omega r)\ \mathscr{Y}_{\ell}(\vec{n})\,\\ \mathscr{Y}_{\ell}(-\vec{\nabla})H_{0}^{\pm}(d,\omega r)& =\omega^{\ell}\ H_{\ell}^{\pm}(d,\omega r)\ \mathscr{Y}_{\ell}(\vec{n})=\omega^{\nu-\frac{d}{2}+1}\ H_{\ell}^{\pm}(d, \omega r)\ \mathscr{Y}_{\ell}(\vec{n})\.\end{split} \tag{116}\]
A related statement is the _multipole-expansion_ of these Green functions, which, in our normalisations, takes the following form:
\[\begin{split}& J_{0}(d,\omega|\vec{r}-\vec{r}_{0}|)=\sum_{\ell m}| \mathbb{S}^{d-1}|\ \mathscr{Y}_{\ell\bar{m}}(\hat{r})\mathscr{Y}_{\ell\bar{m}}(\hat{r}_{0})^{*}J_ {\ell}(d,\omega r)J_{\ell}(d,\omega r_{0})\,\\ & N_{0}(d,\omega|\vec{r}-\vec{r}_{0}|)=\sum_{\ell m}|\mathbb{S}^{d- 1}|\ \mathscr{Y}_{\ell\bar{m}}(\hat{r})\mathscr{Y}_{\ell\bar{m}}(\hat{r}_{0})^{*} \\ &\qquad\times\Big{\{}\Theta(r<r_{0})J_{\ell}(d,\omega r)N_{\ell}( d,\omega r_{0})+\Theta(r>r_{0})J_{\ell}(d,\omega r_{0})N_{\ell}(d,\omega r) \Big{\}}\,\\ & H_{0}^{\pm}(d,\omega|\vec{r}-\vec{r}_{0}|)=\sum_{\ell\bar{m}}| \mathbb{S}^{d-1}|\ \mathscr{Y}_{\ell\bar{m}}(\hat{r})\mathscr{Y}_{\ell\bar{m}}(\hat{r}_{0})^{*} \\ &\qquad\times\Big{\{}\Theta(r<r_{0})J_{\ell}(d,\omega r)H_{\ell}^ {\pm}(d,\omega r_{0})+\Theta(r>r_{0})J_{\ell}(d,\omega r_{0})H_{\ell}^{\pm}( d,\omega r)\Big{\}}\.\end{split} \tag{100}\]
Here, the set of functions \(\mathscr{Y}_{\ell\bar{m}}(\hat{r})\) for different \(\vec{m}\) denote an orthonormal basis of \(\mathbb{S}^{d-1}\) spherical harmonics of degree \(\ell\). Further, in the equation above, the symbol
\[|\mathbb{S}^{d-1}|\equiv\frac{2\pi^{\frac{d}{2}}}{\Gamma\left(\frac{d}{2} \right)} \tag{101}\]
denotes the volume of the unit sphere. The argument for the above expansion is well-known within the theory of Green functions: we first expand the LHS in terms of eigenfunctions and then fix the coefficients by demanding continuity and a unit jump in the radial derivative. The jump can be readily evaluated using the Wronskian formulae18
Footnote 18: Our Wronskian convention is \(\mathscr{W}[f(z),g(z)]\equiv f\partial_{z}g-g\partial_{z}f\).
\[\mathscr{W}[N_{\ell}(d,z),J_{\ell}(d,z)]=\mathscr{W}[H_{\ell}^{\pm}(d,z),J_{ \ell}(d,z)]=\frac{1}{|\mathbb{S}^{d-1}|z^{d-1}}. \tag{102}\]
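The normalisation in this Wronskian identity can also be verified numerically; a minimal sketch (ours), using the Bessel-function forms of \(J_{\ell}\) and \(N_{\ell}\) quoted above:

```python
# Numerical check (ours) of W[N_l, J_l] = 1 / (|S^{d-1}| z^{d-1}),
# with W[f, g] = f g' - g f', using scipy's Bessel functions.
import numpy as np
from math import gamma, pi
from scipy.special import jv, yv, jvp, yvp

def J_and_N_with_derivatives(d, l, z):
    nu = l + d / 2 - 1
    pJ = gamma(d / 2) * (z / 2) ** (1 - d / 2)    # prefactor of J_l
    pN = -0.25 * (2 * pi * z) ** (1 - d / 2)      # prefactor of N_l
    J, dJ = pJ * jv(nu, z), pJ * ((1 - d / 2) / z * jv(nu, z) + jvp(nu, z))
    N, dN = pN * yv(nu, z), pN * ((1 - d / 2) / z * yv(nu, z) + yvp(nu, z))
    return J, dJ, N, dN

d, l = 5, 2
z = np.linspace(0.5, 8.0, 20)
J, dJ, N, dN = J_and_N_with_derivatives(d, l, z)
S = 2 * pi ** (d / 2) / gamma(d / 2)              # volume of S^{d-1}
assert np.allclose(N * dJ - J * dN, 1.0 / (S * z ** (d - 1)))
```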
We will be interested here in the multipole expansion of the retarded/outgoing Green function \(\omega^{d-2}H_{0}^{+}(d,\omega|\vec{r}-\vec{r}_{0}|)\), which, using the relations quoted earlier, we can rewrite entirely in terms of \({}_{0}F_{1}\) functions:
\[\begin{split}&\omega^{d-2}H_{0}^{+}(d,\omega|\vec{r}-\vec{r}_{0}|)=\omega^{d-2}H_{0}^{+}(d,\omega|\vec{r}_{0}-\vec{r}|)\\ &=\frac{i\pi}{2}\sum_{\ell\bar{m}}\frac{(rr_{0})^{\nu-\frac{d}{2}+1}}{\Gamma(1+\nu)^{2}}\left(\frac{\omega}{2}\right)^{2\nu}\ \mathscr{Y}_{\ell\bar{m}}(\hat{r})\mathscr{Y}_{\ell\bar{m}}(\hat{r}_{0})^{*}\ {}_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r^{2}}{4}\right]\,{}_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r_{0}^{2}}{4}\right]\\ &\quad+\sum_{\ell\bar{m}}\frac{1}{2\nu}\frac{r_{<}^{\nu-\frac{d}{2}+1}}{r_{>}^{\nu+\frac{d}{2}-1}}\ \mathscr{Y}_{\ell\bar{m}}(\hat{r})\mathscr{Y}_{\ell\bar{m}}(\hat{r}_{0})^{*}\ {}_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r_{<}^{2}}{4}\right]\\ &\quad\times\left\{{}_{0}F_{1}\left[1-\nu,-\frac{\omega^{2}r_{>}^{2}}{4}\right]-\frac{\pi\cot\nu\pi}{\Gamma(\nu)\Gamma(1+\nu)}\left(\frac{\omega r_{>}}{2}\right)^{2\nu}\ {}_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r_{>}^{2}}{4}\right]\right\}\,.\end{split} \tag{103}\]
Here we have used a commonly used notation in such expansions, viz.,
\[r_{>}\equiv\text{Max}(r,r_{0})\,\quad r_{<}\equiv\text{Min}(r,r_{0}). \tag{104}\]
Further, we have also separated out the real and the imaginary parts of the radial functions.
Consider double integrals of the form
\[\int d^{d}r\ \rho_{1}(\vec{r},\omega)\int d^{d}r_{0}\ \rho_{2}(\vec{r}_{0}, \omega)\ f(d,\omega|\vec{r}-\vec{r}_{0}|) \tag{105}\]
where \(f\) could be any one of the functions discussed above. Using the multipole expansion, such a double integral can be decomposed into an infinite sum of factorised integrals, one each for every spherical harmonic. For the practical computation of radiation reaction, it is then convenient to convert the spherical harmonic sum into an STF expression using Eq.(A.47).
Let us illustrate the above remarks by computing the flat spacetime scalar radiation reaction, which, in a slightly different notation, is explained in detail in references[76, 79]. Say we have an extended scalar source whose emissive part at frequency \(\omega\) is the average source \(\rho_{A}(\omega,\vec{r})\), and whose absorptive part is the difference source \(\rho_{D}(\omega,\vec{r})\). Ignoring all fluctuation effects, the flat spacetime influence phase for this source, after integrating out the massless scalar field about vacuum, can be written down as
\[S_{RR}^{\rm bare}=\int\frac{d\omega}{2\pi}\int d^{d}r_{0}\int d^{d}r\ [\rho_{D}(\vec{r}_{0},\omega)]^{*}\ \rho_{A}(\vec{r},\omega)\ \omega^{d-2}H_{0}^{+}(d,\omega|\vec{r}_{0}-\vec{r}|)\.\] (A.62)
Here \(\omega^{d-2}H_{0}^{+}(d,\omega|\vec{r}_{0}-\vec{r}|)\) is the outgoing Green function for the scalar field, and the superscript 'bare' indicates that this expression is divergent and has to be counter-termed before it makes sense. We do not give here a derivation of the above influence phase except the heuristic that the above action describes causal propagation of a free scalar about the Minkowski vacuum. The above influence phase is also natural if one applies the original Feynman-Vernon argument in [53] for harmonic oscillators to each Minkowski mode of the scalar field and sums the result. A more proper derivation should involve a careful discussion of the fall-offs near space-like, time-like, and null asymptotia. We do not attempt such a discussion here because, as we shall see later, dS-SK geometry naturally incorporates such boundary conditions. Our dS answer in an appropriate limit will reduce to the above result.
We substitute the multipole expansion Eq.(A.59) into the influence phase \(S_{RR}^{\rm bare}\). For simplicity, we will take the number of spatial dimensions (i.e., \(d\)) to be odd, so that \(\nu\equiv\ell+\frac{d}{2}-1\) is a half-integer (and \(\cot\nu\pi=0\)). The above action then has two sets of terms: the first set of terms, odd under time reversal, are
\[\begin{split}&\sum_{\ell\vec{m}}\int\frac{d\omega}{2\pi}\frac{i \pi}{2}\left(\frac{\omega}{2}\right)^{2\nu}\frac{1}{\Gamma(1+\nu)^{2}}\\ &\times\int d^{d}r_{0}\left\{\rho_{D}(\vec{r}_{0},\omega)r_{0}^{ \nu-\frac{d}{2}+1}\mathscr{Y}_{\ell\vec{m}}(\hat{r}_{0})\ _{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r_{0}^{2}}{4}\right]\right\}^{*}\\ &\qquad\times\int d^{d}r\ \left\{\rho_{A}(\vec{r},\omega)r^{\nu -\frac{d}{2}+1}\mathscr{Y}_{\ell\vec{m}}(\hat{r})\ _{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r^{2}}{4}\right]\right\}\.\end{split}\] (A.63)
The combinations appearing in the second and the third line are the Bessel-smeared radiative multipole moments19 of the sources, i.e.,
Footnote 19: The reader should compare this definition against electrostatic multipole moments defined in Eq.(A.7), remembering \(\nu\equiv\ell+\frac{d}{2}-1\).
\[\begin{split}\mathscr{J}_{A}(\omega,\ell,\vec{m})& \equiv\frac{1}{2\nu}\int d^{d}r\ \rho_{A}(\vec{r},\omega)\ r^{\nu-\frac{d}{2}+1}\mathscr{Y}_{\ell\vec{m}}(\hat{r} )\ _{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r^{2}}{4}\right]\,\\ \mathscr{J}_{D}(\omega,\ell,\vec{m})&\equiv\frac{1}{ 2\nu}\int d^{d}r\ \rho_{D}(\vec{r},\omega)\ r^{\nu-\frac{d}{2}+1}\mathscr{Y}_{\ell\vec{m}}(\hat{r} )\ _{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r^{2}}{4}\right]\.\end{split}\] (A.64)
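The name "Bessel-smeared" reflects the standard identity \(J_{\nu}(x)=\frac{(x/2)^{\nu}}{\Gamma(1+\nu)}\,{}_{0}F_{1}\left[1+\nu,-\frac{x^{2}}{4}\right]\), so the kernel weighting the sources above is just a rescaled Bessel function of the combination \(\omega r\). A quick numerical confirmation (this snippet and similar ones below are illustrative checks, not part of the original derivation; they assume Python with mpmath):

```python
import mpmath as mp

mp.mp.dps = 25
nu, x = mp.mpf('2.5'), mp.mpf('1.7')    # arbitrary test values

lhs = mp.hyp0f1(1 + nu, -x**2/4)
rhs = mp.gamma(1 + nu)*(x/2)**(-nu)*mp.besselj(nu, x)
print(abs(lhs - rhs))    # ~1e-25: the 0F1 kernel is a rescaled Bessel J
```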
We can then write the time reversal odd terms in the form
\[\sum_{\ell\vec{m}}\int\frac{d\omega}{2\pi}\frac{2\pi i}{\Gamma(\nu)^{2}} \left(\frac{\omega}{2}\right)^{2\nu}\mathscr{J}_{D}^{*}(\omega,\mathbb{L}) \mathscr{J}_{A}(\omega,\mathbb{L})\.\] (A.65)
Given that \({}_{0}F_{1}\) functions are completely regular when their first argument is positive (i.e., when \(1+\nu=\ell+\frac{d}{2}>0\)), we conclude that these multipole moments are finite, even for point-like sources. Hence, each term in Eq.(A.63) is finite. The reader should contrast this with the second set of terms, even under time reversal:
\[\begin{split}&\sum_{\ell\vec{m}}\int\frac{d\omega}{2\pi}\int d^{d}r\int d^{d}r_{0}\,\left[\rho_{D}(\vec{r}_{0},\omega)\right]^{*}\,\rho_{A}(\vec{r},\omega)\\ &\qquad\times\frac{1}{2\nu}\frac{r_{<}^{\nu-\frac{d}{2}+1}}{r_{>}^{\nu+\frac{d}{2}-1}}\,\mathscr{Y}_{\ell\vec{m}}(\hat{r})\mathscr{Y}_{\ell\vec{m}}(\hat{r}_{0})^{*}\,_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r_{<}^{2}}{4}\right]\,_{0}F_{1}\left[1-\nu,-\frac{\omega^{2}r_{>}^{2}}{4}\right]\.\end{split}\] (A.66)
which are divergent due to the Green functions \({}_{0}F_{1}(1-\nu,\ldots)\). Fortunately, since these are all even under time reversal, one can counter-term away these terms. In other words, these terms in the influence phase serve to renormalise the non-dissipative terms already present in the action of the source.
Let us return to the terms in Eq.(A.63): they are odd in \(\omega\), and hence _cannot_ be countertermed or absorbed into the non-dissipative action. We can simplify these remaining terms by substituting the STF definition of spherical harmonics (see Eq.(A.44)) and invoking the STF addition theorem in Eq.(A.47). We then get the radiation-reaction influence phase as
\[\begin{split} S^{\text{Odd }d}_{RR}&=\sum_{\ell}\int\frac{d\omega}{2\pi}\frac{i\pi}{2\mathbb{N}_{d,\ell}|\mathbb{S}^{d-1}|}\left(\frac{\omega}{2}\right)^{2\nu}\frac{1}{\Gamma(1+\nu)^{2}}\frac{1}{\ell!}\Pi_{<j_{1}j_{2}\ldots j_{\ell}>}^{<i_{1}i_{2}\ldots i_{\ell}>}\\ &\times\int d^{d}r_{0}\left\{\rho_{D}(\vec{r}_{0},\omega)x_{0}^{j_{1}}x_{0}^{j_{2}}\ldots x_{0}^{j_{\ell}}\,_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r_{0}^{2}}{4}\right]\right\}^{*}\\ &\qquad\times\int d^{d}r\,\left\{\rho_{A}(\vec{r},\omega)x_{i_{1}}x_{i_{2}}\ldots x_{i_{\ell}}\,_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r^{2}}{4}\right]\right\}\.\end{split}\] (A.67)
Here we recognise the STF multipole moments of the sources' absorptive and emissive parts. We will find it convenient to define our STF multipole moments as
\[\begin{split}\mathbb{Q}^{i_{1}\ldots i_{\ell}}_{A,STF}(\omega)&\equiv\frac{1}{2\nu}\Pi_{<j_{1}j_{2}\ldots j_{\ell}>}^{<i_{1}i_{2}\ldots i_{\ell}>}\int d^{d}r\,\,\rho_{A}(\vec{r},\omega)x^{j_{1}}x^{j_{2}}\ldots x^{j_{\ell}}\,_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r^{2}}{4}\right]\,\\ \mathbb{Q}^{i_{1}\ldots i_{\ell}}_{D,STF}(\omega)&\equiv\frac{1}{2\nu}\Pi_{<j_{1}j_{2}\ldots j_{\ell}>}^{<i_{1}i_{2}\ldots i_{\ell}>}\int d^{d}r\,\,\rho_{D}(\vec{r},\omega)x^{j_{1}}x^{j_{2}}\ldots x^{j_{\ell}}\,_{0}F_{1}\left[1+\nu,-\frac{\omega^{2}r^{2}}{4}\right]\.\end{split}\] (A.68)
In terms of these STF multipole moments, the action for radiation reaction takes the form
\[\begin{split} S^{\text{Odd }d}_{RR}&=\sum_{\ell\vec{m}} \int\frac{d\omega}{2\pi}\frac{2\pi i}{\Gamma(\nu)^{2}}\left(\frac{\omega}{2} \right)^{2\nu}\mathscr{J}_{D}^{*}(\omega,\mathbb{L})\mathscr{J}_{A}(\omega, \mathbb{L})\\ &=\sum_{\ell}\int\frac{d\omega}{2\pi}\frac{2\pi i}{\Gamma(\nu)^{2} }\left(\frac{\omega}{2}\right)^{2\nu}\frac{1}{\mathbb{N}_{d,\ell}|\mathbb{S}^ {d-1}|}\frac{1}{\ell!}\mathbb{Q}_{D,STF}^{*<i_{1}i_{2}\ldots i_{\ell}>} \mathbb{Q}_{<i_{1}i_{2}\ldots i_{\ell}>}^{A,STF}\,.\end{split}\] (A.69)
In the first line, we have quoted the answer in terms of the spherical multipole moments for comparison. The multipole action above could also be derived entirely by using Cartesian STF harmonics from the very beginning (See [105] for a detailed derivation). Given the absence of Cartesian coordinates valid everywhere on the static patch, we will employ a judicious mix of spherical harmonic and STF harmonic expansions to compute the influence phase. The flat spacetime derivation we have given here closely mimics the strategy we will eventually use for dS.
Let us conclude this flat spacetime discussion by commenting on the case where \(d\) is even and \(\nu\in\mathbb{Z}\). We will tackle this case by a dimensional regularisation via analytic continuation in \(\nu\). From our discussion of multipole expansion, it is clear that the time reversal even terms in Eq.(A.66) are the same for any \(\nu\) and can be counter-termed away similarly.
The terms in Eq.(A.63), on the other hand, get multiplied by a factor of \((1+i\cot\pi\nu)\) for a general \(\nu\): this can be seen, e.g., in Eq.(112). The \(\cot\pi\nu\) factor leads to novel divergences as \(\nu\) approaches an integer, necessitating further counter-terms.
To compute the counter-terms as \(\nu\to n\in\mathbb{Z}\), we need the following expansion:
\[(1+i\cot\pi\nu)\frac{2\pi i}{\Gamma(\nu)^{2}}\left(\frac{\omega}{2H}\right)^{2 \nu}=\frac{1}{\Gamma(n)^{2}}\left(\frac{\omega}{2H}\right)^{2n}\left\{\frac{2} {\nu-n}-4\psi^{(0)}(n)+\ln\left(\frac{\omega}{2H}\right)^{4}+O(\nu-n)\right\}. \tag{115}\]
Here \(H\) is the characteristic scale for dimensional regularisation and \(\psi^{(0)}(x)\equiv\frac{d}{dx}\ln\Gamma(x)\) is the digamma function. Using a version of modified minimal subtraction, we counter-term away the first two terms inside the bracket of RHS. Thus, the influence phase due to radiation reaction for even spatial dimensions is
\[S_{RR}^{\text{Even }d}=\sum_{\ell}\int\frac{d\omega}{2\pi}\frac{1}{\Gamma(\nu)^{2}}\left(\frac{\omega}{2}\right)^{2\nu}\ln\left(\frac{\omega^{4}}{H^{4}}\right)\frac{1}{\mathbb{N}_{d,\ell}|\mathbb{S}^{d-1}|}\frac{1}{\ell!}\mathbb{Q}_{D,STF}^{*<i_{1}i_{2}\ldots i_{\ell}>}\mathbb{Q}_{<i_{1}i_{2}\ldots i_{\ell}>}^{A,STF}\, \tag{116}\]
where we have reset \(n\) again everywhere to the variable \(\nu\). What we have here is a classical renormalisation group running of the multipole couplings present in the world line action, i.e., an RGE induced by the classical radiation reaction. Such classical RGE is, in fact, common in many radiation reaction problems (See e.g. discussions in [36; 37; 76; 106]). We will see later how this non-local influence phase gets further modified in dS spacetime.
## Appendix B Designer scalar in dS : Green functions, regularisation and renormalisation
In this appendix, we aim to describe the scalar Green functions in dS spacetime in some amount of detail. Our focus will be on a point-like observer sitting on the south pole, and our Green functions are all hence 'boundary-to-bulk' with the boundary being the world line at the south pole. The point-like nature necessitates a careful discussion of regularisation, counter-terms etc.: our discussion will closely parallel the flat spacetime discussion in the previous appendix as well as the dS discussion in [7; 73]. We will also confine ourselves to a single copy of the static patch in this appendix, relegating the applications to dS-SK to the next appendix.
We will work with outgoing Eddington Finkelstein (EF) coordinates[107] describing the static patch of dS spacetime dS\({}_{d+1}\). This spacetime is a solution of the Einstein equations with a positive cosmological constant
\[\Lambda=\frac{1}{2}d(d-1). \tag{117}\]
We have chosen units where the Hubble constant is unity. The spacetime metric is
\[ds^{2}=-2\ du\ dr-(1-r^{2})\ du^{2}+r^{2}d\Omega_{d-1}^{2}. \tag{118}\]
Here \(d\Omega_{d-1}^{2}\) denotes the metric on a unit \(\mathbb{S}^{d-1}\). The outgoing Eddington Finkelstein time \(u\) is related to the more commonly used time \(t\) via \(u=t-r_{*}\) where \(r_{*}\) is the tortoise coordinate defined via
\[r_{*}\equiv-i\pi\zeta\equiv\int_{0}^{r}\frac{d\rho}{1-\rho^{2}}=\frac{1}{2}\ln \left(\frac{1+r}{1-r}\right). \tag{119}\]
The radial coordinate \(r\) is centred around a static observer sitting at \(r=0\). We will mostly work with the frequency domain where the time dependence of fields20 is taken to be \(\sim e^{-i\omega u}\). Further, we will decompose everything into appropriate spherical harmonics on \(\mathbb{S}^{d-1}\). The spherical harmonics are labelled by the eigenvalue of the sphere Laplacian \(\nabla^{2}_{\mathbb{S}^{d-1}}\) which is \(-\ell(\ell+d-2)\).
Footnote 20: We note a slight inconsistency in our definitions when compared to definitions in appendix A. In appendix A, we Fourier-transformed with respect to standard time slices, whereas here in \(dS\) we are Fourier-transforming with respect to outgoing EF time \(u\). Since in flat spacetime \(u=t-r\), this means that all the flat space radial functions in appendix A should be multiplied with a pre-factor of \(e^{-i\omega r}\) before they can be compared against the dS results described here.
As described in the main text, we will consider a class of _designer_ scalar systems in dS with an action
\[S=-\frac{1}{2}\int d^{d+1}x\sqrt{-g}\ r^{\mathcal{N}+1-d}\left\{\partial^{\mu }\Phi_{\mathcal{N}}\,\partial_{\mu}\Phi_{\mathcal{N}}+\frac{\Phi_{\mathcal{N} }^{2}}{4r^{2}}\left[(d+\mathcal{N}-3)(d-\mathcal{N}-1)-r^{2}\left(4\mu^{2}-( \mathcal{N}+1)^{2}\right)\right]\right\} \tag{111}\]
After we strip out the harmonic dependence in time/angles, the above action results in a radial ODE of the form
\[\begin{split}\frac{1}{r^{\mathcal{N}}}& D_{+}[r^{ \mathcal{N}}D_{+}\varphi_{{}_{\mathcal{N}}}]+\omega^{2}\varphi_{{}_{ \mathcal{N}}}\\ &+\frac{1-r^{2}}{4r^{2}}\Big{\{}(\mathcal{N}-1)^{2}-(d+2\ell-2)^{ 2}+[4\mu^{2}-(\mathcal{N}+1)^{2}]r^{2}\Big{\}}\varphi_{{}_{\mathcal{N}}}=0\.\end{split} \tag{112}\]
Here \(\varphi_{{}_{\mathcal{N}}}(r,\omega,\mathbb{L})\) is the radial part of the field, the derivative operators \(D_{\pm}\equiv(1-r^{2})\partial_{r}\pm i\omega\), and the equation depends on the parameters \(\{\mu,\mathcal{N},\ell\}\) whose physical interpretation will be clear momentarily.
The combination \((\mathcal{N}+1)^{2}-4\mu^{2}\) can be interpreted as a mass term \(4m^{2}\) for the scalar in Hubble units. The exponent \(\mathcal{N}\) describes the auxiliary radial varying dilaton mentioned at the beginning of this appendix. The index \(\ell\) is associated with the eigenvalue of the sphere laplacian. The expressions involved simplify considerably if we use, instead of \(\ell\), the following parameter:
\[\nu\equiv\frac{d}{2}+\ell-1. \tag{113}\]
For example, in terms of \(\nu\), the eigenvalue of the sphere laplacian becomes \((\frac{d}{2}-1)^{2}-\nu^{2}\). Since we will be concerned with the cases where \(d>2\) and \(\ell\geq 0\), \(\nu\) is a positive number. We can then rewrite the above ODE as
\[\begin{split}\frac{1}{r^{\mathcal{N}}}& D_{+}[r^{ \mathcal{N}}D_{+}\varphi_{{}_{\mathcal{N}}}]+\omega^{2}\varphi_{{}_{ \mathcal{N}}}\\ &+\frac{1-r^{2}}{4r^{2}}\Big{\{}(\mathcal{N}-1)^{2}-4\nu^{2}+[4 \mu^{2}-(\mathcal{N}+1)^{2}]r^{2}\Big{\}}\varphi_{{}_{\mathcal{N}}}=0\.\end{split} \tag{114}\]
It is instructive to rewrite the above ODE in terms of a new field \(\psi\equiv r^{\frac{\mathcal{N}}{2}}\varphi_{{}_{\mathcal{N}}}\) as
\[(D_{+}^{2}+\omega^{2})\psi+\frac{1-r^{2}}{4r^{2}}\Big{\{}1-4\nu^{2}+[4\mu^{2} -1]r^{2}\Big{\}}\psi=0. \tag{115}\]
The absence of \(\mathcal{N}\) in this ODE shows that \(\mathcal{N}\) merely controls the overall pre-factor. We also note a symmetry under \(\nu\mapsto-\nu\) and \(\mu\mapsto-\mu\): either of these sign changes should map one solution to the other.
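As a consistency check of the statement that \(\mathcal{N}\) drops out after the substitution \(\psi\equiv r^{\mathcal{N}/2}\varphi_{\mathcal{N}}\), the following symbolic sketch (assuming Python with sympy; not part of the original derivation) reproduces the \(\mathcal{N}\)-independent equation:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
w, nu, mu, N = sp.symbols('omega nu mu N', real=True)
psi = sp.Function('psi')

Dp = lambda f: (1 - r**2)*sp.diff(f, r) + sp.I*w*f   # the operator D_+

# designer-scalar radial operator acting on phi = r^(-N/2) psi(r)
phi = r**(-N/2)*psi(r)
designer = (Dp(r**N*Dp(phi))/r**N + w**2*phi
            + (1 - r**2)/(4*r**2)*((N - 1)**2 - 4*nu**2
                                   + (4*mu**2 - (N + 1)**2)*r**2)*phi)

# the N-independent equation quoted above for psi
target = (Dp(Dp(psi(r))) + w**2*psi(r)
          + (1 - r**2)/(4*r**2)*(1 - 4*nu**2 + (4*mu**2 - 1)*r**2)*psi(r))

print(sp.simplify(sp.expand(r**(N/2)*designer - target)))   # -> 0, for any N
```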
### Outgoing Green function
The above second-order radial ODE can be exactly solved in terms of hypergeometric functions. The worldline to bulk outgoing Green function is given by[72, 73, 7]
\[\begin{split} G^{\text{Out}}_{\mathcal{N}}(r,\omega,\mathbb{L})&=r^{\nu-\frac{1}{2}(\mathcal{N}-1)}(1+r)^{-i\omega}\\ &\quad\times\frac{\Gamma\left(\frac{1+\nu-\mu-i\omega}{2}\right)\Gamma\left(\frac{1+\nu+\mu-i\omega}{2}\right)}{\Gamma(1-i\omega)\Gamma\left(\nu\right)}\ _{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1-i\omega;1-r^{2}\right]\.\end{split} \tag{111}\]
Here we have fixed the overall normalisation by an appropriate boundary condition to be described below. We will devote this subsection to a detailed study of the above Green function.
We remind the reader that the hypergeometric function always has a nice series expansion around the point where its last argument vanishes. It then follows that the above solution is manifestly regular at the future horizon \(r=1\) without any branch cuts or poles. An alternate form for the same function that emphasises the small \(r\) behaviour near the observer's worldline is
\[\begin{split} G^{\text{Out}}_{\mathcal{N}}&=r^{- \nu-\frac{1}{2}(N-1)}(1+r)^{-i\omega}\\ &\times\Big{\{}_{2}F_{1}\left[\frac{1-\nu+\mu-i\omega}{2},\frac{1 -\nu-\mu-i\omega}{2};1-\nu;r^{2}\right]\\ &\qquad-(1+i\cot\nu\pi)\widehat{K}_{\text{Out}}\frac{r^{2\nu}}{ 2\nu}\ _{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+\nu; r^{2}\right]\Big{\}}\.\end{split} \tag{112}\]
Here \(\widehat{K}_{\text{Out}}\) is the worldline retarded Green function given by the expression[7, 73].
\[\begin{split}\widehat{K}_{\text{Out}}(\omega,\ell)& \equiv 2\frac{\Gamma\left(\frac{1+\nu-\mu-i\omega}{2}\right)\Gamma \left(\frac{1+\nu+\mu-i\omega}{2}\right)\Gamma\left(1-\nu\right)}{\Gamma\left( \frac{1-\nu+\mu-i\omega}{2}\right)\Gamma\left(\frac{1-\nu-\mu-i\omega}{2} \right)\Gamma\left(\nu\right)\left(1+i\cot\nu\pi\right)}\\ &=-e^{i\nu\pi}\frac{2\pi i}{\Gamma(\nu)^{2}}\frac{\Gamma\left( \frac{1+\nu-\mu-i\omega}{2}\right)\Gamma\left(\frac{1+\nu+\mu-i\omega}{2} \right)}{\Gamma\left(\frac{1-\nu+\mu-i\omega}{2}\right)\Gamma\left(\frac{1- \nu-\mu-i\omega}{2}\right)}\.\end{split} \tag{113}\]
The reason for choosing the normalisation of \(\widehat{K}_{\text{Out}}\) this way will become clear eventually. The above equation is the dS analogue of the Hankel function decomposition into Neumann and Bessel functions. As in Eq.(100), when \(d\) is even and \(\nu\) is an integer, the above expression should be understood as a limit, with the \(\cot\nu\pi\) divergence exactly cancelling the divergence in the first term of Eq.(112).
The hypergeometric identity used for the above decomposition is
\[\begin{split}&\frac{\Gamma(a)\Gamma(b)}{\Gamma(c)\Gamma(a+b-c)}z^{a+b-c}{}_{2}F_{1}(a,b;c;1-z)\\ &={}_{2}F_{1}(c-a,c-b;1+c-a-b;z)\\ &\quad+z^{a+b-c}\frac{\Gamma(a)\Gamma(b)}{\Gamma(c)\Gamma(a+b-c)}\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}{}_{2}F_{1}(a,b;a+b-c+1;z)\,\end{split} \tag{114}\]
where we have taken
\[a=\frac{1+\nu-\mu-i\omega}{2}\,\ b=\frac{1+\nu+\mu-i\omega}{2}\,\ c=1-i \omega\,\ z=r^{2}. \tag{115}\]
In all these identities, we take the branch cuts of hypergeometric functions as well as \((1+r)^{-i\omega}\) to be outside the open unit disk in the complex \(r\) plane. Thus, all these functions are analytic within the open static patch and in turn, on the dS-SK contour.
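The equivalence of the two representations of \(G^{\text{Out}}_{\mathcal{N}}\) above can be confirmed numerically; the sketch below (mpmath assumed, parameter values arbitrary) evaluates both forms at a generic interior point:

```python
import mpmath as mp

mp.mp.dps = 30
nu, mu, w, N = mp.mpf('1.3'), mp.mpf('0.7'), mp.mpf('0.9'), mp.mpf('2')
r = mp.mpf('0.5')
iw = 1j*w

a, b, c = (1 + nu - mu - iw)/2, (1 + nu + mu - iw)/2, 1 - iw

# form manifestly regular at the future horizon
G1 = (r**(nu - (N - 1)/2)*(1 + r)**(-iw)
      * mp.gamma(a)*mp.gamma(b)/(mp.gamma(c)*mp.gamma(nu))
      * mp.hyp2f1(a, b, c, 1 - r**2))

# near-origin (singular plus regular) decomposition with the worldline Green function
K_hat = (2*mp.gamma(a)*mp.gamma(b)*mp.gamma(1 - nu)
         / (mp.gamma(c - a)*mp.gamma(c - b)*mp.gamma(nu)*(1 + 1j*mp.cot(mp.pi*nu))))
G2 = (r**(-nu - (N - 1)/2)*(1 + r)**(-iw)
      * (mp.hyp2f1(c - a, c - b, 1 - nu, r**2)
         - (1 + 1j*mp.cot(mp.pi*nu))*K_hat*r**(2*nu)/(2*nu)
           * mp.hyp2f1(a, b, 1 + nu, r**2)))

print(abs(G1 - G2))   # ~0 to working precision: the two forms agree
```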
With this new form for the outgoing Green function, it is straightforward to obtain a near-origin expansion to all orders. The explicit expressions are given by
\[{}_{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+\nu;r^{2}\right] \tag{115}\] \[=\sum_{k=0}^{\infty}\frac{r^{2k}}{(2k)!}\frac{(\nu-\mu-i\omega-1+2k)!!}{(\nu-\mu-i\omega-1)!!}\frac{(\nu+\mu-i\omega-1+2k)!!}{(\nu+\mu-i\omega-1)!!}\frac{(2\nu)!!(2k-1)!!}{(2\nu+2k)!!}\,\]
as well as
\[{}_{2}F_{1}\left[\frac{1-\nu+\mu-i\omega}{2},\frac{1-\nu-\mu-i \omega}{2};1-\nu;r^{2}\right] \tag{116}\] \[=\sum_{k=0}^{\infty}\frac{(-r^{2})^{k}}{(2k)!}\frac{(\nu-\mu+i \omega-1)!!}{(\nu-\mu+i\omega-1-2k)!!}\frac{(\nu+\mu+i\omega-1)!!}{(\nu+\mu+i \omega-1-2k)!!}\frac{(2\nu-2-2k)!!(2k-1)!!}{(2\nu-2)!!}\.\]
The second expansion can be interpreted literally only for \(d\) odd (i.e., when \(\nu\in\mathbb{Z}+\frac{1}{2}\)). For \(d\) even, the above expansion (and most of the discussion below) should be understood in a dimensionally regularised sense.
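The double-factorial ratios above are shorthand for finite products, e.g. \((z+2k)!!/z!!=\prod_{j=1}^{k}(z+2j)\); the following sketch (mpmath assumed) rebuilds the first expansion in that way and compares it against a direct evaluation of the hypergeometric function:

```python
import mpmath as mp

mp.mp.dps = 25
nu, mu, w, r = mp.mpf('2.5'), mp.mpf('0.5'), mp.mpf('0.8'), mp.mpf('0.3')
iw = 1j*w

def dfac_ratio(z, k):
    # (z + 2k)!! / z!!  understood as the finite product (z+2)(z+4)...(z+2k)
    return mp.fprod([z + 2*j for j in range(1, k + 1)])

series = mp.mpc(0)
for k in range(40):
    # note (2k-1)!!/(2k)! = 1/(2^k k!)
    series += (r**(2*k)/(2**k*mp.factorial(k))
               * dfac_ratio(nu - mu - iw - 1, k)
               * dfac_ratio(nu + mu - iw - 1, k)
               / dfac_ratio(2*nu, k))

exact = mp.hyp2f1((1 + nu - mu - iw)/2, (1 + nu + mu - iw)/2, 1 + nu, r**2)
print(abs(series - exact))   # ~0 to working precision
```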
The near-origin form of the outgoing Green function shows the normalisation
\[\lim_{r\to 0}r^{\nu+\frac{N-1}{2}}G_{N}^{\text{Out}}=1. \tag{117}\]
This condition can be thought of as the dS analogue of the condition on the AdS boundary-to-bulk Green function. As in that case, the above condition along with outgoing property/analyticity at the future horizon uniquely determines \(G_{\mathcal{N}}^{\text{Out}}\). Extending this analogy to AdS, we can roughly read off the retarded worldline Green function \(\widehat{K}_{\text{Out}}\) by looking at the ratio of coefficients of the sub-dominant solution to the dominant solution in the outgoing solution \(G_{\mathcal{N}}^{\text{Out}}\). This is essentially the Son-Starinets prescription[108] of AdS/CFT adapted to the present dS context. Such analogies have been noted before in [7]: our aim here is to give a more systematic derivation of these statements, taking into account the subtleties associated with divergences, regularisation, finite size effects, etc.
To this end, let us begin with a physical interpretation of the outgoing Green function \(G_{\mathcal{N}}^{\text{Out}}\). If we are given that \(\varphi_{{}_{N}}\) behaves at small \(r\) near the worldline as
\[\varphi_{{}_{N}}(r,\omega,\mathbb{L})=\frac{\mathcal{J}(\omega,\mathbb{L})}{ r^{\nu+\frac{1}{2}(N-1)}}+\ldots\, \tag{118}\]
where \(\mathbb{L}\) collectively labels the spherical harmonics on \(\mathbb{S}^{d-1}\), we then have a unique outgoing solution
\[\varphi_{{}_{N}}(r,\omega,\mathbb{L})=G_{N}^{\text{Out}}(r,\omega,\mathbb{L}) \mathcal{J}(\omega,\mathbb{L})\]
describing the field that is radiated out of the worldline. This is the dS analogue of the outgoing Hankel Green function in flat space.21
Footnote 21: More precisely, in outgoing EF coordinates the corresponding Green functions in flat space are the outgoing Hankel Green functions given in Eq.(111) multiplied with a prefactor of \(e^{-i\omega r}\); see footnote 20.
Footnote 22: We review this statement, for the benefit of the reader, around Eq.(111).
The alternate form we have written down above in Eq.(112) is then the dS version of the familiar statement22 that the outgoing Hankel Green function can be written as the sum of a Neumann Green function (which diverges near the origin) and a Bessel J function (which is regular at the origin). Such
a decomposition of the outgoing Green function into a singular Green function and a regular solution is a first step in Dirac's approach to the self-force[109] (the curved space version is sometimes also termed the Detweiler-Whiting decomposition[110]). We will later show in appendix C.5 that our answer matches in dS\({}_{4}\) with the regular part quoted in [75; 111] using the rules of Detweiler-Whiting decomposition.
### Renormalised conjugate field and \(K_{\text{Out}}\)
We will now turn to the question of deriving the worldline Green function \(K_{\text{Out}}\) from the outgoing Green function \(G_{\text{N}}^{\text{Out}}\). As we will describe in detail below, the physics here is that of radiation reaction and the main subtlety is how to deal with divergences. Our main strategy here will be to define a renormalised conjugate field which reduces to \(K_{\text{Out}}\) near the source worldline. The idea here is philosophically similar to other radiation reaction computations in the literature[36; 37; 38; 39; 40; 110] as well as the counter-term subtraction in AdS/CFT[112]. The implementation is however sufficiently different that we provide a detailed analysis below.
The radial ODE Eq.(B.7) can be derived by extremising the action
\[\begin{split} S&=-\frac{1}{2}\sum_{\mathbb{L}}\int \frac{d\omega}{2\pi}\oint\frac{r^{\mathcal{N}}dr}{1-r^{2}}\Big{[}(D_{+}\varphi _{{}_{\mathcal{N}}})^{*}D_{+}\varphi_{{}_{\mathcal{N}}}-\omega^{2}\varphi_{ {}_{\mathcal{N}}}^{*}\varphi_{{}_{\mathcal{N}}}\\ &\qquad-\frac{1-r^{2}}{4r^{2}}\Big{\{}(\mathcal{N}-1)^{2}-4 \nu^{2}+[4\mu^{2}-(\mathcal{N}+1)^{2}]r^{2}\Big{\}}\varphi_{{}_{\mathcal{N}}} ^{*}\varphi_{{}_{\mathcal{N}}}\Big{]}+S_{ct}\.\end{split}\] (B.18)
Here \(S_{ct}\) denotes the counter-term action to be determined later. The integration over \(r\) ranges over the regulated dS-SK contour (clockwise from the right static patch to the left static patch) and, in addition, we have indicated an integration over all frequencies and a sum over spherical harmonics. The reality condition in the Fourier domain takes the form
\[\varphi_{{}_{\mathcal{N}}}^{*}(r,\omega,\ell,\vec{m})=\varphi_{{}_{\mathcal{N}}}(r,-\omega,\ell,-\vec{m})\.\] (B.19)
Here \(\vec{m}\) denotes the additional labels appearing in the spherical harmonic decomposition.
The canonical conjugate field for radial evolution is obtained by varying the above action with respect to \(\partial_{r}\varphi_{{}_{\mathcal{N}}}^{*}\) which yields \(-r^{\mathcal{N}}D_{+}\varphi_{{}_{\mathcal{N}}}\) after we take into account the fact that \(\varphi_{{}_{\mathcal{N}}}^{*}\) and \(\varphi_{{}_{\mathcal{N}}}\) are related by the reality condition quoted above. The minus sign in the canonical conjugate is because we are looking at evolution along a space-like direction.
Taking into account the powers of \(r\) multiplying the multipole moment \(\mathcal{J}\) in Eq.(B.17), the canonical conjugate of \(\mathcal{J}\) should be defined with the opposite power, viz., we should consider instead
\[-r^{-\nu-\frac{1}{2}(\mathcal{N}-1)}\left[r^{\mathcal{N}}D_{+}\varphi_{{}_{ \mathcal{N}}}\right]\.\] (B.20)
The canonical conjugate field of the radial evolution at the two regulated boundaries is given by evaluating the above expressions at \(r=r_{c}\pm i\varepsilon\). Naively the \(r_{c}\to 0\) limit should then yield the required canonical conjugate that couples to the right/left point multipole source. This limit however does not work: on a generic solution, the \(r_{c}\to 0\) limit is beset with divergences. Appropriate counter-terms need to be added to the above bare expression before a sensible \(r_{c}\to 0\) limit can be taken. The counter-terms arise from adding in a worldline counter-term action
\[S_{ct}=-\frac{1}{2}\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}r^{\mathcal{N}-1} \mathscr{C}_{\mathcal{N}}(r,\omega,\mathbb{L})\varphi_{{}_{\mathcal{N}}}^{*} \varphi_{{}_{\mathcal{N}}}|_{\text{Bnd}}\.\] (B.21)
Here \(|_{\rm Bnd}\) refers to the fact that we add such a contribution at every boundary. Being a boundary contribution, this addition does not change the equations of motion for the scalar field. If the original variational principle was defined with a Dirichlet boundary condition \(\delta\varphi_{{}_{N}}|_{\rm Bnd}=0\), the counterterm above does not change that boundary condition. The reader might expect, from the discussion at the end of appendix A on flat spacetime, that additional counter-terms will be required in even \(d\) to deal with \(\cot\pi\nu\) divergences. As we did there, we will first deal with singularities at the sources before turning to the divergences peculiar to even \(d\).
In the above expression, we should take \(\mathscr{C}_{\mathcal{N}}(r,\omega,\mathbb{L})\) to be a real and even function of \(\omega\) to get a real counter-term action. Addition of this worldline action modifies the canonical conjugate evaluated at the radial boundaries to
\[r^{-\nu-\frac{1}{2}(\mathcal{N}-1)}\pi_{\mathcal{N}}\equiv-r^{-\nu-\frac{1}{2} (\mathcal{N}-1)}\left[r^{\mathcal{N}}D_{+}+r^{\mathcal{N}-1}\mathscr{C}_{ \mathcal{N}}\right]\varphi_{{}_{N}}. \tag{110}\]
The \(\mathscr{C}_{\mathcal{N}}\) should then be chosen such that this object evaluated at \(r=r_{c}\pm i\varepsilon\) has a well-defined \(r_{c}\to 0\) limit.
We will now determine \(\mathscr{C}_{\mathcal{N}}\) by studying the outgoing Green function (the counter-terms determined using a generic enough solution should work for every other solution). As we shall see, the boundary value of the _renormalised_ conjugate field in this case is the boundary Green function \(K_{\rm Out}\). Before going into the details of the computation, it might be useful to situate it in a familiar physical context.
In the case of electromagnetism, the worldline Green function \(K_{\rm Out}\) for a charged particle encodes the radiation reaction or self-force due to the particle's EM fields acting on itself. While this statement is broadly true, it is clear that this idea has to be interpreted with some care. If we take the bare electric field produced by the point charge and try to compute the self-force on it naively, the calculation will be dominated by the Coulomb divergence at the origin yielding an infinite answer.
A little bit of thought however reveals that these divergences merely serve to relate the bare properties (e.g., mass) of the fictitious charge-free particle to the properties of the actual physical particle. What we should do instead is to compute the renormalised electric field felt by the particle after adding counter-terms which shift the mass to the experimentally measured value. This renormalised field associated with the radiation is determined from the near field by imposing the outgoing boundary condition and can then be used to compute the self-force of the particle.
With this physical example in mind, we can interpret the first term in Eq.(109) as analogous to the Coulomb field in the near region whose divergent contributions need to be removed by using counter-terms. It is only after this is done that we can extract \(\widehat{K}_{\rm Out}\) as the renormalised worldline Green function.
We will now demand that the renormalised conjugate field computed over the first term in Eq.(109) vanish. This fixes the counter-term function \(\mathscr{C}_{\mathcal{N}}\) to be
\[\frac{\mathscr{C}_{\mathcal{N}}}{1-r^{2}}\equiv-r\frac{d}{dr}\ln\left\{r^{-\nu -\frac{1}{2}(\mathcal{N}-1)}(1-r^{2})^{-\frac{i\omega}{2}}{}_{2}F_{1}\left[ \frac{1-\nu+\mu-i\omega}{2},\frac{1-\nu-\mu-i\omega}{2};1-\nu;r^{2}\right] \right\}. \tag{111}\]
Here we take the branch cut of \((1-r^{2})^{-\frac{i\omega}{2}}\) to be away from the open unit disc \(|r|<1\) in the complex \(r\) plane and, with this choice, \(\mathscr{C}_{\mathcal{N}}\) is analytic everywhere inside each copy of the static patch, and has no discontinuity across the dS-SK branch-cut. While it is not obvious from the expression above, we
can invoke the Euler transformation formula for the hyper-geometric function which states that
\[\begin{split}{}_{2}F_{1}&\left[\frac{1\pm\nu+\mu+i\omega}{ 2},\frac{1\pm\nu-\mu+i\omega}{2};1\pm\nu;r^{2}\right]\\ &=(1-r^{2})^{-i\omega}{}_{2}F_{1}\left[\frac{1\pm\nu+\mu-i\omega} {2},\frac{1\pm\nu-\mu-i\omega}{2};1\pm\nu;r^{2}\right]\,\end{split} \tag{108}\]
to conclude that \(\mathscr{C}_{\mathcal{N}}\) is a real and even function of \(\omega\). Here we have taken the function to be analytic in the static patch again and hence \(\mathscr{C}_{\mathcal{N}}\) has a well-behaved small \(r\) expansion. The first few terms in this expansion are given by
\[\begin{split}\mathscr{C}_{\mathcal{N}}&=(1-r^{2}) \left(\nu+\frac{1}{2}(\mathcal{N}-1)\right)+r^{2}\frac{(\nu-\mu-1)(\nu+\mu-1)- \omega^{2}}{2\nu-2}\\ &+r^{4}\frac{[(\nu-\mu-1)^{2}+\omega^{2}][(\nu+\mu-1)^{2}+\omega ^{2}]}{(2\nu-2)^{2}(2\nu-4)}\\ &+r^{6}\frac{[(\nu-\mu-1)^{2}+\omega^{2}][(\nu+\mu-1)^{2}+\omega ^{2}]}{(2\nu-2)^{3}(2\nu-4)(2\nu-6)}\\ &\quad\times[(2\nu-2)(2\nu-4)-2(\nu-\mu-1)(\nu+\mu-1)+2\omega^{2 }]\ +\ldots.\end{split} \tag{109}\]
Note that all terms in the above expansion are indeed real and even functions of \(\omega\) as claimed. Note that all the \(r\) and \(\omega\) factors appear in the numerator implying that this counter-term is local in time/radial direction.
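The reality and evenness of \(\mathscr{C}_{\mathcal{N}}\) claimed above can also be checked numerically straight from its defining expression; a minimal sketch (mpmath assumed, arbitrary parameters):

```python
import mpmath as mp

mp.mp.dps = 25
nu, mu, N, r0 = mp.mpf('1.3'), mp.mpf('0.7'), mp.mpf('2'), mp.mpf('0.4')

def C_ct(w):
    iw = 1j*w
    # logarithm of the singular branch whose conjugate field the counterterm cancels
    sing = lambda rr: mp.log(rr**(-nu - (N - 1)/2)*(1 - rr**2)**(-iw/2)
                             * mp.hyp2f1((1 - nu + mu - iw)/2,
                                         (1 - nu - mu - iw)/2, 1 - nu, rr**2))
    return -r0*(1 - r0**2)*mp.diff(sing, r0)

cp, cm = C_ct(mp.mpf('0.9')), C_ct(mp.mpf('-0.9'))
print(abs(cp - cm), abs(mp.im(cp)))   # both tiny: C_N is even in omega and real
```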
Now that we have the expression for the counter-term, it is straightforward to compute the renormalised conjugate field evaluated over the outgoing Green function. We obtain the following answer
\[\begin{split}\pi^{\text{Out}}_{\mathcal{N}}&\equiv -\left[r^{\mathcal{N}}D_{+}+r^{\mathcal{N}-1}\mathscr{C}_{\mathcal{N}}\right]G ^{\text{Out}}_{\mathcal{N}}\\ &=(1+i\cot\pi\nu)\widehat{K}_{\text{Out}}\mathscr{Z}_{\mathcal{N} }(r,\omega)r^{\nu+\frac{1}{2}(\mathcal{N}-1)}(1+r)^{-i\omega}{}_{2}F_{1}\left[ \frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+\nu;r^{2}\right]\,\end{split} \tag{110}\]
where \(\mathscr{Z}_{\mathcal{N}}(r,\omega)\) is a function given by the expression
\[\frac{\mathscr{Z}_{\mathcal{N}}}{1-r^{2}}\equiv 1-\frac{r}{2\nu}\frac{d}{dr}\ln\left\{\frac{{}_{2}F_{1}\left[\frac{1-\nu-\mu-i\omega}{2},\frac{1-\nu+\mu-i\omega}{2};1-\nu;r^{2}\right]}{{}_{2}F_{1}\left[\frac{1+\nu+\mu-i\omega}{2},\frac{1+\nu-\mu-i\omega}{2};1+\nu;r^{2}\right]}\right\}. \tag{111}\]
This is also a real and even function of \(\omega\) with a well-behaved series expansion near the origin. We thus see that the renormalised conjugate field of the outgoing wave is essentially its regular part, obtained after dropping its singular part and then renormalised by a factor of \(\mathscr{Z}_{\mathcal{N}}\). Taking the \(r\to 0\) limit yields
\[\lim_{r\to 0}r^{-\nu-\frac{1}{2}(\mathcal{N}-1)}\pi^{\text{Out}}_{\mathcal{N} }\equiv-\lim_{r\to 0}r^{-\nu-\frac{1}{2}(\mathcal{N}-1)}\left[r^{\mathcal{N}}D_{+}+r ^{\mathcal{N}-1}\mathscr{C}_{\mathcal{N}}\right]G^{\text{Out}}_{\mathcal{N}}= (1+i\cot\pi\nu)\widehat{K}_{\text{Out}}. \tag{112}\]
This then justifies our original definition for \(\widehat{K}_{\text{Out}}\).
If \(d\) is odd and \(\nu\equiv\frac{d}{2}+\ell-1\in\mathbb{Z}+\frac{1}{2}\), we can set \(\nu\) to its actual value everywhere (i.e., remove dim-reg.) in our result: the value of the renormalised conjugate field at the world line (which we shall henceforth refer to by the symbol \(K_{\text{Out}}\)) is then finite. We can then write
\[K_{\text{Out}}|_{\text{Odd d}}=(1+i\cot\pi\nu)\widehat{K}_{\text{Out}}|_{\text {Odd d}}=-e^{i\nu\pi}\frac{2\pi i}{\Gamma(\nu)^{2}}\frac{\Gamma\left(\frac{1+ \nu-\mu-i\omega}{2}\right)\Gamma\left(\frac{1+\nu+\mu-i\omega}{2}\right)}{ \Gamma\left(\frac{1-\nu+\mu-i\omega}{2}\right)\Gamma\left(\frac{1-\nu-\mu-i \omega}{2}\right)}. \tag{113}\]
For the massless case in odd \(d\), we have \(\mu,\nu\in\mathbb{Z}+\frac{1}{2}\) for all values of interest given in table 1. If we further assume that \(\mu\neq 1+\nu\equiv\frac{d}{2}+\ell\), the above expression is, in fact, an odd polynomial of \(i\omega\) with degree \(2\nu\) (see table 5 for an illustration). An interesting example is that of a conformally coupled scalar in odd \(d\), where we have a closed-form expression
\[K_{\text{Out}}\Big{|}_{\mu=\frac{1}{2}}=\frac{(-1)^{\nu-\frac{1}{2}}}{(2\nu-2)!! ^{2}}\prod_{k=1}^{2\nu}\left[\nu+\frac{1}{2}-k-i\omega\right] \tag{114}\]
In all such cases, for every multipole moment, the Hubble corrections to the radiation reaction terminate. Hence, we get a completely Markovian influence phase with no memory/tail terms. Further, as we shall explain in detail in appendix D, for an arbitrarily moving point-like source, all the multipole contributions add up nicely into a local generally covariant expression for the radiation reaction force.
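As a sanity check of the closed form above, the sketch below (mpmath assumed) compares it against the general gamma-function expression for \(K_{\text{Out}}\) at a few half-integer values of \(\nu\):

```python
import mpmath as mp

mp.mp.dps = 25
w, mu = mp.mpf('1.7'), mp.mpf('0.5')     # conformal coupling corresponds to mu = 1/2

def K_out(nu):
    iw = 1j*w
    return (-mp.exp(1j*mp.pi*nu)*2j*mp.pi/mp.gamma(nu)**2
            * mp.gamma((1 + nu - mu - iw)/2)*mp.gamma((1 + nu + mu - iw)/2)
            / (mp.gamma((1 - nu + mu - iw)/2)*mp.gamma((1 - nu - mu - iw)/2)))

def K_closed(nu):
    m = int(nu - mp.mpf('0.5'))                          # nu = m + 1/2
    dfac = mp.fprod(range(1, int(2*nu) - 1, 2))          # (2*nu - 2)!!
    prod = mp.fprod([nu + mp.mpf('0.5') - k - 1j*w for k in range(1, int(2*nu) + 1)])
    return (-1)**m/dfac**2*prod

for nu in (mp.mpf('1.5'), mp.mpf('2.5')):                # e.g. (d, ell) = (3, 1), (5, 1)
    print(abs(K_out(nu) - K_closed(nu)))                 # ~0: odd polynomials in i*omega
```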
For the minimally coupled massless scalar (\(\mu=\frac{d}{2}\)), we still obtain a polynomial \(K_{\text{Out}}\) for all multipoles except the monopole (\(\ell=0\)) contribution. The monopole has an extra \(1/\omega\) correction in addition to the polynomial terms odd in \(\omega\) (See tables 6 and 7). An explicit expression for \(\ell=0\) contribution is given by
\[K_{\text{Out}}|_{\mu=1+\nu=\frac{d}{2}}=\frac{(d-2)^{2}}{i\omega}\cosh\frac{ \pi\omega}{2}\ \frac{\Gamma\left(\frac{d-i\omega}{2}\right)\Gamma\left(\frac{d+i\omega}{2} \right)}{\Gamma\left(\frac{d}{2}\right)^{2}} \tag{115}\]
The inverse power of \(\omega\) that appears in front of this expression suggests that the correct variable for a low frequency expansion in this case is the time integral of the scalar source rather than the source itself. Such a mild non-Markovianity for minimally coupled scalars in dS has been noted before[75, 85], and we will review its physical interpretation in appendix D.
| \(\mu=\frac{d}{2}\) | \(\ell=0\) |
| --- | --- |
| \(d=3\) | \(\omega^{2}+1\) |
| \(d=5\) | \(\omega^{4}+10\omega^{2}+9\) |
| \(d=7\) | \(\frac{\omega^{6}}{9}+\frac{35\omega^{4}}{9}+\frac{259\omega^{2}}{9}+25\) |

Table 6: \(i\omega K_{\text{Out}}\)
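The entries of Table 6 follow from the closed form above; a quick numerical confirmation (mpmath assumed):

```python
import mpmath as mp

mp.mp.dps = 25
w = mp.mpf('1.3')    # arbitrary test frequency

def iwK_monopole(d):
    # i*omega*K_Out for mu = d/2, ell = 0, from the closed form above
    return (1j*w)*((d - 2)**2/(1j*w)*mp.cosh(mp.pi*w/2)
                   * mp.gamma((d - 1j*w)/2)*mp.gamma((d + 1j*w)/2)/mp.gamma(mp.mpf(d)/2)**2)

polys = {3: w**2 + 1,
         5: w**4 + 10*w**2 + 9,
         7: w**6/9 + 35*w**4/9 + 259*w**2/9 + 25}

for d, poly in polys.items():
    print(d, abs(iwK_monopole(d) - poly))    # each ~0: matches the table entries
```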
For generic values of \((\mu,\nu)\), a small \(\omega\) expansion of \(K_{\rm Out}\) is easy to obtain by expanding out the gamma functions in terms of polygamma functions. We get
\[\begin{split} K_{\rm Out}|_{\rm Odd\ d}&=-e^{i\nu\pi}\frac{2\pi i}{\Gamma(\nu)^{2}}\frac{\Gamma\left(\frac{1+\nu-\mu}{2}\right)\Gamma\left(\frac{1+\nu+\mu}{2}\right)}{\Gamma\left(\frac{1-\nu+\mu}{2}\right)\Gamma\left(\frac{1-\nu-\mu}{2}\right)}\\ &\quad\times\exp\left\{\sum_{k=0}^{\infty}\frac{1}{(k+1)!}\left(\frac{-i\omega}{2}\right)^{k+1}\left[\psi^{(k)}\left(\frac{1+\nu-\mu}{2}\right)+\psi^{(k)}\left(\frac{1+\nu+\mu}{2}\right)-\psi^{(k)}\left(\frac{1-\nu+\mu}{2}\right)-\psi^{(k)}\left(\frac{1-\nu-\mu}{2}\right)\right]\right\}\, \end{split} \tag{113}\]
where \(\psi^{(k)}(z)\equiv\frac{d^{k+1}}{dz^{k+1}}\ln\Gamma(z)\) is the polygamma function. When both \(\nu+\mu\) and \(\nu-\mu\) are non-negative integers, the terms in the above expression become indeterminate and should instead be interpreted as a limit. In such cases, explicit computations show that the above exponential terminates, yielding an odd polynomial in \(\omega\) when \(\nu\) is half-integer.
We will now comment on the even \(d\)/integer \(\nu\) case. The \(\cot\pi\nu\) diverges in this limit, and we need the analogue of Eq.(112) to figure out the counter-terms needed to remove this divergence. The analogous expansion is given by
\[(1+i\cot(\pi\nu))\widehat{K}_{\rm Out} =\frac{(-)^{n}}{\Gamma(n)^{2}}\frac{\Gamma\left(\frac{1+n-\mu-i \omega}{2}\right)\Gamma\left(\frac{1+n+\mu-i\omega}{2}\right)}{\Gamma\left( \frac{1-n+\mu-i\omega}{2}\right)\Gamma\left(\frac{1-n-\mu-i\omega}{2}\right) }\left[\frac{2}{\nu-n}\right. \tag{114}\] \[\left.+\psi^{(0)}\left(\frac{1+n-\mu-i\omega}{2}\right)+\psi^{(0 )}\left(\frac{1+n+\mu-i\omega}{2}\right)\right.\] \[\left.+\psi^{(0)}\left(\frac{1-n-\mu-i\omega}{2}\right)+\psi^{(0 )}\left(\frac{1-n+\mu-i\omega}{2}\right)-4\psi^{(0)}(n)+O(\nu-n)\right]\.\]
As in flat spacetime, we can counter-term away the first two terms, and change \(n\) back to \(\nu\). This yields the renormalised worldline Green function as[7, 73]
\[K_{\rm Out}|_{\rm Even\ d} =\Delta_{\mathcal{N}}(\nu,\mu,\omega)\left[\psi^{(0)}\left(\frac{ 1+\nu-\mu-i\omega}{2}\right)+\psi^{(0)}\left(\frac{1+\nu+\mu-i\omega}{2}\right)\right. \tag{115}\] \[\left.+\psi^{(0)}\left(\frac{1-\nu-\mu-i\omega}{2}\right)+\psi^{ (0)}\left(\frac{1-\nu+\mu-i\omega}{2}\right)-4\psi^{(0)}(\nu)\right]\,\]
where the function \(\Delta_{\mathcal{N}}\) is defined below in Eq.(111). To get this answer, we add to the counterterm in Eq.(110) further terms of the form
\[S_{ct,\rm Even}=\sum_{\mathbb{L}}\frac{1}{\nu-n}\int\frac{d\omega}{2\pi}r^{ \mathcal{N}-1+2n}\Delta_{\mathcal{N}}(n,\mu,\omega)\varphi_{{}_{N}}^{*}\varphi _{{}_{N}}|_{r_{c}}\, \tag{116}\]
where \(n=\ell+\frac{d}{2}-1\) and we have defined
\[\begin{split}\Delta_{\mathcal{N}}(n,\mu,\omega)&\equiv \frac{(-)^{n}}{\Gamma(n)^{2}}\frac{\Gamma\left(\frac{1+n-\mu-i\omega}{2}\right) \Gamma\left(\frac{1+n+\mu-i\omega}{2}\right)}{\Gamma\left(\frac{1-n+\mu-i \omega}{2}\right)\Gamma\left(\frac{1-n-\mu-i\omega}{2}\right)}=\frac{1}{ \Gamma(n)^{2}}\prod_{k=1}^{n}\left[\frac{\omega^{2}}{4}+\frac{1}{4}(\mu-n+2k- 1)^{2}\right]\\ &=\Delta_{\mathcal{N}}^{*}(n,\mu,\omega)\.\end{split} \tag{103}\]
Note that the explicit product form we give above is valid for \(n\in\mathbb{Z}_{+}\). This form shows that \(\Delta_{\mathcal{N}}\) is a real and even function of \(\omega\), which is an essential condition for such a counterterm to be admissible. With this counterterm, Eq.(101) is the dS generalisation of the radiation reaction influence phase in flat spacetime described by Eq.(100). The simple logarithmic running in flat spacetime is now replaced by a more complicated RGE with the Hubble constant playing the role of the IR cutoff. A low frequency expansion of \(K_{\rm Out}\) can be obtained by using the polygamma series expansion
\[\begin{split}\psi^{(0)}&\left(\frac{1+\nu-\mu-i\omega}{2}\right)+\psi^{(0)}\left(\frac{1+\nu+\mu-i\omega}{2}\right)+\psi^{(0)}\left(\frac{1-\nu-\mu-i\omega}{2}\right)+\psi^{(0)}\left(\frac{1-\nu+\mu-i\omega}{2}\right)\\ &=\sum_{k=0}^{\infty}\frac{1}{k!}\left(\frac{-i\omega}{2}\right)^{k}\left[\psi^{(k)}\left(\frac{1+\nu-\mu}{2}\right)+\psi^{(k)}\left(\frac{1+\nu+\mu}{2}\right)\right.\\ &\qquad\qquad\qquad\qquad\qquad\left.+\psi^{(k)}\left(\frac{1-\nu-\mu}{2}\right)+\psi^{(k)}\left(\frac{1-\nu+\mu}{2}\right)\right]\,\end{split} \tag{104}\]
which is well-defined except when any one of the poly-gamma arguments is a negative integer. These results agree with the dS expressions derived in [7].
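Both the product representation of \(\Delta_{\mathcal{N}}\) and the Laurent expansion used above as \(\nu\to n\) can be verified numerically; a short sketch (mpmath assumed, arbitrary parameters):

```python
import mpmath as mp

mp.mp.dps = 30
mu, w, n = mp.mpf('0.6'), mp.mpf('1.1'), 3

def gamma_ratio(nu):
    iw = 1j*w
    return (mp.gamma((1 + nu - mu - iw)/2)*mp.gamma((1 + nu + mu - iw)/2)
            / (mp.gamma((1 - nu + mu - iw)/2)*mp.gamma((1 - nu - mu - iw)/2)))

# (1) product form of Delta_N versus its definition through gamma functions
Delta = (-1)**n/mp.gamma(n)**2*gamma_ratio(n)
Delta_prod = 1/mp.gamma(n)**2*mp.fprod([w**2/4 + (mu - n + 2*k - 1)**2/4
                                        for k in range(1, n + 1)])
print(abs(Delta - Delta_prod))            # ~0, and the value is real

# (2) Laurent expansion of (1 + i*cot(pi*nu)) * Khat_Out as nu -> n
eps = mp.mpf('1e-6')
nu = n + eps
lhs = (1 + 1j*mp.cot(mp.pi*nu))*(-mp.exp(1j*mp.pi*nu))*2j*mp.pi/mp.gamma(nu)**2*gamma_ratio(nu)
dig = mp.digamma
rhs = Delta*(2/eps
             + dig((1 + n - mu - 1j*w)/2) + dig((1 + n + mu - 1j*w)/2)
             + dig((1 - n - mu - 1j*w)/2) + dig((1 - n + mu - 1j*w)/2)
             - 4*dig(n))
print(abs(lhs - rhs))                     # small, O(eps): pole and finite parts match
```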
We will conclude this section with a comment on the flat spacetime limit of the expressions derived in this appendix. Intuitively, we expect that the high-frequency modes with \(\omega\gg 1\) would be insensitive to the cosmological constant, and would behave like Minkowski modes. This intuition can be made precise by examining the high frequency expansion of \(K_{\rm Out}\). Using the Stirling approximation for the Gamma functions, we can check that the following holds for \(\omega\gg 1\):
\[K_{\rm Out}\approx\left\{\begin{array}{cc}\frac{2\pi i}{\Gamma(\nu)^{2}} \left(\frac{\omega}{2}\right)^{2\nu}&\text{for $d$ odd}\,\\ \frac{1}{\Gamma(\nu)^{2}}\left(\frac{\omega}{2}\right)^{2\nu}\ln\left(\frac{ \omega^{4}}{H^{4}}\right)&\text{for $d$ even}\.\end{array}\right. \tag{105}\]
Comparing these limits with Eqs.(102) and (100), we conclude \(K_{\rm Out}\) is indeed the dS generalisation of the radiation reaction kernel.
## Appendix C SK Green functions and the cosmological influence phase
We now turn to the problem of constructing the solution on the dS-SK spacetime contour. The construction here closely parallels the corresponding derivation in AdS[42; 43; 44; 46; 47] and we include a concise summary here mainly for completeness. The reader is encouraged to see these references for a more extensive discussion and interpretation of the expressions quoted below.
Our discussion in this appendix is structured as follows: we begin by extending our discussion of counter-terms etc. to the _incoming_ Green functions. Physically, such Green functions are relevant while describing the effect of a distant source in the past of the observer. As will be derived below, even if there are no sources present, an observer in dS spacetime sees cosmic background radiation at the dS temperature. We will need the incoming Green function to describe these waves.
### Time reversal, incoming waves and their branch-cut
We would now like to argue that the renormalised conjugate field continues to be finite for the Green function describing incoming waves. The incoming Green function can be computed from the answers we already have by using the time reversal isometry of the dS spacetime. The only non-trivial step involved is to realise how the time reversal isometry acts on EF coordinates.
The action of time reversal is achieved by the diffeomorphism
\[u\mapsto 2\pi i\zeta-u\,\ \omega\mapsto-\omega\, \tag{108}\]
where \(\zeta\) is the mock tortoise coordinate introduced in Eq.(3). One can check that this diffeomorphism preserves the metric in Eq.(107) and is hence an isometry. The map \(\omega\mapsto-\omega\) is necessary to maintain the \(\sim e^{-i\omega u}\) factor in the Fourier domain. The time reversal is hence achieved by reversing \(\omega\) and then multiplying all Fourier domain functions by a factor \(e^{-2\pi\omega\zeta}\).
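The isometry claim is easy to verify symbolically; the sketch below (sympy assumed) uses the explicit form of the mock tortoise coordinate that follows from \(r_{*}=-i\pi\zeta\):

```python
import sympy as sp

r, du, dr = sp.symbols('r du dr')

# mock tortoise coordinate: r_* = (1/2) log((1+r)/(1-r)) = -i*pi*zeta
zeta = sp.I/(2*sp.pi)*sp.log((1 + r)/(1 - r))

# time reversal u -> 2*pi*i*zeta - u, acting on the differential du (r is unchanged)
du_rev = 2*sp.pi*sp.I*sp.diff(zeta, r)*dr - du

ds2 = lambda DU: -2*DU*dr - (1 - r**2)*DU**2     # radial-temporal part of the metric
print(sp.simplify(ds2(du_rev) - ds2(du)))        # -> 0, so the map is an isometry
```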
Using the time reversal isometry the bulk to worldline Green function with incoming boundary condition takes the form
\[G^{\text{In}}_{\mathcal{N}}(r,\omega,\mathbb{L})=e^{-2\pi\omega\zeta}G^{\text{ Out*}}_{\mathcal{N}}(r,\omega,\mathbb{L}) \tag{109}\]
Unlike \(G^{\text{Out}}_{\mathcal{N}}\), the Green function \(G^{\text{In}}_{\mathcal{N}}\) has a branch-cut on the dS-SK contour, taking different values in the left vs. right static patches. The near origin expansion of \(G^{\text{In}}_{\mathcal{N}}\) can be obtained by using the Euler transformation in Eq.(100):
\[\begin{split} G^{\text{In}}_{\mathcal{N}}(r,\omega,\mathbb{L})& \equiv e^{-2\pi\omega\zeta}G^{\text{Out*}}_{\mathcal{N}}=e^{-2\pi \omega\zeta}\left(\frac{1-r}{1+r}\right)^{-i\omega}\times r^{-\nu-\frac{1}{2}( \mathcal{N}-1)}(1+r)^{-i\omega}\\ &\times\Big{\{}_{2}F_{1}\left[\frac{1-\nu+\mu-i\omega}{2},\frac{ 1-\nu-\mu-i\omega}{2};1-\nu;r^{2}\right]\\ &\qquad-(1-i\cot\pi\nu)\widehat{K}_{\text{In}}\frac{r^{2\nu}}{2 \nu}\ _{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+\nu;r^ {2}\right]\Big{\}}\.\end{split} \tag{110}\]
The branch cuts of the explicit \((1\pm r)^{-i\omega}\) are chosen to lie outside the open unit disc in the complex \(r\) plane and a careful evaluation of the pre-factor above yields
\[e^{-2\pi\omega\zeta}\left(\frac{1-r}{1+r}\right)^{-i\omega}=\begin{cases}1& \text{L contour}\\ e^{-2\pi\omega}&\text{R contour}\.\end{cases} \tag{111}\]
This shows explicitly the branch-cut and jump in the incoming Green function. In the above equation, the symbol \(\widehat{K}_{\text{In}}\) denotes the worldline advanced Green function given by the expression
\[\begin{split}\widehat{K}_{\text{In}}(\omega,\ell)& \equiv[\widehat{K}_{\text{Out}}(\omega,\ell)]^{*}=-e^{-2\pi i\nu} \widehat{K}^{\text{Out}}(-\omega,\ell)\\ &=e^{-i\nu\pi}\frac{2\pi i}{\Gamma(\nu)^{2}}\frac{\Gamma\left(\frac{1+ \nu-\mu+i\omega}{2}\right)\Gamma\left(\frac{1+\nu+\mu+i\omega}{2}\right)}{ \Gamma\left(\frac{1-\nu+\mu+i\omega}{2}\right)\Gamma\left(\frac{1-\nu-\mu+i \omega}{2}\right)}\.\end{split} \tag{112}\]
The comments made in the context of \(\widehat{K}_{\text{Out}}\) below Eq.(102) apply also in this case. The decomposition in Eq.(110) is the incoming-wave counterpart of the singular/regular split described above for the outgoing Green function.
Given the above definition of \(G^{\text{In}}_{\mathcal{N}}\), it is now straightforward to compute the renormalised conjugate field. Since the incoming mode has a branch cut, it behaves differently at the two boundaries. Adding in the counterterm in Eq.(101), we get the renormalised conjugate field as
\[\begin{split}\pi^{\text{In}}_{\mathcal{N}}&\equiv- \left[r^{\mathcal{N}}D_{+}+r^{\mathcal{N}-1}\mathscr{C}_{\mathcal{N}}\right]G^ {\text{In}}_{\mathcal{N}}\\ &=-e^{-2\pi\omega\zeta}\left[r^{\mathcal{N}}D_{-}+r^{\mathcal{N}- 1}\mathscr{C}_{\mathcal{N}}\right]G^{\text{Out*}}_{\mathcal{N}}\\ &=e^{-2\pi\omega\zeta}\pi^{\text{Out*}}_{\mathcal{N}}.\end{split} \tag{113}\]
Here we have used \(D_{\pm}\equiv(1-r^{2})\partial_{r}\pm i\omega\) as well as the property that \(D_{+}[e^{-2\pi\omega\zeta}\#]=e^{-2\pi\omega\zeta}D_{-}[\#]\). Using Eq.(B.26), we obtain
\[\begin{split}\pi_{N}^{\rm In}&=(1-i\cot\pi\nu) \widehat{K}_{\rm In}e^{-2\pi\omega\zeta}\left(\frac{1-r}{1+r}\right)^{-i\omega }\mathscr{Z}_{\mathcal{N}}(r,\omega)\\ &\quad\times r^{\nu+\frac{1}{2}(\mathcal{N}-1)}(1+r)^{-i\omega}{} _{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+\nu;r ^{2}\right]\.\end{split}\] (C.7)
As in the case of outgoing waves, we see again that the renormalised conjugate field is the regular part of the incoming waves renormalised with the same factor \(\mathscr{Z}_{\mathcal{N}}(r,\omega)\). We can then take \(r\to 0\) limit above and below the branch cut to get
\[\lim_{r\to 0}r^{-\nu-\frac{1}{2}(\mathcal{N}-1)}\pi_{\mathcal{N}}^{\rm In}= \begin{cases}(1-i\cot\pi\nu)\widehat{K}_{\rm In}&\text{L boundary },\\ e^{-2\pi\omega}(1-i\cot\pi\nu)\widehat{K}_{\rm In}&\text{R boundary }.\end{cases}\] (C.8)
This shows that the counter-term we derived also works for the incoming waves. When \(d\) is odd and \(\cot\pi\nu=0\), we can remove the dimensional regularisation without any further counterterms. The analogue of Eq.(B.29) for the incoming waves is
\[K_{\rm In}|_{\text{odd d}}=(1-i\cot\pi\nu)\widehat{K}_{\rm In}|_{\text{odd d}}=(K_{\text{Out}})^{*}|_{\text{odd d}}=e^{-i\nu\pi}\frac{2\pi i}{\Gamma(\nu)^{2}}\frac{\Gamma\left( \frac{1+\nu-\mu+i\omega}{2}\right)\Gamma\left(\frac{1+\nu+\mu+i\omega}{2} \right)}{\Gamma\left(\frac{1-\nu+\mu+i\omega}{2}\right)\Gamma\left(\frac{1- \nu-\mu+i\omega}{2}\right)}\.\] (C.9)
All our statements about \(K_{\text{Out}}\) in odd \(d\) apply mutatis mutandis to \(K_{\rm In}\).
When \(d\) is even and \(\nu\) approaches an integer, there are additional divergences due to \(\cot\pi\nu\). We already encountered such divergences and countertermed them away for outgoing waves. We now have to check that the counterterms in Eq.(B.35), added to cancel such divergences for the outgoing waves, also work for the incoming waves. To see this, we examine the expansion
\[\begin{split}(1-i\cot\pi\nu)\widehat{K}_{\rm In}(\nu)& =\Delta_{\mathcal{N}}(n,\mu,\omega)\left[\frac{2}{\nu-n}\right.\\ &\left.+\psi^{(0)}\left(\frac{1+n-\mu+i\omega}{2}\right)+\psi^{(0 )}\left(\frac{1+n+\mu+i\omega}{2}\right)\right.\\ &\left.+\psi^{(0)}\left(\frac{1-n-\mu+i\omega}{2}\right)+\psi^{(0 )}\left(\frac{1-n+\mu+i\omega}{2}\right)-4\psi^{(0)}(n)+O(\nu-n)\right]\,\end{split}\] (C.10)
where \(\Delta_{\mathcal{N}}(n,\mu,\omega)\) is given by Eq.(B.36). Here, we have used crucially the fact that \(\Delta_{\mathcal{N}}\) is a real, even function of \(\omega\).
From the above expression, we can see that the incoming conjugate field in Eq.(C.7) is also rendered finite by the same counterterms as before. Crucially, the monodromy factors of \(e^{-2\pi\omega\zeta}\) work out correctly to cancel the divergences near both the left/right world lines. We get the final renormalised advanced worldline Green function as
\[\begin{split} K_{\rm In}|_{\text{Even }d}&=\Delta_{ \mathcal{N}}(\nu,\mu,\omega)\left[\psi^{(0)}\left(\frac{1+\nu-\mu+i\omega}{2} \right)+\psi^{(0)}\left(\frac{1+\nu+\mu+i\omega}{2}\right)\right.\\ &\left.+\psi^{(0)}\left(\frac{1-\nu-\mu+i\omega}{2}\right)+\psi^{( 0)}\left(\frac{1-\nu+\mu+i\omega}{2}\right)-4\psi^{(0)}(\nu)\right]\,\end{split}\] (C.11)
To conclude, we have demonstrated a set of counterterms which result in finite answers for conjugate fields evaluated over both outgoing as well as incoming waves. The final renormalised conjugate field is given by
\[\lim_{r\to 0}r^{-\nu-\frac{1}{2}(\mathcal{N}-1)}\pi^{\text{In}}_{ \mathcal{N}}=\begin{cases}K_{\text{In}}&\text{L boundary }\,,\\ e^{-2\pi\omega}K_{\text{In}}&\text{R boundary }\,.\end{cases} \tag{109}\]
Since the most general solution on the dS-SK geometry is a linear combination of outgoing/incoming waves, it follows that our counterterm prescription will yield a finite answer for the cosmological influence phase.
### Point-like sources and Green functions on dS-SK contour
In this subsection, we solve for the unique combination of outgoing and incoming waves corresponding to a point source placed at the centre(s) of left/right static patches in dS-SK geometry. As we will describe subsequently, with some more effort, arbitrary extended sources on the dS-SK background can also be dealt with.
We describe the point source problem first to introduce, within a simpler setting, the ingredients needed for the extended sources. As we shall see, in analogy with AdS, we can think of the problem of point sources placed at the centre of the static patch as one involving boundary-to-bulk Green functions. In contrast, the problem of extended sources is that of bulk-to-bulk Green functions, and it is hence fairly more involved.
As described in the main text, the solution for the bulk field produced by point-like sources is given by Eq.(104). Using Eq.(105), we then have
\[\varphi_{{}_{\mathcal{N}}}=g_{R}\,\mathcal{J}_{R}-g_{L}\,\mathcal{J}_{L}\, \tag{110}\]
where we have defined
\[\begin{split}& g_{L}\equiv n_{\omega}\Big{(}G^{\text{Out}}_{ \mathcal{N}}-e^{2\pi\omega(1-\zeta)}G^{\text{Out}*}_{\mathcal{N}}\Big{)}\,\\ & g_{R}\equiv(1+n_{\omega})\Big{(}G^{\text{Out}}_{\mathcal{N}}-e^ {-2\pi\omega\zeta}G^{\text{Out}*}_{\mathcal{N}}\Big{)}\.\end{split} \tag{111}\]
These are the dS analogues of the left/right _boundary-to-bulk_ Green functions which tell us how left and right sources affect the solution on the dS-SK geometry. They obey the Kubo-Martin-Schwinger (KMS) relation \(g_{R}(\zeta)=e^{2\pi\omega}g_{L}(1+\zeta)\) as well as the following boundary conditions on the dS-SK contour:
\[\begin{split}&\lim_{\zeta\to 0}r^{\nu+\frac{\mathcal{N}-1}{2}}g_{ L}=-1\,\quad\lim_{\zeta\to 0}r^{\nu+\frac{\mathcal{N}-1}{2}}g_{R}=0\,\\ &\lim_{\zeta\to 1}r^{\nu+\frac{\mathcal{N}-1}{2}}g_{L}=0\,\quad \lim_{\zeta\to 1}r^{\nu+\frac{\mathcal{N}-1}{2}}g_{R}=1.\end{split} \tag{112}\]
This result can be derived directly from the boundary condition in Eq.(108). The above conditions imply that the Green functions \(g_{L,R}\) are two different smooth interpolations between the homogeneous solution regular at the origin on one side and a Green function with a source singularity on the other side. Thus, \(g_{R}\) is regular near the left boundary whereas \(g_{L}\) is regular near the right boundary.
The Green functions \(g_{L,R}\) can be written down explicitly. Substituting Eqs.(110) and (106) into Eq.(114), we get the following expressions:
\[g_{L} =n_{\omega}r^{-\nu-\frac{1}{2}(\mathcal{N}-1)}(1+r)^{-i\omega}\] \[\times\left\{\left[1-e^{2\pi\omega(1-\zeta)}\left(\frac{1-r}{1+r} \right)^{-i\omega}\right]\times{}_{2}F_{1}\left[\frac{1-\nu+\mu-i\omega}{2}, \frac{1-\nu-\mu-i\omega}{2};1-\nu;r^{2}\right]\right.\] \[\left.\quad-i\cot\pi\nu\left[\widehat{K}_{\text{Out}}+e^{2\pi \omega(1-\zeta)}\left(\frac{1-r}{1+r}\right)^{-i\omega}\widehat{K}_{\text{In} }\right]\times\frac{r^{2\nu}}{2\nu}\ {}_{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+ \nu;r^{2}\right]\right.\] \[\left.\quad-\left[\widehat{K}_{\text{Out}}-e^{2\pi\omega(1- \zeta)}\left(\frac{1-r}{1+r}\right)^{-i\omega}\widehat{K}_{\text{In}}\right] \times\frac{r^{2\nu}}{2\nu}\ {}_{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+ \nu;r^{2}\right]\right\}\,, \tag{116}\]
and
\[g_{R} =(1+n_{\omega})r^{-\nu-\frac{1}{2}(\mathcal{N}-1)}(1+r)^{-i\omega}\] \[\times\left\{\left[1-e^{-2\pi\omega\zeta}\left(\frac{1-r}{1+r} \right)^{-i\omega}\right]\times{}_{2}F_{1}\left[\frac{1-\nu+\mu-i\omega}{2}, \frac{1-\nu-\mu-i\omega}{2};1-\nu;r^{2}\right]\right.\] \[\left.\quad-i\cot\pi\nu\left[\widehat{K}_{\text{Out}}+e^{-2\pi \omega\zeta}\left(\frac{1-r}{1+r}\right)^{-i\omega}\widehat{K}_{\text{In}} \right]\times\frac{r^{2\nu}}{2\nu}\ {}_{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+ \nu;r^{2}\right]\right.\] \[\left.\quad-\left[\widehat{K}_{\text{Out}}-e^{-2\pi\omega\zeta} \left(\frac{1-r}{1+r}\right)^{-i\omega}\widehat{K}_{\text{In}}\right]\times \frac{r^{2\nu}}{2\nu}\ {}_{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+ \nu;r^{2}\right]\right\}\,. \tag{117}\]
These equations describe the Dirac-Detweiler-Whiting[109, 110] type decomposition of the left/right Green functions into a singular solution which does not contribute to the radiation reaction, and a regular solution (the terms in the last line of each equation) which contributes to the finite influence phase.
Having said that, the reader should note that the expressions above are fairly complicated, with an elaborate branch cut structure that cannot be easily guessed a priori without the dS-SK prescription. These formulae are further complicated by the fact that we are forced to work with dimensional regularisation for even \(d\). We will simplify the expressions for these dS boundary-to-bulk propagators in the next subsection when we describe extended sources. For present purposes, it is, however, sufficient to note the following: despite the complexity of expressions, given that we have a counterterm procedure that works both for outgoing and incoming waves, we are guaranteed a finite renormalised conjugate field.
To see this explicitly, we construct the corresponding renormalised conjugate field
\[\pi_{\mathcal{N}}(\zeta,\omega,\ell)=-\pi_{\mathcal{N}}^{\text{Out}}(r,\omega,\ell)\mathfrak{J}_{\bar{F}}+e^{2\pi\omega(1-\zeta)}\pi_{\mathcal{N}}^{\text{ Out}*}(r,\omega,\ell)\mathfrak{J}_{\bar{P}}=\pi_{R}(\zeta,\omega,\ell) \mathfrak{J}_{R}-\pi_{L}(\zeta,\omega,\ell)\mathfrak{J}_{L}\, \tag{118}\]
with the left/right boundary-to-bulk Green functions for the conjugate field defined by
\[\pi_{L}(\zeta,\omega,\ell) \equiv-\left[r^{\mathcal{N}}D_{+}+r^{\mathcal{N}-1}\mathscr{C}_{ \mathcal{N}}\right]g_{L}(\zeta,\omega,\ell)=n_{\omega}\Big{(}\pi_{\mathcal{N}} ^{\text{Out}}(r,\omega,\ell)-e^{2\pi\omega(1-\zeta)}\pi_{\mathcal{N}}^{\text{ Out}*}(r,\omega,\ell)\Big{)}\,\] \[\pi_{R}(\zeta,\omega,\ell) \equiv-\left[r^{\mathcal{N}}D_{+}+r^{\mathcal{N}-1}\mathscr{C}_{ \mathcal{N}}\right]g_{R}(\zeta,\omega,\ell)=(1+n_{\omega})\Big{(}\pi_{ \mathcal{N}}^{\text{Out}}(r,\omega,\ell)-e^{-2\pi\omega\zeta}\pi_{\mathcal{N}} ^{\text{Out}*}(r,\omega,\ell)\Big{)}. \tag{119}\]
The equality here follows from a logic similar to that used in Eq.(C.6). The explicit forms of \(\pi^{\rm Out}_{\mathcal{N}}\) and \(e^{-2\pi\omega\zeta}\pi^{\rm Out*}_{\mathcal{N}}\) are given in Eqs.(B.26) and (C.7) respectively. Substituting them in, we get
\[\begin{split}\pi_{L}&=n_{\omega}\left[(1+i\cot\pi \nu)\widehat{K}_{\rm Out}-e^{2\pi\omega(1-\zeta)}\left(\frac{1-r}{1+r}\right)^ {-i\omega}(1-i\cot\pi\nu)\widehat{K}_{\rm In}\right]\mathscr{Z}_{\mathcal{N}}( \omega,r)\\ &\quad\times r^{\nu+\frac{1}{2}(\mathcal{N}-1)}(1+r)^{-i\omega}{} _{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+\nu; r^{2}\right]\,\\ \pi_{R}&=(1+n_{\omega})\left[(1+i\cot\pi\nu)\widehat{ K}_{\rm Out}-e^{-2\pi\omega\zeta}\left(\frac{1-r}{1+r}\right)^{-i\omega}(1-i \cot\pi\nu)\widehat{K}_{\rm In}\right]\mathscr{Z}_{\mathcal{N}}(\omega,r)\\ &\quad\times r^{\nu+\frac{1}{2}(\mathcal{N}-1)}(1+r)^{-i\omega}{} _{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+\nu; r^{2}\right]\.\end{split}\] (C.20)
This picks out the regular part of the solution on dS-SK contour renormalised by \(\mathscr{Z}_{\mathcal{N}}(\omega,r)\), as expected.
For \(\nu\in\mathbb{Z}\), we should subtract the \(\cot\pi\nu\) divergences using further counterterms in Eq.(B.35): once this is done, we can relax the dimensional regularisation and effectively replace
\[(1+i\cot\pi\nu)\widehat{K}_{\rm Out}\to K_{\rm Out}\,\quad(1-i\cot\pi\nu) \widehat{K}_{\rm In}\to K_{\rm In}\.\]
After this is done, we can take \(r\to 0\) limit on both sides of the dS-SK contour to get
\[\lim_{r\to 0}r^{-\nu-\frac{1}{2}(\mathcal{N}-1)}\pi_{\mathcal{N}}=\begin{cases}K _{LR}\mathcal{J}_{R}-K_{LL}\mathcal{J}_{L}&\text{L boundary },\\ K_{RR}\mathcal{J}_{R}-K_{RL}\mathcal{J}_{L}&\text{R boundary },\end{cases}\] (C.21)
where we have defined the Schwinger-Keldysh worldline Green functions via
\[\begin{split} K_{LL}&\equiv n_{\omega}K_{\rm Out}-(1+n_{ \omega})K_{\rm In}\,\ \ K_{LR}\equiv(1+n_{\omega})\Big{(}K_{\rm Out}-K_{\rm In}\Big{)}\,\\ K_{RL}&\equiv n_{\omega}\Big{(}K_{\rm Out}-K_{\rm In} \Big{)}\,\ \ K_{RR}\equiv(1+n_{\omega})K_{\rm Out}-n_{\omega}K_{\rm In}\.\end{split}\] (C.22)
These are exactly the expressions for the Schwinger-Keldysh two-point functions of a bosonic system coupled to a thermal bath[53, 54, 62].
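The structural identities satisfied by these SK Green functions are easy to verify; the sketch below (sympy assumed) takes \(n_{\omega}\) to be the Bose-Einstein factor at the dS temperature, \(n_{\omega}=(e^{2\pi\omega}-1)^{-1}\), an assumption consistent with the relation \(1+n_{\omega}+n_{-\omega}=0\) used shortly:

```python
import sympy as sp

w = sp.symbols('omega', positive=True)
KOut, KIn = sp.symbols('K_Out K_In')

q = sp.exp(2*sp.pi*w)
n = 1/(q - 1)            # n_omega (Bose-Einstein weight; assumed form)
n_minus = 1/(1/q - 1)    # n_{-omega}

print(sp.simplify(1 + n + n_minus))              # -> 0

KLL = n*KOut - (1 + n)*KIn
KLR = (1 + n)*(KOut - KIn)
KRL = n*(KOut - KIn)
KRR = (1 + n)*KOut - n*KIn

print(sp.simplify(KRR + KLL - KLR - KRL))        # -> 0   (SK identity)
print(sp.simplify(KRL - KLR/q))                  # -> 0   (KMS: K_RL = e^{-2*pi*omega} K_LR)
```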
Now that we have the near-origin values of both the generalised free scalar field and its renormalised conjugate field, we are ready to compute the influence phase of the observer in the saddle point approximation by evaluating the on-shell action. We want to compute the action given in Eq.(B.18) along with the counter-term in Eqs.(B.21) and (B.35) over the dS-SK solution we found in Eq.(C.13). We begin with the full action
\[\begin{split} S&=-\frac{1}{2}\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\oint\frac{r^{\mathcal{N}}dr}{1-r^{2}}\Big{[}(D_{+}\varphi_{{}_{\mathcal{N}}})^{*}D_{+}\varphi_{{}_{\mathcal{N}}}-\omega^{2}\varphi_{{}_{\mathcal{N}}}^{*}\varphi_{{}_{\mathcal{N}}}\\ &\quad-\frac{1-r^{2}}{4r^{2}}\Big{\{}(\mathcal{N}-1)^{2}-4\nu^{2}+[4\mu^{2}-(\mathcal{N}+1)^{2}]r^{2}\Big{\}}\varphi_{{}_{\mathcal{N}}}^{*}\varphi_{{}_{\mathcal{N}}}\Big{]}+S_{ct}\,\end{split}\] (C.23)
integrate by parts over the bulk terms and then use the equation of motion in Eq.(B.7). This results in an on-shell action written purely in terms of boundary terms:
\[S_{\text{On-Shell}}=\frac{1}{2}\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\varphi _{{}_{\mathcal{N}}}^{*}\pi_{\mathcal{N}}|_{\text{Bnd}}=-\frac{1}{2}\sum_{ \mathbb{L}}\int\frac{d\omega}{2\pi}\Big{\{}\mathcal{J}_{R}^{*}[K_{RR} \mathcal{J}_{R}-K_{RL}\mathcal{J}_{L}]-\mathcal{J}_{L}^{*}[K_{LR}\mathcal{J}_ {R}-K_{LL}\mathcal{J}_{L}]\Big{\}}\,\] (C.24)
where \(\pi_{\mathcal{N}}\) is the renormalised conjugate field defined in Eq.(B.22). Here we have used the fact that the integrand in the first step can be written as a product
\[[r^{\nu+\frac{1}{2}(N-1)}\varphi_{{}_{\mathcal{N}}}]^{*}r^{-\nu-\frac{1}{2}(N-1 )}\pi_{\mathcal{N}}\,\] (C.25)
and each factor in this product has a finite limit as we remove the regulator at the boundary (i.e. take the \(r_{c}\to 0\) limit). The dS-SK contour integral \(\oint\) runs clockwise from the right static patch to the left static patch, thus resulting in the sign of the final expression above.
We can further simplify the above expression using the reality properties of the multipole sources as well as \(1+n_{\omega}+n_{-\omega}=0\). The cosmological influence phase of the point-like dS observer can then be written in the form given in Eq.(3.10).
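The integration-by-parts step that collapses the bulk action to the boundary term in Eq.(C.24) can be illustrated on a one-dimensional toy model. The sketch below is our illustration, not the dS computation: it takes the action \(S=\tfrac{1}{2}\int_{0}^{1}(u'^{2}+u^{2})\,dx\) and checks that, on a solution of \(u''=u\), the bulk integral reduces to \(\tfrac{1}{2}\,u\,u'\big|_{0}^{1}\), in direct analogy with \(\tfrac{1}{2}\varphi^{*}_{\mathcal{N}}\pi_{\mathcal{N}}\big|_{\text{Bnd}}\).

```python
import sympy as sp

# Toy model (ours, not the dS computation): on-shell, a quadratic bulk action
# reduces to a boundary term, mirroring the step leading to Eq.(C.24).
x, A, B = sp.symbols('x A B')
u = A*sp.cosh(x) + B*sp.sinh(x)                     # general solution of u'' = u
S_bulk = sp.Rational(1, 2)*sp.integrate(sp.diff(u, x)**2 + u**2, (x, 0, 1))
uu = u*sp.diff(u, x)
S_bnd = sp.Rational(1, 2)*(uu.subs(x, 1) - uu.subs(x, 0))
print(sp.simplify((S_bulk - S_bnd).rewrite(sp.exp)))  # 0: bulk action = boundary term
```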
### Extended sources on dS-SK contour I: bulk-to-bulk propagator
In this section, we will describe the problem of a finite size observer within dS spacetime. One motivation for such an exercise is to give a more physical version of the regularisation, counter-terms and renormalisation described in the previous sub-sections. We will see that indeed a finite size observer has a renormalised cosmological influence phase, which, as its size is reduced, approaches the result for a point-like observer. Apart from this formal motivation, we are also interested in checking whether the conjectured dS-SK saddle point correctly reproduces the finite size physics in dS. As we shall see, this is also a way to naturally generalise our construction to a non-co-moving observer with a peculiar velocity as well as to describe observers made of multiple worldlines (or equivalently the case of a string or a membrane in dS).
The main physics in all the above cases is that of relative time-delays: for an extended source, its effective radiative multipole moments have to be computed by adding up source strengths at various points with different time-delays. This is necessary because the emitted wave takes a finite amount of time to cross an extended source, and this wave-crossing time has to be accounted for when adding up emissions from two farther ends of the source. For spherical sources in flat space, this translates to modulating the source with an appropriate Bessel J function in frequency domain. We will see below that an analogous statement in dS emerges naturally out of dS-SK saddle-point geometry.
Let us begin by describing our setup. Consider an extended source of the generalised/designer scalar field in dS spacetime. This means modifying the radial ODE in Eq.(B.7) by a source term of the form
\[\begin{split}\frac{1}{r^{\mathcal{N}}}D_{+}&[r^{ \mathcal{N}}D_{+}\varphi_{{}_{\mathcal{N}}}]+\omega^{2}\varphi_{{}_{\mathcal{ N}}}\\ &+\frac{1-r^{2}}{4r^{2}}\Big{\{}(\mathcal{N}-1)^{2}-4\nu^{2}+[4 \mu^{2}-(\mathcal{N}+1)^{2}]r^{2}\Big{\}}\varphi_{{}_{\mathcal{N}}}+(1-r^{2} )\varrho_{\mathcal{N}}(\zeta,\omega,\mathbb{L})=0\.\end{split}\] (C.26)
In the context of dS-SK contour, we will let \(\varrho_{\mathcal{N}}\) be a general function over the saddle-point geometry, allowing it to even take completely different values in the two copies of the static patch (i.e., as a function of complex \(r\), it is allowed to have a branch-cut along the static patch). The \((\omega,\mathbb{L})\) arguments of \(\varrho_{\mathcal{N}}\) imply that we also allow the most general time/angle dependence.
The solution for the above ODE can then be written in terms of an appropriate dS-SK contour-ordered, bulk-to-bulk Green function:
\[\varphi_{{}_{\mathcal{N}}}(\zeta,\omega,\mathbb{L})=\oint r_{0}^{\mathcal{N}} dr_{0}\ \mathbb{G}(\zeta|\zeta_{0},\omega,\mathbb{L})\varrho_{{}_{\mathcal{N}}}(\zeta_{0 },\omega,\mathbb{L})\.\] (C.27)
Here \(\oint\) refers to the integral over the clockwise dS-SK contour and \(\mathbb{G}\) is the radial Green function satisfying the appropriate boundary conditions (which we will detail below).
According to our proposal in this note, the influence phase of the extended source can be computed by solving the above ODE everywhere on dS-SK and then substituting the solution into the action corresponding to the above ODE, viz., by evaluating
\[\begin{split} S&=-\frac{1}{2}\sum_{\mathbb{L}}\int \frac{d\omega}{2\pi}\oint\frac{r^{\mathcal{N}}dr}{1-r^{2}}\Big{[}(D_{+}\varphi _{{}_{\mathcal{N}}})^{*}D_{+}\varphi_{{}_{\mathcal{N}}}-\omega^{2}\varphi_{{}_ {\mathcal{N}}}^{*}\varphi_{{}_{\mathcal{N}}}\\ &\quad\quad-\frac{1-r^{2}}{4r^{2}}\Big{\{}(\mathcal{N}-1)^{2}-4 \nu^{2}+[4\mu^{2}-(\mathcal{N}+1)^{2}]r^{2}\Big{\}}\varphi_{{}_{\mathcal{N}}}^ {*}\varphi_{{}_{\mathcal{N}}}\Big{]}\\ &\quad\quad+\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\oint r^{ \mathcal{N}}dr\ \varphi_{{}_{\mathcal{N}}}^{*}\varrho_{{}_{\mathcal{N}}}+S_{ct}[ \varrho_{{}_{\mathcal{N}}}]\end{split} \tag{102}\]
on the Green function solution above. The last line in the action above gives the source term and the counter-term parts of the action.23 For a truly extended source, counter-terms are not necessary for finiteness and their job is to provide the finite renormalisation of the conservative part of the action.
Footnote 23: The reader should note that the counterterms used here for extended sources need not (and, indeed will not) match with the counterterms used for point sources in the previous subsections.
Using the radial ODE above, the on-shell action can be reduced to the following simple form
\[\begin{split} S|_{\mathbf{On-shell}}&=\frac{1}{2} \sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\oint r^{\mathcal{N}}dr\ \varrho_{{}_{\mathcal{N}}}^{*}\varphi_{{}_{\mathcal{N}}}|_{\mathbf{On-shell}} +S_{ct}[\varrho_{{}_{\mathcal{N}}}]\\ &=\frac{1}{2}\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}\oint r^{ \mathcal{N}}dr\ \oint r_{0}^{\mathcal{N}}dr_{0}\ [\varrho_{{}_{\mathcal{N}}}(\zeta,\omega, \mathbb{L})]^{*}\mathbb{G}(\zeta|\zeta_{0},\omega,\mathbb{L})\varrho_{{}_{ \mathcal{N}}}(\zeta_{0},\omega,\mathbb{L})+S_{ct}[\varrho_{{}_{\mathcal{N}}}] \.\end{split} \tag{103}\]
Thus, once we solve for the bulk-to-bulk Green function \(\mathbb{G}\), we can substitute it into the above expression to obtain the dS-SK saddle point answer for the cosmological influence phase \(S_{\mathrm{CIP}}\). While it is not immediately evident, we will demonstrate in the next subsection that _the dissipative part of the influence phase for the extended sources computed from the expression above, when written in terms of appropriate multipole moments, takes a form identical to that for a point source derived before._ In addition to this radiation reaction, for an extended source, we also expect conservative interactions between its different internal parts.
Let us now derive an explicit expression for the bulk-to-bulk Green function \(\mathbb{G}\). The construction here is analogous to the one in vacuum AdS[66], as well as the contour-ordered bulk-to-bulk Green function in the SK contour corresponding to planar AdS black holes[77; 51]. We will demand that this Green function be regular at the edges of dS-SK contour, viz., we require that
\[\lim_{\zeta\to 0}r^{\nu+\frac{N-1}{2}}\mathbb{G}=\lim_{\zeta\to 1}r^{\nu+ \frac{N-1}{2}}\mathbb{G}=0. \tag{104}\]
Further, to be a Green function, it should obey the ODE
\[\begin{split}&\frac{1}{r^{\mathcal{N}}}D_{+}[r^{\mathcal{N}}D_{+} \mathbb{G}]+\omega^{2}\mathbb{G}\\ &\quad\quad+\frac{1-r^{2}}{4r^{2}}\Big{\{}(\mathcal{N}-1)^{2}-4 \nu^{2}+[4\mu^{2}-(\mathcal{N}+1)^{2}]r^{2}\Big{\}}\mathbb{G}+\frac{1}{r^{ \mathcal{N}}}(1-r^{2})\delta_{c}(r-r_{0})=0\.\end{split} \tag{105}\]
Here \(\delta_{c}(r-r_{0})\) is the contour-ordered delta function on the dS-SK contour. The above ODE implies that \(\mathbb{G}\) is a solution of the homogeneous radial ODE for \(\zeta\neq\zeta_{0}\) with a unit discontinuity in the conjugate field at \(\zeta=\zeta_{0}\). We have already solved the homogeneous radial ODE for point sources to construct the left and right boundary-to-bulk Green functions in Eq.(100). These are solutions characterised by the boundary conditions specified in Eq.(102).
Looking at Eq.(111), we conclude that we should take \(\mathbb{G}\propto g_{R}\) near the left boundary and \(\mathbb{G}\propto g_{L}\) near the right boundary since these are the solutions that satisfy the necessary regularity conditions in Eq.(114). Demanding continuity, we surmise that
\[\mathbb{G}(\zeta|\zeta_{0},\omega,\mathbb{L}) =\frac{1}{W_{LR}(\zeta_{0},\omega,\mathbb{L})}g_{R}(\zeta_{>}, \omega,\mathbb{L})g_{L}(\zeta_{<},\omega,\mathbb{L}) \tag{115}\] \[\equiv\frac{1}{W_{LR}(\zeta_{0},\omega,\mathbb{L})}\begin{cases}g_ {R}(\zeta,\omega,\mathbb{L})g_{L}(\zeta_{0},\omega,\mathbb{L})&\text{ if }\zeta\succ\zeta_{0}\\ g_{L}(\zeta,\omega,\mathbb{L})g_{R}(\zeta_{0},\omega,\mathbb{L})&\text{ if } \zeta\prec\zeta_{0}\end{cases}\.\]
Here the symbols \(\succ\) and \(\prec\) denote comparison using the radial contour ordering of the dS-SK contour. The unit discontinuity condition on the conjugate field fixes the function \(W_{LR}\) to be the Wronskian between right and left boundary-to-bulk Green functions, viz.,
\[W_{LR}(\zeta,\omega,\mathbb{L}) \equiv g_{L}\pi_{R}-g_{R}\pi_{L}=(1+n_{\omega})e^{-2\pi\omega\zeta}\Big{(}G^{\text{Out}}_{\mathcal{N}}\pi^{\text{Out}*}_{\mathcal{N}}-G^{\text{Out}*}_{\mathcal{N}}\pi^{\text{Out}}_{\mathcal{N}}\Big{)} \tag{116}\] \[=(1+n_{\omega})e^{-2\pi\omega\zeta}\left[(1-i\cot\pi\nu)\widehat{K}_{\text{In}}-(1+i\cot\pi\nu)\widehat{K}_{\text{Out}}\right]\.\]
Here, the equality in the first line follows from Eqs.(110) and (112). The last equality follows by substituting the expressions for \(G^{\text{Out}}_{\mathcal{N}}\) and \(\pi^{\text{Out}}_{\mathcal{N}}\) from Eqs.(115) and (116), and then invoking the following hypergeometric Wronskian identity
\[\mathscr{Z}_{\mathcal{N}}(r,\omega)r^{\nu+\frac{1}{2}(\mathcal{N}- 1)}(1+r)^{-i\omega}{}_{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+ \mu-i\omega}{2};1+\nu;r^{2}\right] \tag{117}\] \[\qquad\qquad=\left(\frac{1-r}{1+r}\right)^{i\omega}\left\{r^{- \nu-\frac{1}{2}(\mathcal{N}-1)}(1+r)^{-i\omega}{}_{2}F_{1}\left[\frac{1-\nu+ \mu-i\omega}{2},\frac{1-\nu-\mu-i\omega}{2};1-\nu;r^{2}\right]\right\}^{-1}\.\]
This identity expresses a combination of the derivatives of hypergeometric functions in terms of the hypergeometric functions, and such an identity can be derived from a Wronskian-like argument associated with the corresponding radial ODE.
The reader should note an important subtlety in the statement above: the Wronskian here is _not_ a constant function along the radial direction, but rather varies as we traverse the dS-SK contour. A similar subtlety was already noted in the AdS context by [51]. As we shall eventually see, the extra \(e^{-2\pi\omega\zeta}\) factor is here for a good physical reason: it ensures that multipole moments that enter into the cosmological influence phase are computed using source distributions in standard time-slices, instead of source distributions along Eddington-Finkelstein null time-slices.
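The logic behind Eqs.(C.32) and (C.33) is the familiar construction of a boundary-value Green function from the two homogeneous solutions that are regular at either end, glued with a unit discontinuity in the conjugate field and normalised by their Wronskian. The following toy sketch is a flat one-dimensional analogue of ours, not the dS-SK problem: it makes this explicit for \(-u''+u=f\) on \([0,1]\) with Dirichlet conditions, where the roles of \(g_{L}\), \(g_{R}\) and \(W_{LR}\) are played by \(\sinh x\), \(\sinh(1-x)\) and \(\sinh 1\).

```python
import numpy as np

# Toy analogue (ours, not the dS-SK problem) of the construction in Eqs.(C.32)-(C.33):
# G(x|x0) = g_L(x_<) g_R(x_>) / W for -u'' + u = f on [0,1], with u(0) = u(1) = 0.
gL = lambda x: np.sinh(x)        # homogeneous solution regular at x = 0
gR = lambda x: np.sinh(1.0 - x)  # homogeneous solution regular at x = 1
W = np.sinh(1.0)                 # Wronskian normalisation (constant in this toy)

def G(x, x0):
    return gL(min(x, x0))*gR(max(x, x0))/W

# Apply it to the source f = 1 and compare with the exact solution.
xs = np.linspace(0.0, 1.0, 4001)
x = 0.37
vals = np.array([G(x, x0) for x0 in xs])
u_green = np.sum(0.5*(vals[1:] + vals[:-1])*np.diff(xs))   # trapezoidal quadrature
u_exact = 1.0 - np.cosh(x - 0.5)/np.cosh(0.5)
print(u_green, u_exact)   # the two numbers should agree to quadrature accuracy
```

In the dS-SK setting the only new ingredients are the contour ordering \(\zeta_{\succ},\zeta_{\prec}\) and the fact that the Wronskian carries the extra \(e^{-2\pi\omega\zeta}\) factor discussed above.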
To proceed further, we should now substitute the explicit forms of dS-SK boundary-to-bulk propagators given in Eqs.(114) and (115) into the expression for the bulk-to-bulk propagator in Eq.(115), and then perform the dS-SK contour integral in Eq.(116). To this end, we first regroup the expressions for \(g_{L}\) and \(g_{R}\) into somewhat more tractable expressions with clear branch cut structures. For what follows, we will find it convenient to separate out the solutions into a singular (non-normalisable) part \(\Xi_{nn}\) vs a regular (normalisable) part \(\Xi_{n}\), _using the renormalised world line Green functions instead of the bare ones from the start_. The adjectives singular/regular refer here to their behaviour near the worldline (i.e., near \(r=0\)). To this end, let us begin by defining two functions \(\Xi_{nn},\Xi_{n}\) implicitly via
\[\left(\frac{1-r}{1+r}\right)^{-\frac{i\omega}{2}}G^{\text{Out}}_{\mathcal{N}}(r,\omega,\mathbb{L}) \equiv\Xi_{nn}(r,\omega,\mathbb{L})-K_{\text{Out}}\ \Xi_{n}(r,\omega,\mathbb{L})\, \tag{C.35}\] \[\left(\frac{1-r}{1+r}\right)^{\frac{i\omega}{2}}G^{\text{Out}*}_{\mathcal{N}}(r,\omega,\mathbb{L}) \equiv\Xi_{nn}(r,\omega,\mathbb{L})-K_{\text{In}}\ \Xi_{n}(r,\omega,\mathbb{L})\,\]
where \(K_{\rm Out}\) and \(K_{\rm In}\) are the final renormalised worldline Green functions. The above equality should be thought of as defining the functions \(\Xi_{n}(r,\omega,\mathbb{L})\) and \(\Xi_{nn}(r,\omega,\mathbb{L})\) as analytic functions on the open static patch \(0<r<1\), viz., in the equations above, we align all the potential branch cuts away from the unit disc in the complex radius plane. The above equations can be inverted to give a direct definition of these functions
\[\begin{split}(K_{\rm In}-K_{\rm Out})\ \Xi_{n}& \equiv\left(\frac{1-r}{1+r}\right)^{-\frac{i\omega}{2}}G_{N}^{\rm Out }-\left(\frac{1-r}{1+r}\right)^{\frac{i\omega}{2}}G_{N}^{\rm Out*}\,\\ (K_{\rm In}-K_{\rm Out})\ \Xi_{nn}&\equiv\left(\frac{1-r}{1 +r}\right)^{-\frac{i\omega}{2}}K_{\rm In}G_{N}^{\rm Out}-\left(\frac{1-r}{1+r} \right)^{\frac{i\omega}{2}}K_{\rm Out}G_{N}^{\rm Out*}\.\end{split}\] (C.36)
Since \(K_{\rm In}(\omega,\ell)=K_{\rm Out}(-\omega,\ell)\) and \(G_{N}^{\rm Out*}(\omega,\ell)=G_{N}^{\rm Out}(-\omega,\ell)\), the above expressions imply that both \(\Xi_{n}\) and \(\Xi_{nn}\) are even functions of \(\omega\). Explicit expressions can be written down for these two functions using Eq.(B.10). We have
\[\begin{split}\Xi_{n}&\equiv\frac{1}{2\nu}r^{\nu- \frac{1}{2}(\mathcal{N}-1)}(1-r^{2})^{-\frac{i\omega}{2}}\frac{(1+i\cot\nu\pi )\widehat{K}_{\rm Out}-(1-i\cot\nu\pi)\widehat{K}_{\rm In}}{K_{\rm Out}-K_{ \rm In}}\\ &\qquad\times{}_{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{ 1+\nu+\mu-i\omega}{2};1+\nu;r^{2}\right]\,\end{split}\] (C.37)
for the normalisable/regular mode and
\[\begin{split}\Xi_{nn}&\equiv r^{-\nu-\frac{1}{2}( \mathcal{N}-1)}(1-r^{2})^{-\frac{i\omega}{2}}\Big{\{}_{2}F_{1}\left[\frac{1- \nu+\mu-i\omega}{2},\frac{1-\nu-\mu-i\omega}{2};1-\nu;r^{2}\right]\\ &-\ \frac{K_{\rm In}(1+i\cot\nu\pi)\widehat{K}_{\rm Out}-K_{\rm Out }(1-i\cot\nu\pi)\widehat{K}_{\rm In}}{K_{\rm In}-K_{\rm Out}}\\ &\qquad\times\frac{r^{2\nu}}{2\nu}{}_{2}F_{1}\left[\frac{1+\nu- \mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+\nu;r^{2}\right]\Big{\}}\end{split}\] (C.38)
for the non-normalisable/singular mode. One advantage of working with such renormalised functions is that we can safely remove the dimensional regularisation in the above expressions resulting in a finite limit. When \(d\) is odd and \(\nu\equiv\ell+\frac{d}{2}-1\in\mathbb{Z}+\frac{1}{2}\), we can simply set \(\cot\nu\pi=0\) and take \(\widehat{K}_{\rm Out}\to K_{\rm Out}\) and \(\widehat{K}_{\rm In}\to K_{\rm In}\). All the \(K\)'s then drop out of the above expression, and \(\Xi_{nn}\) and \(\Xi_{n}\) become proportional to single hypergeometric functions.
When \(d\) is even and \(\nu\to n\in\mathbb{Z}\), we can use Eqs.(B.33) and (C.10) to write
\[\begin{split}(1+i\cot\nu\pi)\widehat{K}_{\rm Out}&=\frac{2}{\nu-n}\Delta_{\mathcal{N}}(n,\mu,\omega)+K_{\rm Out}+O(\nu-n)\,\\ (1-i\cot\nu\pi)\widehat{K}_{\rm In}&=\frac{2}{\nu-n}\Delta_{\mathcal{N}}(n,\mu,\omega)+K_{\rm In}+O(\nu-n)\.\end{split}\] (C.39)
Using these expansions, the \(K\)s cancel out again and we are left with the following limits:
\[\begin{split}\Xi_{n}|_{\rm Even\ d}&\equiv\lim_{ \nu\to n}\frac{1}{2\nu}r^{\nu-\frac{1}{2}(\mathcal{N}-1)}(1-r^{2})^{-\frac{i \omega}{2}}{}_{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i \omega}{2};1+\nu;r^{2}\right]\,\\ \Xi_{nn}|_{\rm Even\ d}&\equiv\lim_{\nu\to n}r^{-\nu- \frac{1}{2}(\mathcal{N}-1)}(1-r^{2})^{-\frac{i\omega}{2}}\Bigg{\{}_{2}F_{1} \left[\frac{1-\nu+\mu-i\omega}{2},\frac{1-\nu-\mu-i\omega}{2};1-\nu;r^{2} \right]\\ &\qquad-\ \frac{r^{2\nu}}{\nu(\nu-n)}\Delta_{\mathcal{N}}(n,\mu,\omega )\ _{2}F_{1}\left[\frac{1+\nu-\mu-i\omega}{2},\frac{1+\nu+\mu-i\omega}{2};1+\nu;r^{2} \right]\Bigg{\}}\.\end{split}\] (C.40)
One can explicitly check that these limits exist and result in finite expressions for both regular/singular modes when \(d\) is even. To summarise, Eq.(C.35) decomposes the outgoing/incoming Green functions into renormalised pieces in any \(d\).
We will now rewrite the full bulk-to-bulk propagator in Eq.(C.32) in terms of these renormalised modes. We begin by rewriting the boundary-to-bulk propagators: using Eq.(C.14), we obtain
\[\begin{split} g_{L}&=n_{\omega}\left(\frac{1-r}{1+ r}\right)^{\frac{i\omega}{2}}\left\{\left[1-e^{2\pi\omega(1-\zeta)}\left(\frac{1-r}{1+ r}\right)^{-i\omega}\right]\Xi_{nn}\right.\\ &\qquad\qquad\left.-\left[K_{\text{Out}}-K_{\text{In}}e^{2\pi \omega(1-\zeta)}\left(\frac{1-r}{1+r}\right)^{-i\omega}\right]\Xi_{n}\right\} \,,\\ g_{R}&=(1+n_{\omega})\left(\frac{1-r}{1+r}\right)^{ \frac{i\omega}{2}}\left\{\left[1-e^{-2\pi\omega\zeta}\left(\frac{1-r}{1+r} \right)^{-i\omega}\right]\Xi_{nn}\right.\\ &\qquad\qquad\left.-\left[K_{\text{Out}}-K_{\text{In}}e^{-2\pi \omega\zeta}\left(\frac{1-r}{1+r}\right)^{-i\omega}\right]\Xi_{n}\right\}\,. \end{split}\] (C.41)
Substituting them back into Eq.(C.32), we get an explicit expression for the bulk-to-bulk propagator of the form
\[\begin{split}\mathbb{G}(\zeta|\zeta_{0},\omega,\mathbb{L})& =\frac{1}{W_{LR}(\zeta_{0},\omega,\mathbb{L})}g_{R}(\zeta_{ \succ},\omega,\mathbb{L})g_{L}(\zeta_{\prec},\omega,\mathbb{L})\\ &=\frac{n_{\omega}e^{2\pi\omega\zeta_{0}}}{K_{\text{In}}-K_{ \text{Out}}}\left(\frac{1-r}{1+r}\right)^{\frac{i\omega}{2}}\left(\frac{1-r_{0 }}{1+r_{0}}\right)^{\frac{i\omega}{2}}\\ &\times\left\{\left[1-e^{-2\pi\omega\zeta_{\succ}}\left(\frac{1 -r_{\succ}}{1+r_{\succ}}\right)^{-i\omega}\right]\Xi_{nn}(r_{\succ})\right.\\ &\qquad\qquad\left.-\left[K_{\text{Out}}-K_{\text{In}}e^{-2\pi \omega\zeta_{\succ}}\left(\frac{1-r_{\succ}}{1+r_{\succ}}\right)^{-i\omega} \right]\Xi_{n}(r_{\succ})\right\}\\ &\times\left\{\left[1-e^{2\pi\omega(1-\zeta_{\prec})}\left( \frac{1-r_{\prec}}{1+r_{\prec}}\right)^{-i\omega}\right]\Xi_{nn}(r_{\prec}) \right.\\ &\qquad\qquad\left.-\left[K_{\text{Out}}-K_{\text{In}}e^{2\pi \omega(1-\zeta_{\prec})}\left(\frac{1-r_{\prec}}{1+r_{\prec}}\right)^{-i \omega}\right]\Xi_{n}(r_{\prec})\right\}\,.\end{split}\] (C.42)
Here, since all quantities are already renormalised, we have removed the dimensional regularisation24 in the Wronskian given in Eq.(C.33). To conclude, given an arbitrary extended source on the dS-SK geometry, we can obtain the bulk field by substituting the above bulk-to-bulk Green function into Eq.(C.27). Further, we can also compute the on-shell action Eq.(C.29), which, according to our prescription, should yield the influence phase of that extended source.
Footnote 24: For odd \(d\), we set \(\cot\nu\pi=0\) and remove the hats on \(K\)s. For even \(d\), we use Eq.(C.39).
### Extended sources on dS-SK contour II: Radiative multipoles
In this subsection, we would like to evaluate both the field and the influence of an extended source. We will find it convenient to discretise the source into a set of spherical shells around the centre of the right/left static patches. Let \(\zeta=1+\zeta_{i}\) characterise the radial position of the \(i^{th}\) spherical shell
in the right patch; the same radial position in the left patch is then characterised by \(\zeta=\zeta_{i}\). We will let \(i\) vary from \(1\) to \(N_{s}\), where \(N_{s}\) is the number of shells in each copy of the static patch. We will take the strength of the scalar source on these spherical shells to be
\[r^{\mathcal{N}}\varrho_{{}_{N}}(\zeta,\omega,\mathbb{L})=\sum_{i}\sigma^{R}_{i }(\omega,\mathbb{L})\ \delta_{c}(\zeta|1+\zeta_{i})-\sum_{i}\sigma^{L}_{i}(\omega,\mathbb{L})\ \delta_{c}(\zeta|\zeta_{i}). \tag{104}\]
Here, as before, we work in frequency domain/orthonormal spherical harmonic basis, and allow arbitrary time/angle dependence. Any arbitrary source distribution confined within the open static patch can be approximated to any desired accuracy as being built from such spherical shell sources. As we shall see, such a discrete model regularises the divergences associated with the self-interactions.
We will begin by writing down the bulk field due to the spherical shell sources described above. We have, using Eq.(102), a superposition of fields produced by each shell source, i.e.,
\[\varphi_{{}_{N}}(\zeta,\omega,\mathbb{L}) =\oint r_{0}^{\mathcal{N}}dr_{0}\ \mathbb{G}(\zeta|\zeta_{0},\omega,\mathbb{L})\varrho_{{}_{N}}(\zeta_{0}, \omega,\mathbb{L})\] \[=\sum_{i}\frac{1}{W_{LR}(\zeta_{i},\omega,\mathbb{L})}\left\{ \begin{array}{ll}e^{2\pi\omega}g_{L}(\zeta,\omega,\mathbb{L})\ \Big{[}g_{R}(1+\zeta_{i},\omega,\mathbb{L})\ \sigma^{R}_{i}-g_{L}(1+\zeta_{i},\omega,\mathbb{L})\ \sigma^{L}_{i}\Big{]}& \text{ if }\zeta\prec 1+\zeta_{i}\,\\ &\\ g_{R}(\zeta_{i},\omega,\mathbb{L})\ \Big{[}g_{R}(\zeta_{i},\omega,\mathbb{L})\ \sigma^{R}_{i}-g_{L}(\zeta_{i},\omega,\mathbb{L})\ \sigma^{L}_{i}\Big{]}& \text{ if }1+\zeta_{i}\prec\zeta\prec\zeta_{i}\,\\ &\\ g_{R}(\zeta,\omega,\mathbb{L})\ \Big{[}g_{R}(\zeta_{i},\omega,\mathbb{L})\ \sigma^{R}_{i}-g_{L}(\zeta_{i},\omega,\mathbb{L})\ \sigma^{L}_{i}\Big{]}& \text{ if }\zeta\succ\zeta_{i}\.\end{array}\right. \tag{105}\]
We remind the reader that \(\prec\) and \(\succ\) are comparisons using the radial contour-ordering of the dS-SK contour. We also remind the reader that \(\zeta\) changes from \(1\) to \(0\) as we traverse the clockwise dS-SK contour, starting from the right static patch (See Fig.10). The reader should note that the above superposition of fields is continuous everywhere, but its derivative (and hence the conjugate field) is discontinuous at each spherical shell, with the discontinuity being determined by the strength of the scalar source at that shell. This is expected since the bulk-to-bulk Green function was constructed in the last subsection with precisely these boundary conditions in mind.
Figure 8: Spherical shell sources centred around the right/left static patches shown in the complex r plane. Their positions on the \(L\) contour are related to their position on the \(R\) contour by the branch cut discontinuity in \(\zeta\).
Given the above field, computing the on-shell action is straightforward. We use Eq.(C.29) to write
\[\begin{split} S|_{\mathbf{On-shell}}&=\frac{1}{2}\sum_ {\perp}\int\frac{d\omega}{2\pi}\oint r^{\mathcal{N}}dr\ \varrho_{{}_{\mathcal{N}}}^{*}\varphi_{{}_{\mathcal{N}}}|_{\mathbf{On-shell}} \\ &=\frac{1}{2}\sum_{ij\mathbb{L}}\int\frac{d\omega}{2\pi}\frac{g_{R} (\zeta_{i},\omega,\mathbb{L})}{W_{LR}(\zeta_{i},\omega,\mathbb{L})}\Big{\{} \sigma_{j}^{R*}\ \Big{[}g_{R}(1+\zeta_{j},\omega,\mathbb{L})\sigma_{i}^{R}-g_{L}(1+\zeta_{j}, \omega,\mathbb{L})\sigma_{i}^{L}\Big{]}\\ &\qquad\qquad-\sigma_{j}^{L*}\ \Big{[}g_{R}(\zeta_{j},\omega, \mathbb{L})\sigma_{i}^{R}-g_{L}(\zeta_{j},\omega,\mathbb{L})\sigma_{i}^{L} \Big{]}\Big{\}}\.\end{split}\] (C.45)
Even though we are working with distributional sources/fields, given the continuity of \(\varphi_{{}_{\mathcal{N}}}\), the computation above is unambiguous. Next, we substitute explicit forms of the boundary-to-bulk Green functions as well as the Wronskian in terms of renormalised quantities. We have, using Eqs.(C.41) and (C.4), the following set of equalities:
\[\begin{split} W_{LR}(\zeta_{i},\omega,\mathbb{L})& =-(1+n_{\omega})\left(\frac{1-r_{i}}{1+r_{i}}\right)^{i\omega}[K_{ \mathrm{Out}}-K_{\mathrm{In}}]\,\\ \frac{g_{R}(\zeta_{i},\omega,\mathbb{L})}{W_{LR}(\zeta_{i}, \omega,\mathbb{L})}&=\left(\frac{1-r_{i}}{1+r_{i}}\right)^{- \frac{i\omega}{2}}\Xi_{n}(r_{i},\omega,\mathbb{L})\,\\ g_{L}(\zeta_{i},\omega,\mathbb{L})&=-\left(\frac{1-r_ {i}}{1+r_{i}}\right)^{\frac{i\omega}{2}}\left\{\Xi_{nn}(r_{i},\omega,\mathbb{L })+[n_{\omega}K_{\mathrm{Out}}-(1+n_{\omega})K_{\mathrm{In}}]\,\Xi_{n}(r_{i}, \omega,\mathbb{L})\right\}\,\\ g_{L}(1+\zeta_{i},\omega,\mathbb{L})&=-n_{\omega} \left(\frac{1-r_{i}}{1+r_{i}}\right)^{\frac{i\omega}{2}}[K_{\mathrm{Out}}-K_{ \mathrm{In}}]\,\Xi_{n}(r_{i},\omega,\mathbb{L})\,\\ g_{R}(\zeta_{i},\omega,\mathbb{L})&=-(1+n_{\omega} )\left(\frac{1-r_{i}}{1+r_{i}}\right)^{\frac{i\omega}{2}}[K_{\mathrm{Out}}-K_{ \mathrm{In}}]\,\Xi_{n}(r_{i},\omega,\mathbb{L})\,\\ g_{R}(1+\zeta_{i},\omega,\mathbb{L})&=\left(\frac{1- r_{i}}{1+r_{i}}\right)^{\frac{i\omega}{2}}\left\{\Xi_{nn}(r_{i},\omega, \mathbb{L})-[(1+n_{\omega})K_{\mathrm{Out}}-n_{\omega}K_{\mathrm{In}}]\,\Xi_{ n}(r_{i},\omega,\mathbb{L})\right\}\.\end{split}\] (C.46)
Substituting these expressions back into the on-shell action yields the following double sum:
\[\begin{split} S|_{\mathbf{On-shell}}&=\frac{1}{2} \sum_{ij\mathbb{L}}\int\frac{d\omega}{2\pi}\left(\frac{1-r_{i}}{1+r_{i}}\right) ^{-\frac{i\omega}{2}}\left(\frac{1-r_{j}}{1+r_{j}}\right)^{\frac{i\omega}{2}} \\ &\qquad\times\Big{\{}\Xi_{n}(r_{i},\omega,\mathbb{L})\ \Xi_{nn}(r_{j},\omega,\mathbb{L})\ [\sigma_{j}^{R*}\sigma_{i}^{R}-\sigma_{j}^{L*}\sigma_{i}^{L}]\\ &\qquad-\Xi_{n}(r_{i},\omega,\mathbb{L})\ \Xi_{n}(r_{j},\omega, \mathbb{L})\ K_{\mathrm{Out}}(\sigma_{j}^{R}-\sigma_{j}^{L})^{*}[(1+n_{\omega} )\sigma_{i}^{R}-n_{\omega}\sigma_{i}^{L}]\\ &\qquad-\Xi_{n}(r_{i},\omega,\mathbb{L})\ \Xi_{n}(r_{j},\omega, \mathbb{L})\ K_{\mathrm{In}}(\sigma_{i}^{R}-\sigma_{i}^{L})[(1+n_{-\omega}) \sigma_{j}^{R*}-n_{-\omega}\sigma_{j}^{L*}]\Big{]}\.\end{split}\] (C.47)
Let us begin by interpreting the terms in the above double sum. We first note that the last two lines in the above expression are related by the relabelling \(\omega\to-\omega\) and are hence equal. The physical meaning of the last two lines is clarified by defining the _radiative multipole moments_:
\[\begin{split}\mathcal{J}_{R}(\omega,\mathbb{L})& \equiv\sum_{i}\left(\frac{1-r_{i}}{1+r_{i}}\right)^{-\frac{i\omega}{2}}\Xi_{n}(r _{i},\omega,\mathbb{L})\ \sigma_{i}^{R}\equiv\int_{R}dr\ r^{\mathcal{N}}\Xi_{n}(r,\omega,\mathbb{L})\ \left(\frac{1-r}{1+r}\right)^{-\frac{i\omega}{2}}\varrho_{{}_{ \mathcal{N}}}(\zeta,\omega,\mathbb{L})\,\\ \mathcal{J}_{L}(\omega,\mathbb{L})&\equiv\sum_{i} \left(\frac{1-r_{i}}{1+r_{i}}\right)^{-\frac{i\omega}{2}}\Xi_{n}(r_{i},\omega, \mathbb{L})\ \sigma_{i}^{L}\equiv-\int_{L}dr\ r^{\mathcal{N}}\Xi_{n}(r,\omega, \mathbb{L})\ \left(\frac{1-r}{1+r}\right)^{-\frac{i\omega}{2}}\varrho_{{}_{ \mathcal{N}}}(\zeta,\omega,\mathbb{L})\.\end{split}\] (C.48)
The integrals here are performed over the right/left _half_ of the dS-SK contour, respectively. We will also find it convenient to define the average/difference multipole moments via
\[\mathcal{J}_{A}(\omega,\mathbb{L})\equiv\frac{1}{2}[\mathcal{J}_{R}(\omega, \mathbb{L})+\mathcal{J}_{L}(\omega,\mathbb{L})]\,\]
and
\[\mathcal{J}_{D}(\omega,\mathbb{L})\equiv\mathcal{J}_{R}(\omega,\mathbb{L})- \mathcal{J}_{L}(\omega,\mathbb{L})\]
Here we deliberately use the same notation as we did for multipole moments in flat spacetime (see Eq.(100)) and for point-like dS sources (see Eq.(101)). One reason for this is as follows: the last two lines of Eq.(101) can be recast, in terms of the above definitions, into the cosmological influence phase of a point source
\[S_{\rm CIP}^{\rm Pt}\equiv-\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}K_{\rm Out }(\mathcal{J}_{R}-\mathcal{J}_{L})^{*}[(1+n_{\omega})\mathcal{J}_{R}-n_{ \omega}\mathcal{J}_{L}]=-\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}K_{\rm Out }\ \mathcal{J}_{D}^{*}\ \left[\mathcal{J}_{A}+\left(n_{\omega}+\frac{1}{2}\right) \mathcal{J}_{D}\right]\,. \tag{102}\]
We recognise here the exact influence phase derived for a point-like dS observer in Eq.(100), using a detailed counterterm procedure. More evidence for this identification will be presented in appendix D, where we describe how these multipole moments correctly reproduce the flat space answers with Hubble corrections.
For now, we turn our attention to the remaining terms, viz., the first double sum in Eq.(101). The presence of the singular Green solution \(\Xi_{nn}\), as well as the right/left factorised form of this sum, indicates that these terms incorporate non-dissipative/conservative _self-energy_ corrections of the extended source. The final on-shell action can then be written as \(S|_{\mathbf{On-shell}}=S_{\rm CIP}^{\rm Pt}+S_{\rm Int}\), where \(S_{\rm Int}\) denotes the internal potential energy of the spherical shells:
\[S_{\rm Int}\equiv\frac{1}{2}\sum_{i\neq j\mathbb{L}}\int\frac{d\omega}{2\pi} \left(\frac{1-r_{i}}{1+r_{i}}\right)^{-\frac{i\omega}{2}}\left(\frac{1-r_{j}} {1+r_{j}}\right)^{\frac{i\omega}{2}}\Xi_{n}(r_{i},\omega,\mathbb{L})\ \Xi_{nn}(r_{j},\omega, \mathbb{L})\ [\sigma_{j}^{R*}\sigma_{i}^{R}-\sigma_{j}^{L*}\sigma_{i}^{L}]. \tag{103}\]
Another instructive way to rewrite this potential energy contribution is to define radially averaged mean fields on the right/left static patch via
\[\overline{\varphi}_{R,\rm Int}(\omega,\mathbb{L})\equiv\sum_{i} \left(\frac{1-r_{i}}{1+r_{i}}\right)^{-\frac{i\omega}{2}}\Xi_{nn}(r_{i}, \omega,\mathbb{L})\ \sigma_{i}^{R}\equiv\int_{R}dr\ r^{N}\Xi_{nn}(r,\omega, \mathbb{L})\ \left(\frac{1-r}{1+r}\right)^{-\frac{i\omega}{2}}\varrho_{{}_{N}}( \zeta,\omega,\mathbb{L})\,\] \[\overline{\varphi}_{L,\rm Int}(\omega,\mathbb{L})\equiv\sum_{i} \left(\frac{1-r_{i}}{1+r_{i}}\right)^{-\frac{i\omega}{2}}\Xi_{nn}(r_{i}, \omega,\mathbb{L})\ \sigma_{i}^{L}\equiv-\int_{L}dr\ r^{N}\Xi_{nn}(r,\omega, \mathbb{L})\ \left(\frac{1-r}{1+r}\right)^{-\frac{i\omega}{2}}\varrho_{{}_{N}}( \zeta,\omega,\mathbb{L}). \tag{104}\]
We can then rewrite the potential energy as that of multipole moments placed in such an average field, viz.,
\[S_{\rm Int}=\frac{1}{2}\sum_{\mathbb{L}}\int\frac{d\omega}{2\pi}[\mathcal{J}_ {R}^{*}\overline{\varphi}_{R,\rm Int}-\mathcal{J}_{L}^{*}\overline{\varphi}_{ L,\rm Int}]. \tag{105}\]
### Relation to Detweiler-Whiting decomposition
We will begin by writing down the non-normalisable solution \(\Xi_{nn}\) and the normalisable solution \(\Xi_{n}\) in Schwarzschild-like static time, valid for odd \(d\) (restoring all factors of \(H\)):
\[\begin{split}\Xi_{nn}&\equiv r^{-\nu-\frac{d}{2}+1+\frac{1}{2}(d-1-\mathcal{N})}(1-H^{2}r^{2})^{-\frac{i\omega}{2H}}\\ &\quad\times{}_{2}F_{1}\left[\frac{1}{2}\left(1+\mu-\nu-\frac{i\omega}{H}\right),\frac{1}{2}\left(1-\mu-\nu-\frac{i\omega}{H}\right),1-\nu,H^{2}r^{2}\right]\,\\ \Xi_{n}&\equiv\frac{1}{2\nu}r^{\nu-\frac{d}{2}+1+\frac{1}{2}(d-1-\mathcal{N})}(1-H^{2}r^{2})^{-\frac{i\omega}{2H}}\\ &\quad\times{}_{2}F_{1}\left[\frac{1}{2}\left(1+\mu+\nu-\frac{i\omega}{H}\right),\frac{1}{2}\left(1-\mu+\nu-\frac{i\omega}{H}\right),1+\nu,H^{2}r^{2}\right]\,\\ K_{\text{Out}}&\equiv 2\frac{\Gamma\left(\frac{1+\nu-\mu-\frac{i\omega}{H}}{2}\right)\Gamma\left(\frac{1+\nu+\mu-\frac{i\omega}{H}}{2}\right)\Gamma\left(1-\nu\right)}{\Gamma\left(\frac{1-\nu+\mu-\frac{i\omega}{H}}{2}\right)\Gamma\left(\frac{1}{2}\nu\right)\Gamma\left(\nu\right)}\.\end{split} \tag{102}\]
We have also quoted above the retarded two-point function on the worldline. The outgoing Green function can then be decomposed into \(K_{\text{Out}}\,\Xi_{n}\) and \(\Xi_{nn}\): we will now argue that these should be thought of as the regular/singular Green functions à la Detweiler-Whiting (DW)[110] corresponding to dS spacetime.
The relation to the DW decomposition is not prima facie clear, since DW formulated their rules for general curved spacetimes in the time domain, whereas the above expressions are quoted in the frequency domain. So, to substantiate our assertion, we need to Fourier transform the complicated expressions above into the time domain, and then show that the DW axioms are satisfied. Rather than do that exercise in general, we will content ourselves with showing how this works in the particular example of a massless scalar field in \(dS_{4}\), whose DW decomposition is described in [75; 111].
The regular part of the DW decomposition in this case was calculated by the authors of [75] in FLRW-like coordinates as
\[\begin{split} G_{R}&=\frac{\eta\eta^{\prime}}{2| \mathbf{x}-\mathbf{x}^{\prime}|}\left[\delta(\eta-\eta^{\prime}-|\mathbf{x}- \mathbf{x}^{\prime}|)-\delta(\eta-\eta^{\prime}+|\mathbf{x}-\mathbf{x}^{ \prime}|)\right]\\ &+\frac{1}{2}\left[\theta(\eta-\eta^{\prime}-|\mathbf{x}- \mathbf{x}^{\prime}|)+\theta(\eta-\eta^{\prime}+|\mathbf{x}-\mathbf{x}^{ \prime}|)\right]\,\end{split} \tag{103}\]
To check this expression against \(K_{\text{Out}}\Xi_{n}\), we will convert it into static coordinates and then Fourier transform the result to frequency domain.
The coordinate transformation between static and FLRW coordinates is given by
\[\eta=-\frac{e^{-Ht}}{\sqrt{1-r^{2}H^{2}}}\,\quad\rho=\frac{re^{-Ht}}{\sqrt{1-r^{2}H ^{2}}}. \tag{104}\]
We will assume the source to be at the origin \(\rho^{\prime}=0\), so that only the \(\ell=0\) term survives by spherical
symmetry. With this choice, \(G_{R}\) becomes
\[\begin{split} G_{R}&=\frac{e^{H(t-t^{\prime})}}{2r}\\ &\times\left[\sqrt{\frac{1-Hr}{1+Hr}}\delta\left(t^{\prime}-t- \frac{1}{H}\ln\left(\sqrt{\frac{1-Hr}{1+Hr}}\right)\right)-\sqrt{\frac{1+Hr}{1 -Hr}}\delta\left(t^{\prime}-t-\frac{1}{H}\ln\left(\sqrt{\frac{1+Hr}{1-Hr}} \right)\right)\right]\\ &+\frac{1}{2}\left[\theta\left(t^{\prime}-t-\frac{1}{H}\ln\left( \sqrt{\frac{1-Hr}{1+Hr}}\right)\right)+\theta\left(t^{\prime}-t-\frac{1}{H} \ln\left(\sqrt{\frac{1+Hr}{1-Hr}}\right)\right)\right]\.\end{split} \tag{102}\]
This expression can be readily Fourier transformed with respect to \(t-t^{\prime}\) yielding
\[\widetilde{G}_{R}=\frac{1}{2r}\left[\left(\frac{1-Hr}{1+Hr}\right)^{-\frac{i \omega}{2H}}-\left(\frac{1+Hr}{1-Hr}\right)^{-\frac{i\omega}{2H}}\right]- \frac{H^{2}}{2i\omega}\left[\left(\frac{1-Hr}{1+Hr}\right)^{-\frac{i\omega}{2 H}}+\left(\frac{1+Hr}{1-Hr}\right)^{-\frac{i\omega}{2H}}\right]. \tag{103}\]
Regularity near the origin is manifest in the frequency domain. Further, the above expression is also an odd function of the frequency \(\omega\), signalling that these terms encode the dissipation due to radiation reaction.
Similarly, we can consider the singular Green's function quoted in [75]:
\[\begin{split} G_{S}&=\frac{\eta\eta^{\prime}}{2| \mathbf{x}-\mathbf{x}^{\prime}|}\left[\delta(\eta-\eta^{\prime}-|\mathbf{x}- \mathbf{x}^{\prime}|)+\delta(\eta-\eta^{\prime}+|\mathbf{x}-\mathbf{x}^{ \prime}|)\right]\\ &\qquad\qquad+\frac{1}{2}\left[\theta(\eta-\eta^{\prime}-| \mathbf{x}-\mathbf{x}^{\prime}|)-\theta(\eta-\eta^{\prime}+|\mathbf{x}- \mathbf{x}^{\prime}|)\right]\end{split} \tag{104}\]
whose Fourier transform at \(\rho^{\prime}=0\) is
\[\tilde{G}_{S}=\frac{1}{2r}\left[\left(\frac{1-Hr}{1+Hr}\right)^{-\frac{i \omega}{2H}}+\left(\frac{1+Hr}{1-Hr}\right)^{-\frac{i\omega}{2H}}\right]- \frac{H^{2}}{2i\omega}\left[\left(\frac{1-Hr}{1+Hr}\right)^{-\frac{i\omega}{2 H}}-\left(\frac{1+Hr}{1-Hr}\right)^{-\frac{i\omega}{2H}}\right]. \tag{105}\]
This expression has a \(\sim\frac{1}{r}\) behaviour near the origin and is an even function of \(\omega\). The expressions for \(\widetilde{G}_{R}\) and \(\widetilde{G}_{S}\) above can then be matched against \(K_{\text{Out}}\)\(\Xi_{n}\) and \(\Xi_{nn}\) respectively. This is done by taking the expressions for \(\Xi_{n}\), \(\Xi_{nn}\) and \(K_{\text{Out}}\) quoted at the beginning of this subsection, setting \(\mathcal{N}=d-1,\mu=\frac{d}{2},\nu=\ell+\frac{d}{2}-1\), and then taking \(d=3\) and \(\ell=0\).
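As a consistency check of these statements, the parity and small-\(r\) behaviour of \(\widetilde{G}_{R}\) and \(\widetilde{G}_{S}\) can be verified symbolically. The sketch below is our check, not part of [75]: it confirms that \(\widetilde{G}_{R}\) is odd in \(\omega\) and regular at \(r=0\), while \(\widetilde{G}_{S}\) is even in \(\omega\) with a \(1/r\) singularity.

```python
import sympy as sp

# Check (ours): parity in omega and small-r behaviour of the frequency-domain
# Detweiler-Whiting Green functions quoted above for the massless dS_4 scalar.
r, w, H = sp.symbols('r omega H', positive=True)
X = (1 - H*r)/(1 + H*r)
GR = (X**(-sp.I*w/(2*H)) - X**(sp.I*w/(2*H)))/(2*r) \
     - H**2/(2*sp.I*w)*(X**(-sp.I*w/(2*H)) + X**(sp.I*w/(2*H)))
GS = (X**(-sp.I*w/(2*H)) + X**(sp.I*w/(2*H)))/(2*r) \
     - H**2/(2*sp.I*w)*(X**(-sp.I*w/(2*H)) - X**(sp.I*w/(2*H)))

print(sp.simplify(GR.subs(w, -w) + GR))   # 0: G_R is odd in omega
print(sp.simplify(GS.subs(w, -w) - GS))   # 0: G_S is even in omega
print(sp.series(GR, r, 0, 1))             # no 1/r pole: regular at the origin
print(sp.series(r*GS, r, 0, 1))           # leading term 1: G_S ~ 1/r near r = 0
```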
## Appendix D Radiation reaction due to light scalar fields
In this section, we will evaluate the radiation reaction force on a dS point particle coupled to a scalar field. We will do this in a small curvature approximation, i.e., we begin with the leading order result in flat spacetime[76; 79] and then systematically correct it for curvature effects. In dS, the Hubble constant \(H\) parametrises the deviation from flat spacetime, so the small curvature expansion is an expansion in \(H\). We will also work within a non-relativistic expansion and a multipole expansion, and then eventually covariantise the final answer for the radiation reaction (RR).
To this end, consider a point-like source moving along a time-like worldline \(x(\tau)\) in dS, where \(\tau\) denotes the proper time of the source. We will assume that the particle trajectory is close to the south pole (\(rH\ll 1\)) and the radiation wavelength is taken to be much larger than the length scale of the particle trajectory (\(\omega r\ll 1\)), but much smaller than the curvature length scale (\(\omega\gg H\)). Further,
we work in a non-relativistic limit (\(v\ll 1\)). Thus, we consider the following hierarchy of scales (See Fig.7):
\[H\ll\omega\ll 1/r. \tag{114}\]
In analogy with flat spacetime, we will refer to this expansion as the post-Newtonian (PN) expansion in dS.
The source density for a moving source in dS is given by
\[\widetilde{\rho}(x^{\prime})=\int\delta^{d+1}(x-x^{\prime})d\tau= \sqrt{1-H^{2}r^{2}-\frac{\dot{r}^{2}}{1-H^{2}r^{2}}+\dot{r}^{2}-v^{2}}\ \delta^{d}(\vec{x}-\vec{x}^{\prime})\, \tag{115}\]
where we have defined \(v=\sqrt{\sum_{i=1}^{d}\dot{x}_{i}^{2}}\). Here the dots denote the derivative with respect to the standard time \(t\). The time-dilation/length-contraction factor for the particle worldline can then be expanded as follows:
\[\begin{split}&\sqrt{1-H^{2}r^{2}-\frac{\dot{r}^{2}}{1-H^{2}r^{2}}+\dot{r}^{2}-v^{2}}\\ &=-\sum_{n,s,k}\binom{n}{k}\frac{(2s+2n+2k-5)!!(2s+2k-5)!!}{2^{n+s-1}(2s+4k-5)!!\ n!(s-1)!}(Hr)^{2n}\dot{r}^{2k}v^{2s-2}\.\end{split} \tag{116}\]
Here the sum is over all non-negative integers, and the binomial coefficient vanishes for all values of \(k\) outside the range \(0\leq k\leq n\). This expansion describes the redshift of the particle within dS spacetime in a slow-motion approximation, assuming that both the cosmological redshift \(Hr\) as well as the Doppler redshifts due to peculiar motions (proportional to \(\dot{r}\) and \(v\)) are small. Our strategy below will be to use the above expansion to compute the symmetric trace-free (STF) multipole moments of the source, which can then be fed into the cosmological influence phase to compute the RR force.
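The closed-form coefficients above can be cross-checked order by order against a direct Taylor expansion. The sketch below is our check, not from the paper; it assumes the sum runs over \(n,k\geq 0\) and \(s\geq 1\) (the \(s=0\) terms drop out because of the \((s-1)!\) in the denominator), and compares the two sides up to fourth order in the small quantities \(Hr\), \(\dot r\) and \(v\).

```python
import sympy as sp

# Check (ours) of the double-factorial sum above against a direct Taylor
# expansion of the time-dilation factor, with Hr, rdot, v all of order eps.
eps = sp.symbols('epsilon', positive=True)
X, Y, Z = sp.symbols('X Y Z', positive=True)
Hr, rd, v = X*eps, Y*eps, Z*eps            # Hr, rdot and v

def dfact(m):
    # double factorial, extended to odd negative integers via m!! = (m+2)!!/(m+2)
    return sp.factorial2(m) if m >= -1 else dfact(m + 2)/(m + 2)

order = 6                                   # keep everything up to eps^4
direct = sp.sqrt(1 - Hr**2 - rd**2/(1 - Hr**2) + rd**2 - v**2)
direct = sp.series(direct, eps, 0, order).removeO()

summed = 0
for n in range(order):
    for s in range(1, order):
        for k in range(n + 1):
            if 2*n + 2*k + 2*s - 2 >= order:
                continue
            coeff = -sp.binomial(n, k)*dfact(2*s + 2*n + 2*k - 5)*dfact(2*s + 2*k - 5) / \
                    (2**(n + s - 1)*dfact(2*s + 4*k - 5)*sp.factorial(n)*sp.factorial(s - 1))
            summed += coeff*Hr**(2*n)*rd**(2*k)*v**(2*s - 2)

print(sp.expand(direct - summed))           # expected to reduce to 0 at this order
```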
We caution the reader that the source form given in Eq.(115) is specific to the KG scalar case with \(\mathcal{N}=d-1\). This is _not_ the correct form of the source for the scalar/vector/tensor sectors of EM field and gravity. For such cases, the explicit form of sources involves extra velocity/time-dilation factors, e.g., EM vector sector source is the electric current carried by the particle which has additional velocity dependence not captured by Eq.(115). Another related comment on EM/gravity sources is that the RR force coming from just one of the sectors is not expected to be covariant[76]: one should add in the contributions from all sectors to derive a covariant force expression. To do this for EM/linearised gravity, we need a theory of vector/tensor STF expansions (i.e., a formalism analogous to the one described in appendix A). We will derive such a formalism elsewhere[78]: in this note, we will limit our RR force analysis to KG scalars.
In the dS-SK geometry the above source will be doubled to a \(\widetilde{\rho}_{L}\) and a \(\widetilde{\rho}_{R}\) coming from left/right trajectories \(x_{L}(\tau_{L})\) and \(x_{R}(\tau_{R})\). The degrees of freedom of our open system are thus two copies of the position of the particle and its time derivatives: \(\{x_{L},x_{R},\dot{x}_{L},\dot{x}_{R},\ddot{x}_{L},\ddot{x}_{R},\dots\}\). The scalar ALD force and its post-Newtonian corrections only require expressions linear in \(x_{D},\dot{x}_{D},\ddot{x}_{D},\dots\) which are the difference in the positions and their derivatives. We will also keep terms up to cubic powers of \(x_{D}\). In this approximation, the average and difference functions of the sources can be written in a simple way. Consider, for illustration, the average and difference functions of just the position:
\[\frac{1}{2}\left[\mathfrak{f}\left(x_{A}+\frac{x_{D}}{2}\right)+\mathfrak{f}\left(x_{A}-\frac{x_{D}}{2}\right)\right]=\mathfrak{f}(x_{A})+\frac{x_{D}^{2}}{8}\frac{\partial^{2}\mathfrak{f}}{\partial x_{A}^{2}}+O(x_{D}^{4}) \tag{117}\] \[\mathfrak{f}\left(x_{A}+\frac{x_{D}}{2}\right)-\mathfrak{f}\left(x_{A}-\frac{x_{D}}{2}\right)=x_{D}\frac{\partial\mathfrak{f}}{\partial x_{A}}+\frac{x_{D}^{3}}{24}\frac{\partial^{3}\mathfrak{f}}{\partial x_{A}^{3}}+O(x_{D}^{4}) \tag{118}\]
In general, the sources will be functions not only of the positions but also their time derivatives: in such cases, the above formula should be interpreted as a multi-variable Madhava-Taylor expansion.
We will now substitute the particle source Eq.(114) into the multipole moments defined in Eq.(112) and obtain the Lagrangian for the RR force in the PN expansion. We begin by expressing the influence phase in terms of STF moments of the particle density: we proceed similarly to how we rewrote the RR influence phase in flat spacetime (Eq.(111)) in terms of STF multipole moments (Eq.(110)). Using the STF addition theorem in Eq.(111), we can rewrite the dissipative part of Eq.(110) in the time domain as:
\[S_{RR}^{\text{Odd }d}=\sum_{\ell}\int\frac{d\omega}{2\pi}\frac{K_{\text{Out}}}{\mathcal{N}_{d,\ell}|\mathbb{S}^{d-1}|}\frac{1}{\ell!}\mathbb{O}_{D,STF}^{\ast<i_{1}i_{2}\dots i_{\ell}>}\mathbb{O}^{A,STF}_{<i_{1}i_{2}\dots i_{\ell}>}. \tag{115}\]
where we have defined the time-domain STF multipole moments in dS as
\[\begin{split}\mathbb{O}_{A,STF}^{i_{1}\dots i_{\ell}}(t)& \equiv\Pi_{<j_{1}j_{2}\dots j_{\ell}>}^{<i_{1}i_{2}\dots i_{\ell}>} \int r^{d-1}dr\ \hat{r}^{j_{1}}\hat{r}^{j_{2}}\dots\hat{r}^{j_{\ell}}\ \Xi_{n}(i \partial_{t},r)\widetilde{\rho}_{A}(t,\vec{r})\,\\ \mathbb{O}_{D,STF}^{i_{1}\dots i_{\ell}}(t)&\equiv \Pi_{<j_{1}j_{2}\dots j_{\ell}>}^{<i_{1}i_{2}\dots i_{\ell}>}\int r^{d-1}dr\ \hat{r}^{j_{1}}\hat{r}^{j_{2}}\dots\hat{r}^{j_{\ell}}\ \Xi_{n}(i \partial_{t},r)\widetilde{\rho}_{D}(t,\vec{r})\.\end{split} \tag{116}\]
We will now use the PN expansion for \(\widetilde{\rho}_{D}\) in Eq.(115), the expansion for \(K_{\text{Out}}\) from Eq.(112) and the expansion for \(\Xi_{n}\) from Eq.(116) respectively. Keeping all terms in the action up to quartic order in amplitudes (i.e. in position \(x\)) and quartic order in the Hubble constant \(H\), we get an effective Lagrangian of the form
\[\begin{split}|\mathbb{S}^{d-1}|(d-2)!!\times(-1)^{\frac{d-1}{2} }L=&-[x_{i}]_{D}\mathbb{D}_{1}[x^{i}]_{A}+\frac{1}{2}\left[x_{i} x_{j}-\frac{x^{2}}{d}\delta_{ij}\right]_{D}\mathbb{D}_{2}\left[x^{i}x^{j}- \frac{x^{2}}{d}\delta_{ij}\right]_{A}\\ &-\left\{\frac{1}{2}(x_{i})_{D}\mathbb{D}_{1}^{X}[x^{i}x^{2}]_{A} +\frac{1}{2}(x^{2}x_{i})_{D}\mathbb{D}_{1}^{X}[x^{i}]_{A}\right\}\\ &+\left\{\frac{1}{2}(x_{i})_{D}\mathbb{D}_{1}^{V}[x^{i}v^{2}]_{A} +\frac{1}{2}(v^{2}x_{i})_{D}\mathbb{D}_{1}^{V}[x^{i}]_{A}\ \right\}\\ &+\frac{1}{2d}[x^{2}]_{D}\mathbb{D}_{0}^{XX}[x^{2}]_{A}+\frac{1} {4}[v^{2}]_{D}\mathbb{D}_{0}^{VV}[v^{2}]_{A}\\ &-\frac{1}{4}[x^{2}]_{D}\mathbb{D}_{0}^{XV}[v^{2}]_{A}-\frac{1} {4}[v^{2}]_{D}\mathbb{D}_{0}^{XV}[x^{2}]_{A}\.\end{split} \tag{117}\]
Here we have seven differential operators, each built out of a finite number of time-derivatives with constant coefficients, and labelled by \(\{\mathbb{D}_{1},\mathbb{D}_{2},\mathbb{D}_{1}^{X},\mathbb{D}_{1}^{V},\mathbb{D}_{0}^{XX},\mathbb{D}_{0}^{VV},\mathbb{D}_{0}^{XV}\}\). We use the subscripts of these operators to denote the multipole number, whereas the superscripts are used to distinguish between different structures occurring at the same multipole number. The explicit form of these operators is tabulated in table 8. We note that terms beyond the quadrupole moment do not contribute to the quartic influence phase.
We can expand out the average and the difference multipole moments in terms of the average/difference in the particle position by using the following identities:
\[\begin{split}[Z]_{d}[Y^{3}]_{a}&=Z_{d}Y_{a}^{3}+ \frac{1}{4}Z_{d}\ (3Y_{d}^{2}Y_{a})\\ [Z^{2}]_{d}[Y^{2}]_{a}&=(2Z_{d}Z_{a})Y_{a}^{2}+\frac{ 1}{4}(2Z_{d}Z_{a})Y_{d}^{2}\\ [Z^{3}]_{d}[Y]_{a}&=(3Z_{a}^{2}Z_{d})Y_{a}+\frac{1}{ 4}Z_{d}^{3}Y_{a}\.\end{split} \tag{118}\]
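These identities are straightforward to verify symbolically by writing \(Z_{R,L}=Z_{A}\pm\tfrac{1}{2}Z_{D}\) and \(Y_{R,L}=Y_{A}\pm\tfrac{1}{2}Y_{D}\), with \([X]_{d}=X_{R}-X_{L}\) and \([X]_{a}=\tfrac{1}{2}(X_{R}+X_{L})\), as in the sketch below (our check, not from the paper).

```python
import sympy as sp

# Check (ours) of the average/difference identities above, with
# [X]_d = X_R - X_L and [X]_a = (X_R + X_L)/2, where X_{R,L} = X_A +/- X_D/2.
ZA, ZD, YA, YD = sp.symbols('Z_A Z_D Y_A Y_D')
ZR, ZL = ZA + ZD/2, ZA - ZD/2
YR, YL = YA + YD/2, YA - YD/2

checks = [
    (ZR - ZL)*(YR**3 + YL**3)/2 - (ZD*YA**3 + sp.Rational(1, 4)*ZD*3*YD**2*YA),
    (ZR**2 - ZL**2)*(YR**2 + YL**2)/2 - (2*ZD*ZA*YA**2 + sp.Rational(1, 4)*2*ZD*ZA*YD**2),
    (ZR**3 - ZL**3)*(YR + YL)/2 - (3*ZA**2*ZD*YA + sp.Rational(1, 4)*ZD**3*YA),
]
print([sp.expand(c) for c in checks])   # expected: [0, 0, 0]
```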
After integration by parts, the above Lagrangian can be cast into the form:
\[L=\frac{(-1)^{\frac{d-1}{2}}}{|\mathbb{S}^{d-1}|(d-2)!!}\left[F_{i}(x_{A})x_{D}^{ i}+\frac{1}{4}N_{i}(x_{D})x_{A}^{i}\right] \tag{111}\]
where \(F^{i}\) are the Euler-Lagrange derivatives of the terms linear in \(x_{D}\) with respect to \(x_{D}^{i}\). Similarly, \(N^{i}\) are the Euler-Lagrange derivatives of the terms linear in \(x_{A}\) with respect to \(x_{A}^{i}\).
The terms in the Lagrangian which are cubic in \(x_{D}\) give rise to noise terms \(N^{i}\). These terms are different from pure noise terms, i.e. those quartic in \(x_{D}\). The \(N^{i}\) contribute to the radiation reaction with an \(x_{A}^{i}\)-dependent term. It should be noted that this noise is not thermal in origin. Rather, the origin of this noise can be understood as follows: Despite the scalar field coupling linearly to the multipole moments, the moments themselves are non-linear functions of the positions. Hence, the open system described in terms of position has extra noise terms.
Both the \(F^{i}\) and the \(N^{i}\) can be written in terms of the operators given in table 8 as:
\[F^{i}= -\mathbb{D}_{1}[x^{i}]+x_{j}\mathbb{D}_{2}[x^{i}x^{j}]-\frac{x^{i }}{d}\mathbb{D}_{2}[x^{2}] \tag{112}\] \[-\left\{\frac{1}{2}\mathbb{D}_{1}^{X}[x^{i}x^{2}]+\frac{1}{2}x^{ 2}\mathbb{D}_{1}^{X}[x^{i}]+x^{i}x^{j}\mathbb{D}_{1}^{X}[x_{j}]\right\}+ \left\{\frac{1}{2}\mathbb{D}_{1}^{V}[x^{i}v^{2}]+\frac{1}{2}v^{2}\mathbb{D}_{ 1}^{V}[x^{i}]-\partial_{t}\left(v^{i}x^{j}\mathbb{D}_{1}^{V}[x_{j}]\ \right)\right\}\] \[+\frac{x^{i}}{d}\mathbb{D}_{0}^{XX}[x^{2}]-\frac{1}{2}\partial_{t }\left(v^{i}\mathbb{D}_{0}^{VV}[v^{2}]\ \right)+\left\{\frac{1}{2}\partial_{t}\left(v^{i}\mathbb{D}_{0}^{XV}[x^{2}] \ \right)-\frac{1}{2}x^{i}\mathbb{D}_{0}^{XV}[v^{2}]\right\}\,\]
as well as
\[N^{i}(x)= x_{j}\mathbb{D}_{2}[x^{i}x^{j}]-\frac{x^{i}}{d}\mathbb{D}_{2}[x^{2}] \tag{113}\] \[+\left\{\frac{1}{2}\mathbb{D}_{1}^{X}[x^{i}x^{2}]+\frac{1}{2}x^{ 2}\mathbb{D}_{1}^{X}[x^{i}]+x^{i}x^{j}\mathbb{D}_{1}^{X}[x_{j}]\right\}-\left\{ \frac{1}{2}\mathbb{D}_{1}^{V}[x^{i}v^{2}]+\frac{1}{2}v^{2}\mathbb{D}_{1}^{V}[x^ {i}]-\partial_{t}\left(v^{i}x^{j}\mathbb{D}_{1}^{V}[x_{j}]\ \right)\right\}\] \[+\frac{x^{i}}{2d}\mathbb{D}_{0}^{XX}[x^{2}]-\frac{1}{2}\partial_{t }\left(v^{i}\mathbb{D}_{0}^{VV}[v^{2}]\ \right)+\left\{\frac{1}{2}\partial_{t}\left(v^{i}\mathbb{D}_{0}^{XV}[x^{2}] \ \right)-\frac{1}{2}x^{i}\mathbb{D}_{0}^{XV}[v^{2}]\right\}\]
The RR force \(F^{i}\) then covariantises to the expression given in Eq.(4.3). Beyond the expressions quoted in the main text, we have also covariantised the RR force expressions in \(d=9,11\). They are given by
\[{}^{0}f_{9}^{\mu}\equiv\frac{P^{\mu\nu}}{9!!}\left\{- a_{\nu}^{(7)}+30\ (a\cdot a)\ a_{\nu}^{(5)}+210\ (a\cdot a^{(1)})\ a_{\nu}^{(4)}+378\ (a\cdot a^{(2)})\ a_{\nu}^{(3)}\right.\] \[\qquad\qquad\left.+420\ (a\cdot a^{(3)})\ a_{\nu}^{(2)}+300\ (a\cdot a^{(4)})\ a_{\nu}^{(1)}+108\ (a\cdot a^{(5)})\ a_{\nu}\right.\] \[\qquad\qquad\left.+336\ (a^{(1)}\cdot a^{(1)})\ a_{\nu}^{(3)}+1050\ (a^{(1)}\cdot a^{(2)})\ a_{\nu}^{(2)}+960\ (a^{(1)}\cdot a^{(3)})\ a_{\nu}^{(1)}+420\ (a^{(1)}\cdot a^{(4)})\ a_{\nu}\right.\] \[\qquad\qquad\left.+675\ (a^{(2)}\cdot a^{(2)})\ a_{\nu}^{(1)}+756\ (a^{(2)}\cdot a^{(3)})\ a_{\nu}+O(a^{5})\right\}\] \[-H^{2}\frac{P^{\mu\nu}}{9!!}\left\{a_{\nu}^{(5)}+97\ (a\cdot a)\ a_{\nu}^{(3)}+433\ (a\cdot a^{(1)})\ a_{\nu}^{(2)}+408\ (a\cdot a^{(2)})\ a_{\nu}^{(1)}+199\ (a\cdot a^{(3)})\ a_{\nu}\right.\] \[\qquad\qquad\left.+339\ (a^{(1)}\cdot a^{(1)})\ a_{\nu}^{(1)}+448\ (a^{(1)}\cdot a^{(2)})\ a_{\nu}+O(a^{5})\right\}\] \[+H^{4}\frac{P^{\mu\nu}}{9!!}\left\{-a_{\nu}^{(3)}+157\ (a\cdot a)\ a_{\nu}^{(1)}+296\ (a\cdot a^{(1)})\ a_{\nu}\right\}+O(H^{6})\,\]
as well as
\[\begin{split}{}^{0}f_{11}^{\mu}&\equiv\frac{P^{\mu\nu}}{1 1!!}\left\{-a_{\nu}^{(9)}+55\ (a\cdot a)\ a_{\nu}^{(7)}+495\ (a\cdot a^{(1)})\ a_{\nu}^{(6)}+1188\ (a\cdot a^{(2)})\ a_{\nu}^{(5)}\right.\\ &\qquad\qquad\left.+1848\ (a\cdot a^{(3)})\ a_{\nu}^{(4)}+1980\ (a\cdot a^{(4)})\ a_{\nu}^{(3)}\right.\\ &\qquad\qquad\left.+1485\ (a\cdot a^{(5)})\ a_{\nu}^{(2)}+770\ (a\cdot a^{(6)})\ a_{\nu}^{(1)}+220\ (a\cdot a^{(7)})\ a_{\nu}\right.\\ &\qquad\qquad\left.+1056\ (a^{(1)}\cdot a^{(1)})\ a_{\nu}^{(5)}+4620\ (a^{(1)}\cdot a^{(2)})\ a_{\nu}^{(4)}\right.\\ &\qquad\qquad\left.+6336\ (a^{(1)}\cdot a^{(3)})\ a_{\nu}^{(3)}+5775\ (a^{(1)}\cdot a^{(4)})\ a_{\nu}^{(2)}+3520\ (a^{(1)}\cdot a^{(5)})\ a_{\nu}^{(1)}+1155\ (a^{(1)}\cdot a^{(6)})\ a_{\nu}\right.\\ &\qquad\qquad\left.+4455\ (a^{(2)}\cdot a^{(2)})\ a_{\nu}^{(3)}+10395\ (a^{(2)}\cdot a^{(3)})\ a_{\nu}^{(2)}+7700\ (a^{(2)}\cdot a^{(4)})\ a_{\nu}^{(1)}+2970\ (a^{(2)}\cdot a^{(5)})\ a_{\nu}\right.\\ &\qquad\qquad\left.+4928\ (a^{(3)}\cdot a^{(3)})\ a_{\nu}^{(1)}+4620\ (a^{(3)}\cdot a^{(4)})\ a_{\nu}\right\}\\ &\qquad\qquad-H^{2}\frac{P^{\mu\nu}}{11!!}\left\{a_{\nu}^{(7)}+3 42\ (a\cdot a)\ a_{\nu}^{(5)}+2294\ (a\cdot a^{(1)})\ a_{\nu}^{(4)}+3826\ (a\cdot a^{(2)})\ a_{\nu}^{(3)}\right.\\ &\qquad\qquad\left.+3737\ (a\cdot a^{(3)})\ a_{\nu}^{(2)}+2066\ (a\cdot a^{(4)})\ a_{\nu}^{(1)}+622\ (a\cdot a^{(5)})\ a_{\nu}\right.\\ &\qquad\qquad\left.+3231\ (a^{(1)}\cdot a^{(1)})\ a_{\nu}^{(3)}+8490\ (a^{(1)}\cdot a^{(2)})\ a_{\nu}^{(2)}+5663\ (a^{(1)}\cdot a^{(3)})\ a_{\nu}^{(1)}+1974\ (a^{(1)}\cdot a^{(4)})\ a_{\nu}\right.\\ &\qquad\qquad\left.+3785\ (a^{(2)}\cdot a^{(2)})\ a_{\nu}^{(1)}+3210\ (a^{(2)}\cdot a^{(3)})\ a_{\nu}+O(a^{5})\right\}\,\\ &\qquad\left.-H^{4}\frac{P^{\mu\nu}}{11!!}\left\{a_{\nu}^{(5)}+13 40\ (a\cdot a)\ a_{\nu}^{(3)}+6108\ (a\cdot a^{(1)})\ a_{\nu}^{(2)}+6148\ (a\cdot a^{(2)})\ a_{\nu}^{(1)}+2599\ (a\cdot a^{(3)})\ a_{\nu}\right.\\ &\qquad\qquad\left.+5182\ (a^{(1)}\cdot a^{(1)})\ a_{\nu}^{(1)}+5876\ (a^{(1)}\cdot a^{(2)})\ a_{\nu}+O(a^{5})\right\}\.\end{split} \tag{102}\]
We have not yet succeeded in finding similar covariant expressions for the \(N^{i}\).
### Near flat expansions for odd \(d\)
In this subsection, we will describe how normalisable modes of the generalised scalar equation in dS can be thought of as perturbations of the corresponding Bessel J modes in flat spacetime. In the context of radiation reaction problems, these modes are essential in defining the radiative multipole moments: their role is to appropriately smear the sources to take into account time-delay effects. Once such an expansion is obtained, it is easy to find the flat space expansion of the non-normalisable mode for even-dimensional dS just by analytical continuation.
The solution of the generalised scalar wave equation, regular at \(r=0\), is given by
\[\begin{split}\Xi_{n}&\equiv\frac{1}{2\nu}r^{\nu- \frac{d}{2}+1+\frac{1}{2}(d-1-\mathcal{N})}(1-H^{2}r^{2})^{-\frac{i\omega}{2H }}\\ &\quad\times{}_{2}F_{1}\left[\frac{1}{2}\left(1+\mu+\nu-\frac{i \omega}{H}\right),\frac{1}{2}\left(1-\mu+\nu-\frac{i\omega}{H}\right),1+\nu,H^ {2}r^{2}\right]\.\end{split} \tag{103}\]
Here we have made all \(H\) factors explicit so that the \(H\to 0\) limit can readily be taken. In such a limit, the above expression reduces to a Bessel \(J\) function. More explicitly, we will find it convenient to define a sequence of scaled Bessel \(J\) functions of the form
\[\mathfrak{B}_{k}\equiv\frac{r^{\nu-\frac{d}{2}+1+\frac{1}{2}(d-1-\mathcal{N})+ 2k}}{2\nu(\nu+1)\dots(\nu+k)}\ {}_{0}F_{1}\left[1+k+\nu,-\frac{\omega^{2}r^{2}}{4}\right]=\frac{\Gamma(\nu) \ r^{\frac{1}{2}(1-\mathcal{N})+k}}{2(\omega/2)^{k+\nu}}J_{k+\nu}(\omega r) \tag{104}\]
in flat spacetime. In terms of these functions, the dS wavefunction \(\Xi_{n}\) has a small \(H\) expansion
\[\Xi_{n}=\sum_{k=0}^{\infty}\mathfrak{p}_{k}(\nu,H^{2},\omega^{2})\mathfrak{B}_{ k}\, \tag{105}\]
with \(\mathfrak{p}_{k}(H^{2},\omega^{2})\) being a homogeneous polynomial of degree \(k\) in the variables \(H^{2}\) and \(\omega^{2}\). An explicit expression is given by
\[\begin{split}\mathfrak{p}_{k}\equiv\frac{H^{2k}}{k!}\sum_{m=0}^{k}& (-)^{m}\binom{k}{m}\sum_{n=0}^{m}(-)^{n}\binom{m}{n}\sigma^{2k-2m} \frac{\Gamma(\alpha+m)\Gamma(1+\nu+m)}{\Gamma(\alpha+m-n)\Gamma(1+\nu+m-n)}\\ &\times\frac{\Gamma(\alpha+i\sigma+m-n)\Gamma(\alpha-i\sigma+m-n )}{\Gamma(\alpha+i\sigma)\Gamma(\alpha-i\sigma)}\,\end{split} \tag{108}\]
where we have defined the variables
\[\alpha\equiv\frac{1}{2}(1+\nu-\mu)\,\quad\sigma=\frac{\omega}{2H}. \tag{109}\]
A useful fact is the leading small-\(H\) scaling of these polynomials, given by
\[\mathfrak{p}_{3n-2}\,\ \mathfrak{p}_{3n-1}\,\ \mathfrak{p}_{3n}\ \propto H^{2n}. \tag{110}\]
Thus, to obtain an answer accurate up to \(H^{2n}\), we only need the polynomials up to \(\mathfrak{p}_{3n}\).
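The leading statement here, namely that \(\Xi_{n}\) reduces to \(\mathfrak{B}_{0}\) with corrections starting at order \(H^{2}\), can be checked numerically. The sketch below is our check, with an arbitrary illustrative choice \(d=3\), \(\ell=1\), \(\mathcal{N}=d-1\), \(\mu=d/2\): it evaluates the hypergeometric form of \(\Xi_{n}\) and the Bessel mode \(\mathfrak{B}_{0}\) directly, and the residual should shrink roughly by a factor of four each time \(H\) is halved.

```python
import mpmath as mp

# Check (ours): Xi_n of Eq.(D.9) tends to the scaled Bessel mode B_0 of Eq.(D.12)
# as H -> 0, with a residual of order H^2. Parameter choices are illustrative.
d, ell = 3, 1
nu, mu, calN = ell + d/2 - 1, d/2, d - 1       # nu = l + d/2 - 1, KG scalar values
omega, r = mp.mpf('0.7'), mp.mpf('0.3')
pref = r**(nu - d/2 + 1 + (d - 1 - calN)/2)/(2*nu)

def Xi_n(H):
    a = (1 + mu + nu - 1j*omega/H)/2
    b = (1 - mu + nu - 1j*omega/H)/2
    return pref*(1 - H**2*r**2)**(-1j*omega/(2*H))*mp.hyp2f1(a, b, 1 + nu, H**2*r**2)

B0 = pref*mp.hyp0f1(1 + nu, -omega**2*r**2/4)

for H in [mp.mpf('0.2'), mp.mpf('0.1'), mp.mpf('0.05')]:
    print(H, abs(Xi_n(H) - B0))   # residuals ~ H^2: each step down by about 4
```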
We will now outline a derivation for the above expansion as follows: first, we use the Euler transformation on the hypergeometric functions to write
\[\Xi_{n}=\frac{1}{2\nu}r^{\nu-\frac{d}{2}+1+\frac{1}{2}(d-1-\mathcal{N})}(1-H^{2}r^{2})^{-\alpha}\ {}_{2}F_{1}\left[\alpha+i\sigma,\alpha-i\sigma,1+\nu,-\frac{H^{2}r^{2}}{1-H^{2}r^{2}}\right]\, \tag{111}\]
where the variables \(\alpha\) and \(\sigma\) are as defined above. In the next step, we employ the Mellin-Barnes representation of the hypergeometric function, viz.[87],
\[{}_{2}F_{1}\left[a,b,c,x\right]=\int_{-i\infty}^{i\infty}\frac{ dz}{2\pi i}(-x)^{z}\Gamma(-z)\frac{\Gamma(a+z)\Gamma(b+z)\Gamma(c)}{\Gamma(a) \Gamma(b)\Gamma(c+z)}. \tag{112}\]
and expand the resultant integrand using
\[(1-H^{2}r^{2})^{-\alpha-z}\left(H^{2}r^{2}\right)^{z}=\sum_{k=0}^{\infty}\frac{(H^{2}r^{2})^{k+z}}{k!}\frac{\Gamma\left[k+z+\frac{1}{2}(1+\nu-\mu)\right]}{\Gamma\left[z+\frac{1}{2}(1+\nu-\mu)\right]}. \tag{113}\]
Shifting the Mellin-Barnes integration variable, we get the following Mellin-integral representation for \(\Xi_{n}\):
\[\Xi_{n}=\frac{1}{2\nu}r^{\nu-\frac{d}{2}+1+\frac{1}{2}(d-1-\mathcal{N})}\int_{-i\infty}^{i\infty}\frac{dz}{2\pi i}\left(\frac{\omega r}{2}\right)^{2z}\Gamma(-z)\frac{\Gamma(c)}{\Gamma(c+z)}\widetilde{\Xi}(z)\, \tag{114}\]
where the Mellin-transform
\[\begin{split}\widetilde{\Xi}(z)&\equiv\left(\frac{ 2H}{\omega}\right)^{2z}\sum_{k=0}^{\infty}(-)^{k}\binom{z}{k}\frac{\Gamma(a+z -k)\Gamma(b+z-k)\Gamma(c+z)}{\Gamma(a)\Gamma(b)\Gamma(c+z-k)}\frac{\Gamma(z+ \alpha)}{\Gamma(z-k+\alpha)}\\ &=\left(\frac{2H}{\omega}\right)^{2z}\frac{\Gamma(a+z)\Gamma(b+ z)}{\Gamma(a)\Gamma(b)}{}_{3}F_{2}\left[\begin{array}{ccc}1-c-z,&-z,&1- \alpha-z\\ 1-a-z&\quad,&1-b-z\end{array};1\right]\.\end{split} \tag{115}\]
Here, \(\alpha\equiv\frac{1}{2}(1+\nu-\mu)\) and \(a,b,c\) denote the parameters of the hypergeometric function appearing in Eq.(111). The Mellin transform \(\widetilde{\Xi}(z)\) evaluated at integer \(z\) is, in fact, a polynomial
in the variable \((H/\omega)^{2}\): this can be gleaned from the fact that the series above truncates in this case with polynomial coefficients.
To determine the polynomials \(\mathfrak{p}_{n}\), we should compare the polynomials \(\widetilde{\Xi}(n)\) against the Mellin-transform of \(\sum_{k}\mathfrak{p}_{k}\mathfrak{B}_{k}\). This can be done using the Mellin-Barnes representation of \({}_{0}F_{1}\), viz.[87],
\[{}_{0}F_{1}\left[c,x\right]=\int_{-i\infty}^{i\infty}\frac{dz}{2\pi i}(-x)^{z} \Gamma(-z)\frac{\Gamma(c)}{\Gamma(c+z)}. \tag{111}\]
This, in turn, yields an expression of the form
\[\widetilde{\Xi}(z)=\sum_{k=0}^{\infty}\left(\frac{2}{\omega}\right)^{2k} \mathfrak{p}_{k}(\nu,H^{2},\omega^{2})\ \frac{\Gamma(k-z)}{\Gamma(-z)}. \tag{112}\]
This series also truncates for integer \(z\) and the above relation can then be inverted to give
\[\mathfrak{p}_{k}=\frac{1}{k!}\left(\frac{\omega}{2}\right)^{2k}\sum_{m=0}^{k} (-)^{m}\binom{k}{m}\ \widetilde{\Xi}(m). \tag{113}\]
The explicit expression quoted before follows from this equation. The first few polynomials are given by
\[\mathfrak{p}_{0} =1\, \tag{114}\] \[\mathfrak{p}_{1} =\frac{H^{2}}{2^{2}}(1+\nu+\mu)(1+\nu-\mu)\,\] \[\mathfrak{p}_{2} =\frac{H^{2}}{2^{3}}\left\{-\omega^{2}(2\nu+3)+\frac{H^{2}}{2^{2 }}(1+\nu+\mu)(1+\nu-\mu)(3+\nu+\mu)(3+\nu-\mu)\right\}\,\] \[\mathfrak{p}_{3} =\frac{H^{2}}{2^{2}\times 3!}\Bigg{\{}\omega^{4}+\frac{H^{2} \omega^{2}}{2^{2}}[3\mu^{2}(2\nu+5)-(103+132\nu)-3(17\nu^{2}+2\nu^{3})]+H^{4} (\ldots)\Bigg{\}}\,\]
The polynomials \(\mathfrak{p}_{4}\) and higher are proportional to \(H^{4}\) and hence the above expressions are sufficient to obtain an answer accurate up to order \(H^{2}\) terms. To get terms accurate up to order \(H^{4}\), we also need the leading terms of the next three polynomials:
\[\mathfrak{p}_{4} =\frac{H^{4}}{2^{5}\times 4!}\Bigg{\{}\omega^{4}[-4\mu^{2}+8\nu(2 \nu+13)+157]+H^{2}(\ldots)\Bigg{\}}\, \tag{115}\] \[\mathfrak{p}_{5} =\frac{H^{4}}{2^{2}\times 5!}\Bigg{\{}\omega^{6}(5\nu+18)+H^{2} (\ldots)\Bigg{\}}\,\] \[\mathfrak{p}_{6} =\frac{H^{4}}{2^{7}\times 3^{2}}\Bigg{\{}\omega^{8}+H^{2}( \ldots)\Bigg{\}}\.\]
The polynomials \(\mathfrak{p}_{7}\) and higher are proportional to \(H^{6}\), and hence can be ignored at this order.
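As a quick numerical cross-check of the inversion above, the following Python sketch evaluates the truncating series for \(\widetilde{\Xi}(m)\) at non-negative integers and reconstructs the first few \(\mathfrak{p}_{k}\). The hypergeometric parameters \(a,b,c\) and the values of \(\nu,\mu,H,\omega\) below are placeholders (the defining Eq.(111) is not reproduced in this excerpt), so only the structure of the computation, and the check \(\mathfrak{p}_{0}=1\), is meaningful here.

```python
from mpmath import mp, mpf, gamma, binomial, factorial

mp.dps = 30

# Placeholder parameters (hypothetical): nu, mu, H, omega, and the 2F1
# parameters a, b, c, which are defined in Eq.(111) and not reproduced here.
nu, muP, H, omega = mpf('1.5'), mpf('0.5'), mpf('0.1'), mpf('2.0')
alpha = (1 + nu - muP) / 2
a, b, c = mpf('0.75'), mpf('1.25'), 1 + nu

def xi_tilde(n):
    """Truncating series for the Mellin transform at non-negative integer z = n."""
    s = mpf(0)
    for k in range(n + 1):
        s += ((-1) ** k * binomial(n, k)
              * gamma(a + n - k) * gamma(b + n - k) * gamma(c + n)
              / (gamma(a) * gamma(b) * gamma(c + n - k))
              * gamma(n + alpha) / gamma(n - k + alpha))
    return (2 * H / omega) ** (2 * n) * s

def p_poly(k):
    """Binomial inversion: p_k = (omega/2)^{2k}/k! * sum_m (-1)^m C(k,m) xi_tilde(m)."""
    s = sum((-1) ** m * binomial(k, m) * xi_tilde(m) for m in range(k + 1))
    return (omega / 2) ** (2 * k) / factorial(k) * s

print(p_poly(0))              # equals 1 for any parameter choice
print(p_poly(1), p_poly(2))   # structure check only, since a, b, c are placeholders
```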
For odd values of \(d\), the function \(\Xi_{nn}\) is related to \(\Xi_{n}\) simply by the transformation: \(\nu\to-\nu\). This allows us to also obtain the flat space expansion for \(\Xi_{nn}\) in odd \(d\):
\[\Xi_{nn}|_{\text{Odd }d}=\sum_{k=0}^{\infty}\mathfrak{p}_{k}(-\nu,H^{2}, \omega^{2})\mathfrak{G}_{k}\, \tag{116}\]
where the functions \(\mathfrak{G}_{k}\) are related to the \(\mathfrak{B}_{k}\) by \(\nu\rightarrow-\nu\):
\[\mathfrak{G}_{k}\equiv\frac{r^{-\nu-\frac{d}{2}+\frac{1}{2}(d-1- \mathcal{N})+2k}}{(-\nu+1)\ldots(-\nu+k)}\ {}_{0}F_{1}\left[1+k-\nu,-\frac{\omega^{2}r^{2}}{4}\right]=-2\nu\frac{ \Gamma(-\nu)\ r^{\frac{1}{2}(1-\mathcal{N})+k}}{2(\omega/2)^{k-\nu}}J_{k-\nu }(\omega r). \tag{111}\]
We will conclude by giving the near-flat/high-frequency expansion of \(K_{\text{Out}}\) in odd \(d\). This can be achieved using Stirling approximation, i.e.,
\[\Gamma\left(z\right)\sim\exp\left\{\left(z-\tfrac{1}{2}\right)\ln z -z+\tfrac{1}{2}\ln\left(2\pi\right)+\sum_{k=1}^{\infty}\frac{B_{2k}}{2k(2k-1)z ^{2k-1}}\right\}\, \tag{112}\]
an approximation valid as long as \(z\rightarrow\infty\) away from the negative real axis. We then obtain the following expansion for \(K_{\text{Out}}\) in odd \(d\):
\[\begin{split} K_{\text{Out}}|_{\text{Odd }d}=&\ \frac{2\pi i}{\Gamma(\nu)^{2}}\left(\frac{\omega}{2} \right)^{2\nu}\left[1+(\nu^{2}+3\mu^{2}-1)\frac{\nu}{3!!}\frac{H^{2}}{\omega^{ 2}}\right.\\ &+\left.\frac{5\nu^{4}-4\nu^{3}+(30\mu^{2}-14)\nu^{2}-(60\mu^{2}- 16)\nu+(45\mu^{4}-90\mu^{2}+21)}{2\times 3}\frac{\nu(\nu-1)}{5!!}\frac{H^{4}}{ \omega^{4}}\right.\\ &\qquad+O\left(\frac{H^{6}}{\omega^{6}}\right)\Bigg{]}\.\end{split} \tag{113}\]
This expression describes how the radiation reaction kernel gets corrected due to the non-zero cosmological constant.
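The Stirling series quoted above is easy to check numerically; a minimal sketch (not part of the derivation) is:

```python
from mpmath import mp, mpf, log, pi, exp, bernoulli, gamma

mp.dps = 30

def log_gamma_stirling(z, terms=4):
    """Stirling series for ln Gamma(z), truncated after `terms` Bernoulli-number terms."""
    s = (z - mpf(1) / 2) * log(z) - z + log(2 * pi) / 2
    s += sum(bernoulli(2 * k) / (2 * k * (2 * k - 1) * z ** (2 * k - 1))
             for k in range(1, terms + 1))
    return s

z = mpf('8.3')
print(exp(log_gamma_stirling(z)))   # asymptotic estimate
print(gamma(z))                     # exact value for comparison
```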
\begin{table}
\begin{tabular}{||c||c|c|c||} \hline \hline Symbol & \({}^{0}f_{d}^{a}\) & \({}^{0}f_{d-2}^{a}\) & \({}^{0}f_{d-4}^{a}\) \\ \hline \hline \(\mathbb{D}_{1}\) & \(\frac{\partial^{d}}{2!}\) & \(\frac{\partial^{d-2}}{(d-2)!!}\) & \(\frac{\partial^{d-4}}{(d-4)!!}\) \\ \hline \(\mathbb{D}_{2}\) & \(\frac{\partial^{d+2}}{(d+2)!!}-\frac{H^{2}}{3!}(d+1)\frac{\partial^{d}}{4!}+ \frac{H^{4}}{3!}\frac{7}{3}(d^{2}-1)\frac{\partial^{d-2}}{(d-2)!!}\) & \(\frac{\partial^{d}}{2!}-\frac{H^{2}}{3!}(d-1)\frac{\partial^{d-2}}{(d-2)!!}+ \frac{H^{4}}{3!}\frac{7}{3}(d-1)(d-3)\frac{\partial^{d-4}}{(d-4)!!}\) & \(\frac{\partial^{d-2}}{(d-2)!!}\) \\ \hline \(\mathbb{D}_{1}^{X}\) & \(\frac{\partial^{d+2}}{(d+2)!!}+\frac{H^{2}}{3}(d-2)\frac{\partial^{d}}{2!}- \frac{H^{4}}{45}(d^{2}-1)\frac{\partial^{d-2}}{(d-2)!!}\) & \(\frac{\partial^{d}}{2!}+\frac{H^{2}}{3!}(d-4)\frac{\partial^{d-2}}{(d-2)!!}- \frac{H^{4}}{45}(d-1)(d-3)\frac{\partial^{d-4}}{(d-4)!!}\) & \(\frac{\partial^{d-2}}{(d-2)!!}\) \\ \hline \(\mathbb{D}_{1}^{V}\) & \(\frac{\partial^{d}}{2!}\) & \(\frac{\partial^{d-3}}{(d-2)!!}\) & \(\frac{\partial^{d-3}}{(d-2)!!}\) & \(\frac{\partial^{d-4}}{(d-4)!!}\) \\ \hline \(\mathbb{D}_{0}^{XX}\) & \(\frac{d+2}{2}\frac{\partial^{d+2}}{(d+2)!!}+\frac{H^{2}}{4!}(25d^{2}-15d-2)\frac{ \partial^{d}}{4!}\) & \(\frac{d+2}{2}\frac{\partial^{d}}{2!}+\frac{H^{2}}{4!}(25d^{2}-25d+2)\frac{ \partial^{d-2}}{(d-2)!!}\) & \(\frac{d+2}{2}\frac{\partial^{d-2}}{(d-2)!!}\) \\ & \(+\frac{H^{4}}{6!}(67d^{3}-526d^{2}+833d-14)\) & \(+\frac{H^{4}}{6!}(67d^{3}-794d^{2}+2125d+42)\) & \\ \hline \(\mathbb{D}_{0}^{VV}\) & \(\frac{\partial^{d-4}}{(d-2)!!}+\frac{H^{2}}{3!}(d-1)\frac{\partial^{d-4}}{(d-4)!! }+\frac{H^{4}}{5!}(d-1)(d-3)\frac{\partial^{d-4}}{(d-6)!!}+\frac{H^{4}}{3!}( d-3)\frac{\partial^{d-4}}{(d-6)!!}+\frac{H^{4}}{3!}(d-3)(d-5)\frac{\partial^{d- 6}}{(d-6)!!}\) & \(\frac{\partial^{d-6}}{(d-6)!!}\) \\ \hline \(\mathbb{D}_{0}^{XV}\) & \(\frac{\partial^{d}}{2!}+\frac{H^{2}}{2!}(d-3)\frac{\partial^{d-2}}{(d-2)!!}+ \frac{H^{4}}{4!}(d-1)(d-7)\frac{\partial^{d-4}}{(d-4)!!}\) & \(\frac{\partial^{d-2}}{(d-2)!!}+\frac{H^{2}}{2!}(d-5)\frac{\partial^{d-4}}{(d-4)!! }+\frac{H^{4}}{4!}(d-3)(d-9)\frac{\partial^{d-4}}{(d-6)!!}\) & \(\frac{\partial^{d-4}}{(d-4)!!}\) \\ \hline \end{tabular}
\end{table}
Table 8: The differential operators that appear in dS radiation reaction (for \(d\) odd). We have divided up the sum into three columns where each column combines into an expression covariant under dS isometries. The entries in the second column must be multiplied by a relative factor of \(-\frac{H^{2}}{4\cdot 4\cdot 4}c_{\alpha}\) with \(c_{\alpha}\equiv 12\mu^{2}+d^{2}-4\) and then added to the first column. Similarly, the third column should be multiplied by a relative factor of \(\frac{H^{4}}{8\pi c!!}[5c_{\alpha}^{2}-40(d+2)c_{\alpha} |
2302.14820 | Experimental Communication Through Superposition of Quantum Channels | Information capacity enhancement through the coherent control of channels has
attracted much attention of late, with work exploring the effect of coherent
control of channel causal orders, channel superpositions, and information
encoding. Coherently controlling channels necessitates a non-trivial expansion
of the channel description, which for superposing qubit channels, is equivalent
to expanding the channel to act on qutrits. Here we explore the nature of this
capacity enhancement for the superposition of channels by comparing the maximum
coherent information through depolarizing qubit channels and relevant
superposed and qutrit channels. We show that the expanded qutrit channel
description in itself is sufficient to explain the capacity enhancement without
any use of superposition. | Arthur O. T. Pang, Noah Lupu-Gladstein, Hugo Ferretti, Y. Batuhan Yilmaz, Aharon Brodutch, Aephraim M. Steinberg | 2023-02-28T18:18:10Z | http://arxiv.org/abs/2302.14820v3 | # Experimental Communication Through Superposition of Quantum Channels
###### Abstract
Information capacity enhancement through the coherent control of channels has attracted much attention of late, with work exploring the effect of coherent control of channel causal orders, channel superpositions, and information encoding. Coherently controlling channels necessitates a non-trivial expansion of the channel description, which for superposing qubit channels, is equivalent to expanding the channel to act on qutrits. Here we explore the nature of this capacity enhancement for the superposition of channels by comparing the maximum coherent information through depolarizing qubit channels and relevant superposed and qutrit channels. We show that the expanded qutrit channel description in itself is sufficient to explain the capacity enhancement without any use of superposition.
## I Introduction
In a surprising paper, Ebler _et al._[1] showed that when two fully depolarizing channels are put in a superposition of causal orders [2], information can be transmitted. In their scheme, the order of two depolarizing channels which act sequentially on a system state is conditioned on a control qubit's state in the computational basis (states \(\ket{\mathbf{a}}\) and \(\ket{\mathbf{b}}\) in this paper), and by preparing and postselecting the control qubit in \(\ket{+}\equiv\left(\ket{\mathbf{a}}+\ket{\mathbf{b}}\right)/\sqrt{2}\), information encoded on the system state can be transmitted. This has been experimentally demonstrated in [3; 4; 5; 6; 7; 8] and further theoretically explored in [9; 10; 11; 12; 13; 14].
This capacity enhancement phenomenon is, however, not unique to channels being placed in an indefinite causal order, and a similar capacity enhancement phenomenon has also been demonstrated by [11; 15; 16; 17; 18]. Notably, all of these schemes involve coherent control of the noise channel through the use of an ancillary qubit, where depending on the control state, the system state experiences indefinite ordering of noise channel, superposition of information encoding, or superposition of channels [8].
Here, we experimentally explore the nature of the capacity enhancement achieved by superposing two independent noisy qubit channels. As in the cases of the previously demonstrated schemes involving coherent control of noise channels, the superposition of two channels necessitates an expanded description of those channels to account for this extra control state. For the case of superposing qubit channels, this expanded description takes the form of a qutrit channel, as has been discussed briefly in the past [11; 17; 8; 18].
Expanding the channel description in this fashion is generally non-trivial, and the expansion is also not unique without additional information[19]. Experimentally, for the case of superposition of channels, this expansion depends on the physical implementation of the channel. We shall demonstrate that different physical implementations of the same qubit channel can lead to a different expanded description. Notably, the choice of this expanded description of channels completely characterizes the channel behaviour under superposition. This demonstrates that the physical implementation, rather than the act of superposing channels, is the origin of the capacity enhancement.
This paper is organized in the following manner. We begin by describing our experimental setup in section II followed by experimental results concerning the dependence of the post-selected channel and qutrit channel on the physical implementation of the qubit channel in sections III and IV. Finally, we construct a hierarchy of channel models based on their complexity and completeness in describing the superposed channel in section V, and discuss the implications of our results in section VI.
## II Experimental setup
The construction of our superposition of channels is based on heralded single photons, with the polarization degree of freedom playing the role of the system qubit and the path degree of freedom playing the role of the control qubit. In our setup, illustrated in figure 1, we prepare the photon polarization and path by using a set of waveplates and a Sagnac interferometer respectively. After the preparation, the photon goes into either path \(\mathbf{a}\) or path \(\mathbf{b}\) of a Mach-Zehnder interferometer, with the paths corresponding to random unitary channels \(N_{a}\) and \(N_{b}\) respectively. These random unitary channels each consist
of three liquid crystal waveplates (LCWP), where the random unitary is implemented by changing the LCWP voltage. Path **a** also passes through a glass plate that controls the phase between the two paths, after which the two paths interfere at a beam splitter (BS).
The unitary performed by the Mach-Zehnder is given by
\[\hat{U}_{i,j}^{(MZ)}=\left(\left|\textbf{a}\right\rangle\bra{\textbf{a}}\otimes \hat{U}_{i}^{(a)}+\left|\textbf{b}\right\rangle\bra{\textbf{b}}\otimes\hat{U}_ {j}^{(b)}\right) \tag{1}\]
where \(\hat{U}_{i}^{(a)}\) and \(\hat{U}_{j}^{(b)}\) are the unitary operators corresponding to the polarization rotation given by LCWPs in \(N_{a}\) and \(N_{b}\), \(\left|\textbf{a}\right\rangle\bra{\textbf{a}}\) and \(\left|\textbf{b}\right\rangle\bra{\textbf{b}}\) correspond to the projectors on the path qubit, and the indices \(i,j\) denote elements in the set of all possible unitary operators. To simulate the effects of a random unitary channel, we perform our experiment using all possible combinations of unitaries \(\{U_{i}^{(a)}\}_{i}\) and \(\{U_{j}^{(b)}\}_{j}\) and taking a weighted average of the results with weightings \(p_{i}^{(a)}\) and \(p_{j}^{(b)}\) that correspond to the probabilities of those unitaries in the respective random unitary channel. This effectively emulates random unitary channels for \(N_{a}\) and \(N_{b}\), with Kraus operators given by \(\hat{K}_{i}^{(a)}=\sqrt{p_{i}^{(a)}}\hat{U}_{i}^{(a)}\) and \(\hat{K}_{j}^{(b)}=\sqrt{p_{j}^{(b)}}\hat{U}_{j}^{(b)}\). The overall channel given by our Mach-Zehnder interferometer acting on some input path and polarization state \(\rho^{(MZ)}\) is then also a random unitary channel, given by
\[\boldsymbol{\Phi}^{(MZ)}\left(\rho^{(MZ)}\right)=\sum_{i,j}p_{i}^{(a)}p_{j}^{( b)}\hat{U}_{i,j}^{(MZ)}\rho^{(MZ)}\hat{U}_{i,j}^{(MZ)\dagger}, \tag{2}\]
with the summation operation summing over the entire set of possible unitary operators \(\hat{U}_{i}^{(a)}\) and \(\hat{U}_{j}^{(b)}\) weighted over the probability of those unitaries \(p_{i}^{(a)}p_{j}^{(b)}\).
To understand the nature of the implementation-dependence of superposing qubit channels, we will compare the superposed channels created by superposing two different implementations of the depolarizing channel. First, we remind readers of some mathematical properties of the depolarizing channels. We note that a set of Kraus operators in the qubit Hilbert space \(\hat{K}_{i}\) can be expressed in terms of the qubit operator basis formed by the identity and the three Pauli operators, such that
\[\sum_{i}\hat{K}_{i}\rho\hat{K}_{i}^{\dagger}=\sum_{k,l}d_{k,l}\,\hat{\sigma}_{k}\rho\hat{\sigma}_{l}^{\dagger}, \tag{3}\]
\[d_{k,l}=\sum_{i}Tr\left[\hat{K}_{i}\hat{\sigma}_{k}^{\dagger}\right]\cdot Tr \left[\hat{K}_{i}\hat{\sigma}_{l}^{\dagger}\right]^{*}/4, \tag{4}\]
where \(\hat{\sigma}_{1}\), \(\hat{\sigma}_{2}\), and \(\hat{\sigma}_{3}\) are Pauli matrices (and will be used interchangeably with \(\hat{\sigma}_{x}\), \(\hat{\sigma}_{y}\), and \(\hat{\sigma}_{z}\)), and \(\hat{\sigma}_{0}\) is the identity. The channel formed by the set of Kraus operators \(\{\hat{K}_{i}\}_{i}\) describes a depolarizing channel if it can be described by a random unitary channel, where \(\hat{\sigma}_{1}\), \(\hat{\sigma}_{2}\), and \(\hat{\sigma}_{3}\) are performed with probability \(\alpha/4\) and the identity operator with probability \(1-3\alpha/4\). Mathematically, the decomposition of a depolarizing channel from equation 4 gives
\[d_{k,l}=\begin{cases}1-3\alpha/4&k=l=0\\ \alpha/4&k=l\neq 0\\ 0&k\neq l\end{cases} \tag{5}\]
where \(\alpha\) is the degree of depolarization, with \(\alpha=1\) corresponding to a completely depolarizing channel.
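A minimal numerical check of Eqs. (4) and (5), not part of the paper's analysis, is sketched below: it builds the random-unitary Kraus set of a depolarizing channel and confirms that its Pauli decomposition is diagonal with entries \(1-3\alpha/4\) and \(\alpha/4\).

```python
import numpy as np

sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

alpha = 0.7
probs = [1 - 3 * alpha / 4, alpha / 4, alpha / 4, alpha / 4]
kraus = [np.sqrt(p) * s for p, s in zip(probs, sig)]

# d_{kl} = sum_i Tr[K_i sigma_k^dagger] Tr[K_i sigma_l^dagger]^* / 4   (Eq. 4)
d = np.zeros((4, 4), dtype=complex)
for K in kraus:
    t = np.array([np.trace(K @ s.conj().T) for s in sig])
    d += np.outer(t, t.conj()) / 4

print(np.round(d.real, 3))   # expect diag(1 - 3*alpha/4, alpha/4, alpha/4, alpha/4)
```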
We will denote the two implementations of the depolarizing channel as the phase-coherent and phase-incoherent
Figure 1: The experimental setup is divided into two parts. The photon first goes through the state preparation setup. This setup prepares the photon polarization using a set of half and quarter waveplates (HWP/QWP), which is followed by a Sagnac interferometer that prepares the photon path through a pair of HWPs. The pair of HWPs in the Sagnac is rotated in a correlated manner, and for the preparation of the \(\left|+\right\rangle\) path state, the two HWPs are set to be at \(\pi/2\) off the horizontal axis. After the Sagnac, the photon goes through a set of HWPs and QWPs to correct for the path-dependent polarization imposed by the Sagnac and restore the photon polarization to the initial preparation. The settings for the sets of HWPs and QWPs immediately after the Sagnac, unlike the initial set of HWPs and QWPs or the HWPs in the Sagnac, is independent of the polarization and path state prepared. The photon then experiences either the channel \(N_{a}\) or \(N_{b}\) depending on the state of the control. Finally, a balanced beam splitter re-interferes the two paths, with state tomography being performed on one of the output ports of the beam splitter. In the order of interaction with an incoming photon, the three LCWPs in channel \(N_{a}\) apply a controlled rotation in the \(Z\) axis followed by two rotations in the \(X\) axis, whereas the three LCWPs in channel \(N_{b}\) apply a controlled rotation in the \(X\) axis followed by two rotations in the \(Z\) axis.
implementations, with all implementation-specific symbols being denoted with the unbracketed superscript \(coh\) and \(inc\) respectively. More precisely, in the phase-coherent implementation, four different operators are randomly implemented for each channel. These operators are, for \(N_{a}\) and \(N_{b}\) respectively,
\[\begin{split}\{\hat{K}_{i}^{(a),coh}\}_{i}=\{\sqrt{p_{0}}s_{0}^{(a )}\hat{\sigma}_{0},\sqrt{p_{1}}s_{1}^{(a)}\hat{\sigma}_{1},\\ \sqrt{p_{2}}s_{2}^{(a)}\hat{\sigma}_{2},\sqrt{p_{3}}s_{3}^{(a)} \hat{\sigma}_{3}\};\\ \{\hat{K}_{j}^{(b),coh}\}_{j}=\{\sqrt{p_{0}}s_{0}^{(b)}\hat{ \sigma}_{0},\sqrt{p_{1}}s_{1}^{(b)}\hat{\sigma}_{1},\\ \sqrt{p_{2}}s_{2}^{(b)}\hat{\sigma}_{2},\sqrt{p_{3}}s_{3}^{(b)} \hat{\sigma}_{3}\}.\end{split} \tag{6}\]
where \(\hat{U}_{i}^{(a)}=s_{i}^{(a)}\hat{\sigma}_{i}\) and \(\hat{U}_{j}^{(b)}=s_{j}^{(b)}\hat{\sigma}_{j}\), and \(s_{i}^{(a)}\) and \(s_{j}^{(b)}\) are the phase factors associated with each Pauli operators in the phase-coherent implementation. The probabilities \(p_{m}\) and phase factors \(s_{i}^{(a)}\) and \(s_{j}^{(b)}\) are given by
\[p_{m}=\begin{cases}1-3\alpha/4&m=0;\\ \alpha/4&m\neq 0,\end{cases} \tag{7}\]
\[\begin{split} s_{m}^{(a)}=s_{m}^{(b)}=1\text{ for }\;m\neq 2;\\ s_{2}^{(a)}=i;\\ s_{2}^{(b)}=-i.\end{split} \tag{8}\]
The \(s_{2}^{(a)}\) and \(s_{2}^{(b)}\) phase factors corresponds to the configuration of the LCWPs in \(N_{a}\) and \(N_{b}\) in our physical setup, where a \(Z\) aligned LCWP comes before the \(X\) aligned LCWP in \(N_{a}\) and the reverse being true for \(N_{b}\). In the phase-incoherent implementation, we add four additional operators to the two operator sets in the phase-coherent implementation. These four additional operators correspond to the operators in the phase-coherent implementation with an additional \(\pi\) phase. The phase-incoherent implementation, therefore, has operators
\[\begin{split}\{\hat{K}_{i}^{(a),inc}\}_{i}=\{\hat{K}_{j}^{(b), inc}\}_{j}=\\ \{\sqrt{(p_{0}/2)}\,s_{0}^{(a)}\hat{\sigma}_{0},\sqrt{(p_{1}/2)} \,s_{1}^{(a)}\hat{\sigma}_{1},\\ \sqrt{(p_{2}/2)}\,s_{2}^{(a)}\hat{\sigma}_{2},\sqrt{(p_{3}/2)} \,s_{3}^{(a)}\hat{\sigma}_{3},\\ -\sqrt{(p_{0}/2)}\,s_{0}^{(a)}\hat{\sigma}_{0},-\sqrt{(p_{1}/2)} \,s_{1}^{(a)}\hat{\sigma}_{1},\\ -\sqrt{(p_{2}/2)}\,s_{2}^{(a)}\hat{\sigma}_{2},-\sqrt{(p_{3}/2)} \,s_{3}^{(a)}\hat{\sigma}_{3}\}.\end{split} \tag{9}\]
Note in the phase-incoherent implementation, the random unitary operators implemented in \(N_{a}\) and \(N_{b}\) are not necessarily equal, but rather are drawn independently from the same set of unitaries. Experimentally, this \(\pi\) phase is implemented by the third LCWP with an optical axis aligned perpendicularly to the second LCWP. By tuning the voltages of both LCWP simultaneously, a polarization independent phase shift can be applied to the photons. This phase acts as a global phase when channels \(N_{a}\) and \(N_{b}\) are not in superposition.
Figure 2: Bloch sphere representation of the normalized post-selected qubit channel; note the re-scaled axes for the centre and right plots, which run from -0.5 to 0.5. The left plot (a) shows the Bloch sphere representation of an identity channel and can be used as a reference to interpret other Bloch sphere plots. The centre plot (b) shows the post-selected channel for the phase-coherent implementation with \(\alpha=1\), resulting in a Bloch representation that takes the shape of a slanted ellipsoid centred at \((0.10,-0.01,0.14)\) parameterized by three semi-axes \((0.16,-0.018,0.15)\), \((0.055,0.082,-0.017)\), and \((0.011,-0.0088,-0.014)\). Red lines which align with the axis and go through the origin have also been drawn for visual clarity. In theory, this ellipsoid should be an elliptical disc parameterized by the two semi-axes \((\sqrt{2}/9,0,\sqrt{2}/9)\) and \((0,1/9,0)\) centred at \((1/9,0,1/9)\). This deviation from theory is caused by systematics described in appendix C. The existence of a disc with a definite non-vanishing area is distinct to that of a completely depolarizing channel, such as the one on the right (c), where the plot shows the post-selected channel for the phase-incoherent implementation, resulting in a point-like shape in its Bloch representation. Theoretically, this plot should represent a completely depolarizing channel and the channel should be shown as a point on the Bloch sphere plot.
## III Implementation dependence
A distinct feature of superposing channels is the dependence of the superposed channel map on the operators used to implement the non-superposed channels used in the superposition. As a consequence, the choice of using implementations 6 or 9 for the simple qubit channels \(N_{a}\) and \(N_{b}\) will result in two different versions of the two-qubit channel \(\mathbf{\Phi}^{(MZ)}\left(\rho^{(MZ)}\right)\).
When the path qubit is prepared and post-selected on \(\left|+\right\rangle=\frac{1}{\sqrt{2}}\left(\left|\mathbf{a}\right\rangle+ \left|\mathbf{b}\right\rangle\right)\), the resulting post-selected qubit channel
\[\mathbf{\Phi}^{(p)}\left(\rho^{(p)}\right)=\sum_{i,j}\hat{K}_{i,j}^{(p)}\rho^ {(p)}\hat{K}_{i,j}^{(p)\dagger}, \tag{10}\]
will have Kraus operators \(\hat{K}_{i,j}^{(p)}\) given by
\[\hat{K}_{i,j}^{(p)}=\sqrt{p_{i}^{(a)}p_{j}^{(b)}}\,\frac{\hat{U}_{i}^{(a)}+ \hat{U}_{j}^{(b)}}{2}. \tag{11}\]
The implementation-dependence of the post-selected qubit channel can be seen as a consequence of \(\hat{K}_{ij}^{(p)}\) being sensitive to the global phases, which are reflected in \(s_{i}^{(a)}\) and \(s_{j}^{(b)}\), of the two channels \(N_{a}\) and \(N_{b}\). These global phases of the two channels become phases between states \(\left|\mathbf{a}\right\rangle\) and \(\left|\mathbf{b}\right\rangle\) in the control qubit when the channels are put in a superposition. The Pauli decomposition of the post-selected channel helps us illuminate the nature of the post-selected channel's dependence on channel implementation. For the phase-coherent implementation, the Pauli decomposition of the set of Kraus operators \(\hat{K}_{i,j}^{(p)}\) given by \(f_{k,l}=\sum_{i,j}Tr\left[\hat{K}_{i,j}^{(p)}\hat{\sigma}_{k}^{\dagger}\right] \cdot Tr\left[\hat{K}_{i,j}^{(p)}\hat{\sigma}_{l}^{\dagger}\right]^{*}/4\) results in
\[f_{k,l}^{\mathrm{coh}}=\begin{cases}\frac{p_{k}}{2}+\frac{p_{l}^{2}}{2}\, \mathrm{Re}\left[s_{k}^{(a)}s_{k}^{(b)*}\right]&k=l;\\ \frac{p_{k}p_{l}}{4}\left(s_{k}^{(a)}s_{l}^{(b)*}+s_{k}^{(b)}s_{l}^{(a)*} \right)&k\neq l.\end{cases} \tag{12}\]
The phase-dependence of the post-selected channel can thus be seen by noting equation 12's dependence on the \(s_{k}^{(a)}s_{l}^{(b)*}\) terms.
Examining figure 2, which shows the Bloch sphere representation of the post-selected channel, we can see that the eigenstate with the positive eigenvalue for the Hadamard unitary \(\hat{H}=1/\sqrt{2}\left(\hat{\sigma}_{x}+\hat{\sigma}_{z}\right)\) experiences the least amount of depolarization from the post-selected channel in the phase-coherent implementation. An explanation for this is given in appendix D.
In contrast, the phase-incoherent implementation has a Pauli decomposition that yields
\[f_{k,l}^{inc}=\begin{cases}\frac{p_{k}}{2}&k=l;\\ 0&k\neq l.\end{cases} \tag{13}\]
The inclusion of a duplicate set of operators with a \(\pi\) phase shift causes the contribution of the operator phase components \(s_{k}^{(a)}\) and \(s_{l}^{(b)*}\) to cancel out in the Pauli decomposition. Substituting the probabilities of equation 7 into 13, we have \(f_{k,l}^{inc}=\frac{1}{2}d_{k,l}\). This indicates that the post-selected channel given by the phase-incoherent implementation results in a depolarizing channel with a post-selection probability of \(1/2\), exactly the same as the channels \(N_{a}\) and \(N_{b}\).
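The contrast between Eqs. (12) and (13) can be reproduced with a short numerical sketch (illustrative only, not the analysis code used for the figures): it assembles the post-selected Kraus operators of Eq. (11) from the phase-coherent sets of Eq. (6) and the phase-incoherent sets of Eq. (9), and compares the resulting \(f_{k,l}\).

```python
import numpy as np

sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

alpha = 1.0
p = [1 - 3 * alpha / 4] + [alpha / 4] * 3
s_a = [1, 1, 1j, 1]    # phase factors of Eq. (8)
s_b = [1, 1, -1j, 1]

def f_matrix(phase_mixing):
    signs = [1, -1] if phase_mixing else [1]   # extra pi phases of Eq. (9)
    f = np.zeros((4, 4), dtype=complex)
    for i in range(4):
        for j in range(4):
            for ea in signs:
                for eb in signs:
                    w = np.sqrt(p[i] * p[j]) / len(signs)   # Kraus weight
                    # post-selected Kraus operator of Eq. (11)
                    K = w * (ea * s_a[i] * sig[i] + eb * s_b[j] * sig[j]) / 2
                    t = np.array([np.trace(K @ s.conj().T) for s in sig])
                    f += np.outer(t, t.conj()) / 4
    return f

print(np.round(f_matrix(False), 3))  # phase-coherent: off-diagonal terms survive (Eq. 12)
print(np.round(f_matrix(True), 3))   # phase-incoherent: diag(p_k/2), i.e. d_{kl}/2 (Eq. 13)
```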
Figures 2 and 3 highlight this implementation-dependence quantitatively through the tomographic reconstruction of the phase-coherent and phase-incoherent channels, and qualitatively in the channels' shape on the Bloch sphere. As seen from the figures, for the phase-coherent implementation with Kraus operators \(\{\hat{K}_{i}^{(a),coh}\}_{i}\) and \(\{\hat{K}_{i}^{(b),coh}\}_{j}\) with \(\alpha=1\), the post-selection results in a channel that is not completely depolarizing, even though the individual channels themselves are. This is in contrast to the phase-incoherent implementation with Kraus operators \(\{\hat{K}_{i}^{(a),inc}\}_{i}\) and \(\{\hat{K}_{j}^{(b),inc}\}_{j}\), where due to the randomization of channel phases, the effect of post-selection reduces to a classical mixture of depolarizing channels, resulting in a completely depolarizing channel. The result of superposing two phase-incoherent implementations also extends to
Figure 3: Plot of the density matrix for normalized post-selected qubit channel dual state with \(N_{a}\) and \(N_{b}\) for the phase-coherent (incoherent) implementation with \(\alpha=1\) on the left (right) with experimental results on the top and theory at the bottom. The dual state Hilbert space is labelled by two labels \(H\) or \(V\) first denoting the input polarization followed by the output polarization. The height of the bars and their colours represent the amplitudes and phase respectively of each element. Without phase mixing (left plots), post-selection partially restores coherence to the channel map. It also preferentially transmits H polarization, as indicated by the larger amplitude in \(\left|HH\right\rangle\left\langle HH\right|\) than the rest of the diagonal elements. With phase mixing (right plots), post-selection does not restore any coherence, with the resulting channel being a completely depolarizing channel.
the superposition of one phase-coherent and one phase-incoherent implementation.
## IV Qutrit model
The inability of the simple qubit channel map to translate directly to a unique description of the superposed channel demonstrates that it is insufficient for describing the physical operations of the channel under superposition. This is because of the introduction of the control qubit, which requires that we can effectively 'turn off' a quantum channel by sending in zero photons. Thus, apart from the description of the channel's action on the polarization qubit, an additional description of the channel action on the vacuum state needs to be included, as noted by [8; 17; 18], which in turn results in the simple qubit channels becoming qutrit channels: two states from the original polarization qubit channel, and one new state corresponding to the vacuum ('off') state. Here, we perform qutrit channel tomography for the aforementioned qutrit channels explicitly through the procedures described in appendix E. Our qutrit Hilbert space consists of the vacuum (zero photon) state \(\left|0\right\rangle\), the polarization state H \(\left|H\right\rangle\), and the polarization state V \(\left|V\right\rangle\). When performing tomography for channel \(N_{a}(N_{b})\), the preparation for the \(\left|0\right\rangle\) state is equivalent to not sending the photon into the channel \(N_{a}(N_{b})\). Conceptually, the qutrit tomography procedures described in appendix E are equivalent as follows. The preparation of the qutrit state is done by first preparing the photon in the H polarization and path state in either \(\left|\mathbf{a}\right\rangle\) or \(\left|\mathbf{b}\right\rangle\). Then, for the channel tomography for \(N_{a}(N_{b})\), we vary the photon polarization only if the photon is in the \(\left|\mathbf{a}\right\rangle\)(\(\left|\mathbf{b}\right\rangle\)) path state. This has the effect of preparing a state in the Hilbert space spanned by \(\left|0\right\rangle\), \(\left|V\right\rangle\), and \(\left|H\right\rangle\). For the tomography of channel \(N_{a}(N_{b})\), channel \(N_{b}(N_{a})\) is always set to the identity channel. Similar to the input qutrit preparation, the qutrit is measured by performing polarization measurement only on the \(\left|\mathbf{a}\right\rangle\)(\(\left|\mathbf{b}\right\rangle\)) path and a measurement of the path qubit. A setup equivalent to the qutrit tomographic procedure is outlined in figure 6.
Figures 4 and 5 show the tomographic results for qutrit channels of \(N_{a}\) and \(N_{b}\) for both implementations with \(\alpha=1\). In these figures, the effects of the operator phase \(s_{i}\) of the two implementations can be observed in the coherences between the polarizations and the vacuum state. Both channels in the phase-coherent implementation partially preserve coherence between \(\left|00\right\rangle\) and \(\left|HH\right\rangle\) but differ by the coherence between \(\left|00\right\rangle\) and \(\left|VH\right\rangle\) (which is partially preserved for \(N_{a}\) and not for \(N_{b}\)), and the coherence between \(\left|00\right\rangle\) and \(\left|HV\right\rangle\) (which is partially preserved for \(N_{b}\) and not for \(N_{a}\)). On the other hand, neither channel preserves any coherences in the phase-incoherent implementation, resulting in a purely diagonal channel dual state.
## V Channel hierarchy
Fundamentally, the simple qubit (\(\mathbf{\Phi}^{(a)}\left(\rho^{(a)}\right)\) and \(\mathbf{\Phi}^{(b)}\left(\rho^{(b)}\right)\)), post-selected (\(\mathbf{\Phi}^{(p)}\left(\rho^{(p)}\right)\)), Mach-Zehnder (\(\mathbf{\Phi}^{(MZ)}\left(\rho^{(MZ)}\right)\)), and qutrit (\(\mathbf{\Phi}^{(T0)}\left(\rho^{(T0)}\right)\) and \(\mathbf{\Phi}^{(T1)}\left(\rho^{(T1)}\right)\)) channel model(s) are descriptions of the same setup with differing levels of detail. It is therefore helpful to order those different descriptions based on these levels. The simple qubit and post-selected channels can be derived from the Mach-Zehnder channel by preparing and measuring the control in the appropriate state, and the Mach-Zehnder channel can in turn be derived by preparing and measuring the input and output in the one-photon subspace of the two-qutrit channel. Based on the channel description's level of detail, the channel models can be partially ordered in the following way
\[\mathbf{\Phi}^{(2T)}\succ\mathbf{\Phi}^{(MZ)}\succ\mathbf{\Phi}^ {(a)},\mathbf{\Phi}^{(b)}, \tag{14a}\] \[\mathbf{\Phi}^{(2T)}\succ\mathbf{\Phi}^{(T0)}\succ\mathbf{\Phi}^ {(a)},\] (14b) \[\mathbf{\Phi}^{(2T)}\succ\mathbf{\Phi}^{(T1)}\succ\mathbf{\Phi}^ {(b)}, \tag{14c}\]
where any channel description ordered lower on the partial ordering can be extracted from a channel description ordered higher on the partial ordering, either by taking a subspace or post-selecting on the channel higher on the ordering. More broadly speaking, our partial ordering here is a specific kind of channel divisibility [20; 21] that involves dimensional reduction.
With this partial ordering in mind, we compare the maximum coherent information of our depolarizing channels under different channel models and implementations. The maximum coherent information is given by
\[I_{c}\left(\mathbf{\Phi}\right)=\] \[\max_{\left\{\rho\right\}}\left[H\left(\mathbf{\Phi}\left(\rho \right)\right)-H\left(\left(\mathbf{\Phi}\otimes\mathbb{I}\right)\left(vec \left(\sqrt{\rho}\right)\cdot vec\left(\sqrt{\rho}\right)^{\dagger}\right) \right)\right], \tag{15}\]
where \(vec\left(\sqrt{\rho}\right)\) is the purification of the state \(\rho\) and \(H\) is the von Neumann entropy, given by
\[H\left(\rho\right)=-\mathrm{Tr}\left[\rho\log\left(\rho\right)\right]. \tag{16}\]
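As an illustrative lower bound on Eq. (15) (not the optimization used for figure 7), the sketch below evaluates the coherent information of a single depolarizing qubit channel at the maximally mixed input, with entropies computed in bits:

```python
import numpy as np

sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def coherent_info(alpha):
    p = [1 - 3 * alpha / 4] + [alpha / 4] * 3
    kraus = [np.sqrt(pk) * s for pk, s in zip(p, sig)]
    rho_in = np.eye(2) / 2
    rho_out = sum(K @ rho_in @ K.conj().T for K in kraus)
    # purification of rho_in is the Bell state on system + reference
    bell = np.zeros(4, dtype=complex)
    bell[0] = bell[3] = 1 / np.sqrt(2)
    rho_sr = np.outer(bell, bell.conj())
    out_sr = sum(np.kron(K, np.eye(2)) @ rho_sr @ np.kron(K, np.eye(2)).conj().T
                 for K in kraus)
    return entropy(rho_out) - entropy(out_sr)

for a in (0.0, 0.25, 1.0):
    print(a, round(coherent_info(a), 4))   # 1 for the identity channel, -1 when fully depolarized
```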
Figure 7 shows the maximum coherent information at various levels of depolarization. The capacity enhancement when simple qubit channels are post-selected in a superposition can be seen at \(\alpha\) going from \(0\) to approximately \(0.4\), where the maximum coherent information from the post-selected channel is strictly greater than that of the qubit channel.
It is important to stress that the maximum coherent information for the two-qutrit channel, unsurprisingly, is strictly greater than that of all other channel models under investigation. Therefore, no capacity enhancement would be found if one used the appropriate two-qutrit channel model or superposition channel model to describe the post-selected channel.
## VI Discussion
The capacity enhancement in coherently controlled channels may seem surprising at first, yet this surprise is perhaps due to the implicit assumption that a coherently controllable channel can be trivially implemented using its non-controlled counterpart. This is not true, as the inclusion of the control qubit necessitates an expanded description of the channels to accommodate the transmission of the extra control qubit. For the case of superposing qubit channels, the channel action on vacuum needs to be accounted for. To illustrate this, we have experimentally reconstructed relevant qutrit channels for three different implementations of the depolarizing channel. Indeed, for depolarizing channels, the quantum information that one can send through the superposition is strictly less than that of the relevant qutrit channels. It is therefore more appropriate to attribute the increase in channel capacity of superposing channels to the required expansion of the channel's input and output Hilbert space, rather than the act of superposition itself.
Figure 4: Plot of channel dual state for \(N_{a}\) (\(N_{b}\)) for the phase-coherent implementation with the degree of depolarization \(\alpha=1\) on the left (right) with the qutrit model in the top row, the qutrit model from theory in the middle row and qubit model in the bottom row. The vacuum state is labelled as zero for compactness. The difference between the channels \(N_{a}\) and \(N_{b}\) is highlighted in the qutrit models by the coherence between \(|00\rangle\) and \(|VH\rangle\) (\(|HV\rangle\)), despite them both having an identical qubit model description. It is important to note that any terms associated with an input polarization state and an output zero state or vice versa (e.g. \(|0V\rangle\) or \(|H0\rangle\)) are set to zero in our fitting model. Given our method of tomography, an input polarization state becoming an output zero (or vice versa) corresponds to a photon disappearing in one path of the Mach-Zehnder and reappearing in the other path, which does not correspond to any physical mechanism present in our experiment. Appendix C elaborates on the systematic errors that contribute to the deviations between experiment and theory.
While the implementation-dependence of a coherently controlled channel on its non-coherently controlled counterpart is a feature of the superposed channel that is not shared by all coherently controlled channels, all coherent control schemes require the transmission of an extra control qubit. Our work provides insight into how channel expansion contributes to the capacity enhancement in coherently controlled channels.
###### Acknowledgements.
This work was supported by grant number "FQXi FFF Grant number FQXi-RFP-1819" from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation; the Natural Sciences and Engineering Research Council (NSERC) of Canada; and CIFAR. It also made use of some equipment purchased with the assistance of the Fetzer Franklin Fund of the John E. Fetzer Memorial Trust. A.M.S is a fellow of CIFAR.
Figure 5: Plot of qutrit channel dual state with \(N_{a}\) (\(N_{b}\)) for the phase-incoherent implementation with the degree of depolarization \(\alpha=1\) on the left (right) with the qutrit model in the top row, the qutrit model from theory in the middle row and qubit model in the bottom row. The phase randomization of the qubit channels manifests as a loss of coherence between \(|00\rangle\) and \(|VH\rangle\) (\(|HV\rangle\)) in the qutrit map, resulting in a purely diagonal channel dual state. Appendix C elaborates on the systematic errors that contribute to the deviations between experiment and theory.
## Appendix A LCWP Characterization
We used a total of six LCWPs in our experiment, two from Thorlabs and four from Meadowlark Optics. They are distributed in the Mach-Zehnder interferometer such that there are two Meadowlark LCWPs and one Thorlabs LCWP in each arm, ordered such that incoming light always goes through the two Meadowlark LCWPs first. The voltage-to-phase-retardance curve for each LCWP is calibrated by sending horizontally polarized light through the LCWP with its optical axis at 45 degrees and measuring the transmitted light in the horizontal/vertical basis. Voltage-dependent absorption was also characterized, and was determined to be at most a 3% change.
## Appendix B Photon Source
We generated \(808nm\) SPDC photons using PPKTP type-II co-linear down-conversion, where one of the photons is used as a herald, resulting in \(\sim 63,000\) heralded single photons per second. After the single-mode fibre-coupling into a detector, we detect \(\sim 8,000\) heralded single photons per second, with losses due to unwanted absorption, reflection, and single-mode coupling inefficiencies.
## Appendix C Systematic errors
Deviations between the theoretical and experimental results in figures 2, 4 and 5 are dominated by three sources of systematic error. The first is a phase fluctuation in our Mach-Zehnder interferometer of about \(0.3\ rad\) (comparable to the phase error of \(\sim 0.43\ rad\) between \(|VH\rangle\) and \(\langle 00|\) in figure 4 for the phase-coherent implementation). The second is the calibration error in the polarization rotation axis of the LCWPs of about \(0.06\ rad\). Finally, there is a polarization-dependent absorbance of at most \(\sim 7\%\) present in our setup.
## Appendix D Post-selected channel map of phase-coherent implementation
There are three main reasons for the post-selected channel to take the form given in figure 2b. Firstly, due to the phases \(s_{2}^{(a)}\) and \(s_{2}^{(b)}\), the post-selection filters out the cases where both \(N_{a}\) and \(N_{b}\) implement \(\hat{\sigma}_{2}\), which takes one eigenstate of the Hadamard to the other, as the random unitary. Secondly, when one of the channels implements \(\hat{\sigma}_{1}\) and the other implements \(\hat{\sigma}_{3}\), the resulting operator that an input state experiences is the Hadamard, which leaves the unitary's eigenstates unchanged. Thirdly, when one of the channels implements the identity and the other implements either \(\hat{\sigma}_{1}\) or \(\hat{\sigma}_{3}\), the resulting operator is a projector onto the eigenstate with the positive eigenvalue of \(\hat{\sigma}_{1}\) or \(\hat{\sigma}_{3}\) respectively. These two states have a higher overlap with the eigenstate with the positive eigenvalue (than with the negative eigenvalue) of the Hadamard unitary. Overall, this results in the post-selected channel leaving the positive-eigenvalued eigenstate of the Hadamard the least disturbed, resulting in a channel that has a Bloch sphere representation of an elliptical disc with its longest semi-axis in the direction of the Hadamard eigenstates and its center displaced towards the positive-eigenvalued Hadamard eigenstate, as illustrated in figure 2.
## Appendix E Qutrit Tomography
Here, we perform qutrit channel tomography for the aforementioned qutrit channels explicitly. To perform tomography on the vacuum state, we note that the unitary operators for the qutrit channels \(N_{a}\) and \(N_{b}\) can be
Figure 6: The conceptually equivalent setup for qutrit channel tomography. This setup allows the preparation and measurement of a quantum state spanned by the bases \(|0\rangle\) (which is equivalent to having the photon in the interferometer path without the qutrit channel), \(|V\rangle\), and \(|H\rangle\). In the qutrit preparation stage, \(|H\rangle\) polarized light is sent to a beamsplitter, with one arm of the beamsplitter sent to an HWP and a QWP which turns the \(|H\rangle\) polarized light into arbitrary polarizations. A similar but reversed setup is used for qutrit measurement. Phase between \(|0\rangle\) and the polarization states for both the preparation and measurement is tuned by a common phase plate, which is possible due to the assumption that the qutrit channel does not induce amplitude exchange between the \(|0\rangle\) state and the polarization states. Finally, a tomographically complete preparation and measurement can be accomplished by blocking one or none of the paths in the interferometer.
described by
\[\hat{U}_{ij}^{(2T)}=\hat{U}_{i}^{(T0)}\otimes\hat{U}_{j}^{(T1)}=\left(\left|0\right\rangle\left\langle vac\right|^{(a)}\oplus\hat{U}_{i}^{(a)}\right)\otimes\left(\left|0\right\rangle\left\langle vac\right|^{(b)}\oplus\hat{U}_{j}^{(b)}\right), \tag{10}\]
with \(\hat{U}_{i}^{(T0)}\) and \(\hat{U}_{j}^{(T1)}\) being the unitary operators for channels \(N_{a}\) and \(N_{b}\) described under the qutrit channel model. These sets of qutrit unitaries form the random unitary qutrit channels \(\mathbf{\Phi}^{(T0)}\) and \(\mathbf{\Phi}^{(T1)}\) (and that \(\mathbf{\Phi}^{(2T)}=\mathbf{\Phi}^{(T0)}\otimes\mathbf{\Phi}^{(T1)}\)), which has the feature where it preserves photon number.
For any accurate tomographic reconstruction of the channel \(\mathbf{\Phi}\), we require a way to extract \(\left|c\right|^{2}\) where
\[\left|c\right|^{2}=\left\langle\psi^{\prime}\middle|\mathbf{\Phi}\left(\left| \psi\right\rangle\left\langle\psi\right|\right)\left|\psi^{\prime}\right\rangle \tag{11}\]
for a completely spanning set of states \(\left|\psi\right\rangle\) and \(\left|\psi^{\prime}\right\rangle\). In our experiment, we have direct access to the one-photon subspace of the two-qutrit channel \(\mathbf{\Phi}^{(2T)}\) in the form of our Mach-Zehnder channel \(\mathbf{\Phi}^{(MZ)}\). In the Mach-Zehnder channel, the vacuum state for channel \(N_{a}\) (\(N_{b}\)) can be accessed by preparing the path state to be in \(\left|\mathbf{b}\right\rangle\) (\(\left|\mathbf{a}\right\rangle\)).
The probability of measuring a certain output state \(\left|\psi_{path}^{\prime}\right\rangle\otimes\left|\psi_{pol}^{\prime}\right\rangle\) given an input state \(\left|\psi_{path}\right\rangle\otimes\left|\psi_{pol}\right\rangle\) is given by \(\left|c_{MZ}\right|^{2}\), where
\[c_{MZ}=\left\langle\psi_{pol}^{\prime}\middle|\hat{U}_{i}^{(a)} \left|\psi_{pol}\right\rangle\cdot\left\langle\psi_{path}^{\prime}\middle| \mathbf{a}\right\rangle\!\left\langle\mathbf{a}\middle|\psi_{path}\right\rangle\\ +\left\langle\psi_{pol}^{\prime}\middle|\hat{U}_{i}^{(b)}\left| \psi_{pol}\right\rangle\cdot\left\langle\psi_{path}^{\prime}\middle|\mathbf{b }\right\rangle\!\left\langle\mathbf{b}\middle|\psi_{path}\right\rangle. \tag{12}\]
To perform qutrit channel tomography on channel \(N_{a}\), we need to set \(\hat{U}_{j}^{(b)}\left|\psi_{pol}\right\rangle=\left|\psi_{pol}^{\prime}\right\rangle\), where \(\hat{U}_{j}^{(b)}\) takes the input polarization state to the output polarization state. Thus, the probability amplitude \(c_{T0}\) for the qutrit channel can be found by substituting this condition into equation 12, which gives
\[c_{T0}=\left\langle\psi_{pol}^{\prime}\middle|\hat{U}_{i}^{(a)} \left|\psi_{pol}\right\rangle\cdot\left\langle\psi_{path}^{\prime}\middle| \mathbf{a}\right\rangle\!\left\langle\mathbf{a}\middle|\psi_{path}\right\rangle \\ +\left\langle\psi_{path}^{\prime}\middle|\mathbf{b}\right\rangle \!\left\langle\mathbf{b}\middle|\psi_{path}\right\rangle\\ =\left\langle\psi_{trial}^{\prime}\right|\hat{U}_{i}^{(T0)}\left| \psi_{trial}\right\rangle, \tag{13}\]
where
\[\left|\psi_{trial}\right\rangle=\sqrt{1-\left|a_{0}\right|^{2}}\left|\mathbf{a}\right\rangle\otimes\left|\psi_{pol}\right\rangle+a_{0}\left|\mathbf{b}\right\rangle\otimes\left|0\right\rangle^{(a)} \tag{14}\]
and
\[\left|\psi_{trial}^{\prime}\right\rangle=\sqrt{1-\left|a_{0}^{\prime}\right|^{2}}\left|\mathbf{a}\right\rangle\otimes\left|\psi_{pol}^{\prime}\right\rangle+a_{0}^{\prime}\left|\mathbf{b}\right\rangle\otimes\left|0\right\rangle^{(a)} \tag{15}\]
where \(a_{0}\) (\(a_{0}^{\prime}\)) is the probability amplitude of zero photons being sent to (measured from) channel \(N_{a}\). \(a_{0}\) and \(a_{0}^{\prime}\) can take on the values of \(1\) and \(0\) when one of the paths of the interferometer is physically blocked, or the value of \(1/\sqrt{2}\) when both paths are unblocked. The probability of measuring some output state given any input state will, as a result, be independent of polarization when the path qubit is in the \(\left|\mathbf{b}\right\rangle\) state, effectively reducing the dimensions of the channel from a two-qubit channel to a qutrit channel where three orthogonal states exist for the entire apparatus - one for the photon in path \(1\), one for photon in path \(0\) and horizontally polarized, and one for the photon to be in path \(0\) and vertically polarized.
Figure 7: The maximum coherent information for different channel models in the phase-coherent implementation. The plot for the two-qutrit channel in both theory and experiment is calculated by adding the maximum coherent information for qutrit path \(0\) and path \(1\) instead of optimizing over possible input state as we did for the other channels. The two qutrit channel coherent information should therefore be interpreted as the lower bound for the maximum coherent information. Nonetheless, the plot still indicates that the channels follow the established hierarchy in that no channel model higher in the partial ordering has less maximum coherent information than a channel model lower in the ordering. This is true for both experiment and theory. |
2309.13583 | Strangelets formation in high energy heavy-ion collisions | The properties of phase diagram of strange quark matter in equilibrium with
hadronic matter at finite temperature are studied, where the quark phase and
hadron phase are treated by baryon density-dependent quark mass model and
hadron resonance gas model with hard core repulsion factor, respectively. The
thermodynamic conditions for the formation of metastable strange quark droplets
("strangelets") in relativistic nuclear collisions are discussed. We obtained a
rich structure of the phase diagram at finite temperature, and study the
dynamical trajectories of an expanding strange fireball. Our results indicate
that the strangeness fraction fs, perturbation parameter C, and confinement
parameter D have strong influence on the properties of phase diagram and the
formation of strangelets. Consider the isentropic expansion process, we found
that the initial entropy per baryon is less than or equal to 5, which gives a
large probability for the formation of strangelets. Furthermore, a sufficiently
large strangeness fraction fs and one-gluon-exchange interaction and
sufficiently small confinement interaction create possibilities for the
formation of strangelets. On the contrary, the fireball will always complete
the hadronization process when fs=0 or C>=0 or D^{1/2}>=170 MeV. | Huai-Min Chen, Cheng-Jun Xia, Guang-Xiong Peng | 2023-09-24T08:53:52Z | http://arxiv.org/abs/2309.13583v2 | # Phase structure in the baryon density-dependent quark mass model
###### Abstract
The properties of phase diagram of strange quark matter in equilibrium with hadronic matter at finite temperature are studied, where the quark phase and hadron phase are treated by baryon density-dependent quark mass model and hadron resonance gas model with hard core repulsion factor, respectively. Our results indicate that the strangeness fraction \(f_{s}\), perturbation parameter \(C\), and confinement parameter \(D\) have strong influence on the properties of phase diagram and the formation of strangelets, where a large \(f_{s}\), small \(C\) and \(D\) favor the formation of strangelets. Consider the isentropic expansion process, we found that the initial entropy per baryon is about 5, which gives a large probability for the formation of strangelets. Furthermore, as the strangeness fraction \(f_{s}\) and one gluon-exchange interaction strength \(C\) decrease and confinement parameter \(D\) increases, the reheating effect becomes more significant, reducing the possibility of forming strangelets. The new phase diagram could support a massive compact star with the maximum mass exceeding twice the solar mass and have a significant impact on the mass-radius relationship for hybrid stars.
## I Introduction
The phase transition between hadronic and quark matter is one of the significant and challenging fields of modern physics related to heavy-ion collisions, hybrid stars and hadronization in the early universe. In nature, on the one hand, pulsars provide a unique natural astrophysical laboratory for exploring the properties of strongly interacting matter, with hadronic matter at the core likely to undergo deconfinement phase transitions [1; 2]. If true, the hadron-quark phase diagram and phase transition mechanism have important implications for the structure and evolution of pulsar-like objects. In particular, the mass-radius relationship of pulsars is directly related to the equation of state of strongly interacting matter, so the observation of pulsars becomes the strongest constraints on strongly interacting matter at large densities and provides insight into dense matter, astrophysics, and cosmology [3; 4].
On the other hand, the Relativistic Heavy Ion Collider (RHIC) experiments carried out in the early 2000s reached collision energies that could not be achieved in previous heavy-ion experiments, and created matter with properties never seen at lower beam energies [5; 6; 7; 8]. It was supposed that the collisions could recreate the conditions of the early universe and reveal a new state of matter in which quarks and gluons have been liberated from confinement [9; 10; 11; 12; 13]. This state of matter is a hot Quark-Gluon Plasma (QGP). Droplets of strange quark matter (SQM), i.e., strangelets, may be formed during the cooling process of the QGP [10; 11; 12; 14], which could serve as an unmistakable signature for QGP formation in the laboratory. In reality, there are many heavy-ion experiments searching for strangelets [15; 16; 17; 18; 19; 20]. Therefore, it is necessary to estimate the phase transition process in compact stars and heavy-ion collisions.
At present, several effective theories have been developed to study phase diagram of cold and hot strongly interacting matter, drawing important conclusions. Based on the MIT bag model, Lee and Heinz [21] have presented a detailed discussion of the phase structure of strange quark matter with finite strangeness and the thermodynamic conditions for the formation of strangelets in relativistic nuclear collisions, and studied the isentropic expansion process of strange quark matter systems in phase diagram. On the basis of previous study, the influence of finite volume effect on phase diagram and evolution of strangelet is further considered [22]. Using a Brueckner-Hartree-Fock approach for the hadronic equation of state and a generalized MIT bag model for the quark part, Maruyama and coworkers [23] investigated the hadron-quark phase transition occurring in beta-stable matter in hyperon stars and analysed the differences between Gibbs and Maxwell phase transition constructions. Lugones and Grunfeld found that the surface tension under the MIT bag model is lower than the critical value in favor of the existence of the strangelets [24; 25]. In Shao's work, they studied the influence of vector interactions on the hadron-quark/gluon phase transition in the two-phase model, where quark matter is described by the PNJL model, and hadron matter by the nonlinear Walecka model [26].
In addition, there are several other effective models describing quark matter, such as quasiparticle model [27; 28; 29; 30], quark-cluster model [31; 32; 33], perturbation model [34; 35; 36; 37; 38; 39], and so on [40; 41; 42; 43]. In the present paper, we apply the baryon density-dependent quark mass model considering both confinement and first-order perturbation interactions to comprehensively study the phase diagram
of the quark-gluon plasma phase in equilibrium with a finite hadronic gas and carefully analyse the formation of strangelets in isentropic expansion processes. The model was proved to be thermodynamically self-consistent in previous work [45; 46]. Based on the equation of state from the phase diagram, we obtain the mass-radius relations of hybrid stars for different parameters and compare them with observed stellar mass-radius data.
The paper is organized as follows. In Sec. II, we give the thermodynamic treatment and equation of state of quark phase at finite temperatures in the framework of the baryon density-dependent quark mass model. In Sec. III, we consider the Hagedorn factor in hadronic phase, and give the thermodynamic treatment and equation of state. In Sec. IV, we present the numerical results about the properties of phase transition at finite temperature, where the phase equilibrium condition, phase diagram, and isentropic expansion process are discussed. In Sec. V, we calculate the numerical results for the properties of hybrid stars. Finally, a summary is given in Sec. VI.
## II The quark-gluon plasma phase
Following the previous papers [46], we consider that the system comprises quarks, electrons, their antiparticles, and gluons at finite temperatures. In the framework of a baryon density-dependent quark mass model, the contribution of various particles to the thermodynamic potential density can be written as
\[\Omega_{0}=\Omega_{0}^{+}+\Omega_{0}^{-}+\Omega_{0}^{g}. \tag{1}\]
The contribution of particles (\(+\)) and antiparticles (\(-\)) is
\[\Omega_{0}^{\pm}=\sum_{i}-\frac{d_{i}T}{2\pi^{2}}\int_{0}^{\infty}\ln\left[1+ e^{-(\sqrt{p^{2}+m_{i}^{2}}\mp\mu_{i}^{*})/T}\right]p^{2}\mathrm{d}p. \tag{2}\]
The contribution of gluons is

\[\Omega_{0}^{g}=\frac{d_{g}T}{2\pi^{2}}\int_{0}^{\infty}\ln\left[1-e^{-\sqrt{p^{2}+m_{g}^{2}}/T}\right]p^{2}\mathrm{d}p, \tag{3}\]
where the degeneracy factors are \(d_{q}=3\,(\mathrm{colors})\times 2\,(\mathrm{spins})=6\) for quarks (\(i=q\), \(q=u,d,s\)), \(d_{e}=2\) for electrons (\(i=e\)), and \(d_{g}=8\,(\mathrm{colors})\times 2\,(\mathrm{spins})=16\) for gluons (\(i=g\)).
In this work, we adopt a baryon density-dependent quark mass model to describe the quark mass, i.e. \(m_{i}=m_{i0}+m_{\mathrm{I}}(n_{b},T)\), where \(m_{i0}\) and \(n_{b}\) represent the current mass (\(m_{u0}=5\) MeV, \(m_{d0}=5\) MeV, \(m_{s0}=120\) MeV) [44] and the baryon density, respectively. We note that the masses of particles and antiparticles vary with the state variables, which corresponds to quark interactions. The quark mass scaling used is
\[m_{\mathrm{i}} = m_{i0}+\frac{D}{n_{\mathrm{b}}^{1/3}}\left(1+\frac{8T}{\Lambda }e^{-\Lambda/T}\right)^{-1} \tag{4}\] \[+\,Cn_{\mathrm{b}}^{1/3}\left(1+\frac{8T}{\Lambda}e^{-\Lambda/T}\right)\]
with \(\Lambda=280\) MeV, where \(D\) corresponds to the confinement parameter and \(C\) represents the strength of perturbative interactions. If \(C\) takes negative values, it represents the one-gluon-exchange interaction strength [47]. The baryon density-dependent quark mass model has been proved to satisfy thermodynamic self consistency in previous studies [45; 46].
By self-consistent thermodynamic treatment, we obtain various thermodynamic density quantities of the system as follows:
\[n_{i}^{Q}=-\frac{\partial\Omega_{0}}{\partial\mu_{i}^{*}}, \tag{5}\]
\[S^{Q}=-\frac{\partial\Omega_{0}}{\partial T}-\sum_{i=u,d,s,g}\frac{\partial m _{i}}{\partial T}\frac{\partial\Omega_{0}}{\partial m_{i}}, \tag{6}\]
\[P^{Q}=-\Omega_{0}+n_{b}\frac{\partial m_{I}}{\partial n_{b}}\frac{\partial \Omega_{0}}{\partial m_{I}}, \tag{7}\]
where the up index Q is used to label the quark phase.
To include the contribution of gluons to the system, we need the effective gluon mass. Recently, Borsanyi _et al._ provided 48 pressure values from lattice simulations [48]. Based on these lattice pressure data, we describe the gluon mass through the fast-convergence expression of the QCD coupling, fitting the 48 pressure values with the least-squares method to obtain the best-fit parameters. Here, we define the scaled temperature as \(x=T/T_{c}\), where \(T_{c}\) is the critical temperature. At \(T<T_{c}\), the expression for the equivalent gluon mass is
\[\frac{m_{g}}{T}=\sum_{i}a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}, \tag{8}\]
where \(a_{0}=67.018\), \(a_{1}=-189.089\), \(a_{2}=212.666\), \(a_{3}=-83.605\). At \(T>T_{c}\), the expression of gluon's equivalent mass is
\[\frac{m_{g}}{T}=\sum_{i}b_{i}\alpha^{i}=b_{0}+b_{1}\alpha+b_{2}\alpha^{2}+b_{3 }\alpha^{3}, \tag{9}\]
where \(\alpha\) is the strong coupling constant and \(b_{0}=0.218\), \(b_{1}=3.734\), \(b_{2}=-1.160\), \(b_{3}=0.274\). As is well known, the QCD coupling \(\alpha\) is running and depends on the solution of the renormalization-group equation for the coupling. Recently, we solved the renormalization-group equation for the QCD coupling in a mathematically rigorous
way and obtained a fast-convergence expression for \(\alpha\) [49]. Here, we only take the leading-order term, i.e.,
\[\alpha=\frac{\beta_{0}}{\beta_{0}^{2}\ln(u/\Lambda)+\beta_{1}\ln\ln(u/\Lambda)}, \tag{10}\]
where \(\beta_{0}=11/2-N_{f}/3\), \(\beta_{1}=51/4-19N_{f}/12\), \(u/\Lambda=\sum_{i}c_{i}x^{i}=c_{0}+c_{1}x\), \(c_{0}=1.054\), \(c_{1}=0.479\).
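For concreteness, the piecewise parametrization of Eqs. (8)-(10) can be sketched as below; the flavor number \(N_{f}\) is treated as an input because it is not fixed in the text above, so its default value here is an assumption.

```python
import math

a = [67.018, -189.089, 212.666, -83.605]  # Eq. (8), valid for T < T_c
b = [0.218, 3.734, -1.160, 0.274]         # Eq. (9), valid for T > T_c
c = [1.054, 0.479]                        # u / Lambda = c0 + c1 * x in Eq. (10)

def alpha_s(x, n_f=3):
    """Leading-order running coupling of Eq. (10), with x = T / T_c."""
    beta0 = 11.0 / 2.0 - n_f / 3.0
    beta1 = 51.0 / 4.0 - 19.0 * n_f / 12.0
    u_over_lambda = c[0] + c[1] * x
    return beta0 / (beta0 ** 2 * math.log(u_over_lambda)
                    + beta1 * math.log(math.log(u_over_lambda)))

def gluon_mass_over_T(x, n_f=3):
    """Scaled gluon mass m_g / T as a function of x = T / T_c."""
    if x < 1.0:  # Eq. (8)
        return sum(a_i * x ** i for i, a_i in enumerate(a))
    al = alpha_s(x, n_f)  # Eq. (9)
    return sum(b_i * al ** i for i, b_i in enumerate(b))

print(gluon_mass_over_T(0.8), gluon_mass_over_T(1.5))
```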
## III Hadronic phase
We consider the hadronic phase as a weakly interacting mixed gas of the strange hadrons \(K^{+},K^{0},\Lambda,\Sigma,\Xi,\Omega\), the non-strange hadrons \(\pi,\eta,N,\Delta(1232)\), and their antiparticles. According to the quark composition of each hadron, the hadron chemical potentials are built from the quark chemical potentials as follows
\[\mu_{i}=\sum_{q}(n_{q}^{i}-n_{\bar{q}}^{i})\mu_{q}, \tag{11}\]
where \(i\) and \(q\) label the hadronic species and quark flavor, respectively, and \((n_{q}^{i}-n_{\bar{q}}^{i})\) is the net number of quarks of flavor \(q\) in the \(i\)-th hadron.
Based on Bose\(-\)Einstein and Fermi\(-\)Dirac statistics, the expressions of the thermodynamic quantities for \(i\)-th noninteracting hadrons are
\[\varepsilon_{i}^{\rm pt}=\frac{d_{i}}{2\pi^{2}}\int_{0}^{\infty}\left[\frac{p^{2}\varepsilon_{i}}{e^{(\varepsilon_{i}-\mu_{i})/T}\pm 1}+\frac{p^{2}\varepsilon_{i}}{e^{(\varepsilon_{i}+\mu_{i})/T}\pm 1}\right]\mathrm{d}p, \tag{12}\]

\[P_{i}^{\rm pt}=\frac{d_{i}}{6\pi^{2}}\int_{0}^{\infty}\left[\frac{p^{4}}{\varepsilon_{i}(e^{(\varepsilon_{i}-\mu_{i})/T}\pm 1)}+\frac{p^{4}}{\varepsilon_{i}(e^{(\varepsilon_{i}+\mu_{i})/T}\pm 1)}\right]\mathrm{d}p, \tag{13}\]

\[n_{i}^{\rm pt}=\frac{d_{i}}{2\pi^{2}}\int_{0}^{\infty}\left[\frac{p^{2}}{e^{(\varepsilon_{i}-\mu_{i})/T}\pm 1}+\frac{p^{2}}{e^{(\varepsilon_{i}+\mu_{i})/T}\pm 1}\right]\mathrm{d}p, \tag{14}\]
\[S_{i}^{\rm pt} = \pm\frac{d_{i}}{2\pi^{2}}\int_{0}^{\infty}\left\{\ln[1\pm e^{-(\varepsilon_{i}-\mu_{i})/T}]\pm\frac{(\varepsilon_{i}-\mu_{i})/T}{e^{(\varepsilon_{i}-\mu_{i})/T}\pm 1}\right. \tag{15}\] \[\left.+\ln[1\pm e^{-(\varepsilon_{i}+\mu_{i})/T}]\pm\frac{(\varepsilon_{i}+\mu_{i})/T}{e^{(\varepsilon_{i}+\mu_{i})/T}\pm 1}\right\}p^{2}{\rm d}p,\]
where \(\varepsilon_{i}=\sqrt{p^{2}+m_{i}^{2}}\) is the single-particle energy, and the upper and lower operation symbols denote fermions and bosons, respectively. The parameter \(d_{i}\) represents the degeneracy factor \(d_{i}=\) spin \(\times\) isospin. Naturally, the total energy density, pressure, and baryon number density for the hadronic phase are
\[\varepsilon^{\rm pt} = \sum_{i}\varepsilon_{i}^{\rm pt}, \tag{16}\] \[P^{\rm pt} = \sum_{i}P_{i}^{\rm pt},\] (17) \[n_{\rm b}^{\rm pt} = \sum_{i}b_{i}n_{i}^{\rm pt}. \tag{18}\]
Here, a proper-volume correction for point-like hadrons, known as the Hagedorn correction factor [50; 21], is used to account for the hard-core repulsion. The energy density, pressure, baryon number density, and entropy density are then modified, i.e.,
\[E^{\rm H} = \frac{1}{1+\varepsilon^{\rm pt}/4B}\sum_{i}\varepsilon_{i}^{\rm pt}, \tag{19}\] \[P^{\rm H} = \frac{1}{1+\varepsilon^{\rm pt}/4B}\sum_{i}P_{i}^{\rm pt},\] (20) \[n_{\rm b}^{\rm H} = \frac{1}{1+\varepsilon^{\rm pt}/4B}\sum_{i}b_{i}n_{i}^{\rm pt},\] (21) \[S^{\rm H} = \frac{1}{1+\varepsilon^{\rm pt}/4B}\sum_{i}S_{i}^{\rm pt}, \tag{22}\]
where \(b_{i}\) is the baryon number of the \(i\)-th hadron. The factor \((1+\varepsilon^{\rm pt}/4B)^{-1}\) is the proper-volume correction and limits the energy density to \(4B\), where the bag constant is \(B^{1/4}=180\) MeV [44].
The number density of strange quarks is
\[n_{\rm s}^{\rm H}=\frac{1}{1+\varepsilon^{\rm pt}/4B}\sum_{i}s_{i}n_{i}^{\rm pt}, \tag{23}\]
where \(s_{i}\) is the strange valence quark number of the \(i\)-th hadron.
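As Eqs. (19)-(23) indicate, the correction amounts to rescaling every point-particle density by the common factor \((1+\varepsilon^{\rm pt}/4B)^{-1}\); a minimal sketch, assuming all quantities are expressed in the same natural units as \(B\), is given below.

```python
B_QUARTER = 180.0      # bag constant B^{1/4} in MeV
B = B_QUARTER ** 4     # MeV^4 in natural units

def hagedorn_correct(eps_pt, point_particle_densities):
    """Apply the proper-volume (Hagedorn) correction of Eqs. (19)-(23).

    eps_pt : total point-particle energy density (same units as B)
    point_particle_densities : dict of summed point-particle densities,
        e.g. {"E": ..., "P": ..., "n_b": ..., "S": ..., "n_s": ...}
    """
    factor = 1.0 / (1.0 + eps_pt / (4.0 * B))
    return {key: factor * value for key, value in point_particle_densities.items()}
```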
## IV Phase diagram and Isentropic expansion process at finite temperature
We consider an isolated system with finite strangeness that undergoes a first-order phase transition from the hadronic phase to the QGP phase. In the equilibrium phase diagram, the thermodynamic quantities satisfy the Gibbs equilibrium conditions, i.e., chemical equilibrium \(\mu_{i}^{\rm Q}=\mu_{i}^{\rm H}\), mechanical equilibrium \(P^{\rm Q}=P^{\rm H}\), and thermal equilibrium \(T^{\rm Q}=T^{\rm H}\). In the isolated system, the total net baryon number is kept constant. In addition, the total net strangeness in the compact system is conserved, since the duration of a heavy-ion collision is too short to establish flavor equilibrium [22]. The strangeness fraction is defined as
\[f_{\rm s}=n_{\rm s}^{\rm tot}/n_{\rm b}^{\rm tot}. \tag{24}\]
We consider that the strangeness fraction is in the range \(0\leq f_{\rm s}<3\). The system maintains a fixed strangeness fraction, resulting in a smooth variation of the chemical potentials during the conversion from hadronic matter to QGP.
The phase transitions occur through a mixed phase. For the quark phase, it is difficult for the system to achieve mechanical equilibrium since the quark mass will become infinite when \(n_{\rm b}^{\rm Q}\to 0\). Referring to the method used by He et al. [22], we define a ratio of the hadronic
phase volume to the total volume as \(\alpha=V^{\rm H}/V^{\rm tot}\); \(\alpha=0\) and \(\alpha=1\) correspond to the beginning and end of hadronization, respectively, from which we obtain the boundaries between the mixed phase and the hadronic or quark phase. Similarly, we define \(n_{\rm b}^{\rm Q}=N_{\rm b}^{\rm Q}/V^{\rm Q}\) and \(n_{\rm b}^{\rm H}=N_{\rm b}^{\rm H}/V^{\rm H}\) to represent the baryon number densities in the quark and hadronic phases. Generally, a common light-quark chemical potential \(\mu_{u}=\mu_{d}\) is assumed [22; 50]. According to the Gibbs conditions and the baryon/strangeness density conservation conditions, the system satisfies
\[P^{\rm Q}(T,\mu_{q},\mu_{\rm s},n_{\rm b}^{\rm Q})=P^{\rm H}(T,\mu_{q},\mu_{\rm s}), \tag{25}\] \[n_{\rm b}^{\rm tot}=n_{\rm b}^{\rm Q}(T,\mu_{q},\mu_{\rm s},n_{\rm b}^{\rm Q})(1-\alpha)+n_{\rm b}^{\rm H}(T,\mu_{q},\mu_{\rm s})\alpha, \tag{26}\] \[n_{\rm s}^{\rm tot}=n_{\rm s}^{\rm Q}(T,\mu_{q},\mu_{\rm s},n_{\rm b}^{\rm Q})(1-\alpha)+n_{\rm s}^{\rm H}(T,\mu_{q},\mu_{\rm s})\alpha, \tag{27}\] \[S^{\rm tot}=S^{\rm Q}(T,\mu_{q},\mu_{\rm s},n_{\rm b}^{\rm Q})(1-\alpha)+S^{\rm H}(T,\mu_{q},\mu_{\rm s})\alpha. \tag{28}\]
Within the thermodynamic treatment of strange quark matter and hadronic matter given in Secs. II and III, we obtain the phase structure by solving Eqs. (25)-(27). Including Eq. (28) in addition, we obtain the isentropic expansion process within the baryon density-dependent quark mass model.
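One possible numerical route is to solve Eqs. (25)-(27) for \((\mu_{q},\mu_{\rm s},n_{\rm b}^{\rm Q})\) at fixed \(T\), \(\alpha\), \(n_{\rm b}^{\rm tot}\), and \(n_{\rm s}^{\rm tot}\) with a standard root finder. In the hedged sketch below, `eos_Q` and `eos_H` are hypothetical callables standing in for the model-specific thermodynamic routines of Secs. II and III.

```python
from scipy.optimize import fsolve

def mixed_phase_residuals(unknowns, T, alpha, nb_tot, ns_tot, eos_Q, eos_H):
    """Residuals of Eqs. (25)-(27) at fixed T, alpha, n_b^tot and n_s^tot.

    unknowns = (mu_q, mu_s, nb_Q); eos_Q / eos_H are assumed callables
    returning (P, n_b, n_s) for the quark and hadronic phase, respectively.
    """
    mu_q, mu_s, nb_Q = unknowns
    P_Q, nbQ, nsQ = eos_Q(T, mu_q, mu_s, nb_Q)
    P_H, nbH, nsH = eos_H(T, mu_q, mu_s)
    return [
        P_Q - P_H,                                 # Eq. (25): mechanical equilibrium
        nbQ * (1 - alpha) + nbH * alpha - nb_tot,  # Eq. (26): baryon number conservation
        nsQ * (1 - alpha) + nsH * alpha - ns_tot,  # Eq. (27): strangeness conservation
    ]

# Example usage (with user-supplied eos_Q, eos_H and state variables):
# mu_q, mu_s, nb_Q = fsolve(mixed_phase_residuals, x0=(300.0, 300.0, 0.5),
#                           args=(T, alpha, nb_tot, ns_tot, eos_Q, eos_H))
```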
At fixed strangeness fraction, we obtain the phase diagram in Fig. 1 by solving Eqs. (25)-(27). The dashed curves obtained at \(\alpha=0\) represent the boundary between the quark-gluon phase and the mixed phase. When the value of \(\alpha\) approaches 1, we obtain the boundary between the hadron phase and the mixed phase, indicated by solid curves. From the phase diagram, we can see that the quark phase lies on the right side of each panel, the hadron phase on the left side, and the mixed phase in the middle. The strangeness fraction has a significant impact on the boundary between the hadron phase and the mixed phase. As the strangeness fraction increases, the area of the hadron phase decreases dramatically, and the boundary curve between the hadron phase and the mixed phase approaches the temperature axis. The boundary between the quark phase and the mixed phase does not vary significantly with the strangeness fraction, but only slightly expands to the right. Therefore, the area of the mixed phase keeps expanding. Furthermore, we find a narrow high-temperature, low-density mixed-phase region in the phase diagram, which implies the possibility of forming strangelets in heavy-ion collisions. In addition, our results show that a large strangeness fraction is beneficial for the formation of strangelets during the quark-hadron phase transition. This is consistent with the conclusions of previous model studies such as the QMDTD model [50].
In Fig. 2, we present phase diagrams adopting different strengths of the one-gluon-exchange interaction. We can see that the boundary curve between the hadron phase and the mixed phase expands to the right, and the boundary between the quark phase and the mixed phase also expands to the right, with a more pronounced shift, as the one-gluon-exchange interaction strength \(C\) decreases. In other words, the areas of the mixed phase and the hadron phase continuously decrease with the strength of the one-gluon-exchange interaction. Therefore, a larger one-gluon-exchange interaction strength \(C\) is conducive to the formation of strangelets.
The phase diagrams under different confinement parameters \(D\) are presented in Fig. 3. As the confinement parameter \(D\) increases, the boundary curve between the hadron phase and the mixed phase shifts to the right, and the boundary between the quark phase and the mixed phase also shifts to the right, so that the mixed phase covers a larger density range. Therefore, a smaller confinement parameter \(D\) is beneficial for the formation of strangelets.
Next, we discuss the isentropic expansion process for different initial entropies per baryon at fixed strangeness fraction, in which the system is adiabatic and the total entropy is conserved. Based on Eqs. (25)-(28), we obtain the expansion trajectories under entropy conservation. The phase diagram and isentropic expansion process
Figure 1: Phase diagram of quark and hadron phases at different fixed strangeness fraction.
Figure 2: Phase diagram of quark and hadron phases under different perturbation parameters \(C\).
under different strangeness fractions \(f_{s}\) are shown in Fig. 4. The two dashed curves represent the isentropic expansion trajectories with initial entropies per baryon of 5 and 10, respectively. We find that a high initial entropy per baryon prevents the occurrence of strangelets at the final stage of evolution. At the strangeness fractions \(f_{s}=0.1,0.3,0.5\), an initial entropy per baryon of about 5 is beneficial for the formation of strangelets. Compared with the MIT bag model [21; 22; 50], the baryon density-dependent quark mass model predicts a similar isentropic expansion trajectory for the formation of strangelets. The difference, however, is that the baryon density-dependent quark mass model features a narrow mixed-phase region at high temperature and low density. In the high-entropy case, the isentropic curve within the mixed phase is shorter, and the reheating effect of the baryon density-dependent quark mass model is more significant than that of the bag model. Moreover, as the strangeness fraction \(f_{s}\) decreases, the area of the hadron phase expands and the area of the mixed phase shrinks in the phase diagram, the reheating effect becomes more significant, and the chances of forming strangelets are reduced.
In Fig. 5, the phase diagram and isentropic expansion process under different perturbation parameters \(C\) are shown. Based on the analysis of the phase diagrams with different strangeness fractions, we take \(f_{s}=0.5\). As the one-gluon-exchange interaction strength \(C\) increases, the areas of the hadronic phase and the mixed phase in the phase diagram are reduced while the reheating effect does not change significantly, increasing the possibility of forming strangelets.
In Fig. 6, we present the phase diagram and isentropic expansion process under different confinement parameters \(D\). As before, \(f_{s}\) is taken as 0.5. As the confinement parameter \(D\) increases, the areas of the hadron phase and the mixed phase in the phase diagram expand and the reheating effect becomes more significant, decreasing the possibility of forming strangelets.
## V Properties of hybrid stars
Compact stars are formed at the end of the life cycle of massive stars. The study of dense matter, which can be found in compact stars, has always been an important subject in physics and astrophysics. When nuclear matter reaches a sufficiently large density in the core of a compact star, nucleons overlap and transform into quarks via the phase transition process, forming a hybrid star with a core composed of quark matter [51]. The obtained equation of state (EOS) of matter inside hybrid stars at zero temperature is presented in Fig. 7, where the slope of the EOS increases with \(C\) and decreases with \(D\). This is consistent with the conclusions of our previous study [46].
Based on the obtained EOSs, we solve the
Figure 4: The phase diagram and isentropic expansion process have been shown at different fixed strangeness fraction.
Figure 5: The phase diagram and isentropic expansion process have been shown under different perturbation parameters \(C\).
Figure 3: Phase diagram of quark and hadron phases under different confinement parameter \(D\).
Tolman-Oppenheimer-Volkoff (TOV) equation
\[\frac{\mathrm{d}P}{\mathrm{d}r}=-\frac{GmE}{r^{2}}\frac{(1+P/E)(1+4\pi r^{3}P/m)}{ 1-2Gm/r}, \tag{29}\]
with the subsidiary condition
\[\mathrm{d}m=4\pi Er^{2}\mathrm{d}r. \tag{30}\]
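For each central pressure, Eqs. (29)-(30) are integrated outward until the pressure drops to (nearly) zero, which yields one point of the mass-radius relation. The sketch below uses units with \(G=c=1\) and assumes a user-supplied interpolant `eos_pressure_to_energy` of the zero-temperature EOS of Fig. 7; the conversion back to kilometers and solar masses is left to the caller.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tov_rhs(r, y, eos_pressure_to_energy):
    """Right-hand side of Eqs. (29)-(30); y = (P, m), units with G = c = 1."""
    P, m = y
    E = eos_pressure_to_energy(P)  # assumed EOS interpolant E(P)
    dP = (-m * E / r ** 2) * (1 + P / E) * (1 + 4 * np.pi * r ** 3 * P / m) / (1 - 2 * m / r)
    dm = 4 * np.pi * E * r ** 2
    return [dP, dm]

def solve_star(P_center, eos_pressure_to_energy, r_max=30.0):
    """Integrate from a tiny core radius until the pressure (almost) vanishes."""
    surface = lambda r, y, *args: y[0] - 1e-12 * P_center
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, (1e-6, r_max), [P_center, 1e-12],
                    args=(eos_pressure_to_energy,), events=surface, rtol=1e-8)
    return sol.t[-1], sol.y[1, -1]  # stellar radius and gravitational mass
```

Sweeping the central pressure over the EOS range and recording the returned pairs traces out a mass-radius curve such as those in Fig. 8.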
The obtained mass-radius relationships of hybrid stars corresponding to different parameters are shown in Fig. 8. On the basis of the above numerical results, we apply the baryon density-dependent quark mass model for the first time to study the hadron-quark transition in hybrid stars. The most massive stars are depicted by black dots. We can see that the maximum mass of the hybrid star increases with \(C\) and decreases with \(D\). Furthermore, we fix the parameters in the model to describe recently discovered compact stars as hybrid stars. Here, we label parameter groups by the parameter sets (\(C\), \(\sqrt{D}\) in MeV). The mass-radius relations of hybrid stars corresponding to \((-0.3,168)\), \((0,160)\), \((0.4,140)\) and \((0.6,133)\) are presented, which can describe the recently discovered compact object HESS J1731-347 with \(M=0.77^{+0.20}_{-0.17}\)\(M_{\odot}\) and \(R=10.4^{+0.86}_{-0.78}\) km [52], but not the two-solar-mass pulsars. The observations of the pulsar in the Rapid Burster (MXB 1730-335) and the millisecond pulsar PSR J0030+0451 [53; 54] correspond to the mass-radius relation of hybrid stars with \((0.4,140)\). The mass-radius relation of hybrid stars corresponding to \((0.6,133)\) can describe the millisecond pulsar PSR J0030+0451 as a hybrid star. The mass-radius relation corresponding to \((0.7,129)\) can describe the millisecond pulsar PSR J0030+0451, PSR J0348+0432 with a mass of \(2.01\pm 0.04\)\(M_{\odot}\) [55], and the recently discovered massive pulsar MSR J0740+6620 (\(2.072^{+0.134}_{-0.132}\)\(M_{\odot}\) and \(R=12.39^{+2.60}_{-1.96}\) km at the 95.4% credibility interval) as hybrid stars [56]. The mass-radius relation of hybrid stars corresponding to \((0.66,130.3)\) can describe all the above-mentioned stars as hybrid stars.
## VI Summary
We have systematically studied the phase diagram of strange quark matter in equilibrium with hadronic matter at finite temperature within the baryon density-dependent quark mass model, including a hard-core repulsion factor. Based on the Gibbs equilibrium conditions, we studied the effects of the strangeness fraction \(f_{s}\), quark confinement, and first-order perturbative interactions on the phase diagram, the isentropic expansion process, and the formation of strangelets. It is found that a large strangeness fraction \(f_{s}\), a smaller perturbation parameter \(C\), and a smaller confinement parameter \(D\) are beneficial for the formation of strangelets, with the reheating effect of the isentropic expansion process being less significant.
Figure 8: The mass-radius relationship for hybrid stars with different parameters.
Figure 6: The phase diagram and isentropic expansion process have been shown under different confinement parameter \(D\).
Figure 7: The relationship between the energy per baryon of matter inside hybrid stars and the baryon number density.
In addition, we find that an initial entropy per baryon of about 5 is beneficial for the formation of strangelets. Based on the obtained equation of state, we calculated the mass-radius relationships of hybrid stars and described the observed masses and radii of the pulsars.
###### Acknowledgements.
The authors would like to thank support from NSFC (Nos. 11135011, 12275234, 11875052) and the national SKA programme (No. 2020SKA0120300).
|
2309.14763 | ConPET: Continual Parameter-Efficient Tuning for Large Language Models | Continual learning necessitates the continual adaptation of models to newly
emerging tasks while minimizing the catastrophic forgetting of old ones. This
is extremely challenging for large language models (LLMs) with vanilla
full-parameter tuning due to high computation costs, memory consumption, and
forgetting issue. Inspired by the success of parameter-efficient tuning (PET),
we propose Continual Parameter-Efficient Tuning (ConPET), a generalizable
paradigm for continual task adaptation of LLMs with task-number-independent
training complexity. ConPET includes two versions with different application
scenarios. First, Static ConPET can adapt former continual learning methods
originally designed for relatively smaller models to LLMs through PET and a
dynamic replay strategy, which largely reduces the tuning costs and alleviates
the over-fitting and forgetting issue. Furthermore, to maintain scalability,
Dynamic ConPET adopts separate PET modules for different tasks and a PET module
selector for dynamic optimal selection. In our extensive experiments, the
adaptation of Static ConPET helps multiple former methods reduce the scale of
tunable parameters by over 3,000 times and surpass the PET-only baseline by at
least 5 points on five smaller benchmarks, while Dynamic ConPET gains its
advantage on the largest dataset. The codes and datasets are available at
https://github.com/Raincleared-Song/ConPET. | Chenyang Song, Xu Han, Zheni Zeng, Kuai Li, Chen Chen, Zhiyuan Liu, Maosong Sun, Tao Yang | 2023-09-26T08:52:04Z | http://arxiv.org/abs/2309.14763v1 | # ConPET: Continual Parameter-Efficient Tuning for Large Language Models
###### Abstract
Continual learning necessitates the continual adaptation of models to newly emerging tasks while minimizing the catastrophic forgetting of old ones. This is extremely challenging for large language models (LLMs) with vanilla full-parameter tuning due to high computation costs, memory consumption, and forgetting issue. Inspired by the success of parameter-efficient tuning (PET), we propose Continual Parameter-Efficient Tuning (ConPET), a generalizable paradigm for continual task adaptation of LLMs with task-number-independent training complexity. ConPET includes two versions with different application scenarios. First, Static ConPET can adapt former continual learning methods originally designed for relatively smaller models to LLMs through PET and a dynamic replay strategy, which largely reduces the tuning costs and alleviates the over-fitting and forgetting issue. Furthermore, to maintain scalability, Dynamic ConPET adopts separate PET modules for different tasks and a PET module selector for dynamic optimal selection. In our extensive experiments, the adaptation of Static ConPET helps multiple former methods reduce the scale of tunable parameters by over 3,000 times and surpass the PET-only baseline by at least 5 points on five smaller benchmarks, while Dynamic ConPET gains its advantage on the largest dataset. The codes and datasets are available at [https://github.com/Raincleared-Song/ConPET](https://github.com/Raincleared-Song/ConPET).
Continual learning, parameter-efficient tuning, large language models.
## I Introduction
Recently, large language models (LLMs) have shown excellent capabilities from a wide range of aspects [1, 2, 3], which equips them with greater potential for handling various task-specific settings. To adapt LLMs to downstream tasks, fine-tuning is often the first choice [4]. However, in real-life applications, the consistent emergence of materials such as the latest corpus [5], new knowledge [6, 7], and heterogeneous tools [8] can frequently change the task schemas. This necessitates continual task-specific adaptation of LLMs, which is highly expensive and performance-risky when conducted through traditional fine-tuning due to the huge number of LLM parameters and the catastrophic forgetting issue [9], namely the significant performance decrease on old tasks after the model is adapted to new ones.
Although many continual learning methods have been proposed to handle these problems, specific challenges hinder their adaptation to LLMs. Dynamic-architecture-based methods [11, 12], which progressively increase the model scale with the growth of data, suffer from unacceptable linearly growing costs due to the unrestricted scaling of the architecture. Meanwhile, most memory-based methods [13, 14, 15, 9] frequently re-tune the model through a fixed replay strategy, where examples saved in limited episodic memory are replayed a fixed number of times with new data combined. This makes them susceptible to low scalability and over-fitting on memorized examples [15]. Furthermore, since their backbones are relatively small in scale such as BERT [16] and RoBERTa [17], they adopt full-parameter tuning in their original works, which imposes a heavy burden on computation resources for LLMs in terms of time and GPU memory.
Faced with these challenges, we propose **C**ontinual **P**arameter-**E**fficient **T**uning (**ConPET**), a generalizable paradigm for the continual fine-tuning-based task adaptation of LLMs with task-number-independent training complexity, including **Static ConPET** and **Dynamic ConPET**.
As shown in Figure 1, we first use **Static ConPET**, a general approach to adapting traditional memory-based continual methods such as EMR [10] to LLMs while coping with high training costs, over-fitting, and catastrophic forgetting. Specifically, it contains two key adaptations: (1) Replacing vanilla
Fig. 1: The comparison between traditional EMR [10] and adapted EMR with Static ConPET. The latter adopts the dynamic replay strategy for training data generation and PET for LLM tuning.
fine-tuning with parameter-efficient tuning (PET). As PET only updates tiny-scale tunable modules (generally accounting for less than \(1\%\) of the LLM parameters) while keeping the original LLM frozen, the computation costs for parameter updates and the GPU memory consumption can be largely reduced [4]. (2) Utilizing the historical data through a dynamic replay strategy, which conducts robust sampling hierarchically from full-volume historical data rather than a limited memory to improve data coverage and alleviate over-fitting and forgetting. A restriction on the number of sampled batches is also introduced to control the training complexity.
Adapted from memory-based methods, Static ConPET with a single PET module cannot avoid the low scalability issue. Therefore, we further propose **Dynamic ConPET** shown in Figure 2, a novel dynamic architecture including a backbone LLM, a PET module selector, a set of task-specific PET modules, and a cache system. Similar to mixture-of-expert (MoE) architectures, Dynamic ConPET separates the parameters for tasks with different schemas (e.g., distinct types of knowledge) in different modules, which naturally alleviates forgetting and maintains scalability to the increasing task number. As each task-specific module only contains a lightweight and pluggable PET module rather than heavy sub-networks in former dynamic-architecture-based methods, Dynamic ConPET is more tailored for memory-consuming LLMs. Meanwhile, the PET module selector ensures a constant forward propagation cost by pre-selecting a fixed quantity of task-specific modules with the highest scores to participate in the prediction.
We conduct comprehensive experiments on multiple datasets of knowledge extraction, a representative continual learning scenario with consistently emerging new knowledge types. The results demonstrate that both versions of ConPET are effective in the continual task-specific adaptation of LLMs while having different application scenarios. Further analysis shows the effect of parameter-efficient tuning, PET module pre-selection, and different task splits.
## II Related Work
### _Parameter-Efficient Tuning_
To fine-tune LLM more efficiently, parameter-efficient tuning is proposed [4], which mainly consists of three types: (1) Addition-based methods [18, 19, 20] introduce additional small-scale tunable parameters while freezing the original LLM. (2) Specification-based methods [21, 22, 23] selectively optimize part of the LLM parameters with the remaining parameters unchanged. (3) Reparameterization-based methods [24, 25] convert adaptive parameters of LLMs into parameter-efficient forms during optimization. In this work, ConPET can be adapted to addition-based methods and some reparameterization-based methods. Notably, LoRA [24], which introduces additional parameters to model the weight differences, is proven to perform better than most mainstream PET methods and thus widely adopted [4]. Therefore, our experiments in this work will focus on LoRA as a representative.
PET is demonstrated to save considerable computation costs and memory consumption. According to previous works [26], given the same instruction data and GPUs, the time consumed by PET tuning on LLaMA-7B [27] is only around one-fourth that of full-parameter tuning.
### _Continual Learning for LLMs_
Continual learning aims at teaching a model to incrementally handle newly emerging tasks while mitigating the catastrophic forgetting issue. Despite the success of in-context learning in zero/few-shot learning [1], fine-tuning is still a prevalent paradigm in task adaptation of LLMs [4]. The existing efforts on tuning-based continual learning can be generally classified into three categories: (1) Consolidation-based methods protect the parameters of importance from shifting considerably, which is often implemented through regularization [28, 29, 30] or distillation [31, 32]. However, they perform poorly due to the lack of historical data utilization. (2) Dynamic-architecture-based methods [11, 12, 33] maintain a model whose scale is progressively increasing with the task number. By introducing independent parameters for different tasks, they can effectively overcome catastrophic forgetting but suffer from the linear growth of training costs. (3) Memory-based methods [13, 14, 15, 9, 34] introduce episodic memory to store and replay examples of old tasks. Although the old task information is partly retained in memory, they are susceptible to over-fitting and low scalability caused by the frequent re-training on a fixed model architecture through the fixed replay strategy. All the above methods adopt vanilla fine-tuning due to the small scale of their backbones.
Considering the expensive costs of tuning LLMs, some works integrate parameter-efficient tuning with continual learning but still cannot fully handle some shortcomings of former methods. Both LFPT5 [35] and Progressive Prompts [36] propose to continuously introduce and train new soft prompts for a new task to tackle catastrophic forgetting, while LFPT5 additionally generates pseudo examples for experience replay. However, similar to the aforementioned dynamic-architecture-based methods, the continual accumulation of new prompts can cause the scalability issue. AdapterCL [37] learns a separate Adapter [18] for each task and selects the task-specific Adapter based on the perplexity. Such a selection method causes a linear increase in forward-propagation costs with respect to the task number. LAE [38] iteratively trains and ensembles two experts favored by novel/historical tasks respectively, which is susceptible to over-fitting given a fixed number of experts.
Moreover, there are also works focused on the continual pre-training problem [5, 39, 40], but they are too expensive to be applied to task-specific LLM fine-tuning, which is often conducted frequently with low resources.
### _Mixture-of-Expert for LLMs_
Another line of work similar to Dynamic ConPET is the MoE architecture composed of multiple separate networks, which learn to handle different subsets of input examples or take distinct responsibilities. MoE is first demonstrated to be effective in deep learning by introducing an MoE layer stacked between LSTM modules [41]. Later, GShard [42], BASELayer [43], HashLayer [44], Switch Transformers [45],
MoEfication [46], and DEMIX [47] attempt to explore the implementation and training strategies of MoE in Transformer-based models. However, these works have to modify specific structures (e.g., the FFN layer) in Transformers, which cannot be easily adapted to the continual fine-tuning of an already pre-trained LLM. Some recent works have proved that LLMs with MoE are able to obtain supreme performances when combined with instruction tuning [48, 49], illustrating the potential of such kinds of architecture.
To improve efficiency while alleviating over-fitting and catastrophic forgetting in the continual task-specific adaptation of LLMs, Static ConPET adapts memory-based continual learning methods using PET and dynamic replay strategy. Furthermore, Dynamic ConPET handles low scalability through a dynamic structure with task-specific PET modules and a selector. Unlike former MoE architectures, Dynamic ConPET is better suited for LLM tuning as each expert is a lightweight and highly pluggable PET module, which can be tuned without altering the original LLM structure or parameters. Finally, both ConPET versions offer task-number-independent training complexity.
## III Proposed Methods
### _Task Definition_
Continual fine-tuning of LLMs aims at teaching the LLM to simultaneously handle a sequence of tasks in a specific application scenario with consistently emerging materials. We denote the set of schemas of the materials handled in the \(k\)-th task as \(\mathcal{S}_{k}\), with a corresponding training set \(\mathcal{T}_{k}\) and an evaluation set \(\mathcal{Q}_{k}\). The schemas of different tasks are disjoint. At the \(k\)-th step, given the seen training data \(\mathcal{\tilde{T}}_{k}=\cup_{i=1}^{k}\mathcal{T}_{i}\), the model is required to obtain satisfactory results on the evaluation set of the new task as well as all the \(k-1\) historical tasks, namely \(\mathcal{\tilde{Q}}_{k}=\cup_{i=1}^{k}\mathcal{Q}_{i}\) with the schema set \(\mathcal{\tilde{S}}_{k}=\cup_{i=1}^{k}\mathcal{S}_{i}\).
Taking knowledge extraction as a representative, \(\mathcal{S}_{k}\) is a specific subset of knowledge types (e.g., entity types or relation types). Each example in the dataset consists of an input sentence and a ground-truth label, indicating the type of knowledge expressed in the input. The LLM is then required to predict the knowledge type label with consistently satisfactory accuracy along the task sequence.
### _Static ConPET_
Static ConPET is a generalizable approach to adapting former memory-based continual learning methods to the continual task-specific adaptation of LLMs, which mainly consists of two parts: the PET adaptation and the dynamic replay strategy.
#### III-B1 PET Adaptation and Example Encoding
As most former memory-based methods mainly concern relatively small-scale models [9, 10, 14], we first replace the vanilla fine-tuning with PET when applying them to LLMs. Specifically, instead of tuning all the parameters in LLMs, we only optimize one tiny PET module, while the LLM and remaining modules stay frozen. Considering the small size of tunable parameters, the computation costs for parameter updates and the GPU memory consumption will be significantly reduced.
With the aid of PET adaptation, we can conduct more efficient example encoding, which is aimed at generating informative representations for downstream tasks. We take knowledge extraction, including entity typing and relation extraction, as a representative. To improve the quality of representations, we adopt an LLM as our backbone encoder and enhance the inputs through entity markers and prompt templates. The LLM has obtained large amounts of knowledge through unsupervised training and can convert inputs into informative hidden state vectors. Following the previous work [50], we apply entity markers surrounding each entity in the input example to insert entity positional information. Besides, inspired by the success of prompt tuning [51, 52], we append prompt templates at the end of inputs and take the hidden state of [MASK] as the example representation. The entity markers and prompt templates used in the two knowledge extraction tasks involved are shown in Table I.
Formally, given an input example **x**, the LLM integrated with a PET module \(\mathrm{M}\) first encodes **x** into its example representation, which is then projected to the corresponding logits by the linear head contained in \(\mathrm{M}\). We hereinafter denote this process as follows,
\[\textbf{s}=f(\mathrm{M},\textbf{x}) \tag{1}\]
where \(f\) and **s** stand for the encoding function and the logit vector respectively.
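Conceptually, this encoding step can be sketched in PyTorch-style code as follows. The entity-marker and prompt strings are illustrative stand-ins for Table I, and the sketch assumes a Hugging Face-style backbone that already carries the PET weights and whose tokenizer has been extended with the `[MASK]` placeholder; it is not the exact released implementation.

```python
import torch

def encode_example(llm, tokenizer, pet_head, sentence, entity, mask_token="[MASK]"):
    """Sketch of Formula (1): compute the logit vector s = f(M, x) for entity typing."""
    # entity markers + prompt template (illustrative, cf. Table I)
    text = sentence.replace(entity, f"[E] {entity} [/E]")
    text = f"{text} In this sentence, {entity} is a {mask_token}."

    inputs = tokenizer(text, return_tensors="pt")
    outputs = llm(**inputs, output_hidden_states=True)
    hidden = outputs.hidden_states[-1]  # (1, seq_len, hidden_size)

    mask_id = tokenizer.convert_tokens_to_ids(mask_token)
    mask_pos = (inputs["input_ids"][0] == mask_id).nonzero()[0].item()
    representation = hidden[0, mask_pos]  # example representation at [MASK]
    return pet_head(representation)       # logit vector s over the schema set
```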
#### III-B2 Dynamic Sampling Strategy
Rather than conducting replays on limited memorized examples as existing memory-based methods do, ConPET utilizes historical data through a dynamic replay strategy to avoid over-fitting and control the overall training steps. Specifically, we remove the limits on storage space to improve data coverage. Instead, the replay examples are dynamically selected from the full-volume data, under a restriction on the maximum batch number at each step, which ensures task-number-independent complexity.
This strategy may challenge a common assumption in continual learning that the memory is limited and thus it is unrealistic to save full-volume data [14]. However, we consider it more reasonable to focus on the restriction of
training costs (complexity) rather than memory in terms of price and consumption. Generally, current training corpora do not exceed the TB-level even for GPT-3 175B [1], one of the largest existing language models. The cost of storing such scale data is much less than a single V100 GPU, let alone that training modern LLMs often requires hundreds or even thousands of GPUs [1, 27]. More supporting facts for this claim are provided in Appendix A.
Another problem of full-data storage is statistically inefficient sampling [14]. To overcome this issue, we adopt hierarchical sampling when utilizing stored historical data to ensure equal coverage for the examples of each old task. Specifically, rather than direct random sampling, we first generate an old task ID and then select an example from the sub-dataset of that task. Besides, a fixed ratio is kept between old and new examples in each training batch. In this way, ConPET is more robust to the data imbalance issue and statistically more efficient in terms of equal coverage for each historical task.
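A minimal sketch of this dynamic replay strategy, i.e., hierarchical sampling over full-volume historical data with a fixed old-to-new ratio per batch, might look as follows; the container types and the 1:1 default ratio are illustrative assumptions.

```python
import random

def sample_replay_batch(new_task_data, old_task_datasets, batch_size, old_new_ratio=1.0):
    """Draw one training batch under the dynamic replay strategy.

    new_task_data     : list of examples of the current task
    old_task_datasets : list of per-task example lists of all previous tasks
    old_new_ratio     : ratio of old to new examples per batch
    """
    n_old = int(batch_size * old_new_ratio / (1.0 + old_new_ratio)) if old_task_datasets else 0
    n_new = batch_size - n_old

    batch = random.sample(new_task_data, min(n_new, len(new_task_data)))
    for _ in range(n_old):
        old_task = random.choice(old_task_datasets)  # first draw an old task ID ...
        batch.append(random.choice(old_task))        # ... then an example of that task
    random.shuffle(batch)
    return batch
```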
### _Dynamic ConPET_
Despite the efficiency of Static ConPET, there still exists a potential problem of low scalability. Specifically, under downstream tasks with extremely abundant emerging materials, the volume of knowledge to be acquired may exceed the capacity of tunable parameters and thus the performance will decrease. This issue can be further exacerbated by PET due to the tiny scale of PET modules.
Therefore, we introduce Dynamic ConPET to address this issue, which is composed of a backbone LLM, a set of task-specific PET modules, a PET module selector, and a cache system. We denote the PET module for the \(k\)-th task as \(\mathrm{M}_{k}\). The working process at the \(k\)-th step can be summarized in two procedures: (1) PET module pre-selection (Section III-C1): We train the PET module selector to classify input examples into the \(k\) seen schema sets \(\{\mathcal{S}_{1},\mathcal{S}_{2},...,\mathcal{S}_{k}\}\). Then, \(t\) PET modules corresponding to schema sets with top-\(t\) selection scores are reserved as active ones. (2) Prediction with active PET modules (Section III-C2): Active PET modules produce logits on their own schema set respectively, which are then concatenated to make the final prediction. We analyze the training complexity in Section III-C3. As an auxiliary module to address duplicate logit computations, the cache system is introduced in Section III-C4. Dynamic ConPET also adopts the same dynamic replay strategy.
#### III-C1 PET Module Pre-Selection
To avoid the uncontrollable linear growth of training costs as in former dynamic-architecture-based methods, we employ PET module pre-selection to select a fixed quantity of the most important PET modules. Specifically, at the \(k\)-th step, we train a PET module selector (also a PET module) to distinguish a fixed number \(t\) of task schema sets that each example most probably belongs to among \(\{\mathcal{S}_{1},\mathcal{S}_{2},...,\mathcal{S}_{k}\}\). Then the PET modules specific for the selected \(t\) schema sets are reserved as active ones to participate in the subsequent inference.
Formally, the PET module selector converts each example \(\mathbf{x}\) into a selection score vector \(\mathbf{s}_{sel}\) of size \(k\) following Formula 1, whose \(j\)-th element stands for the confidence that \(\mathbf{x}\) belongs to \(\mathcal{S}_{j}\). The top-\(t\) elements of \(\mathbf{s}_{sel}\) (with indexes \(\{i_{1},i_{2},...,i_{t}\}\)) determine the selection of \(t\) active PET modules \(\{\mathrm{M}_{i_{i}},\mathrm{M}_{i_{2}},...,\mathrm{M}_{i_{t}}\}\). Besides, we adopt the teacher-forcing policy that the correct PET module corresponding to
Fig. 2: The architecture of ConPET when the number of active PET modules is 2. The working process can be split into two procedures: PET module pre-selection and prediction with active PET modules. All logits generated by a specific PET module will be saved instantly by the cache system after this module completes tuning.
the example is always selected for training. Through pre-selection, the forward propagation cost is kept unaffected by the number of PET modules. Experiments in Section IV-G also demonstrate its positive effect on performance.
#### III-C2 Prediction with Active PET Modules
To obtain the final prediction, we integrate the learned information from active modules \(\{\mathrm{M}_{i_{1}},\mathrm{M}_{i_{2}},...,\mathrm{M}_{i_{t}}\}\). Formally, each active module obtains a logit vector on its corresponding schema set following Formula 1, while the logit vectors of inactive modules (i.e., those unselected modules) are assigned as vectors of an identical low enough value \(\alpha\). We then concatenate the logits of all PET modules and figure out the prediction. This procedure can be expressed as the following formulas,
\[\begin{split}&\mathbf{s}_{j}=f(\mathrm{M}_{j},\mathbf{x}),\ j\in\{i_{1},i_{2},...,i_{t}\},\\ &\mathbf{s}_{j}=[\alpha,\alpha,...,\alpha],\ j\notin\{i_{1},i_{2 },...,i_{t}\},\\ &\mathbf{s}_{j}\in\mathbb{R}^{|\mathcal{S}_{j}|},j=1,2,...,k,\\ & pred=\arg\max{[\mathbf{s}_{1},\mathbf{s}_{2},...,\mathbf{s}_{ k}]},\end{split} \tag{2}\]
where \([\cdot]\) denotes concatenation, \(\mathbf{s}_{j}\) refers to the logit vector produced by the \(j\)-th PET module, and \(pred\) means the predicted label.
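Putting pre-selection and Formula (2) together, the inference step can be sketched as below; `encode` stands for the function \(f\) of Formula (1), and the module objects, schema sizes, and the filler value \(\alpha\) are illustrative assumptions rather than the released implementation.

```python
import torch

def dynamic_conpet_predict(x, selector, pet_modules, schema_sizes, encode, t=1, alpha=-1e4):
    """Sketch of Formula (2): pre-select t PET modules, then predict a label.

    pet_modules  : task-specific PET modules M_1, ..., M_k
    schema_sizes : sizes |S_1|, ..., |S_k| of the task schema sets
    """
    selection_scores = encode(selector, x)                    # vector of size k
    active = set(torch.topk(selection_scores, k=t).indices.tolist())

    logits = []
    for j, module in enumerate(pet_modules):
        if j in active:
            logits.append(encode(module, x))                  # size |S_j|
        else:
            logits.append(torch.full((schema_sizes[j],), alpha))
    logits = torch.cat(logits)                                # size |S_1| + ... + |S_k|
    return int(torch.argmax(logits))                          # predicted label index
```

During training, the teacher-forcing policy mentioned above would additionally force the module of the ground-truth task into the active set.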
#### III-C3 Detailed Algorithm and Complexity Analysis
Finally, we conduct an analysis of the detailed algorithm and its training complexity. The process of training Dynamic ConPET for the \(k\)-th task is presented in Algorithm 1. Both the PET module selector and task-specific modules adopt a standard cross-entropy loss for multi-classification as the training objective. The PET module selector is iteratively updated. At the \(k\)-th step, the selector is initialized by the \((k-1)\)-th step selector, except that its linear head should be expanded in dimension to handle more schemas, which is the function of \(\mathrm{Dimension\_Expand}\).
The above training process has a complexity independent of the task number. Specifically, we limit the training batch number of the PET module selector and the \(k\)-th task-specific module to \(iter_{1}\) and \(iter_{2}\) respectively, and the batch size is fixed as \(b\). Meanwhile, \(t\) active modules are reserved for each example. Therefore, the training complexity for the \(k\)-th task is \(\mathcal{O}(b\cdot(iter_{1}+iter_{2}\cdot(t+1)))\), which does not increase with the task number \(k\).
#### III-C4 Cache System
To address duplicate logit computations, we introduce an auxiliary cache system to store the logits generated by already-tuned PET modules. Specifically, each cache entry comprises the example ID, the PET module ID (the index of a task-specific module or the selector), and the logit vector. When a tuned PET module encounters an input example, ConPET checks the database using the example ID and PET module ID. If a match exists, the time-consuming calculation in Equation 1 can be avoided. It should be noted that the cache system is available for a specific module only after the module completes its tuning process and remains fixed thereafter. For instance, the PET Module selector can only access the cache after its own training process in the pre-selection stage is finished.
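The cache can be as simple as a dictionary keyed by example ID and PET module ID, consulted only for modules whose tuning has already finished; a minimal sketch is shown below.

```python
class LogitCache:
    """Store logits produced by already-tuned (frozen) PET modules."""

    def __init__(self):
        self._cache = {}  # (example_id, module_id) -> logit vector

    def get_or_compute(self, example_id, module_id, module_is_frozen, compute_fn):
        key = (example_id, module_id)
        if module_is_frozen and key in self._cache:
            return self._cache[key]        # cache hit: skip the expensive encoding
        logits = compute_fn()              # the costly call of Formula (1)
        if module_is_frozen:               # only cache outputs of frozen modules
            self._cache[key] = logits
        return logits
```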
```
0: The training set \(\mathcal{T}_{k}\) of the \(k\)-th task
0: All the historical training data \(\mathcal{\tilde{T}}_{k-1}\)
0: All the seen task schema sets \(\mathcal{\tilde{S}}_{k}\)
0: The last PET module selector \(\mathrm{M}_{sel}^{(k-1)}\)
0: The maximum training batch number \(iter_{1}\) and \(iter_{2}\)
1:\(\mathrm{M}_{sel}^{(k)}\leftarrow\mathrm{Dimension\_Expand}(\mathrm{M}_{sel}^{(k- 1)})\)
2:for\(i\gets 1\) to \(iter_{1}\)do
3: Sample\(\mathcal{B}_{new}\) randomly from \(\mathcal{T}_{k}\)
4: Sample\(\mathcal{B}_{old}\) hierarchically from \(\mathcal{\tilde{T}}_{k-1}\)
5:\(\mathcal{B}_{tot}\leftarrow\mathcal{B}_{new}\cup\mathcal{B}_{old}\)
6: Update\(\mathrm{M}_{sel}^{(k)}\) with \(k\)-category classification loss on \(\mathcal{B}_{tot}\)
7:endfor
8:Initialize the \(k\)-th task-specific module \(\mathrm{M}_{k}\)
9:for\(i\gets 1\) to \(iter_{2}\)do
10: Sample\(\mathcal{B}_{new}\) randomly from \(\mathcal{T}_{k}\)
11: Sample\(\mathcal{B}_{old}\) hierarchically from \(\mathcal{\tilde{T}}_{k-1}\)
12:\(\mathcal{B}_{tot}\leftarrow\mathcal{B}_{new}\cup\mathcal{B}_{old}\)
13:for\(\mathbf{x}\) in \(\mathcal{B}_{tot}\)do
14: Select the most possible \(t\) active PET modules for \(\mathbf{x}\)
15: Predict for \(\mathbf{x}\) with active PET modules
16:endfor
17: Update\(\mathrm{M}_{k}\) with \(|\mathcal{\tilde{S}}_{k}|\)-category classification loss on \(\mathcal{B}_{tot}\)
18:endfor
```
**Algorithm 1** Dynamic ConPET for the \(k\)-th task
## IV Experiments
### _Datasets_
For experiments, we focus on continual knowledge extraction as a representative continual adaptation scenario of LLMs, including entity typing and knowledge extraction. Specifically, we introduce the following 3 datasets as the benchmark for continual entity typing:
(1) **FewNERD**. FewNERD [53] is a large manually-labeled dataset with hierarchical 66 fine-grained entity types. Specifically, we use FewNERD (SUP) to construct the benchmark, including 486,044 examples from Wikipedia.
(2) **OntoNotes**. OntoNotes 5.0 [54] is also a manually labeled dataset with 86 entity types, 264,404 sentences, and multiple sources of data.
(3) **BBN**. Compared with the above two datasets, BBN [55] is relatively smaller, with 111,728 examples and 46 entity types. It is mainly employed to test the generalizability of ConPET on small-scale tasks.
In addition, we test our method on 3 continual relation extraction tasks:
(1) **FewRel**. FewRel [56] is a large-scale dataset for relation extraction with 80 relation types and 56,000 examples from Wikipedia.
(2) **TACRED**. TACRED [57] is a sentence-level relation extraction dataset obtained through crowdsourcing. For simplicity, we remove those examples with the label "n/a", and finally it consists of 21,773 sentences and 41 relation types.
(3) **ACE 2005**. We adopt the English part of ACE 2005 Multilingual Training Corpus [58], which includes 7,070 sentences and 18 relation types. Considering its small scale, it is also aimed at testing the small-scale-task generalizability of our method.
For each dataset, we construct the corresponding task sequence for continual learning by randomly splitting its entity or relation types into clusters (5 clusters for ACE 2005 and 10 clusters for the others). Then the schema set of each task is specified to one of these clusters.
The train-validation-test split methods of FewNERD and TACRED are the same as the original works [53, 57]. For the remaining 4 datasets, we randomly split them into the training, validation, and test set by a ratio of \(8:1:1\). More key statistics of these 6 datasets are provided in Table II.
### _Dataset Source and License_
All the textual materials included in the 6 datasets are in English. The examples of FewNERD and FewRel are collected from Wikipedia. The sentences in BBN are selected from the Penn Treebank Corpus of Wall Street Journal. The data sources of the remaining three datasets are mixed. TACRED is collected from the newswire and web collection. ACE 2005 contains textual materials from broadcast conversations, broadcast news, newsgroups, telephone conversations, and weblogs. OntoNotes 5.0 is the most complicated, with a corpus from a combination of telephone conversations, newswire, newsgroups, broadcast news, broadcast conversation, weblogs, and religious texts.
As for the licenses and data usage policy, FewNERD and FewRel are released under the CC BY-SA 4.0 license, while OntoNotes, BBN, TACRED, and ACE 2005 are used under the Linguistic Data Consortium (LDC) data license as a member. All the datasets are used in a way consistent with their intended use, and we limit public access to them under the requirements of licenses. Through manual sampling, we do not find any offensive content or identifiers in these datasets.
### _Experimental Settings_
Two evaluation metrics are adopted: **whole accuracy** and **average accuracy**. The former is a standard classification accuracy on all the evaluation data \(\tilde{\mathcal{Q}}_{L}\), where \(L\) is the task number. The latter averages the independent accuracies of each seen task, which can better assess the ability to address catastrophic forgetting [15].
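For clarity, the two metrics can be written as short functions; `per_task_results` is assumed to hold, for each seen task, the list of (gold, predicted) label pairs on its evaluation set.

```python
def whole_accuracy(per_task_results):
    """Standard accuracy over the union of all seen evaluation sets."""
    correct = sum(sum(gold == pred for gold, pred in task) for task in per_task_results)
    total = sum(len(task) for task in per_task_results)
    return correct / total

def average_accuracy(per_task_results):
    """Mean of the independent accuracies of each seen task."""
    per_task = [sum(gold == pred for gold, pred in task) / len(task)
                for task in per_task_results]
    return sum(per_task) / len(per_task)
```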
We adopt LLaMA-7B [27] as the backbone LLM, which has 32 layers and a hidden size of 4,096. LoRA [24] is used as a representative PET method in our experiments. Including LoRA matrices and a linear head, a PET module contains around 2M parameters, which only accounts for \(0.03\%\) of the LLM (about 6.74B parameters). For efficiency, the rank of LoRA matrices and the number \(t\) of active PET modules are set to 4 and 1 respectively under all settings. The ratio between old and new examples in each batch is fixed as \(1:1\).
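As a rough sanity check on the reported module size, one can count the parameters of rank-4 LoRA matrices plus a linear head; the set of adapted projection matrices and the label count below are assumptions made only for this back-of-the-envelope estimate.

```python
def pet_module_param_count(num_layers=32, hidden=4096, rank=4,
                           adapted_projections=2, num_labels=100):
    """Rough parameter count of one PET module (LoRA matrices + linear head).

    Assumes LoRA is attached to `adapted_projections` square projection
    matrices per layer, each contributing two rank-r matrices of shapes
    (hidden, r) and (r, hidden); `num_labels` is an illustrative schema size.
    """
    lora_params = num_layers * adapted_projections * 2 * hidden * rank
    head_params = hidden * num_labels + num_labels
    return lora_params + head_params

print(pet_module_param_count())  # roughly 2.5M, i.e. on the order of the ~2M quoted above
```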
For Static ConPET, we mainly adapt it to the following three memory-based methods:
(1) **EMR**. EMR [10] is a basic memory-based continual learning method, which saves a fixed number of examples for each task and then replays them with new data combined when training a new model.
(2) **EA-EMR**. EA-EMR [14] is an extension of EMR. In addition to memory replay, it conducts embedding alignment at the end of each step to alleviate the distortion of the embedding space of old knowledge types.
(3) **CRL**. CRL [9], also an EMR extension, further applies contrastive learning and knowledge distillation when replaying memorized examples to retain the old relation knowledge.
As the vanilla full-parameter fine-tuning occupies abundant computation resources with more than 3,000 times tunable parameters, we introduce the above three methods with PET adaptation alone as baselines, which adopt limited memory and select a fixed number of memorized examples for each knowledge type through K-Means clustering.
In addition, we introduce an upper-bound setting **Limitless** for reference. Specifically, it shares the same working process as Dynamic ConPET, except that its limit on the training batch number is removed. Therefore, at the \(k\)-th step, it is trained with full-volume historical data \(\mathcal{\tilde{T}}_{k}\) and suffers from a linearly increasing complexity of \(O(|\mathcal{\tilde{T}}_{k}|\cdot(t+2))\).
### _Hyper-Parameters and Implementation Details_
The experimental hyper-parameters are tuned through grid search for each dataset respectively. Besides the most important parameters introduced in Section IV-C, other parameters for ConPET are listed in Table III. Specifically, for Dynamic ConPET, the learning rate on FewNERD is adjusted to \(5e-4\) for the first two continual learning steps, and the learning rate on OntoNotes is modified to \(5e-4\) for the first three steps in all settings. The batch number limit refers to the restriction to the total training and validation batch number at each step if the maximum epoch number is reached, as mentioned in Section III-B2. The ratio between the training and validation batch number of ConPET is set to \(4:1\) throughout our experiments. While the table provides the maximum epoch number, the best epoch is chosen according to the average accuracies on the validation data. Therefore, the actual training steps of the best-epoch model may be lower than the provided batch number limit. The "example number" refers to the memorized example number for each knowledge type at each step of the PET-only baseline version of three memory-based methods EMR, EA-EMR, and CRL, whose replay frequency is dynamically adjusted to obtain a total training step not lower than ConPET with dynamic sampling strategy.
The implementation of ConPET is based on PyTorch, and each ConPET instance involved in our experiments is trained on a single A100 GPU for 1 to 48 hours, depending on the corresponding dataset scale. Tools and packages that we used in the experiments include PyTorch, transformers, numpy, scikit-learn, tqdm, and loralib.
### _Overall Results_
The main experimental results are shown in Table IV. The average accuracies of different settings at each step are shown in Figure 3. From the table and figure, we can come to the following conclusions.
(1) Overall tendency: All the settings decline in average accuracies with the increase of task number, which reveals the inevitable impacts and challenges brought by catastrophic forgetting.
(2) Effectiveness: Both Static ConPET settings and Dynamic ConPET surpass the baselines by a large margin, which reveals their effectiveness. As all three memory-based methods adapted with Static ConPET significantly exceed corresponding PET-only versions, we can also demonstrate the importance of dynamic sampling strategy. Of course, gaps still exist between ConPET and the reference upper bound with linearly growing complexity, indicating that a large room remains for exploration in the continual task-specific fine-tuning of LLMs.
(3) Application Scenarios: While both Dynamic ConPET and Static ConPET yield satisfactory outcomes, the Static version excels on five benchmarks. A possible reason is the small scale of these datasets, which fits within the capacity of a single PET module, thus minimizing the negative impact of low scalability. Notably, Static ConPET demonstrates significant superiority in BBN and ACE 2005, which have the least knowledge schemas and examples among entity typing and relation extraction datasets respectively. In contrast, Dynamic ConPET outperforms Static ConPET on FewNERD with a large schema set and the most examples. Therefore, we can conclude that Static ConPET is more suitable for scenarios with relatively small-scale emerging data and knowledge schemas. Conversely, we need Dynamic ConPET to handle larger schema sets and more extensive data, which demand higher scalability of the continual fine-tuning architecture.
All the scores displayed in Table IV except the "Limitless" setting are the average value of the results from a two-time replication with different random seeds. The standard deviations of these results are shown in Table V. Due to the extremely high complexity and training costs of "Limitless", we only take its result from a single run.
### _Effect of Parameter-Efficient Tuning_
Despite the fact that PET can significantly reduce the costs of parameter updates and GPU memory, it may lead to a decrease in overall performance [4]. Due to the huge costs of tuning the full-parameter LLaMA-7B, we only involve the ablation study on the smaller 335M BERT-Large [16] as an alternative to demonstrating the rationality of PET adaptation. Concretely, we introduce the parallel settings on FewNERD
Fig. 3: The average accuracies (%) of different settings at each step throughout the learning process.
for EMR*, EA-EMR*, CRL*, and Dynamic ConPET (DyConPET) without PET adaptation. The results are shown in Table VI. As can be observed, although ConPET has only 0.12% tunable parameters and thus saves considerable time and computation resources, the drop in accuracy is not significant, which demonstrates the reasonableness of employing PET. The satisfactory results on BERT-Large also illustrate the generalizability of ConPET to relatively smaller pre-trained language models.
### _Effect of PET Module Pre-Selection_
In Section III-C1, we discuss the effect of PET module pre-selection on efficiency, which is to select a fixed number of active PET modules in Dynamic ConPET and ensure a constant forward propagation cost. Here we further analyze its effect from the aspect of performance. We conduct experiments on a parallel setting without this technique, which is corresponding to the situation with \(t=k\) at the \(k\)-th step. The results are shown in Table VII.
As can be observed, the performance significantly drops without PET module pre-selection, although the setting "w/o Sel" makes all PET modules active and has a linearly increasing complexity of \(\mathcal{O}(b\cdot(iter_{1}+iter_{2}\cdot(k+1)))\). This may be attributed to the mutual interference between the logit vectors produced by \(k\) PET modules. Even if the dynamic architecture can retain the classification capability on each independent task schema set, it is nontrivial to distinguish between the schemas of distinct tasks. Therefore, a PET module selector explicitly taking this responsibility can largely boost the overall performance as well as reduce the complexity.
### _Effect of Different Task Splits_
In our experiments, task splits are generated by randomly clustering knowledge schemas. However, these schemas may exhibit correlation, which leads to non-independent tasks. Such correlations have the potential to impact the performance of continual learning, especially Dynamic ConPET, whose architecture heavily relies on the task split. Therefore, this section will focus on the effect of different task splits on Dynamic ConPET.
Based on FewNERD, which has a hierarchical entity type schema with 8 coarse types, we reconstruct a new mutually independent task sequence of length 8 by assigning each coarse type to a task, while the original random split is inter-correlated. The results are shown in Table VIII. While the overall accuracy does not change substantially, the independent split increases the pre-selection accuracy by a large margin, which implies a decrease in the accuracies of task-specific PET modules. This highlights a trade-off between the PET module pre-selection and downstream inner-task classification, as the upstream selector prefers an independent task split but the downstream PET modules favor a correlated one. To achieve optimal results, we should align the capacity of each PET module (including the selector) to its task difficulty. For instance, further splitting a task-specific PET module may be beneficial if the task exceeds its capacity.
Moreover, it may be reasonable to explore a wiser strategy for maintaining the task splits. A possible improvement is to adopt the knowledge-aware hierarchical organization of PET modules. Concretely, instead of limiting the layer number of PET modules to 2 (i.e., one layer of the PET selector and one layer of task-specific modules), we can introduce a PET module tree with multiple layers, where non-leaf PET modules are responsible for conducting pre-selection on their child nodes, and leaf PET modules make the final prediction (a minimal sketch of this idea is given below). Meanwhile, considering the hierarchical nature of some knowledge schemas (e.g., the entity types in FewNERD, OntoNotes, and BBN), we can assign the responsibility of PET modules according to their positions in the knowledge hierarchy structure rather than the chronological order. Therefore, hierarchical knowledge can be incorporated explicitly and the capacity of each PET module can be easily controlled given a well-designed knowledge schema. We leave this improvement for future work.
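As a rough illustration of the multi-layer organization described above, the following minimal Python structure is hypothetical and not part of ConPET: it routes an input down a tree in which internal nodes act as selectors over their children and leaves produce the final prediction.

```python
class PETNode:
    """Hypothetical sketch of a hierarchical PET-module tree.

    Internal nodes hold a `select` callable that scores their children
    (pre-selection); leaves hold a `predict` callable (final prediction).
    """
    def __init__(self, select=None, predict=None, children=None):
        self.select = select          # scores children; internal nodes only
        self.predict = predict        # final classifier; leaves only
        self.children = children or []

    def route(self, x):
        if not self.children:                      # leaf: task-specific module
            return self.predict(x)
        scores = self.select(x)                    # internal node: pre-select a child
        best = max(range(len(self.children)), key=lambda i: scores[i])
        return self.children[best].route(x)


# Toy usage: a two-level tree mirroring a coarse/fine type hierarchy.
leaf_a = PETNode(predict=lambda x: "person-artist")
leaf_b = PETNode(predict=lambda x: "person-athlete")
root = PETNode(select=lambda x: [0.7, 0.3], children=[leaf_a, leaf_b])
print(root.route("some mention"))                  # -> "person-artist"
```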
## V Conclusion and Future Work
In this paper, we mainly discuss the efficient and effective adaptation of LLMs to continual downstream task sequences.
To achieve this goal, we propose the paradigm of ConPET, including two versions with training complexity independent of the task number. Static ConPET can adapt former memory-based methods to LLMs through the cost-saving PET and a dynamic sampling strategy more robust to over-fitting and forgetting. In contrast, Dynamic ConPET is more scalable to scenarios with large-scale data and task schemas owing to its dynamic MoE-style architecture. The experiments demonstrate the effectiveness and rationality of key techniques used in ConPET, with a considerable reduction in tuning costs. In the future, we will extend ConPET to more diverse continual learning scenarios (e.g., continual learning of heterogeneous tools [8]) and further improve our paradigm by exploring wiser task split strategies.
## More Supporting Facts for the Rationality of Dynamic Sampling Strategy
Despite the common assumption of limited memory in the field of continual learning [14], we still consider it more reasonable to control the training costs rather than the memory during the process of fine-tuning LLMs, which is the fundamental philosophy of our dynamic sampling strategy. The rationality of this approach mainly lies in the overwhelming price of time and computation resources when compared to storage.
Take GPT-3 175B [1] as an example, whose training corpus contains about 300B tokens. As a single English token contains about 4 characters on average1, the overall storage for its training corpus is around \(1\sim 2\) TB. Such a scale of memory is quite acceptable for the majority of modern servers for AI research. Although the training materials for more recent LLMs are believed to take up more space, storage prices have become moderate, generally no more than a few dozen dollars per TB, thanks to advances in storage hardware technology2. Besides, since the datasets of most downstream tasks should be filtered and annotated to ensure high quality and provide supervision for training, their scales can hardly reach the TB level and typically need much less storage than the LLM training corpus.
Footnote 1: [https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them)
Footnote 2: [https://diskprices.com/](https://diskprices.com/)
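The back-of-the-envelope storage estimate above can be reproduced in a couple of lines; the 1 byte-per-character figure is an assumption for plain, mostly ASCII text and is not a number taken from the paper.

```python
# Rough storage estimate for a GPT-3-scale corpus (illustrative only).
tokens = 300e9            # ~300B training tokens
chars_per_token = 4       # average cited for English text
bytes_per_char = 1        # assumption: plain-text, mostly ASCII

total_bytes = tokens * chars_per_token * bytes_per_char
print(f"{total_bytes / 1e12:.1f} TB")   # -> 1.2 TB, i.e. on the order of 1-2 TB
```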
On the other hand, the computation resources required for GPT-3 175B (pre-trained on V100 GPUs) are on a far larger scale, around 3.64E+03 petaflop/s-days. Even if PET can save many computation resources, a single V100 GPU still costs thousands of dollars per month3, let alone the fact that LLMs with more than tens of billions of parameters may need more than one device for fine-tuning and inference. In summary, the storage issue is of little significance compared to the financial pressure posed by time and computation resources.
Footnote 3: [https://cloud.google.com/compute/gpus-pricing](https://cloud.google.com/compute/gpus-pricing)
|
2309.04570 | A Torelli theorem for graphs via quasistable divisors | The Torelli theorem establishes that the Jacobian of a smooth projective
curve, together with the polarization provided by the theta divisor, fully
characterizes the curve. In the case of nodal curves, there exists a concept
known as fine compactified Jacobian. The fine compactified Jacobian of a curve
comes with a natural stratification that can be regarded as a poset.
Furthermore, this poset is entirely determined by the dual graph of the curve
and is referred to as the poset of quasistable divisors on the graph. We
present a combinatorial version of the Torelli theorem, which demonstrates that
the poset of quasistable divisors of a graph completely determines the
biconnected components of the graph (up to contracting separating edges).
Moreover, we achieve a natural extension of this theorem to tropical curves. | Alex Abreu, Marco Pacini | 2023-09-08T20:01:34Z | http://arxiv.org/abs/2309.04570v1 | # A Torelli theorem for graphs via quasistable divisors
###### Abstract.
The Torelli theorem establishes that the Jacobian of a smooth projective curve, together with the polarization provided by the theta divisor, fully characterizes the curve. In the case of nodal curves, there exists a concept known as fine compactified Jacobian. The fine compactified Jacobian of a curve comes with a natural stratification that can be regarded as a poset. Furthermore, this poset is entirely determined by the dual graph of the curve and is referred to as the poset of quasistable divisors on the graph. We present a combinatorial version of the Torelli theorem, which demonstrates that the poset of quasistable divisors of a graph completely determines the biconnected components of the graph (up to contracting separating edges). Moreover, we achieve a natural extension of this theorem to tropical curves.
MSC (2020): 05Cxx, 14Hxx
## 1. Introduction
The classical Torelli theorem states that if \(C\) and \(C^{\prime}\) are two genus \(g\) smooth projective curves whose Jacobian varieties are isomorphic (as principally polarized abelian varieties), then \(C\) and \(C^{\prime}\) are isomorphic. For nodal curves, a variant of the Torelli theorem emerges considering compactified Jacobians. In [11], Caporaso and Viviani proved that a stable curve can be reconstructed from its Caporaso compactified Jacobian and theta divisor, provided that its dual graph is \(3\)-edge connected. We refer to [1] for the construction of the compactified Jacobian and to [1] for a study of the theta divisor of the compactified Jacobian. The main result in [11] is based on a previous combinatorial result proved in [11], stating that it is possible to reconstruct a graph from its Albanese variety, provided the graph is \(3\)-vertex connected (this resolved a question posed in [1]), see also [10].
More general results are also proved in [11] and [11]: two stable curves without separating nodes have isomorphic compactified Jacobians together with theta divisors if and only if the curves are \(C1\)-equivalent (see [11, Definition 2.1.5] for the definition of \(C1\)-equivalence). The general statement for graphs is: two graphs without bridges have isomorphic Albanese varieties if and only if the graphs are cyclically equivalent. The observation connecting the two results is that if the compactified Jacobians of two stable curves are isomorphic, then the Albanese varieties of the dual graphs of the curves are isomorphic as well.
The question that motivated this paper is:
_can one get a more refined Torelli theorem by considering other compactified Jacobians?_ (1)
In this paper we answer a combinatorial version of the above question. We consider the Esteves compactified Jacobian of a nodal curve, parametrizing quasistable torsion-free rank-\(1\) sheaves of fixed degree on the curve, constructed in [16]. Both the Caporaso and the Esteves compactified Jacobians of a nodal curve are instances of the Oda-Seshadri construction of compactified Jacobians introduced in [10] (see [1] and [16, Section 6]).
In [11], a crucial ingredient in the proof of Torelli theorem for graphs is the Delaunay decomposition \(\operatorname{Del}(\Gamma)\) of a graph \(\Gamma\) and its associated poset (i.e., partially ordered set) \(\overline{\mathcal{OP}}_{\Gamma}\). The poset \(\overline{\mathcal{OP}}_{\Gamma}\) is the poset encoding the natural stratification of the Caporaso compactified Jacobian of a curve with dual graph \(\Gamma\) (see [11, Lemma 4.1.6]). For a \(3\)-edge connected graph \(\Gamma\), the Delaunay decomposition \(\operatorname{Del}(\Gamma)\) determines and
is determined by the poset \(\overline{\mathcal{OP}}_{\Gamma}\). The key results are that the Albanese variety of a graph determines its Delaunay decomposition and that, if the graph is \(3\)-edge connected, the Delaunay decomposition depends only on the cyclic equivalence class of the graph. The general statement of this result can be found in [1, Theorem 5.3.2].
The Esteves compactified Jacobian exhibits a natural stratification that can be viewed as a poset. This is the poset \(\mathbf{QD}(\Gamma)\) of quasistable (pseudo-)divisors of degree \(g-1\) on the dual graph \(\Gamma\) of the curve, which corresponds to the multidegrees of quasistable torsion-free rank-\(1\) sheaves of degree \(g-1\) on the curve. In this paper we prove that this poset plays a crucial role in characterizing the nodal curve. Remarkably, the poset structure entirely determines the dual graph of the curve. Thus, by studying the poset of quasistable divisors, one can gain insights into the topology and combinatorial properties of the curve itself.
Notably, the poset \(\mathbf{QD}(\Gamma)\) is the poset induced by a refinement of the Delaunay decomposition \(\operatorname{Del}(\Gamma)\) of \(\Gamma\). This refinement holds more combinatorial information about the graph than the Delaunay decomposition. Hence, a more refined Torelli theorem for graphs using the poset \(\mathbf{QD}(\Gamma)\) is to be expected. The main theorem of this paper is the following result.
**Theorem** (Theorem 5.1).: _Let \(\Gamma\) and \(\Gamma^{\prime}\) be graphs with set of bridges \(\operatorname{Br}(\Gamma)\) and \(\operatorname{Br}(\Gamma^{\prime})\). The posets \(\mathbf{QD}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})\) are isomorphic if and only if there is a bijection between the biconnected components of \(\Gamma/\operatorname{Br}(\Gamma)\) and \(\Gamma^{\prime}/\operatorname{Br}(\Gamma^{\prime})\) such that the corresponding components are isomorphic as pure graphs._
In particular, a pure biconnected graph \(\Gamma\) can be reconstructed from its poset \(\mathbf{QD}(\Gamma)\). Hence, for pure biconnected graphs, we get a more refined Torelli theorem. Indeed, there are nonisomorphic \(3\)-edge connected biconnected graphs \(\Gamma\) and \(\Gamma^{\prime}\) that are cyclically equivalent, and hence, by the result of Caporaso and Viviani, the posets \(\overline{\mathcal{OP}}_{\Gamma}\) and \(\overline{\mathcal{OP}}_{\Gamma^{\prime}}\) are isomorphic, while \(\mathbf{QD}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})\) are not.
As a byproduct, we get a Torelli theorem for tropical curves. We prove that the tropical Jacobian \(J(X)\) of a tropical curve \(X\), together with its decomposition via quasistable divisors, determines the biconnected components of the tropical curve.
**Theorem** (Theorem 6.1).: _Let \(X\) and \(X^{\prime}\) be tropical curves without bridges such that \(J(X)\) and \(J(X^{\prime})\) are isomorphic as polyhedral complexes (with the structure of polyhedral complexes given by the poset of quasistable divisor on the underlying graph). There is a bijection between the biconnected components of \(X\) and \(X^{\prime}\) such that the corresponding components are isomorphic._
We conclude this introduction with some remarks regarding Question (1). The combinatorial result provided by Theorem 5.1 implies that a geometric Torelli theorem utilizing fine compactified Jacobians should be distinct from, and potentially more refined than, the result obtained by Caporaso and Viviani in [1]. So far we have not found examples of curves with no separating nodes whose fine compactified Jacobians are isomorphic (together with the theta divisors).
## 2. Preliminaries
### Posets
In this paper we will only consider finite posets. Given a poset \((P,\leq_{P})\) and a subset \(S\subset P\), the _induced partial order_\(\leq_{S}\) on \(S\) is given by \(x\leq_{S}y\) for \(x,y\in S\) if and only if \(x\leq_{P}y\) in \(P\). We refer to \((S,\leq_{S})\) as the _induced subposet_.
A _lower set_ of a poset \((P,\leq_{P})\) is a set \(U\subset P\) such that whenever \(x\in U\) and \(y\leq_{P}x\), then \(y\in U\). We define a topology on the poset \(P\) where the closed subsets are the lower sets.
We say that an element \(x\)_covers_ an element \(y\) of \(P\) if \(x>_{P}y\) and there are no \(z\in P\) such that \(x<_{P}z<_{P}y\). A poset is called _ranked_ if all the maximal chains have the same length. A ranked poset \(P\) comes equipped with a rank function \(\operatorname{rk}\colon P\to\mathbb{Z}\) such that \(\operatorname{rk}(x)=\operatorname{rk}(y)+1\) whenever \(x\) covers \(y\) and \(\operatorname{rk}(x)=0\) whenever \(x\) is a minimal element of \(P\). The _Hasse diagram_ of a poset is the oriented graph whose vertices are the elements of \(P\) and oriented edges are from \(x\) to \(y\) whenever \(y\) covers \(x\).
A _morphism_ between posets \(P\) and \(P^{\prime}\) is an order-preserving function (or, equivalently, a continuous function) \(f\colon P\to P^{\prime}\). Moreover, we say that \(f\)_preserves the cover relations_ if \(f(x)\) covers \(f(y)\) whenever
\(x\) covers \(y\), for \(x,y\in P\). If \(P\) and \(P^{\prime}\) are ranked, then we say that \(f\) is a _morphism of ranked posets_ if \(\operatorname{rk}(f(x))=\operatorname{rk}(x)\) for every \(x\in P\). An _isomorphism_ of posets is a morphism of posets admitting an inverse morphism. As usual, a morphism of posets is closed if it takes closed subsets to closed subsets.
**Remark 2.1**.: Notice that \(f\colon P\to P^{\prime}\) is a closed morphism of posets, if and only if, for any \(x\in P\) and \(y^{\prime}\in P^{\prime}\) such that \(y^{\prime}\leq_{P^{\prime}}f(x)\) there exists \(y\in P\) such that \(y\leq_{P}x\) and \(f(y)=y^{\prime}\).
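Since cover relations and the rank function are used repeatedly in what follows, here is a small self-contained Python sketch (illustrative only, not part of the original text) that stores a finite poset by its cover relations and computes the rank of every element, assuming the poset is ranked in the sense above.

```python
from collections import defaultdict

def ranks_from_covers(elements, covers):
    """Rank function of a ranked poset given by its cover relations.

    `covers` is a list of pairs (x, y) meaning "x covers y".  Minimal
    elements get rank 0 and rk(x) = rk(y) + 1 whenever x covers y; for a
    ranked poset this assignment is consistent.
    """
    below = defaultdict(list)            # below[x] = elements covered by x
    for x, y in covers:
        below[x].append(y)
    rank = {}

    def rk(x):
        if x not in rank:
            rank[x] = 0 if not below[x] else 1 + rk(below[x][0])
        return rank[x]

    return {x: rk(x) for x in elements}

# A square-shaped Hasse diagram: two maximal elements, each covering
# the same two minimal elements.
covers = [("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")]
print(ranks_from_covers(["a", "b", "c", "d"], covers))
# -> {'a': 1, 'b': 1, 'c': 0, 'd': 0}
```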
### Graphs
Let \(\Gamma\) be a graph. We denote by \(V(\Gamma)\) and \(E(\Gamma)\) the sets of vertices and edges of \(\Gamma\), and \(w_{\Gamma}\colon V(\Gamma)\to\mathbb{Z}_{\geq 0}\) the weight function of \(\Gamma\). A graph is _pure_ if \(w_{\Gamma}(v)=0\) for every \(v\in V(\Gamma)\). Given a subset \(V\subset V(\Gamma)\), we set \(V^{c}:=V(\Gamma)\setminus V\). For subsets \(V,W\subset V(\Gamma)\), we define \(E(V,W)\) as the set of edges of \(\Gamma\) connecting a vertex in \(V\) with a vertex in \(W\). In particular, \(E(V,V)\) is the set of edges connecting two (possibly coinciding) vertices of \(V\). We set \(\delta_{V}=|E(V,V^{c})|\). We also denote by \(\Gamma(V)\) the subgraph of \(\Gamma\) whose set of vertices is \(V\) and whose set of edges is \(E(V,V)\). The edges \(e_{1},e_{2}\in E(\Gamma)\) are _parallel_ if there are two vertices incident to both \(e_{1}\) and \(e_{2}\). An _end-vertex_ of an edge \(e\) is a vertex which is incident to \(e\).
For a vertex \(v\in V(\Gamma)\), we let \(E(v)\) be the set of edges of \(\Gamma\) that are incident to \(v\). Moreover, we let \(\Gamma\setminus\{v\}\) be the subgraph of \(\Gamma\) with set of vertices equal to \(V(\Gamma)\setminus\{v\}\) and set of edges equal to \(E(\Gamma)\setminus E(v)\). For a subset \(\mathcal{E}\subset E(\Gamma)\) and a vertex \(v\in V(\Gamma)\), we define \(\operatorname{val}_{\mathcal{E}}(v)\) to be the number of edges of \(\mathcal{E}\) incident to \(v\), with loops counted twice. We set \(\operatorname{val}(v):=\operatorname{val}_{E(\Gamma)}(v)\) which is called the _valence_ of \(v\) in \(\Gamma\).
A _cut_ of \(\Gamma\) is a subset \(\mathcal{E}\subset E(\Gamma)\) such that \(\mathcal{E}=E(V,V^{c})\), for some subset \(V\subset V(\Gamma)\). A _bond_ of \(\Gamma\) is a minimal cut of \(\Gamma\). A _hemisphere_ of \(\Gamma\) is a subset \(V\subset V(\Gamma)\) such that \(\Gamma(V)\) and \(\Gamma(V^{c})\) are connected subgraphs of \(\Gamma\). Equivalently, \(V\) is a hemisphere if and only if \(E(V,V^{c})\) is a bond. The _genus_ of \(\Gamma\) is defined as \(g_{\Gamma}:=b_{1}(\Gamma)+\sum_{v\in V(\Gamma)}w_{\Gamma}(v)\), where \(b_{1}(\Gamma)\) is the first Betti number of \(\Gamma\). For every subset \(V\subset V(\Gamma)\), we let \(g_{V}\) be the genus of the graph \(\Gamma(V)\). In particular, we have \(g_{V(\Gamma)}=g_{\Gamma}\).
A _cycle_ of the graph \(\Gamma\) is a subset \(\gamma\subset E(\Gamma)\) such that there is a connected subgraph of \(\Gamma\) whose edges are the elements of \(\gamma\) and whose vertices (called the _vertices of the cycle_) have all valence \(2\). The graph \(\Gamma\) is a _tree_ if it is connected and has no cycles. Equivalently, \(\Gamma\) is a tree if and only if \(b_{1}(\Gamma)=0\). A _spanning tree_ of \(\Gamma\) is a connected subgraph of \(\Gamma\) which is a tree and whose set of vertices is equal to \(V(\Gamma)\). We usually see a spanning tree as a subset \(T\subset E(\Gamma)\). We will call the complement of a spanning tree (in \(E(\Gamma)\)) a _maximally nondisconnecting_ subset of \(\Gamma\).
A _cyclic equivalence_ between two graphs \(\Gamma\) and \(\Gamma^{\prime}\) is a bijection \(E(\Gamma)\to E(\Gamma^{\prime})\) that induces a bijection between the cycles of \(\Gamma\) and the cycles of \(\Gamma^{\prime}\).
**Remark 2.2**.: Given a bijection \(f\colon E(\Gamma)\to E(\Gamma^{\prime})\), the following conditions are equivalent.
1. The bijection \(f\) is a cyclic equivalence.
2. The bijection \(f^{-1}\) is a cyclic equivalence.
3. The bijection \(f\) induces a bijection between the set of spanning trees of \(\Gamma\) and \(\Gamma^{\prime}\).
4. The bijection \(f\) induces a bijection between the set of bonds of \(\Gamma\) and \(\Gamma^{\prime}\).
5. The bijection \(f\) induces a bijection between the set of cuts of \(\Gamma\) and \(\Gamma^{\prime}\).
An edge \(e\) of \(\Gamma\) is called a _bridge_ if \(\Gamma\) becomes disconnected after the removal of \(e\). We let \(\operatorname{Br}(\Gamma)\) be the set of bridges of \(\Gamma\). We denote the set of nondisconnecting edges of \(\Gamma\) by
\[\operatorname{ND}(\Gamma):=E(\Gamma)\setminus\operatorname{Br}(\Gamma). \tag{2}\]
A _weakly cyclic equivalence_ between two graphs \(\Gamma\) and \(\Gamma^{\prime}\) is a bijection \(f\colon\operatorname{ND}(\Gamma)\to\operatorname{ND}(\Gamma^{\prime})\) that induces a bijection between the cycles of \(\Gamma\) and the cycles of \(\Gamma^{\prime}\) (recall that every cycle of \(\Gamma\) is contained in \(\operatorname{ND}(\Gamma)\)). Equivalently, a weakly cyclic equivalence is a cyclic equivalence between \(\Gamma/\operatorname{Br}(\Gamma)\) and \(\Gamma^{\prime}/\operatorname{Br}(\Gamma^{\prime})\)
**Remark 2.3**.: Given a bijection \(f\colon\operatorname{ND}(\Gamma)\to\operatorname{ND}(\Gamma^{\prime})\), the following conditions are equivalent.
1. The bijection \(f\) is a weakly cyclic equivalence.
2. The bijection \(f^{-1}\) is a weakly cyclic equivalence.
3. The bijection \(f\) induces a bijection between the sets of maximally nondisconnecting subsets of \(\Gamma\) and \(\Gamma^{\prime}\).
A _subdivision_ of the graph \(\Gamma\) is a graph obtained from \(\Gamma\) by inserting a number \(n_{e}\geq 0\) of vertices in the interior of every edge \(e\in E(\Gamma)\). We say that \(\Gamma\) is _biconnected_ if, for every subdivision \(\widehat{\Gamma}\) of \(\Gamma\), the removal of any vertex of \(\widehat{\Gamma}\) does not disconnect the graph \(\widehat{\Gamma}\). In particular, a graph with exactly one edge is biconnected if and only if it is a loop. Otherwise, a graph with at least two edges is biconnected if and only if any two vertices of the graph are vertices of a cycle of the graph. Of course, if \(\Gamma\) has a bridge, then \(\Gamma\) is not biconnected. A _biconnected component_ of \(\Gamma\) is a maximal biconnected subgraph of \(\Gamma\). Every graph admits a unique decomposition into biconnected components. An _articulation vertex_ of \(\Gamma\) is a vertex \(v\) of \(\Gamma\) such that the removal of \(v\) disconnects the graph.
Consider a subset \(\mathcal{E}\) of \(E(\Gamma)\). We denote by \(\Gamma_{\mathcal{E}}\) the graph obtained from \(\Gamma\) by removing the edges in \(\mathcal{E}\), with \(E(\Gamma_{\mathcal{E}})=E(\Gamma)\setminus\mathcal{E}\) and \(V(\Gamma_{\mathcal{E}})=V(\Gamma)\). We also denote by \(\Gamma^{\mathcal{E}}\) the subdivision of \(\Gamma\) obtained from \(\Gamma\) by inserting exactly one vertex, called _exceptional_ and denoted by \(v_{e}\), in the interior of every edge \(e\in\mathcal{E}\). We have \(V(\Gamma^{\mathcal{E}})=V(\Gamma)\cup\{v_{e};e\in\mathcal{E}\}\). Finally, we let \(\Gamma/\mathcal{E}\) be the graph obtained by the contraction of the edges in \(\mathcal{E}\). In this case, we say that \(\Gamma\)_specializes_ to \(\Gamma/\mathcal{E}\), and we write \(\iota\colon\Gamma\to\Gamma/\mathcal{E}\). Notice that we have an induced surjective function \(\iota\colon V(\Gamma)\to V(\Gamma/\mathcal{E})\) and an inclusion \(E(\Gamma/\mathcal{E})=E(\Gamma)\setminus\mathcal{E}\stackrel{{ \iota}}{{\to}}E(\Gamma)\). The case in which \(\mathcal{E}=\operatorname{Br}(\Gamma)\) will play an important role later on. It is clear that \(\Gamma/\operatorname{Br}(\Gamma)\) is a graph without bridges.
### Divisors on graphs
Let \(\Gamma\) be a graph. A _divisor_\(D\) on \(\Gamma\) is a formal sum \(D=\sum_{v\in V(\Gamma)}D(v)v\), where \(D(v)\in\mathbb{Z}\). We denote by \(\operatorname{Div}(\Gamma)\) the abelian group of divisors of \(\Gamma\). For every subset \(V\subset V(\Gamma)\), we set \(D(V)=\sum_{v\in V}D(v)\). The _degree_ of a divisor \(D\) is the integer \(D(V(\Gamma))\). A _pseudo-divisor_ on \(\Gamma\) is a pair \((\mathcal{E},D)\), where \(\mathcal{E}\) is a subset of \(E(\Gamma)\) and \(D\) is a divisor on \(\Gamma^{\mathcal{E}}\) such that \(D(v_{e})=1\), for every \(e\in\mathcal{E}\). The _degree_ of a pseudo-divisor \((\mathcal{E},D)\) is the degree of the divisor \(D\). Given a pseudo-divisor \((\mathcal{E},D)\) on \(\Gamma\), we set
\[\epsilon_{\Gamma}(\mathcal{E},D)=\mathcal{E}\quad\text{and}\quad\delta_{ \Gamma}(\mathcal{E},D)=D. \tag{3}\]
If \(\widehat{\Gamma}\) is a subdivision of a graph \(\Gamma\), we can extend a divisor \(D\) on \(\Gamma\) to a divisor on \(\widehat{\Gamma}\), setting \(D(v)=0\) for every \(v\in V(\widehat{\Gamma})\setminus V(\Gamma)\). Thus for every pseudo-divisor \((\mathcal{E},D)\) on \(\Gamma\), we could see \(D\) as a divisor on the subdivision \(\Gamma^{E(\Gamma)}\) of \(\Gamma\). In particular, given pseudo-divisors \((\mathcal{E}_{1},D_{1})\) and \((\mathcal{E}_{2},D_{2})\), the sum \(D_{1}+D_{2}\) will make sense as a sum of divisors on \(\Gamma^{E(\Gamma)}\).
Let \(\iota\colon\Gamma\to\Gamma^{\prime}\) be a specialization of graphs. Given a divisor \(D\) on \(\Gamma\), we have an induced divisor \(\iota_{*}(D)\) on \(\Gamma^{\prime}\) such that \(\iota_{*}(D)(v^{\prime})=\sum_{v\in\iota^{-1}(v^{\prime})}D(v)\), for every \(v^{\prime}\in V(\Gamma^{\prime})\). Notice that, if \(\mathcal{E}\) is a subset of \(E(\Gamma)\), then we have an induced specialization \(\iota^{\mathcal{E}}\colon\Gamma^{\mathcal{E}}\to\Gamma^{\prime\mathcal{E}^{\prime}}\), where \(\mathcal{E}^{\prime}=\mathcal{E}\cap E(\Gamma^{\prime})\). Therefore, if \((\mathcal{E},D)\) is a pseudo-divisor on \(\Gamma\), we have an induced pseudo-divisor \(\iota_{*}(\mathcal{E},D):=(\mathcal{E}^{\prime},\iota_{*}^{\mathcal{E}}(D))\) on \(\Gamma^{\prime}\). Given pseudo-divisors \((\mathcal{E},D)\) on \(\Gamma\) and \((\mathcal{E}^{\prime},D^{\prime})\) on \(\Gamma^{\prime}\), we say that \((\Gamma,\mathcal{E},D)\)_specializes_ to \((\Gamma^{\prime},\mathcal{E}^{\prime},D^{\prime})\) if the following conditions hold
1. there is a specialization \(\iota\colon\Gamma\to\Gamma^{\prime}\) such that \(\mathcal{E}^{\prime}\subset\mathcal{E}\cap E(\Gamma^{\prime})\);
2. there is a specialization \(\iota^{\mathcal{E}}\colon\Gamma^{\mathcal{E}}\to\Gamma^{\prime\mathcal{E}^{ \prime}}\) such that \(\iota_{*}^{\mathcal{E}}(D)=D^{\prime}\);
3. the following diagrams are commutative
If \((\Gamma,\mathcal{E},D)\) specializes to \((\Gamma^{\prime},\mathcal{E}^{\prime},D^{\prime})\), we write \((\Gamma,\mathcal{E},D)\to(\Gamma^{\prime},\mathcal{E}^{\prime},D^{\prime})\). If \(\Gamma=\Gamma^{\prime}\) and \(\iota\) is the identity, we simply write \((\mathcal{E},D)\to(\mathcal{E}^{\prime},D^{\prime})\).
An _elementary specialization_ is a specialization of type \((\mathcal{E},D)\to(\mathcal{E}^{\prime},D^{\prime})\), where \(|\mathcal{E}^{\prime}|=|\mathcal{E}|-1\). In this case, we have \(\mathcal{E}^{\prime}=\mathcal{E}\setminus\{e\}\) for some edge \(e\in E(\Gamma)\), and we say that the elementary specialization is _over_\(e\). Notice that every specialization is a composition of elementary specializations.
**Remark 2.4**.: Let \((\mathcal{E},D)\) be a pseudo-divisor on \(\Gamma\) and consider \(e\in\mathcal{E}\). If \(e\) is not a loop and has end-vertices \(s\) and \(t\), then \((\mathcal{E},D)\to(\mathcal{E}\setminus\{e\},D-v_{e}+s)\) and \((\mathcal{E},D)\to(\mathcal{E}\setminus\{e\},D-v_{e}+t)\) are all the elementary specializations over \(e\) having \((\mathcal{E},D)\) as source. If \(e\) is a loop of \(\Gamma\) with end-vertex \(s\), then \((\mathcal{E},D)\to(\mathcal{E}\setminus\{e\},D-v_{e}+s)\) is the unique elementary specialization over \(e\) having \((\mathcal{E},D)\) as source. Notice that if \((\mathcal{E},D_{1})\) and \((\mathcal{E},D_{2})\) both specialize to the same pseudo-divisors \((\mathcal{E}\setminus\{e\},D_{1}^{\prime})\) and \((\mathcal{E}\setminus\{e\},D_{2}^{\prime})\), with \(D_{1}^{\prime}\neq D_{2}^{\prime}\), then \(D_{1}=D_{2}\).
A _polarization_ on the graph \(\Gamma\) is a function \(\mu\colon V(\Gamma)\to\mathbb{R}\) such that \(\sum_{v\in V(\Gamma)}\mu(v)\in\mathbb{Z}\). For every subset \(V\subset V(\Gamma)\), we set \(\mu(V)=\sum_{v\in V}\mu(v)\). The _degree_ of a polarization \(\mu\) is the integer \(\mu(V(\Gamma))\). Given a specialization of graphs \(\iota\colon\Gamma\to\Gamma^{\prime}\) and a polarization \(\mu\) on \(\Gamma\) of degree \(d\), we have an induced polarization \(\iota_{*}(\mu)\) on \(\Gamma^{\prime}\) of degree \(d\) given by \(\iota_{*}(\mu)(v^{\prime})=\sum_{v\in\iota^{-1}(v^{\prime})}\mu(v)\). Given a subset \(\mathcal{E}\subset E(\Gamma)\) and a degree \(d\) polarization \(\mu\) on \(\Gamma\), we have an induced polarization \(\mu^{\mathcal{E}}\) on \(\Gamma^{\mathcal{E}}\) of degree \(d\) given by \(\mu^{\mathcal{E}}(v)=\mu(v)\) if \(v\in V(\Gamma)\), and \(\mu^{\mathcal{E}}(v)=0\) if \(v\in V(\Gamma^{\mathcal{E}})\setminus V(\Gamma)\). We also have an induced polarization \(\mu_{\mathcal{E}}\) of degree \(d-|\mathcal{E}|\) on \(\Gamma_{\mathcal{E}}\) taking \(v\in V(\Gamma_{\mathcal{E}})\) to \(\mu_{\mathcal{E}}(v)=\mu(v)-\frac{1}{2}\operatorname{val}_{\mathcal{E}}(v)\).
Let \(v_{0}\) be a vertex on the graph \(\Gamma\) and \(\mu\) a polarization on \(\Gamma\) of degree \(d\). Let \(D\) be a divisor on \(\Gamma\) of degree \(d\). For every subset \(V\subset V(\Gamma)\), we set
\[\beta_{\Gamma,D}(V):=D(V)-\mu(V)+\frac{\delta_{V}}{2}. \tag{4}\]
We say that \(D\) is \((v_{0},\mu)\)_-quasistable_ if \(\beta_{\Gamma,D}(V)\geq 0\) for every \(V\subset V(\Gamma)\), with strict inequality if \(v_{0}\not\in V\).
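The quasistability condition is finite and can be checked mechanically. The sketch below (illustrative only, in Python) encodes a graph by its vertex and edge lists (parallel edges and loops allowed) and tests \(\beta_{\Gamma,D}(V)\geq 0\) over all nonempty proper subsets \(V\), which is the standard reading of the definition; restricting to hemispheres, as in Remark 2.5 below, would be an optimization. To test a pseudo-divisor \((\mathcal{E},D)\) one applies the same function to the subdivided graph \(\Gamma^{\mathcal{E}}\).

```python
from itertools import combinations

def delta(V_subset, edges):
    """Number of edges with exactly one end-vertex in V (loops never count)."""
    V = set(V_subset)
    return sum((a in V) != (b in V) for a, b in edges)

def is_quasistable(vertices, edges, D, mu, v0):
    """Check (v0, mu)-quasistability of the divisor D on the graph.

    vertices -- list of vertex names
    edges    -- list of pairs (a, b); parallel edges and loops allowed
    D, mu    -- dicts mapping each vertex to its value
    The inequality beta(V) = D(V) - mu(V) + delta(V)/2 >= 0 is required for
    every nonempty proper subset V, strictly when v0 is not in V.
    """
    n = len(vertices)
    for r in range(1, n):
        for V in combinations(vertices, r):
            beta = (sum(D[v] for v in V) - sum(mu[v] for v in V)
                    + delta(V, edges) / 2)
            if beta < 0 or (v0 not in V and beta == 0):
                return False
    return True

# Two vertices joined by two parallel edges: the canonical polarization of
# degree g - 1 = 0 takes the value 0 on both vertices, and D = u - v0 is
# v0-quasistable while D = -u + v0 is not.
verts, eds = ["u", "v0"], [("u", "v0"), ("u", "v0")]
print(is_quasistable(verts, eds, {"u": 1, "v0": -1}, {"u": 0, "v0": 0}, "v0"))  # True
print(is_quasistable(verts, eds, {"u": -1, "v0": 1}, {"u": 0, "v0": 0}, "v0"))  # False
```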
**Remark 2.5**.: To check that a divisor is \((v_{0},\mu)\)-quasistable, it suffices to check the condition of \((v_{0},\mu)\)-quasistability for all hemispheres of \(\Gamma\).
**Remark 2.6**.: The definition of pseudo-divisor in this paper is different from the one given in [1], where a pseudo-divisor has degree \(-1\) on every exceptional vertex. As a consequence, we have to change the definition of the induced polarization \(\mu_{\mathcal{E}}\) and the notion of quasistability (which usually requires that the inequality is strict if \(v_{0}\in V\)). All the results of the paper could be proved in both setups. The reason why we prefer the new setup is Lemma 5.14.
Given a pseudo-divisor \((\mathcal{E},D)\) of degree \(d\) on the graph \(\Gamma\), we say that \((\mathcal{E},D)\) is \((v_{0},\mu)\)_-quasistable_ if the divisor \(D\) on \(\Gamma^{\mathcal{E}}\) is \((v_{0},\mu^{\mathcal{E}})\)-quasistable.
The _canonical polarization of degree \(g-1\)_ on the graph \(\Gamma\) is the polarization \(\mu_{\text{can}}\) of degree \(g-1\) such that
\[\mu_{\text{can}}(V)=g_{V}-1+\frac{\delta_{V}}{2}, \tag{5}\]
for every hemisphere \(V\subset V(\Gamma)\). In this case, if \((\mathcal{E},D)\) is a pseudo-divisor on \(\Gamma\), then for every hemisphere \(V\subset V(\Gamma^{\mathcal{E}})\) we have
\[\beta_{\Gamma^{\mathcal{E}},D}(V)=D(V)-\mu_{\text{can}}^{\mathcal{E}}(V)+ \frac{\delta_{V}}{2}=D(V)-g_{V}+1, \tag{6}\]
(recall that \(D\) is a divisor on \(\Gamma^{\mathcal{E}}\)). Given a \((v_{0},\mu_{\text{can}})\)-quasistable pseudo-divisor \((\mathcal{E},D)\) on \(\Gamma\), we simply say that \((\mathcal{E},D)\) is \(v_{0}\)_-quasistable_.
**Remark 2.7**.: If \(\mathcal{E}\subset E(\Gamma)\) is a nondisconnecting subset of \(E(\Gamma)\), then \((\mu_{\text{can}})_{\mathcal{E}}\) is the canonical polarization of \(\Gamma_{\mathcal{E}}\).
## 3. The poset of quasistable divisors
Let \(\Gamma\) be a graph. Given a vertex \(v_{0}\) and a polarization \(\mu\) on \(\Gamma\), the set \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\) of \((v_{0},\mu)\)-quasistable pseudo-divisors on \(\Gamma\) forms a poset, where \((\mathcal{E},D)\geq(\mathcal{E}^{\prime},D^{\prime})\) if there is a specialization \((\mathcal{E},D)\to(\mathcal{E}^{\prime},D^{\prime})\). Given a subset \(\mathcal{E}\subset E(\Gamma)\), we let
\[\mathbf{QD}_{v_{0},\mu}(\Gamma,\mathcal{E})=\{D\in\operatorname{Div}(\Gamma^{ \mathcal{E}});(\mathcal{E},D)\in\mathbf{QD}_{v_{0},\mu}(\Gamma)\}. \tag{7}\]
The poset \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\) is ranked, with rank function taking a pseudo-divisor \((\mathcal{E},D)\) to \(|\mathcal{E}|\). We call \(|\mathcal{E}|\) the _rank_ of the pseudo-divisor \((\mathcal{E},D)\).
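For instance (a small illustration, not part of the original exposition), let \(\Gamma\) be the pure graph with two vertices \(u,v_{0}\) joined by two parallel edges \(e_{1},e_{2}\), so that \(g=1\) and \(\mu_{\text{can}}(u)=\mu_{\text{can}}(v_{0})=0\). A direct check with Equation (6) shows that the \(v_{0}\)-quasistable pseudo-divisors of degree \(g-1=0\) are
\[(\emptyset,0),\quad(\emptyset,u-v_{0}),\quad(\{e_{1}\},v_{e_{1}}-v_{0}),\quad(\{e_{2}\},v_{e_{2}}-v_{0}),\]
and that no quasistable pseudo-divisor with \(\mathcal{E}=\{e_{1},e_{2}\}\) exists. Hence \(\mathbf{QD}_{v_{0}}(\Gamma)\) has two elements of rank \(1\) and two of rank \(0\), and each rank-\(1\) element specializes to both rank-\(0\) elements. The two maximal elements are indexed by the maximally nondisconnecting subsets \(\{e_{1}\}\) and \(\{e_{2}\}\), equivalently by the two spanning trees of \(\Gamma\) (compare Remark 3.2 below).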
**Remark 3.1**.: Let \(\Gamma\) be a graph, \(v_{0}\) a vertex of \(\Gamma\), and \(\mu\) a polarization on \(\Gamma\). If \(e\) is a bridge of \(\Gamma\) and \(\iota\colon\Gamma\to\Gamma/\{e\}\) is the contraction of \(e\), then \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\) is naturally isomorphic to \(\mathbf{QD}_{\iota(v_{0}),\iota_{*}(\mu)}(\Gamma/\{e\})\). Therefore, if we consider the specialization \(\iota\colon\Gamma\to\Gamma/\operatorname{Br}(\Gamma)\), then we have a natural isomorphism
\[\mathbf{QD}_{v_{0},\mu}(\Gamma)\cong\mathbf{QD}_{\iota(v_{0}),\iota_{*}(\mu)}( \Gamma/\operatorname{Br}(\Gamma)).\]
**Remark 3.2**.: Let \(\Gamma\) be a graph, \(\mu\) a polarization on \(\Gamma\) and \(\mathcal{E}\subset E(\Gamma)\) a subset. The following properties are consequences of [1, Proposition 4.6].
1. If \((\mathcal{E},D)\in\mathbf{QD}_{v_{0},\mu}(\Gamma)\) then \(\mathcal{E}\subset\operatorname{ND}(\Gamma)\) (recall Equation (2)).
2. If \((\mathcal{E},D)\) is a \((v_{0},\mu)\)-quasistable divisor on \(\Gamma\) and \(\iota\colon\Gamma\to\Gamma^{\prime}\) is a specialization, then \(\iota_{*}(\mathcal{E},D)\) is a \((\iota(v_{0}),\iota_{*}(\mu))\)-quasistable pseudo-divisor on \(\Gamma^{\prime}\).
If \(\mathcal{E}\subset E(\Gamma)\) is nondisconnecting, then:
1. We have a natural inclusion \(\mathbf{QD}_{v_{0},\mu_{\mathcal{E}}}(\Gamma_{\mathcal{E}})\subset\mathbf{QD }_{v_{0},\mu}(\Gamma)\), taking a pseudo-divisor \((\mathcal{E}^{\prime},D^{\prime})\) to the pseudo-divisor \((\mathcal{E}\cup\mathcal{E}^{\prime},D^{\prime}+\sum_{e\in\mathcal{E}}v_{e})\). Moreover, for every \(S\subset E(\Gamma)\setminus\mathcal{E}\), we can identify \(\mathbf{QD}_{v_{0},\mu_{\mathcal{E}}}(\Gamma_{\mathcal{E}},S)\) with \(\mathbf{QD}_{v_{0},\mu}(\Gamma,\mathcal{E}\cup S)\).
2. If \(\mu=\mu_{\operatorname{can}}\), then we have an inclusion \(\mathbf{QD}_{v_{0}}(\Gamma_{\mathcal{E}})\subset\mathbf{QD}_{v_{0}}(\Gamma)\) (combine Remark 2.7 and item (1)).
3. If \(\mathcal{E}\) is a maximally nondisconnecting subset of \(\Gamma\), then \(\mathbf{QD}_{v_{0},\mu_{\mathcal{E}}}(\Gamma_{\mathcal{E}})\) is a singleton.
4. The maximal elements of \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\) are of the form \((\mathcal{E},D)\) where \(\mathcal{E}\) is a maximally nondisconnecting subset of \(\Gamma\).
5. For each maximally nondisconnecting subset \(\mathcal{E}\) of \(\Gamma\), there exists exactly one \(D\in\mathbf{QD}_{v_{0},\mu}(\Gamma,\mathcal{E})\). In particular, the number of maximal elements of \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\) is equal to the number of spanning trees of \(\Gamma\).
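Item (5) can be checked mechanically on small examples. The following Python sketch is a standard application of Kirchhoff's matrix-tree theorem, added here for illustration and not taken from the paper; it counts the spanning trees of a connected multigraph.

```python
def count_spanning_trees(vertices, edges):
    """Kirchhoff's theorem: number of spanning trees of a connected multigraph.

    Loops are ignored (they never belong to a spanning tree); parallel edges
    are counted with multiplicity.  The count is the determinant of any
    cofactor of the graph Laplacian, computed here by Gaussian elimination.
    """
    idx = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    L = [[0.0] * n for _ in range(n)]
    for a, b in edges:
        if a == b:
            continue
        i, j = idx[a], idx[b]
        L[i][i] += 1
        L[j][j] += 1
        L[i][j] -= 1
        L[j][i] -= 1
    # Determinant of the Laplacian with the last row and column removed.
    M = [row[:-1] for row in L[:-1]]
    det = 1.0
    for col in range(n - 1):
        pivot = max(range(col, n - 1), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, n - 1):
            f = M[r][col] / M[col][col]
            for c in range(col, n - 1):
                M[r][c] -= f * M[col][c]
    return round(det)

print(count_spanning_trees(["u", "v0"], [("u", "v0"), ("u", "v0")]))
# -> 2, matching the two maximal elements in the two-vertex example above
```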
Let \(\Gamma\) be a graph, \(v_{0}\) a vertex of \(\Gamma\), and \(\mu\) a polarization on \(\Gamma\). Two pseudo-divisors \((\mathcal{E},D)\) and \((\mathcal{E},D^{\prime})\) in \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\) are _upper-connected_ in \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\) if there are edges \(e_{i}\in E(\Gamma)\setminus\mathcal{E}\) for \(i=1,\ldots,n\), divisors \(D_{i}\) on \(\Gamma^{\mathcal{E}\cup\{e_{i}\}}\) for \(i=1,\ldots,n\), and divisors \(D^{\prime}_{i}\) on \(\Gamma^{\mathcal{E}}\) for \(i=0,\ldots,n\) such that the following conditions hold
1. we have that \(D_{i}\in\mathbf{QD}_{v_{0},\mu}(\Gamma,\mathcal{E}\cup\{e_{i}\})\) for \(i=1,\ldots,n\) and \(D^{\prime}_{i}\in\mathbf{QD}_{v_{0},\mu}(\Gamma,\mathcal{E})\) for \(i=0,\ldots,n\);
2. we have \((\mathcal{E},D)=(\mathcal{E},D^{\prime}_{0})\) and \((\mathcal{E},D^{\prime})=(\mathcal{E},D^{\prime}_{n})\);
3. we have \((\mathcal{E},D^{\prime}_{i-1})\leq(\mathcal{E}\cup\{e_{i}\},D_{i})\) and \((\mathcal{E},D^{\prime}_{i})\leq(\mathcal{E}\cup\{e_{i}\},D_{i})\) for \(i=1,\ldots,n\).
**Proposition 3.3**.: _Let \(\Gamma\) be a graph, \(v_{0}\) a vertex of \(\Gamma\), and \(\mu\) a polarization on \(\Gamma\). Consider divisors \(D,D^{\prime}\in\mathbf{QD}_{v_{0},\mu}(\Gamma,\mathcal{E})\), for some subset \(\mathcal{E}\subset E(\Gamma)\). Then \((\mathcal{E},D)\) and \((\mathcal{E},D^{\prime})\) are upper-connected in \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\)._
Proof.: As recalled in Remark 3.2, we have an inclusion \(\mathbf{QD}_{v_{0},\mu_{\mathcal{E}}}(\Gamma_{\mathcal{E}})\subset\mathbf{QD }_{v_{0},\mu}(\Gamma)\). Hence we can assume \(\mathcal{E}=\emptyset\). We will proceed by induction on the number of edges of \(\Gamma\). If \(\Gamma\) has only one edge the result is clear. Otherwise, fix an edge \(e\in E(\Gamma)\) and consider the contraction \(\iota\colon\Gamma\to\Gamma/\{e\}\) of \(e\). Recall that the map \(\iota_{*}\colon\mathbf{QD}_{v_{0},\mu}(\Gamma)\to\mathbf{QD}_{\iota(v_{0}), \iota_{*}(\mu)}(\Gamma/\{e\})\) taking \((\mathcal{E},D)\) to \(\iota_{*}(\mathcal{E},D)\) is surjective and closed (see [1, Proposition 4.11]).
First of all, we assume that \(\iota_{*}(\emptyset,D)=\iota_{*}(\emptyset,D^{\prime})\). This means that \(D(v)=D^{\prime}(v)\) for every vertex \(v\in V(\Gamma)\) not incident to \(e\). If \(e\) is a loop, then \(D=D^{\prime}\), and we have nothing to prove. Otherwise, let \(s\) and \(t\) be the end-vertices of \(e\) and assume that \(D(t)\geq D^{\prime}(t)\). Set \(n:=D(t)-D^{\prime}(t)=D^{\prime}(s)-D(s)\) and define the divisors \(D_{i}\) on \(\Gamma^{\{e\}}\) for \(i=1,\ldots,n\) and \(D^{\prime}_{i}\) on \(\Gamma\) for \(i=0,\ldots,n\) taking a vertex \(v\) to
\[D_{i}(v)=\begin{cases}D(v)&\text{ if }v\not\in\{s,t\}\\ 1&\text{ if }v=v_{e}\\ D(v)-i=D^{\prime}(v)+n-i&\text{ if }v=t\\ D(v)+i-1&\text{ if }v=s\end{cases}\qquad D^{\prime}_{i}(v)=\begin{cases}D(v)&\text{ if }v\not\in\{s,t\}\\ D(v)-i=D^{\prime}(v)+n-i&\text{ if }v=t\\ D(v)+i&\text{ if }v=s.\end{cases}\]
Let \(e_{1}:=e_{2}:=\cdots:=e_{n}:=e\). Note that \((\emptyset,D^{\prime}_{i})\) and \((\{e\},D_{i})\) are \((v_{0},\mu)\)-quasistable because both \(D_{i}(V)\) and \(D^{\prime}_{i}(V)\) are greater than or equal to either \(D(V)\) or \(D^{\prime}(V)\), for every \(V\subset V(\Gamma)\subset V(\Gamma^{\{e\}})\). We see that
\((\emptyset,D)\) and \((\emptyset,D^{\prime})\) are upper-connected in \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\) by means of the edges \(e_{1},\ldots,e_{n}\) and the divisors \(D_{1},\ldots,D_{n},D^{\prime}_{0},\ldots,D^{\prime}_{n}\).
Now we consider the general case. By the induction hypothesis, \(\iota_{*}(\emptyset,D)\) and \(\iota_{*}(\emptyset,D^{\prime})\) are upper-connected in \(\mathbf{QD}_{\iota(v_{0}),\iota_{*}(\mu)}(\Gamma/\{e\})\) by means of edges \(e_{1},\ldots,e_{n}\) of \(\Gamma/\{e\}\), and divisors \(D_{e,i}\in\mathbf{QD}_{\iota(v_{0}),\iota_{*}(\mu)}(\Gamma/\{e\},\{e_{i}\})\) for \(i=1,\ldots,n\) and \(D^{\prime}_{e,i}\in\mathbf{QD}_{\iota(v_{0}),\iota_{*}(\mu)}(\Gamma/\{e\})\) for \(i=0,\ldots,n\). Since \(\iota_{*}\) is surjective, there are divisors \(D_{i}\in\mathbf{QD}_{v_{0},\mu}(\Gamma,\{e_{i}\})\) for \(i=1,\ldots,n\), such that \(\iota_{*}(\{e_{i}\},D_{i})=(\{e_{i}\},D_{e,i})\). By Remark 2.1 and the fact that \(\iota_{*}\) is closed, we have that there are \((v_{0},\mu)\)-quasistable divisors \(D^{\prime}_{i}\) and \(D^{\prime\prime}_{i}\) on \(\Gamma\) such that
\[(\emptyset,D^{\prime}_{i})\leq(\{e_{i}\},D_{i}), (\emptyset,D^{\prime\prime}_{i})\leq(\{e_{i}\},D_{i}),\] \[\iota_{*}(\emptyset,D^{\prime}_{i})=(\emptyset,D^{\prime}_{e,i}), \iota_{*}(\emptyset,D^{\prime\prime}_{i})=(\emptyset,D^{\prime}_{e,i-1}).\]
This means that \(\iota_{*}(\emptyset,D)=\iota_{*}(\emptyset,D^{\prime\prime}_{i})\), \(\iota_{*}(\emptyset,D^{\prime}_{i})=\iota_{*}(\emptyset,D^{\prime\prime}_{i+1})\) and \(\iota_{*}(\emptyset,D^{\prime})=\iota_{*}(\emptyset,D^{\prime}_{n})\). By the previous case, we have that the pairs \(((\emptyset,D),(\emptyset,D^{\prime\prime}_{1}))\), \(((\emptyset,D^{\prime\prime}_{i}),(\emptyset,D^{\prime\prime}_{i+1}))\) and \(((\emptyset,D^{\prime}),(\emptyset,D^{\prime}_{n}))\) are pairs of upper-connected pseudo-divisors in \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\). Since \((\emptyset,D^{\prime}_{i})\leq(\{e_{i}\},D_{i})\) and \((\emptyset,D^{\prime\prime}_{i})\leq(\{e_{i}\},D_{i})\), it follows that \((\emptyset,D^{\prime\prime}_{i})\) and \((\emptyset,D^{\prime}_{i})\) are upper-connected in \(\mathbf{QD}_{v_{0},\mu}(\Gamma)\), concluding the proof.
Recall that \(\mu_{\text{can}}\) denotes the canonical polarization of degree \(g-1\) (see Equation (5)). We will simply write \(\mathbf{QD}_{v_{0}}(\Gamma)\) and \(\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E})\) instead of \(\mathbf{QD}_{v_{0},\mu_{\text{can}}}(\Gamma)\) and \(\mathbf{QD}_{v_{0},\mu_{\text{can}}}(\Gamma,\mathcal{E})\).
**Proposition 3.4**.: _Let \(\Gamma\) be a graph, and \(v_{0},v_{1}\) be vertices of \(\Gamma\). Then we have a canonical isomorphism of posets \(\mathbf{QD}_{v_{0}}(\Gamma)\cong\mathbf{QD}_{v_{1}}(\Gamma)\)._
Proof.: We construct a map \(\mathbf{QD}_{v_{0}}(\Gamma)\to\mathbf{QD}_{v_{1}}(\Gamma)\) that takes a pseudo-divisor \((\mathcal{E},D)\) in \(\mathbf{QD}_{v_{0}}(\Gamma)\) to \((\mathcal{E},D+v_{0}-v_{1})\). This map is well-defined: indeed, fix a \(v_{0}\)-quasistable pseudo-divisor \((\mathcal{E},D)\) and set \(D^{\prime}=D+v_{0}-v_{1}\). For every \(V\subset V(\Gamma^{\mathcal{E}})\) we have that \(\beta_{\Gamma,D}(V)\) is an integer by Equation (6). Moreover,
\[\beta_{\Gamma,D^{\prime}}(V)=\begin{cases}\beta_{\Gamma,D}(V)+1&\text{ if }v_{0}\in V,v_{1}\notin V,\\ \beta_{\Gamma,D}(V)-1&\text{ if }v_{0}\notin V,v_{1}\in V,\\ \beta_{\Gamma,D}(V)&\text{ otherwise }.\end{cases}\]
It follows that \((\mathcal{E},D^{\prime})=(\mathcal{E},D+v_{0}-v_{1})\) is \(v_{1}\)-quasistable.
The fact that the map is a morphism of posets is clear, and it has a natural inverse that takes \((\mathcal{E},D^{\prime})\) to \((\mathcal{E},D^{\prime}-v_{0}+v_{1})\), hence it is an isomorphism.
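For instance, on the two-vertex example above, with \(v_{1}=u\), the isomorphism \(\mathbf{QD}_{v_{0}}(\Gamma)\to\mathbf{QD}_{u}(\Gamma)\) sends
\[(\emptyset,0)\mapsto(\emptyset,v_{0}-u),\qquad(\emptyset,u-v_{0})\mapsto(\emptyset,0),\qquad(\{e_{i}\},v_{e_{i}}-v_{0})\mapsto(\{e_{i}\},v_{e_{i}}-u),\]
and one checks directly that each image is \(u\)-quasistable.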
Notice that Proposition 3.4 allows us to use the notation \(\mathbf{QD}(\Gamma)\) to denote one of the posets \(\mathbf{QD}_{v_{0}}(\Gamma)\), for \(v_{0}\in V(\Gamma)\). Similarly, we will use the notation \(\mathbf{QD}(\Gamma,\mathcal{E})\) to denote one of the posets \(\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E})\). We will keep using the notations \(\mathbf{QD}_{v_{0}}(\Gamma)\) and \(\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E})\) when we will need to consider one specific poset in the computations.
## 4. Special posets
In this section we will study some distinguished subposets of the poset of quasistable divisors \(\mathbf{QD}(\Gamma)\).
**Definition 4.1**.: We let \(\mathbf{P}\) (respectively, \(\mathbf{R}\)) be the ranked poset whose Hasse diagram is drawn in Figure 1 (respectively, in Figure 2). We write \(\mathbf{P}=\{\alpha,\beta,\gamma,\delta\}\) and \(\mathbf{R}=\{\alpha_{1},\beta_{1},\beta_{2},\beta_{3},\beta_{4},\gamma_{1},\gamma_{2},\gamma_{3}\}\). In \(\mathbf{P}\), the elements \(\alpha,\beta\) have rank \(1\), the elements \(\gamma,\delta\) have rank \(0\), and each of \(\alpha,\beta\) covers each of \(\gamma,\delta\). In \(\mathbf{R}\), the element \(\alpha_{1}\) has rank \(2\) and covers \(\beta_{1},\beta_{2},\beta_{3},\beta_{4}\), which have rank \(1\); the elements \(\gamma_{1},\gamma_{2},\gamma_{3}\) have rank \(0\), with \(\gamma_{2}\) covered by all four \(\beta_{i}\), \(\gamma_{1}\) covered by \(\beta_{1},\beta_{2}\), and \(\gamma_{3}\) covered by \(\beta_{3},\beta_{4}\).
**Proposition 4.2**.: _Let \(\Gamma\) be a graph and \(v_{0}\) a vertex of \(\Gamma\). Suppose that \(g\colon\mathbf{P}\to\mathbf{QD}_{v_{0}}(\Gamma)\) is an injective morphism of posets that preserves cover relations. Then there are parallel edges \(e_{1},e_{2}\) of \(\Gamma\) and a subset \(\mathcal{E}\subset E(\Gamma)\setminus\{e_{1},e_{2}\}\) such that, denoting by \(s\) and \(t\) the end-vertices of \(e_{1}\) and \(e_{2}\), one of the following conditions hold_
1. _there is a divisor_ \(D\) _on_ \(\Gamma^{\mathcal{E}}\) _such that_ \[g(\mathbf{P})=\left\{(\mathcal{E}\cup\{e_{1}\},D+v_{e_{1}}),\;(\mathcal{E}\cup\{e_{2}\},D+v_{e_{2}}),\;(\mathcal{E},D+s),\;(\mathcal{E},D+t)\right\}.\]
2. _there is a divisor_ \(D\) _on_ \(\Gamma^{\mathcal{E}}\) _such that_ \[g(\mathbf{P})=\left\{\begin{array}{l}(\mathcal{E}\cup\{e_{1},e_{2}\},D-t+v_{e _{1}}+v_{e_{2}}),\;(\mathcal{E}\cup\{e_{1},e_{2}\},D-s+v_{e_{1}}+v_{e_{2}}),\\ (\mathcal{E}\cup\{e_{1}\},D+v_{e_{1}}),\;(\mathcal{E}\cup\{e_{2}\},D+v_{e_{2}} )\end{array}\right\}.\]
_The two possibilities for the Hasse diagram of \(g(\mathbf{P})\) are drawn in Figure 3 (where we only draw the edges \(e_{1}\) and \(e_{2}\), instead of the whole graph \(\Gamma\))._
Proof.: Recall that we write \(\mathbf{P}=\{\alpha,\beta,\gamma,\delta\}\) (see Figure 1). We set
\[(\mathcal{E}_{1},D_{1}):=g(\alpha),\;\;(\mathcal{E}_{2},D_{2}):=g(\beta),\;\; (\mathcal{E}_{3},D_{3}):=g(\gamma),\;\;(\mathcal{E}_{4},D_{4}):=g(\delta).\]
By definition of specialization, we have \(\mathcal{E}_{3}\cup\mathcal{E}_{4}\subset\mathcal{E}_{1}\cap\mathcal{E}_{2}\), with \(|\mathcal{E}_{1}|=|\mathcal{E}_{2}|=|\mathcal{E}_{3}|+1=|\mathcal{E}_{4}|+1\). Hence we have three cases:
1. either \(\mathcal{E}_{3}=\mathcal{E}_{4}\) and \(\mathcal{E}_{1}\neq\mathcal{E}_{2}\),
2. or \(\mathcal{E}_{3}\neq\mathcal{E}_{4}\) and \(\mathcal{E}_{1}=\mathcal{E}_{2}\),
3. or \(\mathcal{E}_{3}=\mathcal{E}_{4}\) and \(\mathcal{E}_{1}=\mathcal{E}_{2}\).
We begin with Case (1). In this case, we define \(\mathcal{E}:=\mathcal{E}_{3}=\mathcal{E}_{4}\), which means that \(\mathcal{E}_{1}=\mathcal{E}\cup\{e_{1}\}\) and \(\mathcal{E}_{2}=\mathcal{E}\cup\{e_{2}\}\) for some distinct edges \(e_{1},e_{2}\in E(\Gamma)\). We have that \((\mathcal{E},D_{3})\) and \((\mathcal{E},D_{4})\) must be different (since \(g\) is injective), and hence they are the two pseudo-divisors on \(\Gamma\) of type \((\mathcal{E},D^{\prime})\) to which both \((\mathcal{E}\cup\{e_{1}\},D_{1})\) and \((\mathcal{E}\cup\{e_{2}\},D_{2})\) specialize, described in Remark 2.4. In particular neither \(e_{1}\) nor \(e_{2}\) is a loop, otherwise there would be only one of these specializations.
Let us prove that \(e_{1}\) and \(e_{2}\) are parallel edges. Assume, by contradiction, that there exists a vertex \(v\) incident to \(e_{1}\) and not to \(e_{2}\). Then, by Remark 2.4, it follows that \(D_{3}(v)=D_{2}(v)\) and \(D_{4}(v)=D_{2}(v)\), and also, without loss of generality, that \(D_{3}(v)=D_{1}(v)\) and \(D_{4}(v)=D_{1}(v)+1\), giving rise to a contradiction. This proves that \(e_{1}\) and \(e_{2}\) are incident to the same pair of vertices, meaning that they are parallel.
Denote by \(s,t\) the end-vertices of \(e_{1}\) and \(e_{2}\). Again by Remark 2.4 and up to switching \(D_{3}\) with \(D_{4}\), we have that either \(D_{3}=D_{1}-v_{e_{1}}+s=D_{2}-v_{e_{2}}+s\) or \(D_{3}=D_{1}-v_{e_{1}}+t=D_{2}-v_{e_{2}}+s\). We can rule out the second possibility as follows. If \(D_{3}=D_{1}-v_{e_{1}}+t=D_{2}-v_{e_{2}}+s\), then \(D_{4}=D_{1}-v_{e_{1}}+s=D_{2}-v_{e_{2}}+t\), hence \(D_{3}(t)=D_{1}(t)+1=D_{2}(t)\) and \(D_{4}(t)=D_{1}(t)=D_{2}(t)+1\), which is a contradiction. It follows that \(D_{3}=D_{1}-v_{e_{1}}+s=D_{2}-v_{e_{2}}+s\), giving the poset described in item (1) of the statement with \(D:=D_{1}-v_{e_{1}}=D_{2}-v_{e_{2}}\).
We move to Case (2). In this case, we define \(\mathcal{E}:=\mathcal{E}_{3}\cap\mathcal{E}_{4}\), and hence \(|\mathcal{E}_{3}|=|\mathcal{E}_{4}|=|\mathcal{E}|+1\). This means that \(\mathcal{E}_{3}=\mathcal{E}\cup\{e_{1}\}\), \(\mathcal{E}_{4}=\mathcal{E}\cup\{e_{2}\}\), \(\mathcal{E}_{1}=\mathcal{E}_{2}=\mathcal{E}\cup\{e_{1},e_{2}\}\) for some distinct edges \(e_{1},e_{2}\in E(\Gamma)\).
Let us prove that \(e_{1}\) and \(e_{2}\) are parallel edges. Let \(V_{0}\) be the set of vertices incident to both \(e_{1}\) and \(e_{2}\). Assume, by contradiction, that \(|V_{0}|\leq 1\). Let \(v\) be a vertex not incident to \(e_{2}\). Since \((\mathcal{E}\cup\{e_{1}\},D_{3})\) is an elementary specialization of both \((\mathcal{E}\cup\{e_{1},e_{2}\},D_{1})\) and \((\mathcal{E}\cup\{e_{1},e_{2}\},D_{2})\), by Remark 2.4 we have that \(D_{1}(v)=D_{3}(v)=D_{2}(v)\). We can argue similarly for any vertex not incident to \(e_{1}\). We deduce that \(D_{1}(v)=D_{2}(v)\) for every vertex \(v\not\in V_{0}\). Since \(D_{1}\) and \(D_{2}\) have the same degree and since \(|V_{0}|\leq 1\), we have that \(D_{1}=D_{2}\), which is a contradiction. This proves that \(|V_{0}|=2\), i.e., \(e_{1}\) and \(e_{2}\) are parallel edges.
Let \(s\) and \(t\) be the end-vertices of \(e_{1}\) and \(e_{2}\). Then, for \(i=3,4\), we have four cases
* either \(D_{i}(s)=D_{1}(s)+1=D_{2}(s)+1\),
* or \(D_{i}(s)=D_{1}(s)=D_{2}(s)+1\),
* or \(D_{i}(s)=D_{1}(s)+1=D_{2}(s)\),
* or \(D_{i}(s)=D_{1}(s)=D_{2}(s)\).
With the same argument used above, Cases (i) and (iv) would imply that \(D_{1}=D_{2}\), which is a contradiction. In Case (ii) we have that \(D_{i}=D_{1}-v_{e_{5-i}}+t=D_{2}-v_{e_{5-i}}+s\), which means that \(D_{1}+t=D_{2}+s\). Similarly, in Case (iii) we have that \(D_{1}+s=D_{2}+t\). So the same case must hold for both \(i=3\) and \(i=4\). This means that \(D_{3}(v)=D_{4}(v)\) for every \(v\in V(\Gamma)\), giving the poset described in item (2) of the statement with \(D:=D_{3}-v_{e_{1}}=D_{4}-v_{e_{2}}\).
Finally, we consider Case (3). In this case, we define \(\mathcal{E}:=\mathcal{E}_{3}=\mathcal{E}_{4}\), which means that \(\mathcal{E}_{1}=\mathcal{E}_{2}=\mathcal{E}\cup\{e\}\) for some edge \(e\in E(\Gamma)\). Since \(g\) is injective, we have that \((\mathcal{E},D_{3})\) and \((\mathcal{E},D_{4})\) are different. Hence they are the two pseudo-divisor of type \((\mathcal{E},D^{\prime})\) to which both \((\mathcal{E}\cup\{e\},D_{1})\) and \((\mathcal{E}\cup\{e\},D_{2})\) specialize, described in Remark 2.4. This implies that \(D_{1}=D_{2}\), which is a contradiction with the fact that \(g(\alpha)\neq g(\beta)\).
**Corollary 4.3**.: _Let \(\Gamma\) be a graph and \(v_{0}\) a vertex of \(\Gamma\). Let \(g\colon\mathbf{P}\to\mathbf{QD}_{v_{0}}(\Gamma)\) be an injective morphism of ranked posets. Then there are parallel edges \(e_{1},e_{2}\in E(\Gamma)\) and a divisor \(D\) on \(\Gamma\) such that_
\[g(\mathbf{P})=\{(\{e_{1}\},D+v_{e_{1}}),(\{e_{2}\},D+v_{e_{2}}),(\emptyset,D+s),( \emptyset,D+t)\},\]
_where \(s\) and \(t\) are the end-vertices of \(e_{1}\) and \(e_{2}\)._
Proof.: The rank of the elements \(\alpha\) and \(\beta\) in \(\mathbf{P}\) is \(1\) and \(g\) is a morphism of ranked posets. Then \(g(\alpha)\) and \(g(\beta)\) have rank \(1\) in \(\mathbf{QD}(\Gamma)\), and hence \(g(\mathbf{P})\) is the poset described in item (1) of Proposition 4.2 with \(\mathcal{E}=\emptyset\).
**Definition 4.4**.: Let \(\Gamma\) be a graph and \(v_{0}\) be a vertex of \(\Gamma\). Assume that \(e_{1}\) and \(e_{2}\) are parallel edges of \(\Gamma\) and \(D\) is a divisor in \(\mathbf{QD}_{v_{0}}(\Gamma,\{e_{1},e_{2}\})\). We denote by \(s\) and \(t\) the end-vertices of \(e_{1}\) and \(e_{2}\). We let \(\mathbf{R}_{e_{1},e_{2}}(D)\) be the ranked sub-poset of \(\mathbf{QD}_{v_{0}}(\Gamma)\) given by
\[\mathbf{R}_{e_{1},e_{2}}(D)=\left\{\begin{array}{l}(\{e_{1},e_{2}\},D),\;(\{e_{1}\},D-v_{e_{2}}+s),\;(\{e_{1}\},D-v_{e_{2}}+t),\\ (\{e_{2}\},D-v_{e_{1}}+s),\;(\{e_{2}\},D-v_{e_{1}}+t),\\ (\emptyset,D-v_{e_{1}}-v_{e_{2}}+2s),\;(\emptyset,D-v_{e_{1}}-v_{e_{2}}+s+t),\;(\emptyset,D-v_{e_{1}}-v_{e_{2}}+2t)\end{array}\right\}.\]
The Hasse diagram of \(\mathbf{R}_{e_{1},e_{2}}(D)\) is drawn in Figure 4. In the figure, we only draw the edges \(e_{1}\) and \(e_{2}\), instead of the whole graph \(\Gamma\).
Recall that we write \(\mathbf{R}=\{\alpha_{1},\beta_{1},\beta_{2},\beta_{3},\beta_{4},\gamma_{1},\gamma_{2},\gamma_{3}\}\) (see Definition 4.1). Notice that \(\mathbf{R}\) and \(\mathbf{R}_{e_{1},e_{2}}(D)\) are isomorphic ranked posets.
**Proposition 4.5**.: _Let \(\Gamma\) be a graph and \(v_{0}\) a vertex of \(\Gamma\). Suppose that \(g\colon\mathbf{R}\to\mathbf{QD}_{v_{0}}(\Gamma)\) is an injective morphism of ranked posets. Then \(g(\alpha_{1})=(\{e_{1},e_{2}\},D)\), for some parallel edges \(e_{1},e_{2}\) of \(\Gamma\) and a divisor \(D\in\mathbf{QD}_{v_{0}}(\Gamma,\{e_{1},e_{2}\})\), and \(g(\mathbf{R})=\mathbf{R}_{e_{1},e_{2}}(D)\)._
Proof.: Since \(g\) is a morphism of ranked posets and the rank of \(\alpha_{1}\) is \(2\), we have \(g(\alpha_{1})=(\{e_{1},e_{2}\},D)\) for some edges \(e_{1}\) and \(e_{2}\) of \(\Gamma\) and a divisor \(D\) on \(\mathbf{QD}_{v_{0}}(\Gamma,\{e_{1},e_{2}\})\). Let \(s_{1}\) and \(t_{1}\) (respectively, \(s_{2}\) and \(t_{2}\)) be the (possibly coincident) end-vertices of \(e_{1}\) (respectively, of \(e_{2}\)). By Remark 2.4, there are at most \(4\) pseudo-divisors \((\mathcal{E}^{\prime},D^{\prime})\) of rank \(1\) (i.e., with \(|\mathcal{E}^{\prime}|=1\)) such that \((\mathcal{E}^{\prime},D^{\prime})<(\{e_{1},e_{2}\},D)\): they are the pseudo-divisors of the set
\[\left\{(\{e_{1}\},D-v_{e_{2}}+s_{2}),\;(\{e_{1}\},D-v_{e_{2}}+t_{2}),\;(\{e_{2 }\},D-v_{e_{1}}+s_{1}),\;(\{e_{2}\},D-v_{e_{1}}+t_{1})\right\}. \tag{8}\]
Since \(g\) is an injective morphism of ranked posets, the set in Equation (8) is equal to \(\{g(\beta_{1}),g(\beta_{2}),g(\beta_{3}),g(\beta_{4})\}\). In particular neither \(e_{1}\) nor \(e_{2}\) is a loop, i.e., \(s_{1}\neq t_{1}\) and \(s_{2}\neq t_{2}\).
The induced subposets \(\{\beta_{1},\beta_{2},\gamma_{1},\gamma_{2}\}\) and \(\{\beta_{3},\beta_{4},\gamma_{2},\gamma_{3}\}\) of \(\mathbf{R}\) are isomorphic to the poset \(\mathbf{P}\). By Corollary 4.3, we see that \(e_{1}\) and \(e_{2}\) are parallel edges of \(\Gamma\) and, without loss of generality, we have that \(s:=s_{1}=s_{2}\), \(t:=t_{1}=t_{2}\), and
\[g(\beta_{1})=(\{e_{1}\},D-v_{e_{2}}+s),\ g(\beta_{2})=(\{e_{2}\},D -v_{e_{1}}+s),\] \[g(\beta_{3})=(\{e_{1}\},D-v_{e_{2}}+t),\ g(\beta_{4})=(\{e_{2}\}, D-v_{e_{1}}+t).\]
Figure 4. The Hasse diagram of the poset \(\mathbf{R}_{e_{1},e_{2}}(D)\)
Finally, there are exactly 3 pseudo-divisors of rank 0, i.e., of type \((\emptyset,D^{\prime\prime})\), that are smaller than at least one pseudo-divisor in the set \(\mathcal{U}=\{g(\beta_{1}),g(\beta_{2}),g(\beta_{3}),g(\beta_{4})\}\). By Remark 2.4, they are
\[(\emptyset,D^{\prime\prime}_{1}):= (\emptyset,D-v_{e_{1}}-v_{e_{2}}+2s),\] \[(\emptyset,D^{\prime\prime}_{2}):= (\emptyset,D-v_{e_{1}}-v_{e_{2}}+s+t),\] \[(\emptyset,D^{\prime\prime}_{3}):= (\emptyset,D-v_{e_{1}}-v_{e_{2}}+2t).\]
The first and the third are smaller than exactly two of the pseudo-divisors in the set \(\mathcal{U}\), while the second is smaller than every pseudo-divisor in \(\mathcal{U}\). Thus we have \(g(\gamma_{1})=(\emptyset,D^{\prime\prime}_{1})\), \(g(\gamma_{2})=(\emptyset,D^{\prime\prime}_{2})\), and \(g(\gamma_{3})=(\emptyset,D^{\prime\prime}_{3})\). This finishes the proof.
**Remark 4.6**.: In the proofs of Propositions 4.2 and 4.5, we never used that the divisors were quasistable. So these results remain true if we change the target of the map \(g\) with the poset of all pseudo-divisors on \(\Gamma\).
**Lemma 4.7**.: _Let \(\Gamma\) be a graph and \((\mathcal{E},D)\) a pseudo-divisor on \(\Gamma\). Assume that \(e\in\mathcal{E}\) is a non-loop edge of \(\Gamma\). Let \(s\) and \(t\) be the end-vertices of \(e\). If \(e_{0}\in\mathcal{E}\) is an edge of \(\Gamma\) such that there exists a pseudo-divisor \((\mathcal{E}\setminus\{e,e_{0}\},D^{\prime})\) on \(\Gamma\) smaller than both \((\mathcal{E}\setminus\{e\},D-v_{e}+s)\) and \((\mathcal{E}\setminus\{e\},D-v_{e}+t)\), then \(e\) and \(e_{0}\) are parallel edges of \(\Gamma\)._
Proof.: Set \(D_{1}:=D-v_{e}+s\) and \(D_{2}:=D-v_{e}+t\) and recall Remark 2.4. For \(v\in V(\Gamma)\), we have
\[D_{1}(v)=\begin{cases}D(v)&\text{ if }v\neq s,\\ D(v)+1&\text{ if }v=s,\end{cases} D_{2}(v)=\begin{cases}D(v)&\text{ if }v\neq t,\\ D(v)+1&\text{ if }v=t.\end{cases}\]
Let \(s_{0}\) and \(t_{0}\) be the end-vertices of \(e_{0}\). We can assume without loss of generality that
\[D^{\prime}(v)=\begin{cases}D_{1}(v)&\text{ if }v\neq s_{0},\\ D_{1}(v)+1&\text{ if }v=s_{0}.\end{cases}\]
Assume by contradiction that
\[D^{\prime}(v)=\begin{cases}D_{2}(v)&\text{ if }v\neq s_{0},\\ D_{2}(v)+1&\text{ if }v=s_{0}.\end{cases}\]
Hence we would have that \(D_{1}=D_{2}\), a contradiction. Then we have that \(s_{0}\neq t_{0}\) and
\[D^{\prime}(v)=\begin{cases}D_{2}(v)&\text{ if }v\neq t_{0},\\ D_{2}(v)+1&\text{ if }v=t_{0}.\end{cases}\]
If \(t_{0}\not\in\{s,t\}\), then \(D^{\prime}(t_{0})=D_{2}(t_{0})+1=D(t_{0})+1\) and \(D^{\prime}(t_{0})=D_{1}(t_{0})=D(t_{0})\) which is a contradiction. So we have that \(t_{0}\in\{s,t\}\) and, analogously, we have that \(s_{0}\in\{s,t\}\). This proves that \(\{s_{0},t_{0}\}=\{s,t\}\) and hence the edges \(e,e_{0}\) are parallel edges of \(\Gamma\).
## 5. Torelli theorem for graphs
In this section we will prove the following Torelli theorem for graphs:
**Theorem 5.1**.: _Let \(\Gamma\) and \(\Gamma^{\prime}\) be graphs. The posets \(\mathbf{QD}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})\) are isomorphic if and only if there is a bijection between the biconnected components of \(\Gamma/\operatorname{Br}(\Gamma)\) and \(\Gamma^{\prime}/\operatorname{Br}(\Gamma^{\prime})\) such that the corresponding components are isomorphic as pure graphs._
As a particular case of Theorem 5.1, we get the following corollary.
**Corollary 5.2**.: _Let \(\Gamma\) and \(\Gamma^{\prime}\) be biconnected pure graphs. The posets \(\mathbf{QD}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})\) are isomorphic if and only if \(\Gamma\) and \(\Gamma^{\prime}\) are isomorphic._
We start by reducing to the case of pure graphs.
**Proposition 5.3**.: _Let \(\Gamma\) be a graph. Let \(\Gamma_{0}\) be the pure graph with underlying graph equal to \(\Gamma\). Then \(\mathbf{QD}(\Gamma)\) is naturally isomorphic to \(\mathbf{QD}(\Gamma_{0})\)._
Proof.: It is enough to notice that a pseudo-divisor \((\mathcal{E},D)\) on \(\Gamma_{0}\) is \(v_{0}\)-quasistable for some \(v_{0}\in V(\Gamma)\), if and only if \((\mathcal{E},D+\sum_{v\in V(\Gamma)}w_{\Gamma}(v)v)\) is a \(v_{0}\)-quasistable pseudo-divisor on \(\Gamma\).
By Proposition 5.3, we see that it is enough to prove Theorem 5.1 for pure graphs. For the rest of this section, we will only consider pure graphs, and we use the word _graph_ for _pure graphs_.
**Definition 5.4**.: A _special pair_ of a graph \(\Gamma\) is a set \(\{e_{1},e_{2}\}\) of edges of \(\Gamma\) such that
1. the edges \(e_{1},e_{2}\) are distinct parallel edges of \(\Gamma\);
2. there are no parallel edges to \(e_{1}\) and \(e_{2}\) in \(E(\Gamma)\setminus\{e_{1},e_{2}\}\);
3. the graph \(\Gamma\) remains connected after the removal of \(e_{1}\) and \(e_{2}\).
Condition (3) implies that a special pair of \(\Gamma\) is contained in \(\mathrm{ND}(\Gamma)\) (recall Equation (2)). From now on, we will fix:
1. two graphs \(\Gamma\) and \(\Gamma^{\prime}\).
2. an isomorphism of posets \(f\colon\mathbf{QD}(\Gamma)\to\mathbf{QD}(\Gamma^{\prime})\) with inverse \(f^{-1}\colon\mathbf{QD}(\Gamma^{\prime})\to\mathbf{QD}(\Gamma)\).
3. identifications \(\mathbf{QD}(\Gamma)=\mathbf{QD}_{v_{0}}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})=\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime})\), for some \(v_{0}\in V(\Gamma)\) and \(v^{\prime}_{0}\in V(\Gamma^{\prime})\).
Let \(e_{1},e_{2}\) be parallel edges of the graph \(\Gamma\). Let \(s\) and \(t\) be the end-vertices of \(e_{1}\) and \(e_{2}\). Assume that there is a divisor \(D\in\mathbf{QD}_{v_{0}}(\Gamma,\{e_{1},e_{2}\})\) (the existence of one such \(D\) is equivalent to the fact that \(\Gamma\) remains connected after the removal of \(e_{1}\) and \(e_{2}\)). By Proposition 4.5, there are parallel edges \(e^{\prime}_{1},e^{\prime}_{2}\) of the graph \(\Gamma^{\prime}\) and a divisor \(D^{\prime}\in\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime},\{e^{\prime}_{1},e^{ \prime}_{2}\})\) such that \(f(\mathbf{R}_{e_{1},e_{2}}(D))=\mathbf{R}_{e^{\prime}_{1},e^{\prime}_{2}}(D^{ \prime})\). Let \(s^{\prime},t^{\prime}\) be the end-vertices of \(e^{\prime}_{1}\) and \(e^{\prime}_{2}\).
**Lemma 5.5**.: _Keep the above notations. Assume that_
\[f(\{e_{1}\},D-v_{e_{2}}+s)=(\{e^{\prime}_{1}\},D^{\prime}-v_{e^{\prime}_{2}}+s ^{\prime}),\]
\[f(\{e_{1}\},D-v_{e_{2}}+t)=(\{e^{\prime}_{2}\},D^{\prime}-v_{e^{\prime}_{1}}+t^{\prime}).\]
_Then \(\{e_{1},e_{2}\}\) and \(\{e^{\prime}_{1},e^{\prime}_{2}\}\) are special pairs of \(\Gamma\) and \(\Gamma^{\prime}\), respectively._
Proof.: Let \(e\neq e_{1}\) be an edge of \(\Gamma\) parallel to \(e_{1}\) and \(e_{2}\). Using that \(D\in\mathbf{QD}_{v_{0}}(\Gamma,\{e_{1},e_{2}\})\), it is easy to see that \(D-v_{e_{2}}+v_{e}\) is in \(\mathbf{QD}_{v_{0}}(\Gamma,\{e,e_{1}\})\). We also have
\[(\{e,e_{1}\},D-v_{e_{2}}+v_{e})\geq(\{e_{1}\},D-v_{e_{2}}+s),\] \[(\{e,e_{1}\},D-v_{e_{2}}+v_{e})\geq(\{e_{1}\},D-v_{e_{2}}+t).\]
That means that the number of quasistable pseudo-divisors \((\mathcal{E},\widetilde{D})\) on \(\Gamma\) with \(|\mathcal{E}|=2\) such that
\[(\mathcal{E},\widetilde{D})\geq(\{e_{1}\},D-v_{e_{2}}+s)\ \ \ \text{and}\ \ \ ( \mathcal{E},\widetilde{D})\geq(\{e_{1}\},D-v_{e_{2}}+t)\]
is at least the number of edges parallel to \(e_{1}\) and different from \(e_{1}\).
On the other hand, let us see that the number of pseudo-divisors \((\mathcal{E}^{\prime},\widetilde{D}^{\prime})\) on \(\Gamma^{\prime}\) with \(|\mathcal{E}^{\prime}|=2\) such that
\[(\mathcal{E}^{\prime},\widetilde{D}^{\prime})\geq(\{e^{\prime}_{1}\},D^{\prime}-v_{e^{\prime}_{2}}+s^{\prime})\ \ \text{and}\ \ (\mathcal{E}^{\prime},\widetilde{D}^{\prime})\geq(\{e^{\prime}_{2}\},D^{\prime}-v_{e^{\prime}_{1}}+t^{\prime})\]
is exactly one. Indeed, we have that \((\{e^{\prime}_{1},e^{\prime}_{2}\},D^{\prime})\) satisfies this condition. Moreover, any other such pseudo-divisor must have \(\mathcal{E}^{\prime}=\{e^{\prime}_{1},e^{\prime}_{2}\}\).
Assume that we have another such pseudo-divisor \((\{e^{\prime}_{1},e^{\prime}_{2}\},\widetilde{D}^{\prime})\). The poset
\[\{(\{e^{\prime}_{1},e^{\prime}_{2}\},\widetilde{D}^{\prime}),\ (\{e^{\prime}_{1},e^{ \prime}_{2}\},D^{\prime}),(\{e^{\prime}_{1}\},D^{\prime}-v_{e^{\prime}_{2}}+s^{ \prime}),\ (\{e^{\prime}_{2}\},D^{\prime}-v_{e^{\prime}_{1}}+t^{\prime})\}\]
is a copy of the poset \(\mathbf{P}\) inside \(\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime})\). Proposition 4.2 characterizes such copies of \(\mathbf{P}\) inside \(\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime})\), and it is clear that we are in the case of Item (2). Hence \(D^{\prime}-v_{e^{\prime}_{2}}+s^{\prime}-v_{e^{\prime}_{1}}=D^{\prime}-v_{e^{ \prime}_{1}}+t^{\prime}-v_{e^{\prime}_{2}}\), which is a contradiction since \(s^{\prime}\neq t^{\prime}\).
Since \(f\) is an isomorphism of posets, we deduce that there exists exactly one edge parallel to \(e_{1}\), which must be \(e_{2}\). Since \((\{e_{1},e_{2}\},D)\) is a quasistable pseudo-divisor, by Remark 3.2, we have that the removal of \(e_{1},e_{2}\) does not disconnect the graph, hence \(\{e_{1},e_{2}\}\) is a special pair of \(\Gamma\). Arguing similarly for \(f^{-1}\), we have that \(\{e_{1}^{\prime},e_{2}^{\prime}\}\) is a special pair of \(\Gamma^{\prime}\).
Recall the functions \(\epsilon_{\Gamma}\) and \(\delta_{\Gamma}\) defined in Equation (3).
**Proposition 5.6**.: _Let \(e\) be an edge of \(\Gamma\). Assume that there are divisors \(D_{1}\) and \(D_{2}\) in \(\mathbf{QD}_{v_{0}}(\Gamma,\{e\})\). Set \(\{e_{1}^{\prime}\}:=\epsilon_{\Gamma^{\prime}}(f(\{e\},D_{1}))\) and \(\{e_{2}^{\prime}\}:=\epsilon_{\Gamma^{\prime}}(f(\{e\},D_{2}))\). Then one of the two conditions holds._
1. _The edges_ \(e_{1}^{\prime}\) _and_ \(e_{2}^{\prime}\) _of_ \(\Gamma^{\prime}\) _are equal._
2. _The edge_ \(e\) _belongs to a special pair_ \(\{e,e_{0}\}\) _of_ \(\Gamma\) _and_ \(\{e_{1}^{\prime},e_{2}^{\prime}\}\) _is a special pair of_ \(\Gamma^{\prime}\)_._
Proof.: The result is clear if \(D_{1}=D_{2}\), so we can assume that \(D_{1}\neq D_{2}\). By Proposition 3.3, it is sufficient to prove the result when there exists a pseudo-divisor \((\{e,e_{0}\},D)\) that specializes to both \((\{e\},D_{1})\) and \((\{e\},D_{2})\). By Remark 2.4, the edge \(e\) is not a loop and, denoting by \(s\) and \(t\) the end-vertices of \(e\), we have \(D_{1}=D-v_{e}+s\) and \(D_{2}=D-v_{e}+t\). Set \(D_{i}^{\prime}:=\delta_{\Gamma^{\prime}}(f(\{e\},D_{i}))\) for \(i=1,2\).
Assume that \(e_{1}^{\prime}\neq e_{2}^{\prime}\). For \(i=1,2\), we have that
\[f(\{e,e_{0}\},D)\geq f(\{e\},D_{i})=(\{e_{i}^{\prime}\},D_{i}^{\prime}),\]
which implies that \(f(\{e,e_{0}\},D)=(\{e_{1}^{\prime},e_{2}^{\prime}\},D^{\prime})\), for some divisor \(D^{\prime}\in\mathbf{QD}_{v_{0}^{\prime}}(\Gamma^{\prime},\{e_{1}^{\prime},e_ {2}^{\prime}\})\).
Again by Remark 2.4, there is an end-vertex \(s_{i}^{\prime}\) of \(e_{i}^{\prime}\) for \(i=1,2\), such that \(D_{1}^{\prime}=D^{\prime}-v_{e_{2}^{\prime}}+s_{2}^{\prime}\) and \(D_{2}^{\prime}=D^{\prime}-v_{e_{1}^{\prime}}+s_{1}^{\prime}\). If we set \(\widetilde{D}^{\prime}:=D^{\prime}-v_{e_{1}^{\prime}}-v_{e_{2}^{\prime}}+s_{1 }^{\prime}+s_{2}^{\prime}\), then \((\emptyset,\widetilde{D}^{\prime})\leq(\{e_{i}^{\prime}\},D_{i}^{\prime})\) for \(i=1,2\). Set \((\emptyset,\widetilde{D}):=f^{-1}(\emptyset,\widetilde{D}^{\prime})\). Therefore, for \(i=1,2\), we have
\[(\emptyset,\widetilde{D})\leq(\{e\},D_{i})\leq(\{e,e_{0}\},D).\]
We see that \((\{e,e_{0}\},D)\) satisfies the hypotheses of Lemma 4.7, hence the edges \(e\) and \(e_{0}\) of \(\Gamma\) are parallel.
Now consider the image of the poset \(\mathbf{R}_{e,e_{0}}(D)\subset\mathbf{QD}_{v_{0}}(\Gamma)\) via the isomorphism \(f\). By Proposition 4.5, we have \(f(\mathbf{R}_{e,e_{0}}(D))=\mathbf{R}_{e_{1}^{\prime},e_{2}^{\prime}}(D^{\prime})\) and the edges \(e_{1}^{\prime},e_{2}^{\prime}\) are parallel, with
\[f(\{e\},D-v_{e_{0}}+s)=(\{e_{1}^{\prime}\},D^{\prime}-v_{e_{2}^{\prime}}+s_{2 }^{\prime})\]
\[f(\{e\},D-v_{e_{0}}+t)=(\{e_{2}^{\prime}\},D^{\prime}-v_{e_{1}^{\prime}}+s_{1 }^{\prime}).\]
By contradiction, assume that \(s^{\prime}:=s_{1}^{\prime}=s_{2}^{\prime}\), and let \(t^{\prime}\) be the other end-vertex of \(e_{1}^{\prime}\) and \(e_{2}^{\prime}\). Then the two pseudo-divisors \((\emptyset,D^{\prime}-v_{e_{1}^{\prime}}-v_{e_{2}^{\prime}}+2s^{\prime})\) and \((\emptyset,D^{\prime}-v_{e_{1}^{\prime}}-v_{e_{2}^{\prime}}+s^{\prime}+t^{\prime})\) of \(\mathbf{R}_{e_{1}^{\prime},e_{2}^{\prime}}(D^{\prime})\) would be smaller than both \((\{e_{1}^{\prime}\},D_{1}^{\prime})\) and \((\{e_{2}^{\prime}\},D_{2}^{\prime})\). On the other hand, there is only one element of \(\mathbf{R}_{e,e_{0}}(D)\) smaller than \((\{e\},D_{1})\) and \((\{e\},D_{2})\), which is \((\emptyset,D-v_{e}-v_{e_{0}}+s+t)\). We have a contradiction, which proves that \(s_{1}^{\prime}\neq s_{2}^{\prime}\). So the end-vertices of \(e_{1}^{\prime}\) and \(e_{2}^{\prime}\) are \(s^{\prime}:=s_{2}^{\prime}\) and \(t^{\prime}:=s_{1}^{\prime}\). We see that the hypotheses of Lemma 5.5 are satisfied, hence the pairs \(\{e,e_{0}\}\) and \(\{e_{1}^{\prime},e_{2}^{\prime}\}\) are special pairs of \(\Gamma\) and \(\Gamma^{\prime}\), respectively, and we are done.
**Corollary 5.7**.: _Let \(e\) be an edge of \(\Gamma\). The following conditions hold._
1. _If_ \(e\) _does not belong to a special pair of_ \(\Gamma\)_, then_ \(\epsilon_{\Gamma^{\prime}}(f(\{e\},D))\) _is independent of the choice of the divisor_ \(D\in\mathbf{QD}_{v_{0}}(\Gamma,\{e\})\)_._
2. _If_ \(e\) _belongs to a special pair_ \(\{e,e_{0}\}\) _of_ \(\Gamma\)_, then_ \(\epsilon_{\Gamma^{\prime}}(f(\{e,e_{0}\},D))\) _is independent of the choice of the divisor_ \(D\in\mathbf{QD}_{v_{0}}(\Gamma,\{e,e_{0}\})\)_._
Proof.: The result readily follows from Proposition 5.6.
**Definition 5.8**.: Let \(\Gamma\) be a graph. We say that two subsets \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) of \(E(\Gamma)\) are _equivalent_ if there are special pairs \(\{e_{1,1},e_{1,2}\},\ldots,\{e_{k,1},e_{k,2}\}\) of \(\Gamma\) such that \(\{e_{1,i},\ldots,e_{k,i}\}\subset\mathcal{E}_{i}\) for \(i=1,2\) and
\[\mathcal{E}_{1}\setminus\{e_{1,1},\ldots,e_{k,1}\}=\mathcal{E}_{2}\setminus\{e _{1,2},\ldots,e_{k,2}\}.\]
We say that two pseudo-divisors \((\mathcal{E}_{1},D_{1})\) and \((\mathcal{E}_{2},D_{2})\) of \(\Gamma\) are _equivalent_, and we write \((\mathcal{E}_{1},D_{1})\sim(\mathcal{E}_{2},D_{2})\), if the following conditions hold
1. we have \(D_{1}(v)=D_{2}(v)\) for every \(v\in V(\Gamma)\).
2. the subsets \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) of \(E(\Gamma)\) are equivalent.
**Remark 5.9**.: Given a pseudo-divisor \((\mathcal{E}_{1},D_{1})\) of a graph \(\Gamma\) and a subset \(\mathcal{E}_{2}\subset E(\Gamma)\) such that \(\mathcal{E}_{2}\) is equivalent to \(\mathcal{E}_{1}\), then there is a unique divisor \(D_{2}\) on \(\Gamma^{\mathcal{E}_{2}}\) such that \((\mathcal{E}_{2},D_{2})\) is equivalent to \((\mathcal{E}_{1},D_{1})\) (the divisor \(D_{2}\) is defined as \(D_{2}(v):=D_{1}(v)\) for every \(v\in V(\Gamma)\) and \(D_{2}(v_{e})=1\) for every \(e\in\mathcal{E}_{2}\)). In particular, if \((\mathcal{E}_{1},D_{1})\sim(\mathcal{E}_{2},D_{2})\) and \(\mathcal{E}_{1}=\mathcal{E}_{2}\), then \((\mathcal{E}_{1},D_{1})=(\mathcal{E}_{2},D_{2})\).
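For instance, in the graph \(\Delta\) with the special pair \(\{a_{1},a_{2}\}\) considered after Definition 5.4, the subsets \(\{a_{1}\}\) and \(\{a_{2}\}\) of \(E(\Delta)\) are equivalent (take \(k=1\) in Definition 5.8), so two pseudo-divisors \((\{a_{1}\},D_{1})\) and \((\{a_{2}\},D_{2})\) on \(\Delta\) are equivalent precisely when \(D_{1}(v)=D_{2}(v)\) for every \(v\in V(\Delta)\); by Remark 5.9, such a \(D_{2}\) is uniquely determined by \(D_{1}\). On the other hand, the subsets \(\{a_{3}\}\) and \(\{a_{4}\}\) are not equivalent, since \(a_{3}\) and \(a_{4}\) are not parallel and hence do not form a special pair.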
**Remark 5.10**.: Let \((\mathcal{E}_{1},D_{1})\geq(\widetilde{\mathcal{E}}_{1},\widetilde{D}_{1})\) be a specialization in \(\mathbf{QD}_{v_{0}}(\Gamma)\). If \((\mathcal{E}_{2},D_{2})\) and \((\widetilde{\mathcal{E}}_{2},\widetilde{D}_{2})\) are two pseudo-divisors on \(\Gamma\) such that \(\widetilde{\mathcal{E}}_{2}\subset\mathcal{E}_{2}\), with \((\mathcal{E}_{2},D_{2})\sim(\mathcal{E}_{1},D_{1})\) and \((\widetilde{\mathcal{E}}_{2},\widetilde{D}_{2})\sim(\widetilde{\mathcal{E}}_{ 1},\widetilde{D}_{1})\), then \((\mathcal{E}_{2},D_{2})\geq(\widetilde{\mathcal{E}}_{2},\widetilde{D}_{2})\).
Recall the definition of the set \(\mathrm{ND}(\Gamma)\) in Equation (2).
**Proposition 5.11**.: _The isomorphisms \(f\) and \(f^{-1}\) take equivalent pseudo-divisors to equivalent pseudo-divisors. Moreover, \(f\) induces a weakly cyclic equivalence \(f_{E}\colon\mathrm{ND}(\Gamma)\to\mathrm{ND}(\Gamma^{\prime})\) such that for every pseudo-divisor \((\mathcal{E},D)\in\mathbf{QD}_{v_{0}}(\Gamma)\) there exists a unique divisor \(D^{\prime}\in\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime},f_{E}(\mathcal{E}))\) for which \(f(\mathcal{E},D)\sim(f_{E}(\mathcal{E}),D^{\prime})\)._
Proof.: By Corollary 5.7, we can define a bijection \(f_{E}\colon\mathrm{ND}(\Gamma)\to\mathrm{ND}(\Gamma^{\prime})\) as follows.
1. For each edge \(e\in\mathrm{ND}(\Gamma)\) that does not belong to a special pair of \(\Gamma\), we set \(f_{E}(e)\) to be the unique edge of \(\Gamma^{\prime}\) satisfying \(\{f_{E}(e)\}=\epsilon_{\Gamma^{\prime}}(f(\{e\},D))\) for every divisor \(D\in\mathbf{QD}_{v_{0}}(\Gamma,\{e\})\). (Notice that \(\mathbf{QD}_{v_{0}}(\Gamma,\{e\})\) is not empty, since \(e\) is not a bridge of \(\Gamma\)).
2. For each special pair \(\{e_{1},e_{2}\}\) of \(\Gamma\), we let \(f_{E}(e_{1}),f_{E}(e_{2})\) to be the edges of \(\Gamma^{\prime}\) such that \[\{f_{E}(e_{1}),f_{E}(e_{2})\}=\epsilon_{\Gamma^{\prime}}(f(\{e_{1},e_{2}\},D)),\] for every divisor \(D\in\mathbf{QD}_{v_{0}}(\Gamma,\{e_{1},e_{2}\})\). (Here there is a choice to be made: a different choice would switch the values \(f_{E}(e_{1})\) and \(f_{E}(e_{2})\).) Notice that \(\{f_{E}(e_{1}),f_{E}(e_{2})\}\) is a special pair and hence it is contained in \(\mathrm{ND}(\Gamma)\).
Let us prove that for every \((\mathcal{E},D)\in\mathbf{QD}_{v_{0}}(\Gamma)\), there is a unique divisor \(D^{\prime}\in\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime},f_{E}(\mathcal{E}))\) such that \(f(\mathcal{E},D)\sim(f_{E}(\mathcal{E}),D^{\prime})\). By Remark 5.9, it suffices to prove that \(\epsilon_{\Gamma^{\prime}}(f(\mathcal{E},D))\) and \(f_{E}(\mathcal{E})\) are equivalent subsets of \(E(\Gamma^{\prime})\). Set \(\mathcal{E}^{\prime}:=\epsilon_{\Gamma^{\prime}}(f(\mathcal{E},D))\). For each subset \(\mathcal{E}_{0}\subset\mathcal{E}\), there exists a divisor \(D_{0}\in\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E}_{0})\) such that \((\mathcal{E}_{0},D_{0})\leq(\mathcal{E},D)\) (see Remark 2.4). Moreover, we have \(f(\mathcal{E}_{0},D_{0})\leq f(\mathcal{E},D)\), and hence \(\epsilon_{\Gamma^{\prime}}(f(\mathcal{E}_{0},D_{0}))\subset\epsilon_{\Gamma^{ \prime}}(f(\mathcal{E},D))=\mathcal{E}^{\prime}\). Thus the following conditions hold
1. if an edge \(e\in\mathcal{E}\) does not belong to a special pair, then \(f_{E}(e)\in\mathcal{E}^{\prime}\).
2. if \(\{e_{1},e_{2}\}\subset\mathcal{E}\) is a special pair, then \(f_{E}(\{e_{1},e_{2}\})\subset\mathcal{E}^{\prime}\).
3. if an edge \(e_{1}\in\mathcal{E}\) belongs to a special pair \(\{e_{1},e_{2}\}\) with \(e_{2}\) not in \(\mathcal{E}\), then either \(f_{E}(e_{1})\in\mathcal{E}^{\prime}\) or \(f_{E}(e_{2})\in\mathcal{E}^{\prime}\), but \(\{f_{E}(e_{1}),f_{E}(e_{2})\}\not\subset\mathcal{E}^{\prime}\). Moreover, \(\{f_{E}(e_{1}),f_{E}(e_{2})\}\) is a special pair of \(\Gamma^{\prime}\).
This concludes the proof that \(\epsilon_{\Gamma^{\prime}}(f(\mathcal{E},D))\) and \(f_{E}(\mathcal{E})\) are equivalent.
Next, we prove that \(f_{E}\) is a weakly cyclic equivalence. By Remark 2.3 it is enough to prove that \(f_{E}\) and \(f_{E}^{-1}\) take maximally nondisconnecting subsets to maximally nondisconnecting subsets. Let \(\mathcal{E}\subset E(\Gamma)\) be a maximally nondisconnecting subset. Then there exists exactly one divisor \(D\in\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E})\) (see Remark 3.2). We also have that \((\mathcal{E},D)\) is maximal in \(\mathbf{QD}_{v_{0}}(\Gamma)\). We set \((\mathcal{E}^{\prime},D^{\prime}):=f(\mathcal{E},D)\). Then \((\mathcal{E}^{\prime},D^{\prime})\) is maximal in \(\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime})\), which implies that \(\mathcal{E}^{\prime}\) is a maximally nondisconnecting subset (see Remark 3.2). Since \(f_{E}(\mathcal{E})\) and \(\mathcal{E}^{\prime}\) are equivalent, we have that \(f_{E}(\mathcal{E})\) is also a maximally nondisconnecting subset. The number of spanning trees of \(\Gamma\) (respectively, of \(\Gamma^{\prime}\)) is equal to the number of maximal elements of \(\mathbf{QD}_{v_{0}}(\Gamma)\) (respectively, of \(\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime})\)), and these numbers agree because \(f\) is an isomorphism. Since the numbers of spanning trees of \(\Gamma\) and \(\Gamma^{\prime}\) are the same, \(f_{E}\) restricts to a bijection between the maximally nondisconnecting subsets of \(\Gamma\) and those of \(\Gamma^{\prime}\), and hence \(f_{E}^{-1}\) also takes maximally nondisconnecting subsets to maximally nondisconnecting subsets. This concludes the proof that \(f_{E}\) is a weakly cyclic equivalence.
Now we will prove that \(f\) and \(f^{-1}\) take equivalent pseudo-divisors to equivalent pseudo-divisors. We proceed by induction on the rank of a pseudo-divisor. Let \((\mathcal{E}_{1},D_{1})\) and \((\mathcal{E}_{2},D_{2})\) be two equivalent pseudo-divisors of rank \(k\) on \(\Gamma\). If \(k=0\), that is, if \(|\mathcal{E}_{1}|=|\mathcal{E}_{2}|=0\), then \(D_{1}=D_{2}\) and hence \(f(\mathcal{E}_{1},D_{1})=f(\mathcal{E}_{2},D_{2})\) and we are done. The same reasoning holds for \(f^{-1}\).
By the induction hypothesis, \(f\) and \(f^{-1}\) send equivalent pseudo-divisors of rank strictly less than \(k\) to equivalent pseudo-divisors. We will prove the induction step only for \(f\). The reasoning for \(f^{-1}\) is similar. It is enough to prove the result for \(\mathcal{E}_{1}=\mathcal{E}\cup\{e_{1}\}\) and \(\mathcal{E}_{2}=\mathcal{E}\cup\{e_{2}\}\), for some \(\mathcal{E}\subset E(\Gamma)\) and for some special pair \(\{e_{1},e_{2}\}\) of \(\Gamma\) such that \(\mathcal{E}\cap\{e_{1},e_{2}\}=\emptyset\). Let \(s\) and \(t\) be the end-vertices of \(e_{1}\) and \(e_{2}\). Define
\[D_{s}:= D_{1}-v_{e_{1}}+s=D_{2}-v_{e_{2}}+s\] \[D_{t}:= D_{1}-v_{e_{1}}+t=D_{2}-v_{e_{2}}+t.\]
Notice that we have
\[D_{s}(s)=D_{t}(s)+1\quad\text{ and }\quad D_{s}(t)=D_{t}(t)-1. \tag{9}\]
By Remark 2.4, we have that
\[(\mathcal{E},D_{s}) \leq(\mathcal{E}_{1},D_{1}),\ (\mathcal{E},D_{s})\leq(\mathcal{E}_{2},D_ {2}),\] \[(\mathcal{E},D_{t}) \leq(\mathcal{E}_{1},D_{1}),\ (\mathcal{E},D_{t})\leq(\mathcal{E}_{2},D_ {2}).\]
In particular, the set \(\{(\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}),(\mathcal{E},D_{s}),( \mathcal{E},D_{t})\}\) is a poset isomorphic to the poset \(\mathbf{P}\) in Definition 4.1. Therefore, the image of this set via \(f\) must be one of the images described in Proposition 4.2. Set \((\mathcal{E}^{\prime}_{1},D^{\prime}_{1})=f(\mathcal{E}_{1},D_{1})\) and \((\mathcal{E}^{\prime}_{2},D^{\prime}_{2})=f(\mathcal{E}_{2},D_{2})\).
By contradiction, assume that we are in the situation described in item (2) of Proposition 4.2. This implies that there exist parallel edges \(\{e^{\prime}_{1},e^{\prime}_{2}\}\), a subset \(\mathcal{E}^{\prime}\subset E(\Gamma^{\prime})\setminus\{e^{\prime}_{1},e^{ \prime}_{2}\}\) and a divisor \(D^{\prime}\) on \(\Gamma^{\prime\mathcal{E}^{\prime}}\) such that \(f(\mathcal{E},D_{s})=(\mathcal{E}^{\prime}\cup\{e^{\prime}_{1}\},D^{\prime}+v _{e^{\prime}_{1}})\) and \(f(\mathcal{E},D_{t})=(\mathcal{E}^{\prime}\cup\{e^{\prime}_{2}\},D^{\prime}+v _{e^{\prime}_{2}})\). By induction hypothesis (recall that \(|\mathcal{E}|=k-1\)), we have that \(\mathcal{E}^{\prime}\cup\{e^{\prime}_{1}\}\) and \(\mathcal{E}^{\prime}\cup\{e^{\prime}_{2}\}\) are both equivalent to \(f_{E}(\mathcal{E})\), hence \(\mathcal{E}^{\prime}\cup\{e^{\prime}_{1}\}\) and \(\mathcal{E}^{\prime}\cup\{e^{\prime}_{2}\}\) are equivalent subsets of \(E(\Gamma^{\prime})\). We deduce that \(\{e^{\prime}_{1},e^{\prime}_{2}\}\) is a special pair and hence \((\mathcal{E}^{\prime}\cup\{e^{\prime}_{1}\},D^{\prime}+v_{e^{\prime}_{1}})\) and \((\mathcal{E}^{\prime}\cup\{e^{\prime}_{2}\},D^{\prime}+v_{e^{\prime}_{2}})\) are equivalent pseudo-divisors of \(\Gamma^{\prime}\). However, the ranks of \((\mathcal{E}^{\prime}\cup\{e^{\prime}_{1}\},D^{\prime}+v_{e^{\prime}_{1}})\) and \((\mathcal{E}^{\prime}\cup\{e^{\prime}_{2}\},D^{\prime}+v_{e^{\prime}_{2}})\) are equal to \(k-1\), so by the induction hypothesis we get that \(f^{-1}(\mathcal{E}^{\prime}\cup\{e^{\prime}_{1}\},D^{\prime}+v_{e^{\prime}_{1}})= (\mathcal{E},D_{s})\) and \(f^{-1}(\mathcal{E}^{\prime}\cup\{e^{\prime}_{2}\},D^{\prime}+v_{e^{\prime}_{2}} )=(\mathcal{E},D_{t})\) are equivalent pseudo-divisors of \(\Gamma\). This implies that \(D_{s}(v)=D_{t}(v)\) for every \(v\in V(\Gamma)\), which contradicts Equation (9).
We deduce that we are in the situation described in item (1) of Proposition 4.2. Then there exist parallel edges \(e^{\prime}_{1},e^{\prime}_{2}\) of \(\Gamma^{\prime}\), a subset \(\mathcal{E}^{\prime}\subset E(\Gamma^{\prime})\setminus\{e^{\prime}_{1},e^{\prime}_{2}\}\) of \(\Gamma^{\prime}\), and a divisor \(D^{\prime}\) on \(\Gamma^{\prime\mathcal{E}^{\prime}}\) such that \((\mathcal{E}^{\prime}_{1},D^{\prime}_{1})=(\mathcal{E}^{\prime}\cup\{e^{\prime}_{1}\},D^{\prime}+v_{e^{\prime}_{1}})\) and \((\mathcal{E}^{\prime}_{2},D^{\prime}_{2})=(\mathcal{E}^{\prime}\cup\{e^{\prime}_{2}\},D^{\prime}+v_{e^{\prime}_{2}})\). Hence \(D^{\prime}_{1}(v)=D^{\prime}_{2}(v)\) for every \(v\in V(\Gamma^{\prime})\), and so \(D^{\prime}_{1}\) and \(D^{\prime}_{2}\) satisfy Condition (1) of Definition 5.8. Moreover:
1. the subsets \(\mathcal{E}^{\prime}\cup\{e^{\prime}_{1}\}\) and \(f_{E}(\mathcal{E}\cup\{e_{1}\})\) of \(E(\Gamma^{\prime})\) are equivalent, by construction.
2. the subsets \(\mathcal{E}^{\prime}\cup\{e^{\prime}_{2}\}\) and \(f_{E}(\mathcal{E}\cup\{e_{2}\})\) of \(E(\Gamma^{\prime})\) are equivalent, by construction.
3. the subsets \(f_{E}(\mathcal{E}\cup\{e_{1}\})\) and \(f_{E}(\mathcal{E}\cup\{e_{2}\})\) of \(E(\Gamma^{\prime})\) are equivalent, since \(f_{E}\) sends special pairs to special pairs.
This implies that \(\mathcal{E}^{\prime}_{1}=\mathcal{E}^{\prime}\cup\{e^{\prime}_{1}\}\) and \(\mathcal{E}^{\prime}_{2}=\mathcal{E}^{\prime}\cup\{e^{\prime}_{2}\}\) are equivalent, and hence \(f(\mathcal{E}_{1},D_{1})\sim f(\mathcal{E}_{2},D_{2})\), concluding the proof.
**Definition 5.12**.: Let \(f_{E}\) be as in Proposition 5.11. We let \(h_{f}\colon\mathbf{QD}_{v_{0}}(\Gamma)\to\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{ \prime})\) be the function taking a pseudo-divisor \((\mathcal{E},D)\in\mathbf{QD}_{v_{0}}(\Gamma)\) to
\[h_{f}(\mathcal{E},D):=(f_{E}(\mathcal{E}),D^{\prime}),\]
where \(D^{\prime}\) is the unique divisor in \(\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime},f_{E}(\mathcal{E}))\) such that \(f(\mathcal{E},D)\sim(f_{E}(\mathcal{E}),D^{\prime})\) (see Proposition 5.11).
By definition, for every pseudo-divisor \((\mathcal{E},D)\in\mathbf{QD}_{v_{0}}(\Gamma)\) we have
\[\epsilon_{\Gamma^{\prime}}(h_{f}(\mathcal{E},D))=f_{E}(\mathcal{E}). \tag{10}\]
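Informally, \(h_{f}\) is a rigidified version of \(f\): it is governed by the same bijection \(f_{E}\) on edges, and it replaces the value \(f(\mathcal{E},D)\) by the unique pseudo-divisor equivalent to it whose underlying set of edges is exactly \(f_{E}(\mathcal{E})\).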
**Proposition 5.13**.: _The map \(h_{f}\colon\mathbf{QD}_{v_{0}}(\Gamma)\to\mathbf{QD}_{v_{0}^{\prime}}(\Gamma^{ \prime})\) is an isomorphism of ranked posets._
Proof.: Let us prove that \(h_{f}\) is a bijection. We begin by proving that \(h_{f}\) is injective. Assume that \(h_{f}(\mathcal{E}_{1},D_{1})=h_{f}(\mathcal{E}_{2},D_{2})\) for some pseudo-divisors \((\mathcal{E}_{1},D_{1})\) and \((\mathcal{E}_{2},D_{2})\) on \(\Gamma\). Since \(\epsilon_{\Gamma^{\prime}}(h_{f}(\mathcal{E}_{i},D_{i}))=f_{E}(\mathcal{E}_{i})\), we have that \(f_{E}(\mathcal{E}_{1})=f_{E}(\mathcal{E}_{2})\) which implies that \(\mathcal{E}_{1}=\mathcal{E}_{2}=:\mathcal{E}\) (recall that \(f_{E}\) is a bijection, see Proposition 5.11). Writing \((\mathcal{E}_{i}^{\prime},D_{i}^{\prime}):=f(\mathcal{E},D_{i})\) for \(i=1,2\), we have that
\[(\mathcal{E}_{1}^{\prime},D_{1}^{\prime})\sim h_{f}(\mathcal{E},D_{1})=h_{f}( \mathcal{E},D_{2})\sim(\mathcal{E}_{2}^{\prime},D_{2}^{\prime}),\]
hence \((\mathcal{E}_{1}^{\prime},D_{1}^{\prime})\sim(\mathcal{E}_{2}^{\prime},D_{2}^ {\prime})\). By Proposition 5.11, we have that
\[(\mathcal{E},D_{1})=f^{-1}(\mathcal{E}_{1}^{\prime},D_{1}^{\prime})\sim f^{-1} (\mathcal{E}_{2}^{\prime},D_{2}^{\prime})=(\mathcal{E},D_{2}),\]
and hence \(D_{1}=D_{2}\) by Remark 5.9. This finishes the proof of the injectivity of \(h_{f}\). Since \(\mathbf{QD}_{v_{0}}(\Gamma)\) and \(\mathbf{QD}_{v_{0}^{\prime}}(\Gamma^{\prime})\) are finite sets of the same cardinality, it follows that \(h_{f}\) is bijective.
Let us prove that \(h_{f}\) is a morphism of ranked posets. It is clear that \(h_{f}\) preserves the rank of pseudo-divisors. Assume that \((\mathcal{E}_{1},D_{1})\geq(\mathcal{E}_{2},D_{2})\) in \(\mathbf{QD}_{v_{0}}(\Gamma)\). In particular, \(\mathcal{E}_{2}\subset\mathcal{E}_{1}\). We have that
\[f(\mathcal{E}_{1},D_{1})\geq f(\mathcal{E}_{2},D_{2}),\] \[h_{f}(\mathcal{E}_{i},D_{i})\sim f(\mathcal{E}_{i},D_{i}),\] \[\epsilon_{\Gamma^{\prime}}(h_{f}(\mathcal{E}_{2},D_{2}))=f_{E}( \mathcal{E}_{2})\subset f_{E}(\mathcal{E}_{1})=\epsilon_{\Gamma^{\prime}}(h_{ f}(\mathcal{E}_{1},D_{1})).\]
Thus \(h_{f}(\mathcal{E}_{1},D_{1})\geq h_{f}(\mathcal{E}_{2},D_{2})\) by Remark 5.10, concluding the proof that \(h_{f}\) is a morphism of ranked posets.
Using the same reasoning, we also have that \(h_{f^{-1}}\colon\mathbf{QD}_{v_{0}^{\prime}}(\Gamma^{\prime})\to\mathbf{QD}_{ v_{0}}(\Gamma)\) is a morphism of ranked posets. It remains to prove that \(h_{f^{-1}}\) is the inverse of \(h_{f}\). Fix \((\mathcal{E}^{\prime},D^{\prime})=h_{f}(\mathcal{E},D)\). We have the following equivalences
1. \(h_{f^{-1}}(\mathcal{E}^{\prime},D^{\prime})\sim f^{-1}(\mathcal{E}^{\prime},D ^{\prime})\), by the definition of \(h_{f^{-1}}\).
2. \(f^{-1}(\mathcal{E}^{\prime},D^{\prime})\sim f^{-1}(f(\mathcal{E},D))=(\mathcal{ E},D)\), because \((\mathcal{E}^{\prime},D^{\prime})\sim f(\mathcal{E},D)\) by the definition of \(h_{f}\) and because \(f^{-1}\) takes equivalent divisors to equivalent divisors (see Proposition 5.11).
Therefore \(h_{f^{-1}}(\mathcal{E}^{\prime},D^{\prime})\sim(\mathcal{E},D)\). By the definition of \(h_{f}\) and \(h_{f^{-1}}\), we have \(\mathcal{E}^{\prime}=f_{E}(\mathcal{E})\) and \(\epsilon_{\Gamma}(h_{f^{-1}}(\mathcal{E}^{\prime},D^{\prime}))=f_{E}^{-1}( \mathcal{E}^{\prime})\). It follows that
\[\epsilon_{\Gamma}(h_{f^{-1}}(\mathcal{E}^{\prime},D^{\prime}))=f_{E}^{-1}( \mathcal{E}^{\prime})=\mathcal{E}=\epsilon_{\Gamma}(\mathcal{E},D).\]
Hence \(h_{f^{-1}}(\mathcal{E}^{\prime},D^{\prime})=(\mathcal{E},D)\) by Remark 5.9. This finishes the proof.
We now substitute the isomorphism \(f\colon\mathbf{QD}_{v_{0}}(\Gamma)\to\mathbf{QD}_{v_{0}^{\prime}}(\Gamma^{ \prime})\) with \(h_{f}\colon\mathbf{QD}_{v_{0}}(\Gamma)\to\mathbf{QD}_{v_{0}^{\prime}}(\Gamma^{ \prime})\), which is an isomorphism by Proposition 5.13. By Equation (10), this allows us to use the following property:
\[\epsilon_{\Gamma^{\prime}}(f(\mathcal{E},D))=f_{E}(\mathcal{E}), \tag{11}\]
for every pseudo-divisor \((\mathcal{E},D)\in\mathbf{QD}_{v_{0}}(\Gamma)\).
**Lemma 5.14**.: _Assume that \(\Gamma\) is a tree and let \(v_{0}\) be a vertex of \(\Gamma\). Let \(D\) be the divisor on \(\Gamma\) such that_
\[D(v)=\begin{cases}0&\text{ if }v\neq v_{0}\\ -1&\text{ if }v=v_{0}\end{cases}\]
_for every \(v\in V(\Gamma)\). Then \(D\) is the unique element of \(\mathbf{QD}_{v_{0}}(\Gamma)\)._
Proof.: By Remark 3.2, the poset \(\mathbf{QD}_{v_{0}}(\Gamma)\) is a singleton. So it is enough to prove that the divisor \(D\) given by the formula in the statement is \(v_{0}\)-quasistable. By Equation (6) we have \(\beta_{\Gamma,D}(V)=D(V)-g_{V}+1\) for
every hemisphere \(V\subset V(\Gamma)\). Since \(\Gamma\) is a tree, we have \(g_{V}=0\) for every hemisphere \(V\subset V(\Gamma)\). We also have
\[D(V)=\begin{cases}0&\text{ if }v_{0}\notin V,\\ -1&\text{ if }v_{0}\in V.\end{cases}\]
It follows that
\[\beta_{\Gamma,D}(V)=\begin{cases}0&\text{ if }v_{0}\in V,\\ 1&\text{ if }v_{0}\notin V.\end{cases}\]
This proves that \(D\) is \(v_{0}\)-quasistable.
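For instance, if \(\Gamma\) is the tree with two vertices \(v_{0},v_{1}\) and a single edge, then \(\mathbf{QD}_{v_{0}}(\Gamma)\) consists of the single pseudo-divisor \((\emptyset,D)\) with \(D(v_{0})=-1\) and \(D(v_{1})=0\); for the hemispheres \(\{v_{0}\}\) and \(\{v_{1}\}\) the formula above gives \(\beta_{\Gamma,D}(\{v_{0}\})=-1-0+1=0\) and \(\beta_{\Gamma,D}(\{v_{1}\})=0-0+1=1\), in accordance with the displayed values.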
**Lemma 5.15**.: _Let \(v_{1}\) be a vertex of \(\Gamma\) which is not an articulation vertex. Fix a maximally nondisconnecting subset \(\mathcal{E}_{1}\subset E(\Gamma\setminus\{v_{1}\})\) of \(\Gamma\setminus\{v_{1}\}\). There is a unique divisor \(D_{1}\) in \(\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E}_{1})\) such that_
\[D_{1}(v_{1})=\begin{cases}\operatorname{val}(v_{1})-1&\text{ if }v_{1}\neq v_{0} \\ \operatorname{val}(v_{1})-2&\text{ if }v_{1}=v_{0}.\end{cases} \tag{12}\]
_Moreover, for each \(S\subsetneqq E(v_{1})\) there exists a unique \(D_{S}\) in \(\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E}_{1}\cup S)\) such that \((\mathcal{E}_{1},D_{1})\leq(\mathcal{E}_{1}\cup S,D_{S})\)._
Proof.: By Remark 3.2 we can assume that \(\mathcal{E}_{1}=\emptyset\) and \(\Gamma\setminus\{v_{1}\}\) is a tree.
For \(e\in E(v_{1})\), we set \(S_{e}:=E(v_{1})\setminus\{e\}\). By Lemma 5.14, there exists a unique divisor \(D_{S_{e}}\in\mathbf{QD}_{v_{0}}(\Gamma,S_{e})\) and we have \(D_{S_{e}}(v_{0})=-1\) and \(D_{S_{e}}(u)=0\) for every \(u\in V(\Gamma)\setminus\{v_{0}\}\). Note that \(S_{e}\) is a maximally nondisconnecting subset of \(\Gamma\) and, vice-versa, any maximally nondisconnecting subset of \(\Gamma\) is of the form \(S_{e}\) for some \(e\in E(v_{1})\). In particular, by Remark 3.2 we have that \(\{(S_{e},D_{S_{e}})\}_{e\in E(v_{1})}\) is the set of all maximal elements of \(\mathbf{QD}_{v_{0}}(\Gamma)\).
Set \(D_{1}:=D_{S_{e}}-\sum_{\widetilde{e}\in S_{e}}v_{\widetilde{e}}+(\operatorname{val}(v_{1})-1)v_{1}\). We have that \((\emptyset,D_{1})\) is a specialization of \((S_{e},D_{S_{e}})\) and, since \((S_{e},D_{S_{e}})\) is \(v_{0}\)-quasistable, it follows that \((\emptyset,D_{1})\) is \(v_{0}\)-quasistable as well (see Remark 3.2). This proves the existence of \(D_{1}\). Note that \(D_{1}\) is independent of the choice of \(e\in E(v_{1})\).
On the other hand, if \(\widetilde{D}_{1}\) is another such divisor, then \((\emptyset,\widetilde{D}_{1})\) is smaller than a maximal element \((S_{e},D_{S_{e}})\) for some \(e\in E(v_{1})\). By Lemma 5.14 and Equation (12), we can write \(\widetilde{D}_{1}(v_{1})=D_{S_{e}}(v_{1})+\operatorname{val}(v_{1})-1\). Since \(|S_{e}|=\operatorname{val}(v_{1})-1\), the only possible way for \((S_{e},D_{S_{e}})\) to specialize to \((\emptyset,\widetilde{D}_{1})\) is if \(\widetilde{D}_{1}=D_{S_{e}}-\sum_{\widetilde{e}\in S_{e}}v_{\widetilde{e}}+(\operatorname{val}(v_{1})-1)v_{1}\). This means that \(\widetilde{D}_{1}=D_{1}\) and finishes the proof of the first statement.
Fix \(S\subsetneqq E(v_{1})\). There exists \(e\in E(v_{1})\) such that \(S\subset S_{e}\). The divisor \(D_{S}:=D_{S_{e}}-\sum_{\widetilde{e}\in S_{e}\setminus S}v_{\widetilde{e}}+|S_{e}\setminus S|v_{1}\) is independent of the choice of \(e\) and satisfies \((\emptyset,D_{1})\leq(S,D_{S})\). Moreover, we have \(D_{S}\in\mathbf{QD}_{v_{0}}(\Gamma,S)\), because the \(v_{0}\)-quasistable pseudo-divisor \((S_{e},D_{S_{e}})\) specializes to \((S,D_{S})\) (see Remark 3.2).
We claim that \(D_{S}\) is the unique divisor in \(\mathbf{QD}_{v_{0}}(\Gamma,S)\) such that \((\emptyset,D_{1})\leq(S,D_{S})\). Indeed, assume that \(\widetilde{D}_{S}\) is another divisor in \(\mathbf{QD}_{v_{0}}(\Gamma,S)\) such that \((\emptyset,D_{1})\leq(S,\widetilde{D}_{S})\). Then, there exists a maximal pseudo-divisor \((S_{e},D_{S_{e}})\) that is greater than \((S,\widetilde{D}_{S})\). This implies that
\[\widetilde{D}_{S}(v_{1})\leq D_{S_{e}}(v_{1})+\operatorname{val}(v_{1})-|S|-1. \tag{13}\]
On the other hand, since \((S,\widetilde{D}_{S})\) is \(v_{0}\)-quasistable and greater than \((\emptyset,D_{1})\), we must have that \(\widetilde{D}_{S}(v_{1})\geq D_{S_{e}}(v_{1})+\operatorname{val}(v_{1})-|S|-1\), and hence equality holds in Equation (13). This implies that \(\widetilde{D}_{S}=D_{S}\).
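To make Lemma 5.15 concrete (the labels below are local to this illustration), suppose that \(\Gamma\) is the triangle with vertices \(v_{0},v_{1},v_{2}\) and edges \(a\) (joining \(v_{0}\) and \(v_{1}\)), \(b\) (joining \(v_{1}\) and \(v_{2}\)) and \(c\) (joining \(v_{0}\) and \(v_{2}\)), and take \(\mathcal{E}_{1}=\emptyset\), so that \(\Gamma\setminus\{v_{1}\}\) is the tree with the single edge \(c\). Unwinding the formula \(D_{1}=D_{S_{e}}-\sum_{\widetilde{e}\in S_{e}}v_{\widetilde{e}}+(\operatorname{val}(v_{1})-1)v_{1}\) from the proof, one finds \(D_{1}(v_{0})=-1\), \(D_{1}(v_{1})=1\) and \(D_{1}(v_{2})=0\), in agreement with Equation (12); moreover, for \(S=\{a\}\subsetneqq E(v_{1})\) the divisor \(D_{S}\) of the lemma is the divisor \(D_{S_{b}}\) appearing in the proof, whose values on \(V(\Gamma)\) are \(D_{S}(v_{0})=-1\) and \(D_{S}(v_{1})=D_{S}(v_{2})=0\).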
**Lemma 5.16**.: _Let \(V\) be a hemisphere of \(\Gamma\). Let \(\mathcal{E}_{1}\subset E(V,V)\) and \(\mathcal{E}_{2}\subset E(V^{c},V^{c})\) be maximally nondisconnecting subsets of \(\Gamma(V)\) and \(\Gamma(V^{c})\). Set \(\mathcal{E}=\mathcal{E}_{1}\cup\mathcal{E}_{2}\). Let \(D\) be a divisor in \(\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E})\) such that for each subset \(S\subsetneqq E(V,V^{c})\) there exists a unique \(D_{S}\in\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E}\cup S)\) such that \((\mathcal{E},D)\leq(\mathcal{E}\cup S,D_{S})\). Then there exists a vertex \(v_{1}\in V(\Gamma)\) that is incident to all edges in \(E(V,V^{c})\)._
Proof.: By Remark 3.2, we can assume that \(\mathcal{E}=\emptyset\). This implies that \(\Gamma(V)\) and \(\Gamma(V^{c})\) are trees.
Consider \(S\subsetneqq E(V,V^{c})\) and the unique divisor \(D_{S}\) of the statement. Let us prove that there exists only one specialization \((S,D_{S})\to(\emptyset,D)\). Assume, by contradiction, that there are two different specializations
\(\iota_{1},\iota_{2}\colon(S,D_{S})\to(\emptyset,D)\). This implies that we have distinct edges \(e_{1}\) and \(e_{2}\) in \(S\) such that \(\iota_{1}(v_{e_{1}}),\iota_{2}(v_{e_{2}})\in V\) and \(\iota_{1}(v_{e_{2}}),\iota_{2}(v_{e_{1}})\in V^{c}\) (note that the degrees of \((\iota_{1})_{*}(D_{S})=D\) and \((\iota_{2})_{*}(D_{S})=D\) in \(V\) and \(V^{c}\) are the same).
Let \(S_{0}:=\{e_{1},e_{2}\}\). We can consider the specialization \(\iota^{\prime}_{i}\colon\Gamma^{S}\to\Gamma^{S_{0}}\) giving rise to a factorization
\[\iota_{i}\colon(S,D_{S})\to(S_{0},(\iota^{\prime}_{i})_{*}(D_{S}))\stackrel{{ j_{i}}}{{\to}}(\emptyset,D).\]
Then \((\iota^{\prime}_{i})_{*}(D_{S})=D_{S_{0}}\), by the uniqueness of \(D_{S_{0}}\). Hence we get the specializations \(j_{1},j_{2}\colon(S_{0},D_{S_{0}})\to(\emptyset,D)\). These specializations must be distinct because \(j_{1}(v_{e_{1}}),j_{2}(v_{e_{2}})\in V\) and \(j_{1}(v_{e_{2}}),j_{2}(v_{e_{1}})\in V^{c}\). Let \(t_{e_{i}}\) and \(s_{e_{i}}\) be the end-vertices of \(e_{i}\), with \(t_{e_{i}}\in V\). Thus
\[D =(j_{1})_{*}(D_{S_{0}})=D_{S_{0}}+t_{e_{1}}+s_{e_{2}}-v_{e_{1}}-v_ {e_{2}}\] \[D =(j_{2})_{*}(D_{S_{0}})=D_{S_{0}}+s_{e_{1}}+t_{e_{2}}-v_{e_{1}}-v_ {e_{2}}.\]
It follows that \(t_{e_{1}}+s_{e_{2}}=s_{e_{1}}+t_{e_{2}}\), hence \(t_{e_{1}}=t_{e_{2}}\) and \(s_{e_{1}}=s_{e_{2}}\) (this means that \(e_{1},e_{2}\) are parallel edges). Set \(S_{1}:=\{e_{1}\}\). We see that the two pseudo-divisors \((S_{1},D_{S_{0}}-v_{e_{2}}+t_{e_{2}})\) and \((S_{1},D_{S_{0}}-v_{e_{2}}+s_{e_{2}})\) are both greater than \((\emptyset,D)\), which contradicts the uniqueness of \(D_{S_{1}}\). This proves that there exists a unique specialization \((S,D_{S})\to(\emptyset,D)\), which we will denote by \(\iota_{S}\colon(S,D_{S})\to(\emptyset,D)\).
For every \(e\in E(V,V^{c})\), we set \(u_{e}:=\iota_{\{e\}}(v_{e})\in V(\Gamma)\). Let us prove that \(\iota_{S}(v_{e})=u_{e}\), for every \(e\in E(V,V^{c})\) and every subset \(S\subsetneqq E(V,V^{c})\) containing \(e\). In fact, for every such edge \(e\) and subset \(S\), let \(\iota^{\prime}\colon(S,D_{S})\to(\{e\},D_{\{e\}})\) be the specialization factoring \(\iota_{S}\) (note that this is unique by the uniqueness of \(\iota_{S}\)), i.e., such that \(\iota_{S}=\iota_{\{e\}}\circ\iota^{\prime}\). We see that \(\iota_{S}(v_{e})=\iota_{\{e\}}(v_{e})=u_{e}\), as wanted.
Now we claim that, if \(u_{e_{0}}\in V\) for some \(e_{0}\in E(V,V^{c})\), then \(u_{e}\in V\) for every \(e\in E(V,V^{c})\). By contradiction, assume that there are edges \(e_{1},e_{2}\), such that \(u_{e_{1}}\in V\) and \(u_{e_{2}}\in V^{c}\). Set \(S_{i}=E(V,V^{c})\setminus\{e_{i}\}\) for \(i=1,2\). Since
\[(\iota_{S_{i}})_{*}(D_{S_{i}})(V)=D_{S_{i}}(V)+|\{e\in S_{i};u_{e}\in V\}|\]
and \((\iota_{S_{i}})_{*}(D_{S_{i}})(V)=D(V)\), we have that
\[D_{S_{1}}(V)=D_{S_{2}}(V)+1.\]
However, since \(S_{i}\) is a maximally nondisconnecting subset of \(\Gamma\), Lemma 5.14 implies that \(D_{S_{1}}(V)=D_{S_{2}}(V)\), giving rise to a contradiction. This proves the claim.
Finally, let us prove that \(u_{e_{1}}=u_{e_{2}}\) for every \(e_{1},e_{2}\in E(V,V^{c})\). As before, set \(S_{i}=E(V,V^{c})\setminus\{e_{i}\}\). By Lemma 5.14, we have that \(D_{S_{1}}(v)=D_{S_{2}}(v)\) for every \(v\in V(\Gamma)\), and
\[D(v)=D_{S_{i}}(v)+|\{e\in S_{i};u_{e}=v\}|\]
for every \(v\in V(\Gamma)\). Hence, taking \(v=u_{e_{1}}\), we have that
\[|\{e\in S_{1};u_{e}=u_{e_{1}}\}|=|\{e\in S_{2};u_{e}=u_{e_{1}}\}|\]
which implies that \(u_{e_{1}}=u_{e_{2}}\). The conclusion is that, if we set \(v_{1}:=u_{e}\) for some (every) edge \(e\in E(V,V^{c})\), then \(v_{1}\) is incident to every \(e\in E(V,V^{c})\).
**Theorem 5.17**.: _Let \(\Gamma\) and \(\Gamma^{\prime}\) be biconnected pure graphs. The posets \(\mathbf{QD}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})\) are isomorphic if and only if there is an isomorphism between \(\Gamma\) and \(\Gamma^{\prime}\)._
Proof.: Assume that \(\mathbf{QD}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})\) are isomorphic. Recall that we have identifications \(\mathbf{QD}(\Gamma)\cong\mathbf{QD}_{v_{0}}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})\cong\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime})\) for \(v_{0}\in V(\Gamma)\) and \(v^{\prime}_{0}\in V(\Gamma^{\prime})\). Recall that we are given an isomorphism of posets \(f\colon\mathbf{QD}_{v_{0}}(\Gamma)\to\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime})\). Since \(\Gamma\) and \(\Gamma^{\prime}\) are biconnected, they have no bridges, hence by Proposition 5.11, there is a cyclic equivalence \(f_{E}\colon E(\Gamma)\to E(\Gamma^{\prime})\).
Assume that \(\Gamma\) has only one vertex. Then \(\Gamma\) has at most one edge, hence \(\Gamma^{\prime}\) is isomorphic to \(\Gamma\) because \(f_{E}\) is a cyclic equivalence. The same argument holds if \(\Gamma^{\prime}\) has only one vertex. Assume that \(\Gamma\) has two vertices. Since \(f_{E}\) is a cyclic equivalence and since every set of two edges of \(\Gamma\) is a cycle, we must have that \(\Gamma^{\prime}\) also
has two vertices and the same number of edges. So \(\Gamma\) and \(\Gamma^{\prime}\) are isomorphic. The same argument holds if \(\Gamma^{\prime}\) has two vertices. So, we can assume that \(\Gamma\) and \(\Gamma^{\prime}\) have at least three vertices.
First we observe that if \(S^{\prime}\) is a subset of \(E(\Gamma^{\prime})\), then there exists at most one vertex \(v_{1}^{\prime}\) such that \(E(v_{1}^{\prime})=S^{\prime}\). Indeed, if there are distinct vertices \(v_{1}^{\prime},v_{2}^{\prime}\) such that \(E(v_{1}^{\prime})=E(v_{2}^{\prime})=S^{\prime}\), then either \(\Gamma^{\prime}\) is disconnected or \(V(\Gamma^{\prime})=\{v_{1}^{\prime},v_{2}^{\prime}\}\), which is a contradiction.
To prove that \(\Gamma\) and \(\Gamma^{\prime}\) are isomorphic, it is sufficient to prove that for every \(v_{1}\in V(\Gamma)\) there exists a unique \(v_{1}^{\prime}\in V(\Gamma^{\prime})\) such that \(E(v_{1}^{\prime})=f_{E}(E(v_{1}))\). By the above observation, it is sufficient to prove that for every \(v_{1}\), there exists a \(v_{1}^{\prime}\in V(\Gamma^{\prime})\) such that \(E(v_{1}^{\prime})=f_{E}(E(v_{1}))\).
Fix \(v_{1}\in V(\Gamma)\). Since \(\Gamma\) is biconnected, we have that \(v_{1}\) is not an articulation vertex. Let \(\mathcal{E}_{1}\subset E(\Gamma\setminus\{v_{1}\})\) be a maximally nondisconnecting subset of \(\Gamma\setminus\{v_{1}\}\). Let \(D_{1}\in\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E}_{1})\) be as in Lemma 5.15. The same lemma states that for each \(S\subsetneqq E(v_{1})\), there exists a unique \(D_{S}\in\mathbf{QD}_{v_{0}}(\Gamma,\mathcal{E}_{1}\cup S)\) such that \((\mathcal{E}_{1},D_{1})\leq(\mathcal{E}_{1}\cup S,D_{S})\).
Since \(E(v_{1})\) is a bond of \(\Gamma\) (recall that \(\Gamma\) is biconnected) and \(f_{E}\) is a cyclic equivalence, by Remark 2.2 we have that \(f_{E}(E(v_{1}))\) is also a bond of \(\Gamma^{\prime}\), that is, there exists a hemisphere \(V^{\prime}\subset V(\Gamma^{\prime})\) such that \(f_{E}(E(v_{1}))=E(V^{\prime},V^{\prime c})\). Set \((\mathcal{E}_{1}^{\prime},D_{1}^{\prime})=f(\mathcal{E}_{1},D_{1})\). Since \(f\) is an isomorphism and \(\epsilon_{\Gamma^{\prime}}\circ f=f_{E}\circ\epsilon_{\Gamma}\) (recall Equation (11)), we have that for each \(S^{\prime}\subsetneqq E(V^{\prime},V^{\prime c})\) there exists a unique \(D_{S^{\prime}}^{\prime}\in\mathbf{QD}_{v_{0}^{\prime}}(\Gamma^{\prime}, \mathcal{E}_{1}^{\prime}\cup S^{\prime})\) such that \((\mathcal{E}_{1}^{\prime},D_{1}^{\prime})\leq(\mathcal{E}_{1}^{\prime}\cup S ^{\prime},D_{S^{\prime}}^{\prime})\). By Lemma 5.16, there exists a vertex \(v_{1}^{\prime}\) such that \(E(V^{\prime},V^{\prime c})\subset E(v_{1}^{\prime})\). However, \(\Gamma^{\prime}\) is biconnected, that means that either \(V^{\prime}=\{v_{1}^{\prime}\}\) or \(V^{\prime c}=\{v_{1}^{\prime}\}\), otherwise \(v_{1}^{\prime}\) would be an articulation vertex of \(\Gamma^{\prime}\). This means that \(f_{E}(E(v_{1}))=E(v_{1}^{\prime})\) and we are done.
If \(\Gamma\) and \(\Gamma^{\prime}\) are isomorphic it is clear that \(\mathbf{QD}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})\) are isomorphic as well.
**Definition 5.18**.: Let \(v_{0}\) be an articulation vertex of a graph \(\Gamma\). A pair of connected subgraphs \((\Gamma_{1},\Gamma_{2})\), with \(E(\Gamma_{1})\neq\emptyset\) and \(E(\Gamma_{2})\neq\emptyset\), is called _a split of \(\Gamma\) with respect to \(v_{0}\)_ if
\[\begin{array}{ll}V(\Gamma_{1})\cap V(\Gamma_{2})=\{v_{0}\},&V(\Gamma_{1}) \cup V(\Gamma_{2})=V(\Gamma),\\ E(\Gamma_{1})\cap E(\Gamma_{2})=\emptyset,&E(\Gamma_{1})\cup E(\Gamma_{2})=E( \Gamma).\end{array}\]
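For example, if \(\Gamma\) is the pure graph obtained by gluing two cycles \(\Gamma_{1}\) and \(\Gamma_{2}\) at a single vertex \(v_{0}\) (a figure-eight graph), then \(v_{0}\) is an articulation vertex of \(\Gamma\) and \((\Gamma_{1},\Gamma_{2})\) is a split of \(\Gamma\) with respect to \(v_{0}\): the two cycles share only the vertex \(v_{0}\), their edge sets are disjoint, and together they cover \(V(\Gamma)\) and \(E(\Gamma)\).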
**Remark 5.19**.: It is easy to check that, given an articulation vertex \(v_{0}\), there always exists a split \((\Gamma_{1},\Gamma_{2})\) of \(\Gamma\) with respect to \(v_{0}\). Notice that the connected components of \(\Gamma_{1}\setminus\{v_{0}\}\) and \(\Gamma_{2}\setminus\{v_{0}\}\) form a partition of the connected components of \(\Gamma\setminus\{v_{0}\}\). Moreover, the biconnected components of \(\Gamma_{1}\) and \(\Gamma_{2}\) are biconnected components of \(\Gamma\) and, conversely, every biconnected component of \(\Gamma\) is a biconnected component of either \(\Gamma_{1}\) or \(\Gamma_{2}\).
**Proposition 5.20**.: _Let \(\Gamma\) be a pure graph and \(v_{0}\) an articulation vertex of \(\Gamma\). Let \((\Gamma_{1},\Gamma_{2})\) be a split of \(\Gamma\) with respect to \(v_{0}\). We have an isomorphism_
\[\sigma\colon\mathbf{QD}_{v_{0}}(\Gamma_{1})\times\mathbf{QD}_{v_{0}}(\Gamma_{ 2})\stackrel{{\cong}}{{\rightarrow}}\mathbf{QD}_{v_{0}}(\Gamma)\]
_taking a pair of pseudo-divisors \(((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}))\) to \((\mathcal{E}_{1}\cup\mathcal{E}_{2},D_{1}+D_{2}+v_{0})\). Moreover, if \(e\in E(\Gamma_{1})\) and \((\mathcal{E},D)\rightarrow(\mathcal{E}\setminus\{e\},\overline{D})\) is an elementary specialization in \(\mathbf{QD}_{v_{0}}(\Gamma)\) over \(e\), then \(\sigma^{-1}(\mathcal{E}\setminus\{e\},\overline{D})=((\mathcal{E}_{1}\setminus\{e \},\overline{D}_{1}),(\mathcal{E}_{2},D_{2}))\), where \((\mathcal{E}_{1}\setminus\{e\},\overline{D}_{1})\) is an elementary specialization of \((\mathcal{E}_{1},D_{1})\) in \(\mathbf{QD}_{v_{0}}(\Gamma_{1})\) over \(e\)._
Proof.: Let \(((\mathcal{E}_{1},D_{1}),\)\((\mathcal{E}_{2},D_{2}))\) and \((\mathcal{E},D):=(\mathcal{E}_{1}\cup\mathcal{E}_{2},D_{1}+D_{2}+v_{0})\) be as in the statement. Since \(\Gamma\) is pure, we have \(g=g_{\Gamma}=g_{\Gamma_{1}}+g_{\Gamma_{2}}\). Hence the degree of \(D\) is \(g-1\). Let us see that \(\sigma\) is well-defined, proving that \((\mathcal{E},D)\in\mathbf{QD}_{v_{0}}(\Gamma)\). We use Remark 2.5 and Equation (6). Let \(V\subset V(\Gamma^{\mathcal{E}})\) be a hemisphere. Assume that \(v_{0}\notin V\). Since \(V\) is a hemisphere we have that \(V\subset V(\Gamma_{i})\setminus\{v_{0}\}\) for some \(i=1,2\). In this case, we can assume without loss of generality that \(i=1\), and we have that
\[\beta_{\Gamma^{\mathcal{E}},D}(V)=\beta_{\Gamma_{1}^{\mathcal{E}_{1}},D_{1}}(V \cap V(\Gamma_{1}))>0.\]
On the other hand, if \(v_{0}\in V\), then we have that
\[\beta_{\Gamma^{\mathcal{E}},D}(V)=\beta_{\Gamma_{1}^{\mathcal{E}_{1}},D_{1}}(V \cap V(\Gamma_{1}))+\beta_{\Gamma_{2}^{\mathcal{E}_{2}},D_{2}}(V\cap V(\Gamma_{2} ))\geq 0.\]
This proves that \((\mathcal{E},D)\in\mathbf{QD}_{v_{0}}(\Gamma)\) and hence the function \(\sigma\) is well-defined.
Given a specialization \((\mathcal{E}_{i},D_{i})\to(\mathcal{E}^{\prime}_{i},D^{\prime}_{i})\) in \(\mathbf{QD}_{v_{0}}(\Gamma_{i})\) for every \(i=1,2\), we have an induced specialization \(\sigma((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}))\to\sigma((\mathcal{E}^ {\prime}_{1},D^{\prime}_{1}),(\mathcal{E}^{\prime}_{2},D^{\prime}_{2}))\) via the inclusions \(\mathcal{E}_{i}\subset E(\Gamma)\) and \(\mathcal{E}^{\prime}_{i}\subset E(\Gamma)\). This implies that \(\sigma\) is a morphism of posets.
Let us prove that \(\sigma\) is injective. Assume that \(\sigma((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}))=\sigma((\mathcal{E}^ {\prime}_{1},D^{\prime}_{1}),(\mathcal{E}^{\prime}_{2},D^{\prime}_{2}))\). It is clear that \(\mathcal{E}_{1}=\mathcal{E}^{\prime}_{1}\) and \(\mathcal{E}_{2}=\mathcal{E}^{\prime}_{2}\). Moreover, it is also clear that for each \(i=1,2\) and for each vertex \(v\in V(\Gamma_{i})\setminus\{v_{0}\}\), we have that \(D_{i}(v)=D^{\prime}_{i}(v)\). Since \(D_{i}\) and \(D^{\prime}_{i}\) have the same degree, we have that \(D_{i}(v_{0})=D^{\prime}_{i}(v_{0})\) for \(i=1,2\). Thus \(D_{1}=D^{\prime}_{1}\) and \(D_{2}=D^{\prime}_{2}\), as wanted.
Let us prove that \(\sigma\) is surjective. Since we already know that \(\sigma\) is injective, we need only to prove that the cardinalities of the domain and target of \(\sigma\) are the same. The number of elements of \(\mathbf{QD}_{v_{0}}(\Gamma)\) is \(2^{g}\) times the number of spanning trees of \(\Gamma\). Since \(2^{g}=2^{g_{1}}\cdot 2^{g_{2}}\) and each spanning tree of \(\Gamma\) is a union of spanning trees of \(\Gamma_{1}\) and \(\Gamma_{2}\), the result follows.
Finally, we show that \(\sigma^{-1}\) is a morphism of posets. We start with a specialization \((\mathcal{E},D)\to(\overline{\mathcal{E}},\overline{D})\) in \(\mathbf{QD}_{v_{0}}(\Gamma)\). Let us show that \(\sigma^{-1}(\overline{\mathcal{E}},\overline{D})\leq\sigma^{-1}(\mathcal{E},D)\). Since every specialization is a composition of elementary specializations, we can assume that \((\mathcal{E},D)\to(\overline{\mathcal{E}},\overline{D})\) is elementary. By Remark 2.4, we can write \((\overline{\mathcal{E}},\overline{D})=(\mathcal{E}\setminus\{e\},D-v_{e}+s)\), for some edge \(e\in E(\Gamma)\) with end-vertex \(s\). Assume that \(e\in E(\Gamma_{1})\). In particular, \(s\in V(\Gamma_{1})\). Set \(((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2})):=\sigma^{-1}((\mathcal{E},D))\). Consider the elementary specialization \((\mathcal{E}_{1},D_{1})\to(\overline{\mathcal{E}}_{1},\overline{D}_{1})\) in \(\mathbf{QD}_{v_{0}}(\Gamma_{1})\), where \((\overline{\mathcal{E}}_{1},\overline{D}_{1})=(\mathcal{E}_{1}\setminus\{e\},D_{1}-v_{e}+s)\). Clearly we have \(\sigma((\overline{\mathcal{E}}_{1},\overline{D}_{1}),(\mathcal{E}_{2},D_{2}))=(\overline{\mathcal{E}},\overline{D})\). This proves that \(\sigma^{-1}(\overline{\mathcal{E}},\overline{D})\leq\sigma^{-1}(\mathcal{E},D)\), as wanted. Notice that we have also proved the last statement of the proposition.
**Corollary 5.21**.: _Given a pure graph \(\Gamma\), we have an isomorphism_
\[\mathbf{QD}(\Gamma)\cong\prod\mathbf{QD}(\Gamma_{i}),\]
_where \(\Gamma_{i}\) runs through all biconnected components of \(\Gamma\)._
Proof.: The result readily follows from Proposition 5.20.
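For the figure-eight graph \(\Gamma\) considered after Definition 5.18, Corollary 5.21 gives \(\mathbf{QD}(\Gamma)\cong\mathbf{QD}(\Gamma_{1})\times\mathbf{QD}(\Gamma_{2})\), the two cycles being the biconnected components of \(\Gamma\). This is consistent with the count used in the proof of Proposition 5.20: if the cycles have \(m\) and \(n\) edges, then \(\Gamma\) has genus \(2\) and \(mn\) spanning trees, so \(|\mathbf{QD}_{v_{0}}(\Gamma)|=2^{2}\cdot mn=(2m)(2n)=|\mathbf{QD}_{v_{0}}(\Gamma_{1})|\cdot|\mathbf{QD}_{v_{0}}(\Gamma_{2})|\), each cycle having genus \(1\) and as many spanning trees as edges.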
We are now ready to prove Theorem 5.1.
Proof of Theorem 5.1.: Recall that we have reduced to the case where \(\Gamma\) and \(\Gamma^{\prime}\) are pure graphs (recall Proposition 5.3).
Assume that there is a bijection between the biconnected components of \(\Gamma/\operatorname{Br}(\Gamma)\) and \(\Gamma^{\prime}/\operatorname{Br}(\Gamma^{\prime})\) such that the corresponding components are isomorphic. We must prove that \(\mathbf{QD}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})\) are isomorphic. By Remark 3.1, we need only to show that \(\mathbf{QD}(\Gamma/\operatorname{Br}(\Gamma))\) and \(\mathbf{QD}(\Gamma^{\prime}/\operatorname{Br}(\Gamma^{\prime}))\) are isomorphic. This clearly follows from Corollary 5.21.
Conversely, assume that \(f\colon\mathbf{QD}(\Gamma)\to\mathbf{QD}(\Gamma^{\prime})\) is an isomorphism. By Remark 3.1 we can assume that \(\Gamma\) and \(\Gamma^{\prime}\) have no bridges. Consider the cyclic equivalence \(f_{E}\colon E(\Gamma)\to E(\Gamma^{\prime})\) given by Proposition 5.11. This induces a bijection between the sets of biconnected components of \(\Gamma\) and \(\Gamma^{\prime}\). We proceed by induction on the number of biconnected components of \(\Gamma\). If \(\Gamma\) is biconnected, the result follows from Theorem 5.17.
Assume that \(\Gamma\) is not biconnected. Let \(v_{0}\) be an articulation vertex of \(\Gamma\). Let \((\Gamma_{1},\Gamma_{2})\) be a split of \(\Gamma\) with respect to \(v_{0}\) (see Definition 5.18). Let \(\Gamma^{\prime}_{1}\) and \(\Gamma^{\prime}_{2}\) be the subgraphs of \(\Gamma^{\prime}\) such that \(E(\Gamma^{\prime}_{i})=f_{E}(E(\Gamma_{i}))\). Since \(f_{E}\) is a cyclic equivalence, there is an articulation vertex \(v^{\prime}_{0}\) of \(\Gamma^{\prime}\) such that \((\Gamma^{\prime}_{1},\Gamma^{\prime}_{2})\) is a split of \(\Gamma^{\prime}\) with respect to \(v^{\prime}_{0}\). Choose identifications \(\mathbf{QD}(\Gamma)\cong\mathbf{QD}_{v_{0}}(\Gamma)\) and \(\mathbf{QD}(\Gamma^{\prime})\cong\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime})\). Let
\[\sigma\colon\mathbf{QD}_{v_{0}}(\Gamma_{1})\times\mathbf{QD}_{v_{0}}(\Gamma_{2}) \to\mathbf{QD}_{v_{0}}(\Gamma)\]
\[\sigma^{\prime}\colon\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime}_{1})\times \mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime}_{2}) \to\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime})\]
be the isomorphisms of Proposition 5.20. Define
\[\overline{f}:=\sigma^{\prime-1}\circ f\circ\sigma\colon\mathbf{QD}_{v_{0}}(\Gamma_{1}) \times\mathbf{QD}_{v_{0}}(\Gamma_{2})\to\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{ \prime}_{1})\times\mathbf{QD}_{v^{\prime}_{0}}(\Gamma^{\prime}_{2}),\]
and let \(\overline{f}_{i}\colon\mathbf{QD}_{v_{0}}(\Gamma_{1})\times\mathbf{QD}_{v_{0}}( \Gamma_{2})\to\mathbf{QD}_{v_{0}^{\prime}}(\Gamma_{i}^{\prime})\) be the composition of \(\overline{f}\) with the projection onto the \(i\)-th factor.
We claim that \(\overline{f}_{1}((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}))\) is independent of \((\mathcal{E}_{2},D_{2})\) (and, similarly, \(\overline{f}_{2}((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}))\) is independent of \((\mathcal{E}_{1},D_{1})\)). The claim allows us to conclude the proof. Indeed, it implies that
\[\overline{f}((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}))=(f_{1}(\mathcal{ E}_{1},D_{1}),f_{2}(\mathcal{E}_{2},D_{2})),\]
where \(f_{i}\colon\mathbf{QD}_{v_{0}}(\Gamma_{i})\to\mathbf{QD}_{v_{0}^{\prime}}( \Gamma_{i}^{\prime})\) is an isomorphism induced by \(\overline{f}_{i}\). We conclude the proof by the induction hypothesis, using Remark 5.19.
To prove the claim, let us start with an observation coming from Proposition 5.20. Let \((\mathcal{E}^{\prime},D^{\prime})\to(\mathcal{E}^{\prime}\setminus\{e^{ \prime}\},\overline{D}^{\prime})\) be an elementary specialization in \(\mathbf{QD}_{v_{0}^{\prime}}(\Gamma^{\prime})\) with \(e^{\prime}\in E(\Gamma_{2}^{\prime})\). Set \(((\mathcal{E}^{\prime}_{1},D^{\prime}_{1}),(\mathcal{E}^{\prime}_{2},D^{ \prime}_{2})):=\sigma^{\prime-1}(\mathcal{E}^{\prime},D^{\prime})\). By Proposition 5.20, we have that \(\sigma^{\prime-1}(\mathcal{E}^{\prime}\setminus\{e^{\prime}\},\overline{D}^{ \prime})=((\mathcal{E}^{\prime}_{1},D^{\prime}_{1}),(\mathcal{E}^{\prime}_{2} \setminus\{e^{\prime}\},\overline{D}^{\prime}_{2})\), where \((\mathcal{E}^{\prime}_{2}\setminus\{e^{\prime}\},\overline{D}^{\prime}_{2})\) is an elementary specialization of \((\mathcal{E}^{\prime}_{2},D^{\prime}_{2})\) in \(\mathbf{QD}_{v_{0}^{\prime}}(\Gamma_{2}^{\prime})\) over \(e^{\prime}\).
Now, we just note that if \((\mathcal{E},D)\to(\mathcal{E}\setminus\{e\},\overline{D})\) is an elementary specialization in \(\mathbf{QD}_{v_{0}}(\Gamma)\) over \(e\), then \(f(\mathcal{E},D)\to f(\mathcal{E}\setminus\{e\},\overline{D})\) is an elementary specialization in \(\mathbf{QD}_{v_{0}^{\prime}}(\Gamma^{\prime})\) over \(f_{E}(e)\). In particular, if \((\mathcal{E}_{2},D_{2})\to(\mathcal{E}_{2}\setminus\{e\},\overline{D}_{2})\) is an elementary specialization in \(\mathbf{QD}_{v_{0}}(\Gamma_{2})\) over \(e\in E(\Gamma_{2})\), then \(\sigma((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}))\to\sigma((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2}\setminus\{e\},\overline{D}_{2}))\) is an elementary specialization in \(\mathbf{QD}_{v_{0}}(\Gamma)\) over \(e\). Then,
\[f\circ\sigma((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}))\to f\circ\sigma((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2}\setminus\{e\},\overline{D}_{2}))\]
is an elementary specialization in \(\mathbf{QD}_{v_{0}^{\prime}}(\Gamma^{\prime})\) over \(f_{E}(e)\in E(\Gamma_{2}^{\prime})\). By the above observation, we have that \(\overline{f}_{1}((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}))=\overline{f}_{1}((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2}\setminus\{e\},\overline{D}_{2}))\). Since \(\mathbf{QD}_{v_{0}}(\Gamma_{2})\) is connected and any specialization is a composition of elementary specializations, we have that \(\overline{f}_{1}((\mathcal{E}_{1},D_{1}),(\mathcal{E}_{2},D_{2}))\) is independent of \((\mathcal{E}_{2},D_{2})\) and we are done.
## 6. Torelli Theorem for tropical curves
A _metric graph_ is a pair \((\Gamma,\ell)\), where \(\Gamma=(E(\Gamma),V(\Gamma))\) is a graph and \(\ell\colon E(\Gamma)\to\mathbb{R}_{>0}\) is a function. A _tropical curve_ is the metric space obtained by gluing the segments \([0,\ell(e)]\), for \(e\in E(\Gamma)\), at their end-points as prescribed by the combinatorial data of the graph. We call \((\Gamma,\ell)\) a _model_ of the tropical curve.
Given a tropical curve \(X\) associated to a metric graph \((\Gamma,\ell)\), we say that \((\Gamma,\ell)\) is the _canonical model_ of \(X\) if \(\Gamma\) has no vertices of valence \(2\) or if \(\Gamma\) is the graph with only one vertex and one edge. The canonical model of a tropical curve \(X\) is unique, and we write \((\Gamma_{X},\ell_{X})\) for the canonical model of \(X\). A _bridge_ of a tropical curve \(X\) is a bridge of the graph \(\Gamma_{X}\). A _biconnected_ component of a tropical curve \(X\) is the tropical curve with model \((\Gamma^{\prime},\ell^{\prime})\), where \(\Gamma^{\prime}\) is a biconnected component of \(\Gamma_{X}\) and \(\ell^{\prime}\) is the restriction of \(\ell_{X}\) to \(E(\Gamma^{\prime})\).
A tropical curve has an associated tropical Jacobian \(J(X)\), which was first introduced in [10]. The tropical Jacobian \(J(X)\) has the following structure as a polyhedral complex. For each pseudo-divisor \((\mathcal{E},D)\) of \(\Gamma_{X}\), let \(\mathcal{P}_{X}(\mathcal{E},D)=\prod_{e\in\mathcal{E}}[0,\ell(e)]\). For each specialization \((\mathcal{E},D)\to(\mathcal{E}^{\prime},D^{\prime})\) there is an associated face morphism \(\mathcal{P}_{X}(\mathcal{E}^{\prime},D^{\prime})\subset\mathcal{P}_{X}(\mathcal{E },D)\). Fix \(v_{0}\in V(\Gamma_{X})\), and define
\[J_{v_{0}}^{\mathrm{qs}}(X):=\varinjlim\mathcal{P}_{X}(\mathcal{E},D)\]
where the colimit is taken through all \((\mathcal{E},D)\in\mathbf{QD}_{v_{0}}(\Gamma)\). By [1, Theorem 5.10] we have that \(J(X)\) and \(J_{v_{0}}^{qs}(X)\) are homeomorphic. The structure of a polyhedral complex for the tropical Jacobian was first described in [1], and was extended in [1], [10] and [1].
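For instance, if the canonical model \(\Gamma_{X}\) of \(X\) consists of two vertices joined by two parallel edges \(e\) and \(e^{\prime}\), so that \(X\) has genus \(1\), then \(\mathbf{QD}_{v_{0}}(\Gamma_{X})\) has two maximal elements, whose cells are the segments \([0,\ell(e)]\) and \([0,\ell(e^{\prime})]\), and two minimal elements, whose zero-dimensional cells are faces of both segments; the colimit therefore glues the two segments along their end-points, and one recovers for \(J_{v_{0}}^{\mathrm{qs}}(X)\) a circle of total length \(\ell(e)+\ell(e^{\prime})\), as expected for a tropical curve of genus \(1\).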
By Proposition 3.4, we have that \(J_{v_{0}}^{qs}(X)\) does not depend on \(v_{0}\), so we denote it by \(J^{\mathrm{qs}}(X)\).
The following result is a corollary of Theorem 5.1.
**Theorem 6.1**.: _Let \(X\) and \(X^{\prime}\) be tropical curves without bridges such that \(J(X)\) and \(J(X^{\prime})\) are isomorphic as polyhedral complexes (with the structure of polyhedral complexes given by \(\mathbf{QD}(\Gamma_{X})\) and \(\mathbf{QD}(\Gamma_{X^{\prime}})\)). Then, there is a bijection between the biconnected components of \(X\) and \(X^{\prime}\) such that corresponding components are isomorphic._
Proof.: An isomorphism \(f_{J}\colon J^{\operatorname{qs}}(X)\to J^{\operatorname{qs}}(X^{\prime})\) induces an isomorphism \(f\colon\mathbf{QD}(\Gamma_{X})\to\mathbf{QD}(\Gamma_{X^{\prime}})\) and hence, by Theorem 5.1, also isomorphisms between the biconnected components of \(\Gamma_{X}\) and of \(\Gamma_{X^{\prime}}\). In particular, if \(e\in E(\Gamma_{X})\) is an edge not contained in any special pair and \(D\in\mathbf{QD}(\Gamma_{X},\{e\})\), we have that \(f(\{e\},D)=(\{f_{E}(e)\},D^{\prime})\) for some \(D^{\prime}\in\mathbf{QD}(\Gamma_{X^{\prime}},\{f_{E}(e)\})\). Moreover, we also have that \(f_{J}(\mathcal{P}_{X}(\{e\},D))=\mathcal{P}_{X^{\prime}}(\{f_{E}(e)\},D^{\prime})\). Since \(\mathcal{P}_{X}(\{e\},D)\) is a segment of length \(\ell_{X}(e)\), we have that \(\ell_{X}(e)=\ell_{X^{\prime}}(f_{E}(e))\). If \(\{e_{1},e_{2}\}\) is a special pair, we have that \(f_{J}(\mathcal{P}_{X}(\{e_{1},e_{2}\},D))=\mathcal{P}_{X^{\prime}}(\{f_{E}(e_{1}),f_{E}(e_{2})\},D^{\prime})\), which means that \(\{\ell_{X}(e_{1}),\ell_{X}(e_{2})\}=\{\ell_{X^{\prime}}(f_{E}(e_{1})),\ell_{X^{\prime}}(f_{E}(e_{2}))\}\). Since \(e_{1},e_{2}\) are conjugated by an automorphism of \(\Gamma_{X}\) and \(f_{E}(e_{1})\), \(f_{E}(e_{2})\) are conjugated by an automorphism of \(\Gamma_{X^{\prime}}\), we have that \(X\) and \(X^{\prime}\) have isomorphic biconnected components.
|
2306.00193 | Sign reversal diode effect in superconducting Dayem nanobridges | Supercurrent diodes are nonreciprocal electronic elements whose switching
current depends on their flow direction. Recently, a variety of composite
systems combining different materials and engineered asymmetric superconducting
devices have been proposed. Yet, ease of fabrication and tunable sign of
supercurrent rectification joined to large efficiency have not been assessed in
a single platform so far. We demonstrate that all-metallic superconducting
Dayem nanobridges naturally exhibit nonreciprocal supercurrents under an
external magnetic field, with a rectification efficiency up to $\sim 27\%$. Our
niobium nanostructures are tailored so that the diode polarity can be tuned by
varying the amplitude of an out-of-plane magnetic field or the temperature in a
regime without magnetic screening. We show that sign reversal of the diode
effect may arise from the high-harmonic content of the current phase relation
in combination with vortex phase windings present in the bridge or an anomalous
phase shift compatible with anisotropic spin-orbit interactions. | Daniel Margineda, Alessandro Crippa, Elia Strambini, Yuri Fukaya, Maria Teresa Mercaldo, Mario Cuoco, Francesco Giazotto | 2023-05-31T21:20:18Z | http://arxiv.org/abs/2306.00193v2 | # Sign reversal diode effect in superconducting Dayem nanobridges
###### Abstract
**Supercurrent diodes are nonreciprocal electronic elements whose switching current depends on their flow direction. Recently, a variety of composite systems combining different materials and engineered asymmetric superconducting devices have been proposed. Yet, ease of fabrication and tunable sign of supercurrent rectification joined to large efficiency have not been assessed in a single platform so far. Here, we demonstrate that all-metallic superconducting Dayem nanobridges naturally exhibit nonreciprocal supercurrents in the presence of an external magnetic field, with a rectification efficiency up to \(\sim 27\%\). Our niobium nanostructures are tailored so that the diode polarity can be tuned by varying the amplitude of an out-of-plane magnetic field or the temperature in a regime without magnetic screening. We show that sign reversal of the diode effect may arise from the high-harmonic content of the current phase relation of the nanoconstriction in combination with vortex phase windings present in the bridge or an anomalous phase shift compatible with anisotropic spin-orbit interactions.**
Non-reciprocal charge transport is an essential element in modern electronics as a building block for multiple components such as rectifiers, photodetectors, and logic circuits. For instance, pn-junctions and Schottky-barrier devices are archetypal semiconductor-based examples of systems, known as _diodes_, with direction-selective charge propagation. Their operation stems from the spatial asymmetry of the heterojunction that provides inversion symmetry breaking. Likewise, dissipationless rectification refers to the asymmetric switching of the critical current (\(I_{sw}\)) required to turn a superconductor into the normal state depending on the current bias polarity. Breaking both inversion and time-reversal symmetry, which are preserved in conventional superconductors, is the foundational aspect to enable the diode effect, as recently observed in superconducting materials [1; 2; 3; 4] and heterostructures [5; 6; 7; 8; 9; 10; 11; 12]. Recent experimental findings have boosted a number of theoretical investigations in superconductors [13; 14; 15; 16] and Josephson junctions (JJs) [17; 18; 19]. In particular, several mechanisms have been proposed to account for the supercurrent diode effect (SDE). On the one hand, those based on intrinsic depairing currents focus on finite momentum pairing that arise from the combination of spin-orbit coupling and Zeeman field [13; 14; 15; 16; 20], or from Meissner currents [18]. On the other hand, other works underline the role of Abrikosov vortices, magnetic fluxes, and screening currents as key-elements for setting out non-reciprocal charge transport in superconductors [21; 22; 23; 24; 25; 26], such as in systems with trapped Abrikosov vortices [27; 28] or in micron-sized Nb-based strips with asymmetric edges [29; 30; 31; 32].
To date, most research efforts have aimed at realizing an SDE that maximizes the rectification efficiency, while a change of its polarity has been reported in only a few cases [6; 10; 11; 33; 34; 3]. The SDE sign reversal has been interpreted as a consequence of finite-momentum pairing [6; 11; 33; 34] requiring in-plane magnetic fields, or of diamagnetic currents and Josephson vortices [10; 28], and has also been ascribed to vortex ratchet and asymmetric pinning effects [35; 36; 37; 38]. All these results point to the need for effective control over the polarity change of the SDE, and for its implementation in a simple, monolithic platform suitable for nanoscale miniaturization, which has not been accomplished yet.
Here, we experimentally demonstrate a sign reversal tunable SDE in elemental superconducting weak links made on niobium (Nb). Nano-sized constrictions of Nb realize Dayem bridges whose switching currents for positive and negative sweep direction, \(I_{sw}^{+}\) and \(I_{sw}^{-}\), respectively, differ in the absolute value. This difference can be tuned both in amplitude and sign by an out-of-plane magnetic field (\(B_{z}\)), without inverting the polarity of \(B_{z}\). Thermal effects can lead to two different energy scales for the maximal amplitude and the sign reversal of the diode efficiency.
We show that sign reversal of the non-reciprocal response may arise from the phase shift due to the vortex phase winding or from spin-orbit effects due to the material granularity, in either case jointly with a few-harmonic content of the current-phase relation (CPR) of the weak link.
## II Metallic diode architectures
We analyze two different geometries of Nb Dayem bridges, i.e., weak links made of a constant-thickness and all-metallic constriction between two superconducting banks [39]. The schematics of the electronic circuitry and false-color scanning electron micrographs of the devices are shown in Fig. 1a. In the first type of samples, 25-nm-thick micrometer-wide banks are connected via a link whose length \(l\) is \(\sim 80\) nm and width \(w\sim 180\) nm. The second type consists of 55-nm-thick banks connected via a quasi-one-dimensional wire with \(l\simeq 1\,\mu\)m, and \(w\simeq 80\) nm. Hereafter, we shall refer to the first and second type of bridges as "short" and "long", respectively. Both device families are patterned through a single electron-beam lithography step followed by sputter deposition of the Nb thin film and lift-off. A 4-nm-thick Ti layer is pre-sputtered for adhesion purposes.
The differential resistance \(R=dV/dI\) versus temperature \(T\) of two representative bridges is shown in Fig. 1b. The first abrupt reduction of \(R\) marks the critical temperature of the Nb films \(T_{TF}\simeq 8.1(7.9)\) K for the 55(25)-nm-thick sample. The resistance drops to zero at the critical temperature of the weak link (\(T_{c}\)), which strongly depends, along with its normal-state resistance \(R_{N}\), on the geometry [39]. While the "short" bridge exhibits \(R_{N}\sim 40\,\Omega\), the "long" one has \(R_{N}\sim 270\,\Omega\).
Below \(T_{c}\), dissipationless transport occurs in the bridges owing to the Cooper-pair supercurrent. The temperature dependence of the switching current \(I_{sw}\) of both devices is displayed in Fig. 1c. From the fit to the Bardeen equation \(I_{sw}(T)=I_{sw}^{0}[1-(\frac{T}{T_{c}})^{2}]^{\frac{3}{2}}\), we extract a zero-temperature switching current \(I_{sw}^{0}\simeq 720\)\(\mu\)A and a critical temperature \(T_{c}^{S}\simeq 4.3\) K for the "short" bridge. Similarly, for the "long" weak link we obtain \(I_{sw}^{0}\simeq 42\,\mu\)A and \(T_{c}^{L}\simeq 2.1\) K. From these values, we determine a zero-temperature BCS energy gap \(\Delta_{0}\)=1.764 \(k_{B}T_{c}^{S(L)}\simeq 650(320)\,\mu\)eV for the "short"("long") bridge, where \(k_{B}\) is the Boltzmann constant. For the "long" bridge, we deduce a superconducting coherence length \(\xi_{0}=\sqrt{\hbar l/(R_{N}wte^{2}N_{F}\Delta_{0})}\simeq 11\,\)nm, where \(t\) is the film thickness, \(N_{F}\simeq 5.33\times 10^{47}J^{-1}m^{-3}\) is the density of states at the Fermi level of Nb [40], and \(e\) is the electron charge. Similarly, we can evaluate the London penetration depth \(\lambda_{L}=\sqrt{\hbar R_{N}wt/(\pi l\mu_{0}\Delta_{0})}\simeq 790\,\)nm, where \(\mu_{0}\) is the vacuum magnetic permeability. Since \(w,t\ll\lambda_{L}\), the bridges can be uniformly penetrated by an external magnetic field.
Figure 1: **Nb Dayem nanobridge diodes and basic electrical characterization.****a**, Samples and schematic setup to measure the voltage characteristics \(V\) as a function of a bias current \(I\) with an applied out-of-plane magnetic field \(B_{z}\). In the upper part, scanning electron micrographs of two weak links with different length \(l\) and width \(w\): one is a constriction of Nb strip (with characteristic dimensions \(l\sim 80\) nm and \(w\sim 180\) nm), the other is a quasi-1D wire (\(l\sim 1\,\mu\)m, \(w\sim 80\) nm) connecting the banks; they are labelled as “short” and “long”, respectively. The electrodes used in the experiment are false-colored in orange. **b**, Temperature dependence of the zero-bias resistance for the two Dayem bridges. Thin film (\(T_{TF}\)) and weak link (\(T_{c}\)) critical temperatures are marked by dashed lines for the “short” device. **c**, Temperature (\(T\)) dependence of the switching supercurrents for the “short” (blue dots) and the “long” (black dots) bridges. Red dashed lines are the fit to Bardeen equation, as described in the text. \(IV\) characteristics of the “short”, **d**, and “long” bridge, **e**. The curves are vertically offset for clarity, and the superconducting region is highlighted in grey to visualize the temperature-induced decay of the dissipationless current. Black arrows indicate the direction of the bias current swept back and forth starting at zero amplitude.
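As a quick numerical cross-check (not part of the paper's analysis), evaluating the coherence-length and penetration-depth expressions with the "long"-bridge parameters quoted in the text reproduces the stated values; a short script with physical constants in SI units:

```python
import numpy as np

hbar = 1.0546e-34          # J s
e = 1.602e-19              # C
mu0 = 4e-7 * np.pi         # vacuum permeability, H/m
N_F = 5.33e47              # density of states at E_F for Nb, 1/(J m^3)

l, w, t = 1e-6, 80e-9, 55e-9      # "long" bridge length, width, thickness (m)
R_N = 270.0                        # normal-state resistance (Ohm)
Delta0 = 320e-6 * e                # zero-temperature gap, 320 ueV in J

xi0 = np.sqrt(hbar * l / (R_N * w * t * e**2 * N_F * Delta0))
lambda_L = np.sqrt(hbar * R_N * w * t / (np.pi * l * mu0 * Delta0))
print(f"xi_0 ~ {xi0 * 1e9:.0f} nm, lambda_L ~ {lambda_L * 1e9:.0f} nm")  # ~11 nm, ~790 nm
```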
The current vs voltage (\(IV\)) characteristics of the "short" and "long" bridges are shown in Fig.1d,e, respectively, for selected values of bath temperature. The devices show an abrupt transition to the normal state at the switching current \(I_{sw}\), and display the typical hysteresis of metallic junctions which originates from Joule heating induced in the bridge when the bias current is swept back from the resistive to the dissipationless state [41].
## "Short" Dayem Bridge Diode Performance
Let us now discuss how the "short" Dayem bridge in Fig. 1a can be used as a supercurrent diode. Nonreciprocal dissipationless transport is revealed by comparing the switching currents while sweeping the biasing current from zero to positive values (\(I_{sw}^{+}\)) or from zero to negative values (\(I_{sw}^{-}\)) in the presence of an out-of-plane magnetic field \(B_{z}\). The switching currents at \(T=0.3\,\)K are reported in Fig. 2a. The magnetic field increasingly reduces the superconducting gap and thereby both the switching currents. A linear decrease in \(B_{z}\) of \(I_{sw}^{+}\) and \(|I_{sw}^{-}|\) is observed up to \(\sim 0.07\,\)T. At larger fields, the dependence of the switching currents on \(B_{z}\) is sublinear. Notably, both \(I_{sw}(B_{z})\) are not antisymmetric with respect to the magnetic field (\(I_{sw}(B_{z})\neq I_{sw}(-B_{z})\)) while the symmetry relation \(I_{sw}^{+}(B_{z})\simeq-I_{sw}^{-}(-B_{z})\) is respected, within the small experimental fluctuations, as theoretically expected. This symmetry relation is further confirmed in the switching currents difference \(\Delta I_{sw}\equiv I_{sw}^{+}-|I_{sw}^{-}|\) displaying an odd-in-\(B_{z}\) superconducting diode effect (\(\Delta I_{sw}(B_{z})\simeq-\Delta I_{sw}(-B_{z})\)) as shown in Figure 2b. \(\Delta I_{sw}\) is characterized by a maximum at \(B_{max}\simeq 0.05\,\)T and a sign inversion at \(B_{R}\simeq 0.1\,\)T where \(I_{sw}^{+}\) and \(|I_{sw}^{-}|\) have a crossing (see Fig. 2a). From now on, \(B_{max}\) indicates the position in field of the rectification peak. Two \(IV\) curves, recorded for magnetic fields lower and larger than \(B_{R}\), are plotted in Fig. 2c to emphasize the sign change in the rectification.
Figure 2: **Diode effect in a “short” Dayem bridge.****a**, Out-of-plane magnetic field dependence of the switching current \(I_{sw}^{+}\) (\(|I_{sw}^{-}|\)) for positive (negative) bias current recorded at 300 mK. **b**, \(\Delta I_{sw}\) obtained from **a**. The magnetic field values at which the maximum (\(B_{max}\)) and sign reversal (\(B_{R}\)) of the rectification occur are marked by red and black dashed lines, respectively. **c**, \(IV\) curves with positive (top) and negative (bottom) rectification recorded at the magnetic fields marked by bars in panel **a**. **d**, Color plot of the rectification efficiency \(\eta(B_{z},T)\) versus bath temperature and magnetic field. **e**, (Top panel) Rectification parameters \(\eta_{max}\equiv(\eta_{max}(B_{z}>0)+\eta_{max}(B_{z}<0))/2\) (left vertical axis) and field-to-rectification efficiency transfer function \(\Gamma\) (right vertical axis) vs normalized temperature. \(\eta_{max}\) is the rectification value of the low-field peak at \(B_{max}\). (Bottom panel) (\(B_{max}\)) and (\(B_{R}\)) magnetic fields versus normalized temperature. \(T_{c}^{S}\) denotes the critical temperature of the “short” bridge. **f**\(\eta(B_{z})\) for selected values of bath temperature marked by dashed lines in panel **d**. Curves are vertically offset for clarity. \(\eta(B_{z})\) function exhibits two extrema, below and above \(B_{R}\). The rectification peak \(\eta_{max}\), identified by \(B_{max}\), decreases in magnitude and field for \(T\geq 1.75\,\)K = 0.4 \(T_{c}^{S}\), until the minimum in the rectification at \(B>B_{R}\) becomes an absolute extremum at T = 3 K (orange plot). For discussion, we keep \(B_{max}\) and \(\eta_{max}\) as the nomenclature for the maximum rectification.
Nonreciprocal transport can be conveniently quantified by the rectification efficiency defined as \(\eta=\frac{I_{sw}^{+}-|I_{sw}^{-}|}{I_{sw}^{+}+|I_{sw}^{-}|}\). Figure 2d shows the evolution of \(\eta\) versus \(B_{z}\) and \(T\). \(\eta(B_{z})\) is substantially unaffected by thermal effects up to \(T\simeq 1.75\,\)K = \(0.41\,T_{c}^{S}\) where a maximum rectification \(\eta_{max}\sim 27\%\) is obtained. The evolution of \(\eta_{max}\) in temperature is displayed in the top panel of Fig. 2e (left vertical axis). In addition, we parametrize the diode sensitivity to the magnetic field in the vicinity of the abrupt sign change as \(\Gamma=\eta_{max}/(|B_{max}-B_{R}|)\). A maximum value \(\Gamma\sim 650\,\)T\({}^{-1}\) is achieved around \(2.25\,\)K (see Fig. 2e, top panel and right vertical axis). At higher temperatures, the quantities \(\eta_{max}\), \(\Gamma\), and the characteristic magnetic fields \(B_{max}\) and \(B_{R}\) related to rectification (see Fig. 2e, bottom panel) all decrease in a similar fashion. The full profile of the rectification efficiency versus \(B_{z}\) is better visualized in Fig. 2f where \(\eta(B_{z})\) is plotted for a few selected values of temperature.
Figure 3: **Diode effect in a “long” Dayem bridge.****a**, Magnetic field dependence of the switching current \(I_{sw}^{+}\) (\(|I_{sw}^{-}|\)) for positive (negative) bias current recorded at \(50\,\)mK. **b**, \(\Delta I_{sw}\) obtained from panel **a**. The rectification efficiency increases linearly in \(B_{z}\) until \(|B_{z}|\simeq 0.15\,\)T. Red and black dashed lines mark the magnetic field values corresponding to sign reversal (\(B_{R}\)) and maximum rectification (\(B_{max}\)), respectively. Inset: Blow-up of \(\Delta I_{sw}\) for large positive (black) and negative (brown) fields displaying several changes of signs. **c**, \(IV\) characteristics with positive (top) and negative rectification (bottom) recorded at the magnetic fields marked by bars in panel **a**. **d**, Color plot of the rectification efficiency as a function of temperature and magnetic field, \(\eta(B_{z},T)\). Dashed lines are guides for the eye to highlight the different temperature trends in \(B_{max}\) and \(B_{R}\). **e**, (Top panel) Rectification parameters \(\eta_{max}\equiv(\eta_{max}(B_{z}>0)+\eta_{max}(B_{z}<0))/2\) (left vertical axis) and field-to-rectification efficiency transfer function \(\Gamma\) (right vertical axis) vs normalized temperature. (Bottom panel) (\(B_{max}\)) and (\(B_{R}\)) magnetic fields versus normalized temperature. \(T_{c}^{L}\) denotes the critical temperature of the “long” bridge. **f**, \(\eta(B_{z})\) for selected bath temperatures marked by dashed lines in panel **d**. Curves are vertically offset for clarity.
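To make the figures of merit concrete, the snippet below evaluates \(\eta\), \(B_{max}\), \(B_{R}\), and \(\Gamma\) from a hypothetical field sweep; the numerical values are invented for illustration and are not the measured data.

```python
import numpy as np

def eta(I_pos, I_neg):
    """Rectification efficiency from the positive and negative switching currents."""
    I_pos, I_neg = np.asarray(I_pos, float), np.abs(np.asarray(I_neg, float))
    return (I_pos - I_neg) / (I_pos + I_neg)

# hypothetical sweep: out-of-plane field (T) and switching currents (uA)
B = np.array([0.02, 0.05, 0.08, 0.10, 0.12])
I_plus = np.array([650.0, 560.0, 430.0, 360.0, 300.0])
I_minus = np.array([-560.0, -420.0, -380.0, -360.0, -330.0])

e = eta(I_plus, I_minus)
k = np.argmax(np.abs(e))
B_max, eta_max = B[k], e[k]                                 # rectification peak
crossings = np.where(np.diff(np.sign(e)) != 0)[0]
B_R = B[crossings[0] + 1]                                   # first sign-reversal field
Gamma = abs(eta_max) / abs(B_max - B_R)                     # field-to-rectification transfer
print(f"eta_max = {eta_max:.2f} at B_max = {B_max} T, B_R = {B_R} T, Gamma = {Gamma:.1f} 1/T")
```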
## "Long" Dayem Bridge Diode Performance
Next, we characterize the "long" nanobridge shown in Fig. 1a. Figure 3a reports the decay of \(I_{sw}^{+}\) and \(|I_{sw}^{-}|\) as a function of \(B_{z}\). At first, we notice that the switching currents are damped down to \(\sim 60\%\) of their zero-field value at \(B_{z}\simeq 0.3\,\)T, whereas in the previous sample the same damping is achieved at lower fields (\(B_{z}\simeq 0.07\,\)T, see Fig. 2a). Figure 3b displays \(\Delta I_{sw}\) versus \(B_{z}\). For low magnetic fields, \(\Delta I_{sw}(B_{z})\) exhibits a linear relation. While increasing \(|B_{z}|\) further, \(\Delta I_{sw}\) bends and then inverts its trend: an abrupt jump realizes a sign reversal at \(|B_{R}|\sim 0.34\,\)T. Then, a relative peak at \(|B_{max}|\simeq 0.38\,\)T, marked by a red dashed line, represents the field at which maximum rectification efficiency is achieved, as before. Next to the sign change, \(\Delta I_{sw}\) departs from the clean trend and looks noisy. Such small jumps are reproducible, thus ruling out a stochastic nature of the underlying processes. Finally, \(\Delta I_{sw}\) oscillates at higher magnetic fields, as shown in the inset of Fig. 3b. Two \(IV\) curves, for fields lower and larger than \(B_{R}\), are plotted in Fig. 3c to highlight that the rectification sign changes from negative to positive as the field \(B_{z}>0\) increases, contrary to the "short" bridge. This change in symmetry is attributed to vortex nucleation, as discussed later.
The magnetic field and temperature dependence of the rectification efficiency \(\eta\) is presented as a color plot in Fig. 3d. The sign change and the maximum rectification are affected by temperature in a different way as compared to the short constriction. The linear increase of the rectification at low fields smears out with temperature, reducing \(B_{R}\) until it vanishes at \(T\simeq 1.1\,\)K = \(0.5\,T_{c}^{L}\). Figure 3e shows that \(B_{max}\) is more robust in temperature than \(B_{R}\): it is still observable at \(T\simeq 1.8\,\)K = \(0.8\,T_{c}^{L}\). The sudden change of sign is quantified by a maximum \(\Gamma\sim 360\,\)T\({}^{-1}\) at \(0.15\,\)K. As before, the profile of rectification efficiency as a function of \(B_{z}\) is shown in Fig. 3f for a few selected values of temperature. The difference in the temperature trend between \(B_{R}\) and \(B_{max}\) (see bottom panel of Fig. 3e) suggests two different energy scales responsible for the sign reversal and the maximum rectification, as confirmed by measurements obtained in another similar sample (see Extended Data Fig. 1). Rectification in the second sample exhibits a similar \(\eta(B_{z})\) lineshape with almost identical \(B_{max}\) and \(\eta_{max}\) values and temperature dependence. In this sample, low-field features fade more rapidly with temperature, which appears to be sample dependent.
## Modeling the sign reversal of the diode effect
We propose two physical scenarios compatible with our devices that may explain our experimental findings. Both rely on non-sinusoidal CPRs, typical of superconducting nanobridges [39], combined with a source of inversion-symmetry breaking. In model I, this is represented by a supercurrent vortex, while in model II by spin-orbit couplings. The out-of-plane magnetic field is considered in units of a characteristic scale \(B^{*}\), defined as the field at which the rectification vanishes, \(\eta(B^{*})=0\), and the high-harmonic amplitude is suppressed.
In model I, the Dayem nanobridge is schematized as a one-dimensional chain of weak links of width \(w\) formed by the Nb grains. A supercurrent vortex can nucleate in one of these weak links [42; 43], as sketched in Fig. 4a, which induces a phase winding in the superconducting order parameter. These vortices have a typical size of the order of \(\xi\), so only a few of them can be accommodated within the bridge. Notice that such vorticity is not a screening of the \(B\)-field, since the small dimensions of the bridge (\(w\ll\lambda_{L}\)) allow full penetration of \(B_{z}\). In this framework, the CPR is affected by two phase shifts: the conventional vector potential associated to \(B_{z}\) and the phase winding of the vortex. It is indeed the interplay of these two contributions that is responsible for a sign change of the rectification parameter. Though on a different length scale, this physical scenario is similar to that of Josephson phase vortices [28; 44]. The rectification parameter \(\eta\) is then evaluated by determining the maximum and minimum values of the Josephson current with respect to the phase bias, see Methods for details.
Figure 4b reports the evolution of \(\eta\) in \(B\) for different amplitudes of the second harmonic, \(I_{2}\), of the CPR. The magnitude of \(\eta\) scales with \(I_{2}\) and shows multiple nodes, whose position in \(B\) is independent of \(I_{2}\). The sign change also depends on the position of the vortex, as displayed in Fig. 4c, where \(\eta(B)\) is evaluated for a vortex nucleated at different distances from the lateral edge of the bridge (\(x_{\nu}\)). This behavior suggests a phase-shift competition dominated at low fields by the vortex phase slip, and at large fields by the vector potential.
Another scenario that is able to describe the sign reversal in the diode rectification can be envisioned by combining the colored CPR of the nanobridge with an anomalous phase shift [45; 46; 47; 48; 14] induced by spin-orbit interactions and magnetic fields. In particular, we can expect that mirror symmetry can be locally or globally broken in polycrystalline films [49], thereby leading to spin-orbit interaction of both Rashba and Dresselhaus types (see Methods). Figure 4d shows a sketch of the bridge modeled as an effective \(SS^{\prime}S\) structure, where the \(S\) and \(S^{\prime}\) components have different amplitudes of the superconducting gap and different spin-orbit couplings breaking horizontal and vertical mirror symmetries. The anisotropic spin-orbit interaction generates an anomalous phase shift in the CPR that varies with the magnetic field, as explicitly shown in Extended Data Fig. 2. The anomalous phase is then introduced in the CPR via a phenomenological parameter \(\Gamma_{B}\) providing a first-order cosine component in the Fourier expansion, i.e., \(I=\sum_{n}I_{n}\sin(n\varphi)+\Gamma_{B}\cos(\varphi)\) (see Methods for details). The anomalous phase \(\varphi_{0}\) is related to the amplitude of \(\Gamma_{B}\), while we assume a linear damping of the high order harmonics \(I_{n}=I_{n,0}(1-B/B^{*})\), which defines the scale \(B^{*}\).
From model II, the diode sign reversal takes place only in the presence of a sizable third harmonic component. Figure 4e reports \(\eta(B)\) for some values of \(I_{3,0}\). By increasing \(I_{3,0}\), the sign inversion gets more pronounced, whereas the maxima and minima of the rectification (\(B^{*}_{max}\simeq 0.74B^{*}\), \(B^{*}_{min}\simeq 0.89B^{*}\)) are barely affected by the weight of the harmonic. Moreover, by including more harmonics in the CPR, the lineshape of \(\eta(B)\) is modified. For instance, Figure 4f shows that a fourth-order harmonic affects the magnetic field dependence by substantially removing the sign change.
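A minimal numerical sketch of the model II computation (not the authors' code) is given below: the CPR is expanded into a few sine harmonics plus the field-dependent cosine term \(\Gamma_{B}\cos\varphi\), the higher harmonics are damped linearly in \(B/B^{*}\) (whether the first harmonic is also damped is an assumption made here), and \(\eta\) is obtained from the maximum and minimum of the supercurrent over the phase. The parameter values mirror those quoted in the caption of Fig. 4e,f; the exact lineshape depends on these choices.

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 4001)

def eta_of_B(b, I0=(1.0, -0.3, 0.25), gamma_slope=0.2):
    """b = B/B*; I0 = zero-field harmonic amplitudes (I_1,0, I_2,0, I_3,0)."""
    # damp only the higher harmonics linearly in b (assumption: I_1 kept constant)
    amps = [I0[0]] + [a * (1.0 - b) for a in I0[1:]]
    I = sum(a * np.sin((n + 1) * phi) for n, a in enumerate(amps))
    I += gamma_slope * b * np.cos(phi)            # anomalous-phase (cosine) contribution
    I_plus, I_minus = I.max(), abs(I.min())       # forward and backward critical currents
    return (I_plus - I_minus) / (I_plus + I_minus)

for b in (0.2, 0.5, 0.74, 0.89, 1.0):
    print(f"B/B* = {b:.2f}  ->  eta = {eta_of_B(b):+.3f}")
```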
## IV Discussion
The comparison between our experimental findings and the proposed models reveals some important features supporting the proposed mechanisms. In particular, for both bridges \(\eta\) has an almost monotonic damping in temperature that can be explained in both models by the reduction of high-order harmonics. This is expected in long metallic weak links where the CPR evolves from highly distorted to sinusoidal-like shapes at large temperatures [39, 50]. Moreover, as shown in Figs. 2e and 3e, \(B_{max}\) is temperature resilient up to \(T\approx 0.5\,T_{c}\). This feature is fairly captured by both models, as the maximum rectification looks almost independent of the harmonic content (see Fig. 4b for model I and Fig. 4e for model II). However, "long" bridges exhibit features that are mostly accounted for by model I, while "short" ones are more compatible with model II. For example, in long (short) bridges the sign reversal is present below (above) \(B_{max}\), as shown in Fig. 3b and Fig. 2b, to be compared with Fig. 4b and e, respectively. Multiple sign-reversal nodes appear at high fields only for long bridges, as shown in Fig. 3b and well described by the interferometric mechanism of model I, whereas the rectification lineshape given by model II presents only one inversion node. Moreover, the quick damping of the rectification inversion observed at low fields (\(<0.3\,\mathrm{T}\)) in Fig. 3f is captured by the vortex dynamics described in Fig. 4c. The relative size of the vortex \(\xi/w\) is temperature dependent and influences the vortex position in the bridge. Thus, it is plausible to expect that variations of the vortex size mostly affect the rectification lineshape at low fields while remaining substantially unchanged for larger fields, as shown in Fig. 4c.
Figure 4: **Modelling supercurrent across a nanobridge for sign-tunable diode effect.****a**, (Top panel) Sketch of the theoretical framework for model I, with the arrows indicating the phase winding associated with a vortex nucleated near the grain boundary. (Bottom panel) Characteristic CPR of the weak link hosting a vortex close to the grain boundary. **b**, Non-reciprocal rectification efficiency \(\eta\) calculated for a few values of the second harmonic at a given position of the vortex core with \((x_{v},y_{v})=(0.4\,w,0.4\,w)\) and \(\gamma_{v}=1\). **c**, Non-reciprocal rectification efficiency \(\eta\) for different positions \((x_{v})\) of the vortex with \(y_{v}=0.4w\) and \(I_{2,0}=0.2\). Variation of the vortex position leads to substantial modifications in \(\eta\) at low magnetic fields. **d**, (Top panel) Sketch of the nanobridge, with SS’S indicating the regions with different amplitudes of the superconducting gap. Here we assume the presence of Rashba and Dresselhaus spin-orbit couplings. (Bottom panel) Representative skewed and asymmetric CPR originated by high-harmonic components (up to the third one) and an anomalous phase offset \(\varphi_{0}\). **e**, Non-reciprocal rectification efficiency \(\eta\) calculated for several values of the third harmonic component \(I_{3,0}\), assuming that \(I_{1,0}=1\), \(I_{2,0}=-0.3\) and \(\Gamma_{B}=0.2B/B^{*}\). \(\eta\) changes sign after maximum rectification is reached. **f**, Impact of the fourth harmonic in the rectification, with \(I_{1,0}=1\), \(I_{2,0}=-0.3\) and \(I_{3,0}=0.25\).
Finally, it is interesting to note that by extending the proposed models to in-plane magnetic fields, a sizable supercurrent rectification is anticipated but without sign reversal. In particular, for model I, no phase shift is expected from the spatial dependence of the vector potential, since the orbital coupling between an in-plane field and the electron momentum becomes negligible. Thus, the source of phase interference with the vortex winding is eliminated. For model II, the anomalous phase and the harmonic content would be affected by an in-plane Zeeman field differently than by the out-of-plane orientation, and no sign reversal would result.
## IV Conclusions
In summary, we have demonstrated the implementation of supercurrent diodes in Nb Dayem nanobridges. By breaking the time-reversal symmetry with an out-of-plane magnetic field, we demonstrate that both the amplitude and the sign of the rectification can be tuned without inverting the polarity of the applied field. We have developed two theoretical models to account for the sources of time- and inversion-symmetry breaking, one based on a vortex phase winding, and one that takes into account the spin-orbit interactions present in polycrystalline heavy materials. Yet, a quantitative description of the supercurrent diode effect in metallic nanoconstrictions should account for both scenarios, which complement each other and can coexist. Furthermore, the fabrication process is simple when compared to that of other platforms, a compelling step towards scalability. Analogous nanobridges can be realized from several elemental superconductors currently at the base of other architectures, such as nanocryotrons [51], rapid single-flux quanta (RSFQ) [52] and memories [53], which would ease a potential integration.
Finally, the sharp sign reversal of the diode rectification allows us to envisage applications of Dayem nanobridges as \(B\)-field threshold detectors. When biased in the vicinity of the rectification node, small variations of an environmental magnetic field would result in modifications of the sign of the rectification parameter.
|
2309.06367 | Modeling Cognitive-Affective Processes with Appraisal and Reinforcement
Learning | Computational models can advance affective science by shedding light onto the
interplay between cognition and emotion from an information processing point of
view. We propose a computational model of emotion that integrates reinforcement
learning (RL) and appraisal theory, establishing a formal relationship between
reward processing, goal-directed task learning, cognitive appraisal and
emotional experiences. The model achieves this by formalizing evaluative checks
from the component process model (CPM) in terms of temporal difference learning
updates. We formalized novelty, goal relevance, goal conduciveness, and power.
The formalization is task independent and can be applied to any task that can
be represented as a Markov decision problem (MDP) and solved using RL. We
investigated to what extent CPM-RL enables simulation of emotional responses
caused by interactive task events. We evaluate the model by predicting a range
of human emotions based on a series of vignette studies, highlighting its
potential in improving our understanding of the role of reward processing in
affective experiences. | Jiayi Zhang, Joost Broekens, Jussi Jokinen | 2023-09-12T16:30:06Z | http://arxiv.org/abs/2309.06367v2 | # Modeling Cognitive-Affective Processes with Appraisal and Reinforcement Learning
###### Abstract
Computational models can advance affective science by shedding light onto the interplay between cognition and emotion from an information processing point of view. We propose a computational model of emotion that integrates reinforcement learning (RL) and appraisal theory, establishing a formal relationship between reward processing, goal-directed task learning, cognitive appraisal, and emotional experiences. The model achieves this by formalizing three evaluative checks from the component process model (CPM) in terms of temporal difference learning updates: goal relevance, goal conduciveness, and power. The formalism is task independent and can be applied to any task that is represented as a Markov decision problem (MDP) and solved using RL. We evaluate the model by predicting a range of human emotions based on a series of vignette studies, highlighting its potential to improve our understanding of the role of reward processing in affective experiences.
Emotion modelling, reinforcement learning, appraisal theory.
## I Introduction
Computational cognitive models of emotion contribute significantly to the field of affective computing [1, 2, 3]. They formalize hypotheses linking cognitive processes to emotional responses. This elucidates the interplay between goal-oriented behavior, cognitive processing, and emotional states. Such understanding improves the precision of emotion prediction in affective computing, crucial for machines that adapt to their users [4]. Emotions have an integral role in human goal-directed behavior and problem-solving, motivating actions and providing explanatory frameworks [5, 6]. They also participate in the feedback mechanisms underlying learning and adaptation, essential for effective task performance. However, modeling emotion's role in motivation and adaptation is challenging due to the latent nature of cognitive processes. The expansive theoretical space linking task events to emotional responses requires robust priors for tractable modeling.
In this paper, we develop a computational cognitive model that simulates emotion elicitation within goal-oriented interactive tasks. The model addresses the interplay between cognition and emotion by integrating a component process model (CPM) of emotional appraisal with a reinforcement learning (RL) framework for goal-directed behavior. Appraisal theory posits emotion as an evaluative cognitive process [7]. CPM offers a detailed account of this evaluation, analyzing it into cognitive checks that assess event significance and coping capacities [8, 9]. RL serves as a computational framework for decision-making in complex settings, outlining adaptive behavior through learning [10]. Our model's key contribution is the operationalization of specific CPM checks via RL computations. Consequently, we present emotion as inherently coupled with goal-oriented behavior, emerging from the same adaptive processes. The paper investigates the extent to which this CPM-RL integration replicates emotional responses in interactive task environments.
Our work builds on two key theoretical insights. First, we treat emotional appraisal as a dynamic cognitive process that assesses event characteristics to predict emotions. Existing models like the Emotion and Adaptation (EMA) model [3] accomplish this by computing factors such as relevance and desirability and deriving emotions from these computed patterns. However, these models lack an autonomous agent capable of evaluating actions to optimize expected outcomes in complex interaction. The second insight of our model fills this gap by conceptualizing emotions as a manifestation of reward processing, particularly within the computational evaluation of how an event subjectively alters situational prospects [11].
To illustrate the types of problems motivating our work and our model's solution, consider a scenario where a goal-oriented interaction is abruptly interrupted (Figure 1). Frank, a novice trainee, faces a challenging computer error while working on an important project. His inexperience renders him powerless, leading to feelings of desperation. In contrast, David, a seasoned expert, encounters the same error but reacts with anger. He has the expertise to solve the issue but recognizes the time cost involved. The critical difference in their emotional responses hinges on their respective levels of power over the situation. Our model addresses this variance by computing CPM checks through RL updates. The error serves as a negative feedback signal, which, when combined with individual assessment factors like perceived power, generates an appraisal pattern. This pattern maps onto a range of possible emotions. In our example, the same event elicits differing emotions: Frank's low perceived power steers him towards desperation, while David's higher level of power inclines him towards frustration and anger.
Our model makes the following contributions to the state-of-the-art in emotion modeling:
* Integration of the component process model (CPM) with reinforcement learning (RL), providing a computational architecture for predicting emotional responses in specific interactive tasks.
* Introduction of a formal computational framework for four key appraisal components: suddenness, goal relevance, conduciveness, and power.
* Empirical validation of the model's predictions through human data collected from a series of vignette experiments.
## II Related work
### _Emotion and Cognition_
The influence of cognition on human emotions has been recognized since the 1960s, primarily through the introduction of appraisal theories highlighting the role of cognitive evaluation in emotional experiences [12]. This framework has been instrumental for understanding the cognitive aspects of stress and emotion regulation [13]. It asserts that emotional experiences originate from the appraisal of situational importance, a cognitive event reliant on information processing [14].
Appraisal theories can be expressed as models that integrate information from diverse sources like senses, memory, and reasoning, culminating in emotion as a dynamic process rather than a static state [15, 16, 17]. Throughout this process, multiple appraisal dimensions such as goal relevance and coping abilities are assessed. Though these models offer an abstract depiction of the appraisal information flow, neural correlates have been identified, linking brain computations to specific appraisal operations [18, 19].
Humans employ appraisal mechanisms to assess environmental stimuli and trigger emotional responses, while artificial agents use reward functions to evaluate virtual environments and determine actions [20]. This parallel between human appraisal and agent reward functions provides a lens for designing and interpreting AI behavior. This view enhances both the understanding of agent-environment interactions and the potential for developing more human-aligned, adaptive AI systems [4].
Understanding the psychological underpinnings of human emotion is crucial for the design of interactive systems. Design features can elicit both positive and negative emotions, affecting user satisfaction and engagement with technology [21, 22]. Affective computing focuses on the recognition, interpretation, and expression of emotions in computer systems to improve user experiences [23]. Various emotions, such as joy [24], frustration [25], pride [26], shame [27], boredom [28], and confusion [29], arise during interactions with technology. Their detection has become a key research area in HCI [30]. However, given that emotion is a mental process, its detection based solely on observable behavior is limiting. A computational model is needed to articulate hypotheses about how latent user states, like goals and knowledge, interact with observed behavior to generate emotions. Despite progress in understanding emotion in human-computer interactions, a computational framework linking interactive events, user cognition, and emotional outcomes remains to be developed.
The Component Process Model (CPM) by Scherer [8] provides a structured approach to understanding appraisal. It systematically dissects evaluative processes to gauge an event's personal significance [9, 31]. The model consists of four appraisal check classes: relevance, implications, coping potential, and normative significance, each evaluating specific facets of an event in relation to individual goals, capabilities, and societal norms. For instance, relevance appraisals consider novelty and intrinsic pleasantness, while implications focus on goal-conduciveness. Coping potential assesses the agent's capacity to manage the event and its potential outcomes, and whether anyone would have such capacity. Normative significance evaluates the event's compatibility with internal and external standards, such as cultural norms. In total, the CPM specifies 14 individual appraisal checks [9, 8]. Upon encountering a stimulus, the individual's appraisal process ex
Fig. 1: Emotional response to an event may vary based on cognitive factors. In the top row, Frank, an inexperienced trainee, encounters a fatal computer problem during an important work task, prompting an appraisal response and resulting in Frank feeling desperation. At the bottom, David, who is an expert, will have a different emotional response to the same event due to the appraisal response checking that David has some power to deal with the situation, yet the result is still obstructive to David’s goals. Our model predicts emotional responses based on the decision process and event outcomes, considering key appraisals such as suddenness, goal relevance, conduciveness, and power. The model calculates these values to generate an appraisal vector, enabling the prediction of the resulting emotion. The divergent emotional predictions for Frank and David primarily hinge on their contrasting levels of power. Within the RL framework, power is conceptualized as an agent’s ability to choose actions that can influence its environment or outcomes, subsequently affecting its reward. Model predictions are matched with human data from vignette experiments.
amines these checks, shaping the resultant emotional response.
Appraisal theory, particularly as articulated through the CPM, excels in identifying the cognitive basis of emotional experiences in interactive settings. In contrast to basic emotion theory, which focuses on physiological patterns and corresponding basic emotions [32], and core affect theory, which emphasizes a two-dimensional core affect [33], appraisal theory specifies the cognitive variables and processes shaping emotional responses. Though valuable for affective computing, especially in sensor-based emotion detection [23], both basic emotion and core affect theories fall short in detailing the cognitive dimensions of emotion elicitation. Appraisal theory does not confine itself to a small set of basic emotions but acknowledges that specific appraisal patterns recur frequently, thereby meriting emotion labels for easier representation and communication. These are termed _modal emotions_, which encompass not only basic emotions like joy and disgust but also other emotions distinguished by unique but frequent appraisal patterns [17].
### _Computational Models of Emotion_
Affective computing has made notable strides in sensory-based emotion prediction, successfully identifying cues like facial expressions and vocal tones [23, 30, 34]. However, the field still lacks reliable, general emotion sensing capabilities [35, 36]. We argue, along with others [37, 38], that this limitation stems from an over-reliance on bodily signals, which overlooks the complex role of latent cognitive processes in shaping emotion. A way to address this limitation is to model the latent processes that cause the emotional responses associated with the observable physical patterns in the body.
Emotions arise from a dynamic interplay between cognitive appraisals and emotional experiences, necessitating a computational architecture that captures this complexity [39]. The Emotion and Adaptation (EMA) model highlights the role of appraisal processes in emotion generation [3]. This model has been employed across domains like virtual agents and affective computing to simulate and predict emotional responses grounded in cognitive appraisals [40]. The OCC model by Ortony, Clore, and Collins identifies 22 unique emotions stemming from the appraisal of three kinds of events: goal-relevant events, agent actions, and object aspects [41]. Its applicability spans AI, virtual agents, and affective computing [1, 2, 42, 43]. Meanwhile, RL-based models use the notion of reward-processing to simulate learning and adaptation, potentially capturing the dynamic, adaptive character of human emotion and cognition [44, 45].
Existing approaches like the OCC and EMA have demonstrated success in emotion modeling, but may lack the nuanced representation needed to capture the goal-directed nature of human interaction and its relation to emotional response [6]. There is an evident gap in models that can both predict realistic behavioral trajectories in a goal-directed manner and also model the reward-based learning mechanisms that underpin human-like emotional responses [45]. Integrating appraisal and RL can bridge this gap, allowing for dynamic, context-sensitive modeling of emotional responses, along with the ability to generate and justify behavioral trajectories based on goals, abilities, and the task environment.
## III Model
At the core of our approach is the integration of appraisal processes with RL, which emphasizes the adaptation of organisms and the associated emotional processes. RL provides a computational framework for modeling an agent's learning and decision-making processes by updating value expectations for actions in specific situations. These value expectations are intrinsically linked to the goals that guide an agent's behavior, and thus play a vital role in the learning process. In contrast, appraisal theory addresses emotion from a goal-directed perspective, examining the evaluation of events and their implications, for instance, concerning personal relevance and coping potential. Our model combines RL with appraisal, simulating how adaptation and generation of learning signals that update value expectations are connected to emotions. This fusion not only enriches our understanding of human decision-making but also grounds emotion simulation in the adaptive capabilities of AI systems.
Figure 2 presents an overview of our model. Employing a mathematical formalism to describe task environments and model the learning of an agent within a specific environment, the model predicts appraisal "checks" as a result of computing learning signals. These predictions are categorized based on established connections between appraisal patterns and emotion labels, empirically reported by Scherer and his colleagues [46, 47]. Consequently, the model can predict the intensity of the agent's experience of particular emotions, such as joy or frustration, in response to an event during the interaction that the model simulates.
A Markov decision process (MDP) is a mathematical framework for modeling decision-making problems in stochastic environments [10]. It is formally defined as a tuple \((S,A,T,R,\gamma)\), where \(S\) denotes the set of states and \(A\) represents the set of actions that the agent can take. The state transition function \(T(s,a,s^{\prime})\) describes the probability of transitioning from state \(s\) to state \(s^{\prime}\) when taking action \(a\). The reward function \(R(s,a,s^{\prime})\) defines the immediate real-valued reward \(r\in\mathcal{R}\) an agent receives when transitioning from state \(s\) to state \(s^{\prime}\) by performing action \(a\). Finally, the discount factor \(\gamma\) discounts future rewards when calculating the value of actions.
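As a concrete illustration (not taken from the paper), a small finite MDP of this form can be written down directly as arrays; the three states and two actions below are purely hypothetical.

```python
import numpy as np

n_states, n_actions = 3, 2
# T[s, a, s'] : probability of landing in s' after taking action a in state s
T = np.array([[[0.9, 0.1, 0.0], [0.2, 0.7, 0.1]],
              [[0.1, 0.8, 0.1], [0.0, 0.3, 0.7]],
              [[0.0, 0.1, 0.9], [0.5, 0.0, 0.5]]])
# R[s, a, s'] : immediate reward for that transition
R = np.zeros((n_states, n_actions, n_states))
R[:, :, 2] = 1.0           # transitions into state 2 are rewarded
gamma = 0.9                # discount factor

assert np.allclose(T.sum(axis=2), 1.0)   # each (s, a) pair defines a distribution over s'
```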
Fig. 2: An overview of our computational model of emotion. A reinforcement learning agent is trained within a task environment described as a Markov decision process. Learning signals are transformed into an appraisal prediction, which can be labeled with the assistance of a pre-trained classifier.
In order to solve an MDP, an RL agent interacts with the environment to derive an optimal policy \(\pi^{*}\), which is a mapping from states to action probabilities such that behavior according to it maximizes the expected cumulative reward over time. The _value function_ of a state \(s\) under a policy \(\pi\), denoted as \(v_{\pi}(s)\), is the expected return when starting in state \(s\) and following policy \(\pi\) thereafter. Here, \(\mathbb{E}_{\pi}\) denotes the expected value of a random variable given that the agent follows policy \(\pi\). The function \(v_{\pi}(s)\) is the state-value function for policy \(\pi\).
\[v_{\pi}(s)=\mathbb{E}_{\pi}[G_{t}|S_{t}=s],\text{ for all }s\in S, \tag{1}\]
where \(G_{t}=\sum_{k=0}^{\infty}\gamma^{k}R_{t+k+1}\) represents the discounted return. The value of performing an action \(a\in A\) while in a state \(s\in S\) is defined as:
\[q_{\pi}(s,a)=\mathbb{E}_{\pi}[G_{t}|S_{t}=s,A_{t}=a],\text{ for all }s\in S \text{ and }a\in A. \tag{2}\]
An optimal policy \(\pi^{*}\) provides state-action associations that maximize the expected return or utility:
\[q_{\pi^{*}}(s,a)=\sum_{s^{\prime},r}p(s^{\prime},r|s,a)[r+\gamma\max_{a^{ \prime}}q_{\pi^{*}}(s^{\prime},a^{\prime})]. \tag{3}\]
The agent learns the optimal policy by interacting with the environment, receiving feedback in the form of rewards, and updating its value estimates for state-action pairs. In temporal difference (TD) learning, the value estimates are based on the difference between the expected and the observed value:
\[v(s)\gets v(s)+\alpha[R_{s^{\prime}}+\gamma v(s^{\prime})-v(s)], \tag{4}\]
where \(\alpha\) represents a learning-rate parameter. This operation updates the value associated with the state \(s\in S\) as soon as the new state \(s^{\prime}\in S\) is reached, by computing the difference between predicted and observed values. Combining equations 3 and 4 results in a form of TD learning called Q-learning, which can be expressed as
\[q(s,a)\gets q(s,a)+\alpha[R(s,a)+\gamma\max_{a^{\prime}}q(s^{\prime},a^{ \prime})-q(s,a)]. \tag{5}\]
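The sketch below shows how Eq. 5 can be implemented for a tabular agent. It samples transitions from toy \(T\) and \(R\) arrays shaped as in the earlier sketch; the environment and hyperparameters are illustrative, not the paper's actual task MDPs.

```python
import numpy as np

def q_learning(T, R, gamma=0.9, alpha=0.1, eps=0.1, episodes=2000, horizon=50, seed=0):
    """Tabular Q-learning (Eq. 5) on an MDP given as arrays T[s,a,s'] and R[s,a,s']."""
    rng = np.random.default_rng(seed)
    n_states, n_actions, _ = T.shape
    q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = rng.integers(n_states)
        for _ in range(horizon):
            # epsilon-greedy action selection
            a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q[s]))
            s_next = rng.choice(n_states, p=T[s, a])
            r = R[s, a, s_next]
            # TD update: q(s,a) <- q(s,a) + alpha * [r + gamma * max_a' q(s',a') - q(s,a)]
            q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
            s = s_next
    return q

# toy usage with random dynamics (illustrative only)
rng = np.random.default_rng(1)
T = rng.random((3, 2, 3)); T /= T.sum(axis=2, keepdims=True)
R = rng.normal(size=(3, 2, 3))
print(q_learning(T, R).round(2))
```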
Building upon the TD update of value functions, we propose to derive appraisal computations that permit an assessment of events in connection with an agent's objectives and cognition. Our model formalizes four appraisals out of the 14 from Scherer's CPM within the RL formalism: suddenness, goal relevance, conduciveness, and power. We selected these four based on a minimal set that can be used to differentiate between emotions that are reportedly important and prevalent in interaction, such as joy, irritation, or boredom. Eventually, other checks should be implemented for an accurate and extensive model.
_Suddenness_ is a part of the relevance appraisal in the CPM, and plays a role in checking how novel an event is. Building on the appraisal criterion of novelty from the CPM, we define suddenness as the frequency at which a transition into a state \(s^{\prime}\) occurs, given a previous state \(s\) and action \(a\) taken in it by the agent. On a computational level, this will be represented as the relative frequency of the state's visitation, determining the level of suddenness of the event. States to which the simulation transitions more frequently, given a previous state, are considered less sudden, and conversely, infrequent visitation results in greater suddenness appraisal of the event. To compute this, we introduce a suddenness measure \(A_{s}\) defined as:
\[A_{s}\propto 1-\frac{\hat{T}(s,a,s^{\prime})}{\sum_{s^{\prime\prime}\in S}\hat{T} (s,a,s^{\prime\prime})}.\]
Here, \(\hat{T}\) is a _world model_, an approximation of \(T\), based on the agent's accumulated experiences in its environment.
_Goal relevance_ is also checked during relevance appraisal in the CPM, and checks how relevant an event is, given the agent's current goal. While some goals are fairly general (e.g., survival), in interactive tasks, the goal relevance of an event can be related to the user's goals. Generally, highly goal-relevant events elicit stronger emotional reactions than those less relevant to the agent's objectives. In our computational framework, we operationalize the goal relevance to be proportional to the magnitude of the TD error observed during value prediction updates. The reasoning for this choice is that the TD error focuses the agent's attention on events pertinent to the agent's goal via the learned utility function \(q\), signaling that something in the environment has happened that impacts how the goal can be reached. Both negative and positive implications of an event may be considered goal relevant:
\[A_{gr}\propto|\alpha[R(s,a)+\gamma\max_{a^{\prime}}q(s^{\prime},a^{\prime})-q( s,a)]|.\]
The intuition of this equation is that goal relevance is not an inherent property of the event, but a result of the agent's cognition computing the importance of the event to the eventual outcome that has relevance for the agent.
_Conduciveness_ appraisal is part of the implication assessment in the CPM, and checks if the event facilitates the attainment of the agent's goal. It's essential to note that the intrinsic nature of events doesn't label them as conducive or obstructive. Instead, an agent's past experiences that associate these events with positive or negative outcomes play a role. In our computational model, conduciveness is represented not merely by the direction of the discrepancy between anticipated and actual outcomes, but also by the scale of that difference. We've quantified this by standardizing its values between 0 and 1. Therefore, a positive TD error indicates that the actual outcome was better than expected, and how much it surpassed expectations. Correspondingly, 1 is a very conducive event, indicating a considerable positive disparity. Conversely, a negative TD error might induce differing levels of negative emotions based on its value, with 0 representing a highly unconducive event, signifying a significant negative discrepancy. A value of 0.5, meanwhile, represents a neutral event, reflecting an event outcome that met the initial expectations. Again, an event does not have an intrinsic conduciveness, but it depends on the value update carried by the cognition of the agent, given existing expectations about the environment and the goals of the agent.
Goal conduciveness is therefore defined as:
\[A_{gc}=\min(\max(\Delta,-1),1)\cdot 0.5+0.5\]
where \(\Delta\) is the TD error of the event (see Eq 5): \(\Delta(s,a)=\alpha[R(s,a)+\gamma\max_{a^{\prime}}q(s^{\prime},a^{\prime})-q(s,a)]\).
_Power_ appraisal is part of the coping assessment in the CPM. It evaluates the agent's ability to impact the result of an event. For instance, in our example in the introduction, the experienced user has power because they possess the knowledge to address the error, whereas the novice user lacks this capability. In our model, power is based on the agent's capacity to choose between useful and non-useful actions in a given state. If there are differences in the q values associated with different actions, the agent is presumed to have power to influence the event's outcome. Conversely, if the q values associated with alternative actions are identical or if there is only one possible action to choose from, the agent is not considered to have power. We quantify power as the difference between average q values and the minimum q value at a state, with higher values indicating a greater sense of power:
\[A_{p}\propto\frac{1}{|A|}\sum_{a}q(s^{\prime},a)-\min_{a^{\prime}}q(s^{\prime},a^{\prime}).\]
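Putting the four checks together, the sketch below maps a single observed transition to an appraisal vector. The helper is hypothetical (not the authors' implementation) and assumes a learned Q-table `q[s, a]` and empirical transition counts `T_hat[s, a, s']` from the agent's world model.

```python
import numpy as np

def appraise(q, T_hat, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Map one observed transition (s, a, r, s') to the four appraisal checks."""
    # suddenness: 1 - relative visitation frequency of s' given (s, a)
    total = T_hat[s, a].sum()
    suddenness = 1.0 - (T_hat[s, a, s_next] / total if total > 0 else 0.0)

    # TD update of the event (Eq. 5), including the learning rate as in the text
    delta = alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
    goal_relevance = abs(delta)
    conduciveness = float(np.clip(delta, -1.0, 1.0)) * 0.5 + 0.5

    # power: spread between the mean and the minimum action value in the new state
    power = float(np.mean(q[s_next]) - np.min(q[s_next]))
    return np.array([suddenness, goal_relevance, conduciveness, power])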
Our model predicts four appraisal checks as a result of a state transition event. This vector of four scalar values is then classified for predicting intensities of select modal emotions. To this end, a Support Vector Machine (SVM) classifier was trained to predict modal emotions from appraisal patterns. The mapping of patterns to modal emotions was adapted from an existing table [47], where, given a modal emotion, each appraisal check was given an intensity on a nominal scale. For instance, the modal emotion "joy" is associated with high suddenness, high goal relevance, positive goal conduciveness, and medium power. Because our model outputs scalar values for the appraisal checks, rather than words, we transformed the nominal scaling used in the table into distributions of values, as shown in Table I. The reason for using a distribution rather than exact numbers is that nominal values of appraisals, such as "low" or "very high" would be difficult to match to an exact number, leading to difficulties in the classifier. As an example, values for low appraisal are distributed half-normally (denoted as \(\mathcal{N},x\geq 0\)), with mean \(\mu=0\) and standard deviation \(\sigma=0.1\). As a result of training the classifier on the theoretical data extracted from the original table, it can predict intensities of modal emotions from scalar appraisal profiles that are generated from our RL appraisal agent. These profiles, adapted from [47], are shown in Table II.
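A minimal sketch of this classification step is shown below: appraisal vectors are sampled from distributions associated with the nominal levels (in the spirit of Table I), and an SVM with probability outputs is fit on them. The specific level-to-distribution mappings and the three emotion profiles are illustrative placeholders, not the full table.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def sample(level, n):
    # assumed mappings of nominal levels to values in [0, 1]
    if level == "low":    return np.clip(np.abs(rng.normal(0.0, 0.1, n)), 0, 1)
    if level == "high":   return np.clip(1.0 - np.abs(rng.normal(0.0, 0.1, n)), 0, 1)
    if level == "medium": return np.clip(rng.normal(0.5, 0.1, n), 0, 1)
    return rng.uniform(0.0, 1.0, n)               # "open": unconstrained

# illustrative subset of profiles: (suddenness, goal relevance, conduciveness, power)
profiles = {
    "joy":   ("high", "high", "high", "medium"),
    "fear":  ("high", "high", "low",  "low"),
    "anger": ("high", "high", "low",  "high"),
}

X, y, n = [], [], 200
for emotion, levels in profiles.items():
    X.append(np.column_stack([sample(level, n) for level in levels]))
    y += [emotion] * n
X = np.vstack(X)

clf = SVC(C=1.0, probability=True).fit(X, y)
print(clf.predict_proba([[0.9, 0.8, 0.1, 0.7]]))   # appraisal vector -> emotion probabilities
```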
This study raises a potential discrepancy with Scherer's appraisal theory regarding the suddenness appraisal of shame. While Scherer's table categorizes shame as low in suddenness, our analysis suggests that this emotion should be classified as open. This discrepancy arises from the fact that shame can result from sudden events, such as making a mistake in front of others, but it can also be a more persistent emotion related to a person's self-identity and self-worth. As a result, we argue that the suddenness appraisal of shame should be more flexible and context-dependent. Other than that, the appraisal values in Table II are the same as Scherer's table. It is an extraction of the original table, only including the appraisals and emotions that are modeled in our two experiments.
## IV Experiments
### _General Method_
Our model was validated via two vignette studies, requiring human participants to read short narratives and evaluate the emotions experienced by the protagonists. These stories portrayed the protagonists interacting with various scenarios in which they were likely to experience a specific emotion. Each story was constructed using the principles of appraisal theory to ensure the elicitation of the desired emotional response. For example, to induce an emotion associated with a low power appraisal, we developed a story highlighting the protagonist's lack of control, consistent with the appraisal profile of that emotion (see Table II). After reading each vignette, participants were asked to provide intensity ratings for the emotions they believed the protagonist would experience. This approach facilitated the comparison of the human intuitive understanding of emotion and the model's predictions.
#### Iv-A1 Materials
We developed 11 narratives, each designed to elicit one of the 11 different emotions outlined in Table II. Each vignette, ranging between 90 and 200 words, depicted a protagonist interacting with technology in a way that elicits specific emotional responses. We crafted the content of each story to align with the appraisal profile corresponding to the targeted emotion. For instance, in a narrative aimed at eliciting fear, the event portrayed was sudden and highly relevant to the protagonist's goals but presented significant obstacles and offered the protagonist little power to alter the outcome.
#### Iv-A2 Procedure
Participants were recruited online and directed to a website hosting the vignettes. After reading each story, they completed a questionnaire asking them to rate the intensity of various emotions they believed the protagonist would experience on a scale from 0 (not at all) to 10 (extremely). To mitigate the potential influence of story sequence on the results, we employed a Latin square counterbalancing design, ensuring the presentation order varied across participants.
#### Iv-A3 Data Analysis
Data collected from the online experiment were aggregated to calculate the mean rating for each emotion in every story. Before this, each participant's responses to each story were standardized to minimize the effect of different individuals using the scales differently. We utilized multilevel modeling (via the lme4 package for R) to test the hypothesis that the story influenced emotion ratings. In addition, in the figures showing human and model data, we include approximate 95% confidence intervals, which can be used to assess the spread of human responses but are not formal hypothesis tests. To facilitate comparison between human data and model predictions, we rescaled the human ratings to range from 0 to 1.
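A minimal sketch of the standardization and rescaling steps, assuming a hypothetical long-format table of ratings (the multilevel models themselves were fit in R with lme4 and are not reproduced here):

```python
import pandas as pd

def preprocess(ratings: pd.DataFrame) -> pd.DataFrame:
    """`ratings` is assumed to have columns participant, story, emotion, rating (0-10)."""
    out = ratings.copy()
    # Standardize each participant's ratings within each story (z-scores),
    # reducing differences in how individuals use the scale.
    grouped = out.groupby(["participant", "story"])["rating"]
    out["z"] = (out["rating"] - grouped.transform("mean")) / grouped.transform("std")
    # Rescale to [0, 1] for comparison with the model's predicted intensities.
    out["scaled"] = (out["z"] - out["z"].min()) / (out["z"].max() - out["z"].min())
    return out

# mean_ratings = preprocess(ratings).groupby(["story", "emotion"])["scaled"].mean()
```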
For generating model predictions, we designed 11 MDPs to represent the key events in each story. All stories and the associated MDPs are detailed in Appendices A and B. A tabular RL agent was trained using Q-learning to converge on a policy for each MDP. From the converged models, we computed four appraisal measures using the equations outlined in the previous section. The resulting appraisal vectors were classified into modal emotion probabilities using a support vector machine (SVM).
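A rough sketch of the reinforcement-learning part of this pipeline; the MDP encoding and the hyperparameters are our own illustrative choices, not those used for the reported simulations:

```python
import numpy as np

def q_learning(mdp, start, n_episodes=5000, lr=0.1, gamma=0.95, eps=0.1, max_steps=50, seed=0):
    """Tabular Q-learning on a small MDP.

    `mdp` maps state -> {action: [(prob, next_state, reward), ...]};
    states with an empty action dict are terminal. Returns the Q-table
    q[state][action], from which the appraisal checks can be computed.
    """
    rng = np.random.default_rng(seed)
    q = {s: {a: 0.0 for a in acts} for s, acts in mdp.items()}
    for _ in range(n_episodes):
        s = start
        for _ in range(max_steps):
            if not mdp[s]:                              # terminal state
                break
            if rng.random() < eps:                      # epsilon-greedy exploration
                a = list(mdp[s])[rng.integers(len(mdp[s]))]
            else:
                a = max(q[s], key=q[s].get)
            outcomes = mdp[s][a]
            idx = rng.choice(len(outcomes), p=[p for p, _, _ in outcomes])
            _, s_next, r = outcomes[idx]
            best_next = max(q[s_next].values()) if mdp[s_next] else 0.0
            q[s][a] += lr * (r + gamma * best_next - q[s][a])
            s = s_next
    return q
```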
The training and testing data for the classifier were simulated data derived from Scherer's table, see Tables I and II. The classifier was trained using a Support Vector Machine (SVM), a supervised machine learning algorithm used for classification. One of the essential hyperparameters for SVM is the penalty parameter \(c\), which controls the trade-off between maximizing the margin and minimizing classification error. In determining the optimal \(c\) value, we looked to human performance data as a reference. Specifically, we derived a mapping from observed human performance metrics to potential \(c\) values. To illustrate, given a specific human precision (the probability that participants correctly identified a targeted emotion), we determined the \(c\) value that, when used within the SVM on simulated data, would produce a similar precision in its prediction confidence of the targeted emotion. By training multiple SVM classifiers over a spectrum of \(c\) values on the simulated data, we selected the \(c\) value that best mirrored the human precision, denoted \(c_{\text{mean}}\).
Recognizing the importance of individual variability in human performance, we didn't solely rely on one SVM classifier to emulate all participants. We computed the variance in human precision, which in turn informed the variance of the \(c\) value, labeled as \(c_{\text{var}}\). Consequently, for each of the n participants in our experiments, we created an SVM classifier. The \(c\) value for each classifier was drawn from a normal distribution defined by \(c_{\text{mean}}\) and \(c_{\text{var}}\). Importantly, to avoid overfitting, the value \(c\) was not fitted to maximize our emotion predictions with the human data, but merely to align the classifier's precision with human precision.
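A sketch of this calibration, under the simplifying assumption that precision is measured as the fraction of correctly classified held-out simulated profiles (function and variable names are ours):

```python
import numpy as np
from sklearn.svm import SVC

def calibrate_c(X_train, y_train, X_test, y_test, human_precision, c_grid):
    """Pick the SVM penalty whose held-out precision is closest to the human one."""
    precisions = []
    for c in c_grid:
        clf = SVC(C=c).fit(X_train, y_train)
        precisions.append((clf.predict(X_test) == y_test).mean())
    best = int(np.argmin(np.abs(np.array(precisions) - human_precision)))
    return c_grid[best]

def participant_classifiers(X_train, y_train, c_mean, c_var, n_participants, seed=0):
    """One classifier per participant, with C drawn around the calibrated value."""
    rng = np.random.default_rng(seed)
    cs = np.abs(rng.normal(c_mean, np.sqrt(c_var), n_participants))   # keep C > 0
    return [SVC(C=c, probability=True).fit(X_train, y_train) for c in cs]
```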
### _Experiment 1_
In the first experiment, we selected seven stories, each targeting a distinct modal emotion. These emotions were happiness, joy, pride, boredom, sadness, shame, and fear. The selection comprises three positive emotions (happiness, joy, and pride), three negative emotions (sadness, shame, and fear), and one neutral emotion (boredom). Each emotion could be predicted by appraisal theory and therefore by our model based on the four appraisals we implemented computationally (see Table III).
Figure 3 illustrates an example MDP used in the experiment. It shows how we model the story for fear, where an individual with no computer skills is taking an online exam when the internet suddenly goes off. In the MDP, we describe the goal of the participant as a state G, which produces a positive reward. Taking the only available action in state S1 often leads to the goal, leading the RL agent to have an expectancy of a stable internet connection. However, sometimes the internet connection fails, resulting in the MDP transitioning to a problem state P, where the only available action is to move forward to a negatively rewarding error state E. The appraisal analysis occurs at the onset of the problem, i.e., when the agent transitions from S1 to P. This unexpected event carries a negative TD update due to the expected error state that follows the problem state. The agent has no power to cope with the situation, because there are no alternative actions. The resulting appraisal profile corresponds to fear. Table III shows all model-generated appraisal profiles.
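A hypothetical encoding of this MDP in the dictionary format used in the Q-learning sketch above; the transition probabilities and reward magnitudes are illustrative, not the values used in the paper:

```python
fear_mdp = {
    "S1": {"take_exam": [(0.9, "G", +1.0),     # connection holds: reach the goal
                         (0.1, "P",  0.0)]},   # connection fails: problem state
    "P":  {"proceed":   [(1.0, "E", -1.0)]},   # the only option leads to the error
    "G":  {},                                   # goal state, terminal
    "E":  {},                                   # error state, terminal
}
# q = q_learning(fear_mdp, start="S1")   # see the sketch in Sec. IV-A3
```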
Fig. 3: Fear MDP

The first experiment was conducted as an online study with 42 participants (37 women, 5 men) and a mean age of \(39.5\) (\(sd=12.8\)). Their average ratings for the seven vignettes, as compared to the model's predictions, are presented in Figure 4. There was a statistically significant interaction effect on the rating between emotion and story, meaning that the stories impacted emotion ratings, as hypothesized, \(F(36,2009)=176,p<.001\). A notable observation from these data is the clear distinction between positive and negative emotions. Stories designed to evoke happiness, joy, or pride resulted in high ratings for all three emotions. Similarly, in vignettes intended to elicit fear, sadness, or shame, these emotions were rated more prevalently. For the vignette aiming to induce boredom, participants correctly identified this as the most probable emotional response of the protagonist. Overall, our model achieved a reasonable degree of fit to the data (\(c_{\text{mean}}=0.0032,c_{\text{var}}=0.0002,R^{2}=0.65,\text{RMSE}=0.09\)).
The results indicate a human tendency to identify multiple co-occurring emotions within the same scenario, a finding frequently reported in studies that permit free rating of multiple modal emotions [48, 49, 50, 51]. For instance, the story that was designed to evoke happiness also resulted in high ratings for joy and pride, suggesting that these emotions, while distinct, are often experienced together. Similarly, negative emotions such as fear, sadness, and shame were all rated highly in their respective scenarios. By fitting our model's sensitivity parameter \(c\), we replicated this phenomenon, effectively accounting for the co-existence of multiple emotions within a single narrative.
However, while our model demonstrated a satisfactory degree of fit in terms of error, the somewhat lower \(R^{2}\) reflects the limited variance left to explain, given the relatively uniform ratings of all positive or negative emotions within each story. In light of this, we designed a follow-up experiment that constrains the participants' freedom in assigning intensity ratings to each emotion. This permits uncovering whether a single modal emotion is more probable in each narrative context. Similarly, the sensitivity of the model can be limited to align its predictions with the more constrained human ratings.
### _Experiment 2_
The second experiment retained the materials from the first experiment but modified the procedure: participants selected a single, most prominent emotion for each story. The selection was done only after reading each story one by one. In the final stage of the trial, the participants could again see all stories and had to assign one modal emotion to each story, using each emotion in the process. New participants (\(N=30\)) were recruited online, with a mean age of \(35\) (\(sd=5\)), 26 women and 4 men. The probabilities of each emotion being selected as the most prominent in a given story were compared to the model's predicted intensity, as visualized in Figure 5.
There was a statistically significant interaction effect on the rating between emotion and story, meaning that the stories impacted emotion ratings, as hypothesized, \(F(36,1421)=153,p<.001\). The participants can be seen to associate the intended modal emotion with the corresponding story successfully. Model fit was good, \(c_{\text{mean}}=0.014,c_{\text{var}}=0.0056,R^{2}=0.92,\text{RMSE}=0.09\).
The outcome of the second experiment demonstrates our model's ability to predict the most likely modal emotion within stereotypical contexts involving technology interactions. The parameter \(c\) can be used to calibrate our model's sensitivity to a particular modal emotion vs. a wider array of emotions and their intensities. There was a marked consensus between human participants and the model regarding the predominant modal emotion in each story, thanks to carefully designed narratives and their corresponding MDPs, which implemented the four appraisals: suddenness, goal relevance, conduciveness, and power. Despite the added complexity of the task, requiring a mapping of each of the seven emotions to a unique story, the participants and the model performed well. This outcome reinforces the notion of an inherent human capacity for "intuitive appraisal", enabling us to model and predict others' emotions effectively in everyday situations [52].

Fig. 4: Comparison of human and model predicted emotional ratings for each vignette. The bars represent the mean ratings of each emotion by the participants (in blue) and the model’s predictions (in orange) for each of the seven stories. The clear division between positive and negative emotions across stories is evident in both human ratings and model predictions. The error bars indicate approximate 95% confidence intervals.

Fig. 5: Comparison of the probabilities of each emotion being selected as the most prominent in a given story by human participants (blue bars) against the model’s predicted intensity (orange bars). Each cluster of bars represents a story, with the seven emotions on the x-axis and the probability or intensity on the y-axis. The error bars indicate approximate 95% confidence intervals.
These findings open up a question: although humans and the model accurately differentiate between starkly contrasting emotions, such as happiness and sadness - a capability that aligns with the predictions of the appraisal theory and the Component Process Model (CPM) - can they distinguish between nuanced emotional states that diverge on just one appraisal? This question prompts our third experiment, where we aim to explore this capability to discern between closely related emotions that, nonetheless, differ on a key appraisal. If humans and our model can indeed model emotions according to the CPM, even a minor change in the power appraisal of the protagonist or the simulated user should yield a different modal emotion.
### _Experiment 3_
In the third experiment, we developed new materials: four narratives were specifically crafted to elicit negative emotions. These emotions - anxiety, desperation, irritation, and rage - could be evoked by adjusting one or two selected appraisals: suddenness and power. These four emotions share certain traits in their appraisal profiles: all are elicited by events that are goal-relevant and obstructive (non-conducive). However, they differ in their requirement for suddenness: desperation and rage necessitate a sudden event, whereas anxiety and irritation do not (see Table IV). Furthermore, the appraisal of power enables differentiation between anxiety and desperation (where there is a lack of power) from irritation and rage (where there is power). This design permits the implementation of four distinct emotional scenarios through the manipulation of just two appraisals. If both human participants and our model can discern these subtleties, it would improve the plausibility of the CPM and our computational implementation of it. All model-generated appraisal profiles of these four emotions are shown in Table IV.
Incorporating elements from the first two experiments, the procedure for the third experiment asked human participants to both freely rate the emotions after reading each story (as in Experiment 1) and later, associate each story with the most probable modal emotion (as in Experiment 2). The study comprised 29 online participants, with 26 women, 8 men, and a mean age of \(M_{age}=35.2\), \(sd=11.9\).
Figure 6 displays the average ratings for the four vignettes alongside the model's predictions. There was a statistically significant interaction effect on rating between emotion and story, meaning that the stories impacted emotion ratings, as hypothesized, \(F(9,528)=12,p<.001\). Given that all stories suggested a similar negative emotion, the ratings exhibited limited variance. The results were consistent with those of the first experiment, and our model could replicate the overall outcome, \(c_{\text{mean}}=0.0013,c_{\text{var}}=0.0001,R^{2}=0.29,\text{RMSE}=0.04\). Despite the shared variance between model predictions and human data being relatively small due to the limited variance between the ratings, the model error was minimal, indicating a good fit between the model and human responses.
In the forced-choice responses, where participants had to assign one unique emotion to each story, they were able to accurately identify the intended modal emotion, as shown in Figure 7. There was a statistically significant interaction effect on rating between emotion and story, meaning that the stories impacted emotion ratings, as hypothesized, \(F(9,528)=27,p<.001\). The model demonstrated a reasonable fit with human data, \(c_{\text{mean}}=0.0034,c_{\text{var}}=0.001,R^{2}=0.62,\text{RMSE}=0.16\). The shared variance increased as both humans and the model were compelled to be more sensitive to specific emotions. While the model error was higher, the model was still able to predict the most salient emotion in a manner comparable to humans.
The findings from our third experiment provide a more nuanced insight into both our model and human emotional perception. In contrast to the clear emotional distinctions of the previous experiments, the third experiment tested the ability to discern between closely related emotions that differ in only one or two key appraisals. Despite the increased complexity of this task, both human participants and our model were able to match the stories accurately with their intended modal emotions.

Fig. 6: Comparison of human and model predicted emotional ratings for each vignette. The bars represent the mean ratings of each emotion by the participants (in blue) and the model’s predictions (in orange) for each of the four stories. The error bars indicate approximate 95% confidence intervals.
The outcome of the experiment supports the CPM approach to modeling human emotion, reinforcing the notion that emotions can be differentiated by varying the appraisals of power and suddenness. The ability of our model to mimic human responses across all three experiments provides validity to our RL-based computational implementation of the CPM, even when dealing with subtler emotional differences. It suggests that our model has effectively captured the underlying appraisal processes that govern emotional responses.
For an overview of the model fit across the three experiments, a summary is provided in Table V. The results of the experiments also illuminate the flexibility of human emotional perception. When given freedom, as in Experiment 1, participants displayed an ability to perceive multiple emotions in response to a single narrative. This could be matched by our model by varying the sensitivity parameter \(c\). However, when constrained, as in Experiments 2 and 3, humans and our model could also pinpoint a single most salient emotion, even when the differences between emotions were subtly manipulated through appraisal variations. This adaptability underscores the complexity of human emotion recognition and the efficacy of the CPM in modeling such processes.
## V General Discussion and Conclusion
### _General Discussion_
In this work, we propose a novel computational model that harmonizes elements of Reinforcement Learning and appraisal theory. This approach distinguishes our work from related work, facilitating the development of a comprehensive theoretical framework capable of generating behavioral trajectories based on set goals, individual capabilities, and the task environment. Consequently, it provides an enriched understanding of the cognitive substrates involved in emotional experiences during interactions.
The distinct advantage of our model resides in its capability to generalize across a wide range of appraisals and tasks, which can be represented as a Markov Decision Process (MDP). There exists a substantial body of work employing MDPs and RL for interaction modeling, encompassing areas such as visual search [53], multitasking [54], and typing [55], among others. Significantly, our model can be applied to anticipate emotions in these interaction tasks.
Our model has certain limitations. Despite its capability to generalize across several appraisals, our model does not encompass all potential cognitive evaluations, thus omitting possible influences on emotional responses. For instance, while internal and external standards could be represented in an MDP model, we have excluded them here due to the added complexity of introducing multiple agents. Furthermore, representing appraisal urgency presents a challenge within the constraints of MDP models. In the future, we aspire to advance our model to predict emotions during various task stages, rather than solely at a specific state. This refinement represents an important trajectory for future research.
Looking forward, we aim to enhance our model by integrating a broader array of cognitive evaluations and exploring alternate learning paradigms. In parallel, we envisage transitioning our focus from controlled vignette studies to actual human interactions and emotional responses. Starting with vignettes remains a worthwhile approach in our study, as they provide a standardized, controlled platform that allows for consistently examining emotional responses across different scenarios; however, they inherently lack the authenticity of real-life experiences. This dual approach will not only address current limitations but also amplify the model's versatility and applicability, thereby providing deeper insights into the intricate interplay between cognition and emotion.
### _Conclusion_
The introduction of our model marks a significant advancement in the field of emotion modeling, primarily due to its unique combination of appraisal theory and reinforcement learning (RL). This integration facilitates a more robust and nuanced understanding of emotional responses. By incorporating cognitive appraisal, our model acknowledges that emotions are not merely reactive, but rather are closely tied to our evaluations and interpretations of events. This positions our model to better simulate the subjective and highly personal nature of emotional experiences.
Fig. 7: Comparison of the probabilities of each emotion being selected as the most prominent in a given story by human participants (blue bars) against the model’s predicted intensity (orange bars). Each cluster of bars represents a story, with the four emotions on the x-axis and the probability or intensity on the y-axis. The error bars indicate approximate 95% confidence intervals.
As we conclude, the future of emotion modeling, particularly within the realm of affective computing, is exciting. Our research represents one step forward, highlighting the interplay between cognitive processes, reward-based learning, and emotional experiences. As computational models grow more sophisticated and versatile, we anticipate a future where we can simulate human emotional responses with increasing accuracy, catering to various applications for human-computer interactions.
|
2309.09054 | On B type family of Dubrovin-Frobenius manifolds and their integrable
systems | According to D.Zuo and an unpulished work of M.Bertola, there is a two--index
series of Dubrovin--Frobenius manifold structures associated to a B type
Coxeter group. We study the relations between these structures for the
different values of these indices. We show that part of the data of such
Dubrovin--Frobenius manifold indexed by $(k,l)$ can be recovered by the
$(k+r,l+r)$ Dubrovin--Frobenius manifold. Continuing the program of
arXiv:2007.11974 we associate an infinite system of commuting PDEs to these
Dubrovin--Frobenius manifolds and show that these PDEs extend the
dispersionless BKP hierarchy. | Alexey Basalaev | 2023-09-16T17:29:04Z | http://arxiv.org/abs/2309.09054v3 | # On B type family of Dubrovin-Frobenius manifolds and their integrable systems
###### Abstract.
According to M.Bertola and D.Zuo there is a two-index series of Dubrovin-Frobenius manifold structures associated to a B type Coxeter group. We study the relations between these structures for the different values of the indices. We show that part of the data of such Dubrovin-Frobenius manifold indexed by \((k,l)\) can be recovered by the \((k+r,l+r)\) Dubrovin-Frobenius manifold. Continuing the program of [BDbN] we associate an infinite system of commuting PDEs to any difference of the indices \(d=l-k\in\mathbb{N}\). Every such system is an extension of the dispersionless BKP hierarchy.
## 1. Introduction
Introduced by B.Dubrovin in the early 90s, Dubrovin-Frobenius manifolds appeared to be important in the different areas of mathematics. The simplest examples of the Dubrovin-Frobenius manifold come from the invariant theory of simple Coxeter groups. It was found by B.Dubrovin that the orbit space of every such group has a polynomial Dubrovin-Frobenius manifold structure (cf. [D1, D2]).
One of the important applications of the Dubrovin-Frobenius manifolds is in the study of integrable systems. B.Dubrovin and Y.Zhang associated an integrable hierarchy to any given Dubrovin-Frobenius manifold ([DZ]; see also other constructions such as [B1, B2], [FGM], [DVV]). Their hierarchies are bihamiltonian; however, it is not easy to write down their flows explicitly. For the case of simple Coxeter groups the hierarchies of Dubrovin-Zhang were investigated in detail in [DLZ]. In particular, the construction of Dubrovin-Zhang for the \(A_{N}\) Dubrovin-Frobenius manifold gives the Gelfand-Dickey hierarchy and for \(D_{N}\) the respective Drinfeld-Sokolov hierarchy. For the \(B_{N}\) case one only obtains the dispersionless Drinfeld-Sokolov hierarchy (see also [LRZ]).
A completely different approach to the construction of an integrable hierarchy with the help of Dubrovin-Frobenius manifolds was introduced in [BDbN]. There the authors assumed an infinite series of Dubrovin-Frobenius manifolds, satisfying some stabilization conditions, in order to construct an infinite system of commuting PDEs. Such stabilization conditions were found for \(A\), \(B\) and \(D\) Dubrovin-Frobenius manifolds, giving the dispersionless KP, dispersionless BKP and \(1\)-component reduced \(2\)-component BKP hierarchies respectively. The first advantage of this new approach is that the flows are written directly via the potentials of the Dubrovin-Frobenius manifolds. Another advantage is that the compatibility of the PDEs is derived just from the associativity equation of the Dubrovin-Frobenius manifold. The approach of [BDbN] was later extended in [B22] beyond the theory of Dubrovin-Frobenius manifolds.
In this paper we deepen the study of B-type Dubrovin-Frobenius manifolds and their integrable systems.
### \(B_{k,l}\) Dubrovin-Frobenius manifold
For a \(B_{l}\) Coxeter group and any \(k\), s.t. \(1\leq k\leq l\), D. Zuo introduced in [Z07] the structure of a Dubrovin-Frobenius manifold on \(M:=\mathbb{C}^{l-1}\times\mathbb{C}^{*}\). Denote it by \(B_{k,l}\) later in the text. The potential of this structure is polynomial in all the variables except one, that comes in both positive and negative powers. Let \(\mathcal{F}_{k,l}\) stand for this potential. Renumbering the variables we have
\[\mathcal{F}_{k,l}=\mathcal{F}_{k,l}(v^{k+1-l},\ldots,v^{-1},v^{0},v^{1}, \ldots,v^{k})\in\mathbb{Q}[v^{k-l},\ldots,v^{k}]\otimes\mathbb{Q}[v^{k+1-l},( v^{k+1-l})^{-1}].\]
The unit of this Dubrovin-Frobenius manifold is given then by \(\frac{\partial}{\partial v^{1}}\) and the metric \(\eta\) satisfies
\[\eta_{\alpha\beta}=\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial v^{1}\partial v ^{\alpha}\partial v^{\beta}}=\begin{cases}1/2&\text{if $\alpha=k+1-l,\beta=0$ or $\alpha=0,\beta=k+1-l$},\\ 1/4(l-k)&\text{if $k-l\leq\alpha,\beta\leq-1$ and $\alpha+\beta=k+1-l$},\\ 1/4k&\text{if $1\leq\alpha,\beta\leq k$ and $\alpha+\beta=k+1$},\\ 0&\text{otherwise}.\end{cases} \tag{1.1}\]
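For instance, taking \((k,l)=(2,5)\) (cf. the potential \(\mathcal{F}_{2,5}\) in the examples below) and ordering the coordinates as \((v^{-2},v^{-1},v^{0},v^{1},v^{2})\), Eq. (1.1) gives

\[\eta=\left(\begin{array}{ccccc}0&0&\frac{1}{2}&0&0\\ 0&\frac{1}{12}&0&0&0\\ \frac{1}{2}&0&0&0&0\\ 0&0&0&0&\frac{1}{8}\\ 0&0&0&\frac{1}{8}&0\end{array}\right),\]

so \(\eta\) pairs \(v^{-2}\) with \(v^{0}\), \(v^{-1}\) with itself and \(v^{1}\) with \(v^{2}\); in particular it is nondegenerate.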
The first theorem of ours is the following stabilization statement. Note that we distinguish between the upper and lower indices of the variables \(v^{\bullet}\) and \(v_{\bullet}\).
**Theorem 1.1**.: _Fix \(\alpha,\beta\in\mathbb{Z}\). For any \(p\geq 1\) we have_
\[\frac{\partial^{2}\mathcal{F}_{k,l}}{\partial v^{\alpha}\partial v^{\beta}}\mid_{v^{\gamma}=\eta^{\gamma\delta}v_{\delta}}=\frac{\partial^{2}\mathcal{F}_{k+p,l+p}}{\partial v^{\alpha}\partial v^{\beta}}\mid_{v^{\gamma}=\eta^{\gamma\delta}v_{\delta}}.\]
_whenever \(k\leq l\) are such that_
\[\begin{split}& k\geq\alpha+\beta-1\quad\text{and}\quad l\geq 2 +k-\alpha-\beta&\text{for}&\alpha\geq\beta\geq 1,\\ & k\geq\alpha,\quad\text{and}\quad l\geq k+1-\beta& \text{for}&\alpha\geq 1,\ \beta\leq 0,\\ & k\geq 2,\quad\text{and}\quad l\geq k+2-\alpha-\beta& \text{for}&\alpha\leq\beta\leq 0.\end{split} \tag{1.2}\]
Proof is given in Section 3.1.
### Commuting PDEs
In [BDbN] and related work the authors proved equalities similar to that of Theorem 1.1 for the \(A_{l}\), \(B_{l}\), \(D_{l}\) Coxeter groups and their "open extensions". It was also observed that the associativity equations of the corresponding algebra structures imply the consistency of the resulting infinite system of PDEs. We extend this result here for \(B_{k,l}\) Dubrovin-Frobenius manifolds.
Let \(f=f(\dots,t_{-1},t_{0},t_{1},t_{2},\dots)\) be a formal function depending on the variables \(t_{\bullet}\) with both positive and negative indices. Denote \(\partial_{\alpha}:=\frac{\partial}{\partial t_{\alpha}}\). Fix some \(d\geq 1\) and consider the system of PDEs
(d-PDEs) \[\partial_{\alpha}\partial_{\beta}f=\frac{\partial^{2}\mathcal{F}_{k_{min},k_{ min}+d}}{\partial v^{\alpha}\partial v^{\beta}}\mid_{v^{\gamma}=\eta^{\gamma \delta}\partial_{1}\partial_{\delta}f},\]
where \(k_{min}\) is the minimal index such that \(k=k_{min}\) and \(l=k_{min}+d\) satisfy Eq. (1.2) for the given pair \(\alpha,\beta\).
The right-hand side of this equation is a rational function of \(\partial_{1}\partial_{\bullet}f\) with rational coefficients. Theorem 1.1 asserts that the right-hand side is well-defined. The set \(\{\partial_{1}\partial_{\delta}f\}_{\delta=-\infty}^{\infty}\) should be considered as the initial condition data, and the PDEs above express all the second order derivatives of \(f\) via this initial condition data.
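For instance, Eq. (1.2) gives

\[\alpha=\beta=2:\ k\geq 3,\ l\geq k-2\ \Rightarrow\ k_{min}=3,\qquad\alpha=2,\ \beta=0:\ k\geq 2,\ l\geq k+1\ \Rightarrow\ k_{min}=2\ \text{ for any }d\geq 1,\]

so the flow for \(\partial_{2}\partial_{2}f\) is computed from \(\mathcal{F}_{3,3+d}\), while the flow for \(\partial_{0}\partial_{2}f\) is computed from \(\mathcal{F}_{2,2+d}\).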
**Theorem 1.2**.:
* _The system (_d-PDEs_) with_ \(\alpha,\beta>0\) _coincides with the dispersionless BKP hierarchy written in Fay form._
* _For any indices_ \(\alpha,\beta\) _let the initial condition data satisfy_ \[\partial_{\alpha}(\partial_{\beta}\partial_{1}f)=\sum_{\mu,\nu}\frac{\partial ^{3}\mathcal{F}_{k,k+d}}{\partial v^{\alpha}\partial v^{\beta}\partial v^{\mu }}\mid_{v^{\gamma}=\eta^{\gamma\delta}\partial_{1}\partial_{\delta}f}\eta^{ \mu\nu}\partial_{1}(\partial_{1}\partial_{\nu}f)\] _for any_ \(k\) _and_ \(l=k+d\) _as in Eq. (_1.2_)._ _Then the system (_d-PDEs_) is consistent._
* _The function_ \(f=\mathcal{F}_{k,k+d}\mid_{v^{\gamma}=t_{\gamma}}\) _is a solution to (_d-PDEs_) with_ \(\alpha\) _and_ \(\beta\) _satisfying Eq. (_1.2_) for the given_ \(k\) _and_ \(l=k+d\)_._
Proof is given in Section 3.5.
### Examples
Denote \(f_{\alpha,\beta}:=\partial_{\alpha}\partial_{\beta}f\). The flows of (d-PDEs) with \(\alpha=1\) or \(\beta=1\) are just the trivial identities \(f_{1,\alpha}=f_{1,\alpha}\). The more complicated flows read
\[f_{2,2} =\frac{4}{3}f_{1,1}^{3}-2f_{1,2}f_{1,1}+f_{1,3},\] \[f_{2,3} =4f_{1,2}f_{1,1}^{2}-2f_{1,3}f_{1,1}-2f_{1,2}^{2}+f_{1,4},\] \[f_{2,4} =4f_{1,3}f_{1,1}^{2}+4f_{1,2}^{2}f_{1,1}-2f_{1,4}f_{1,1}-4f_{1,2}f _{1,3}+f_{1,5}.\]
These are dispersionless BKP flows that are the same for any \(d\).
For \(d=1\) we get additionally
\[f_{0,2} =4f_{1,0}f_{1,1},\] \[f_{0,3} =4f_{1,0}(2f_{1,1}^{2}+f_{1,2}),\] \[f_{0,4} =4f_{1,0}(4f_{1,1}^{3}+4f_{1,2}f_{1,1}+f_{1,3}),\]
plus infinitely many more flows on \(f_{0,\alpha}\) with \(\alpha\geq 1\). We do not get any flows with negative indices.
For \(d=2\) in addition to BKP flows we get
\[f_{0,2} =2f_{1,0}f_{1,1},\] \[f_{0,3} =2f_{1,0}(2f_{1,1}^{2}+f_{1,2}),\] \[f_{0,4} =2f_{1,0}(4f_{1,1}^{3}+4f_{1,2}f_{1,1}+f_{1,3}),\]
\[f_{-1,2} =\frac{16}{3}f_{1,0}^{3}+2f_{1,-1}f_{1,1},\] \[f_{-1,3} =32f_{1,1}f_{1,0}^{3}+4f_{1,-1}f_{1,1}^{2}+2f_{1,-1}f_{1,2},\qquad f _{0,0}=\frac{f_{1,-1}}{8f_{1,0}}.\]
plus infinitely more flows on \(f_{-1,\alpha}\) and \(f_{0,\alpha}\) with \(\alpha\geq 1\).
### Acknowledgements
The work of Alexey Basalaev was supported by International Laboratory of Cluster Geometry NRU HSE, RF Government grant, ag. no. 075-15-2021-608 dated 08.06.2021.
## 2. \(B_{k,l}\) Dubrovin-Frobenius manifold
Fix \(k,l\), s.t. \(l\geq 1\) and \(1\leq k\leq l\). Let \(x^{1},\ldots,x^{l}\) be the orthonormal coordinates of \(\mathbb{C}^{l}\) equipped with the scalar product \((\cdot,\cdot)\).
Let \(\sigma_{i}\) be the \(i\)-th elementary symmetric function and \(y^{i}:=\sigma_{i}((x^{1})^{2},\ldots,(x^{l})^{2})\). Then \(y^{1},\ldots,y^{l}\) are coordinates on the orbit space of \(B_{l}\).
Denote by \(g\) the matrix with the following components
\[g^{ij}:=(dy^{i},dy^{j})=\sum_{p=1}^{l}\frac{\partial y^{i}}{\partial x^{p}} \frac{\partial y^{j}}{\partial x^{p}}.\]
It defines the metric on the orbit space of \(B_{l}\). For \(P_{l}:=u^{2l}+\sum_{j=1}^{l}u^{2(l-j)}y^{j}\) we have (cf. Proposition 2.2.2 of [SYS])
\[\sum_{i,j=1}g^{ij}u^{2(l-i)}v^{2(l-j)}=\frac{2}{u^{2}-v^{2}}(uP_{l}^{\prime}(u )P_{l}(v)-vP_{l}(u)P_{l}^{\prime}(v)), \tag{2.1}\]
The orbit space of \(B_{l}\) carries another bilinear form \(\eta\) with the components
\[\eta^{ij}:=\frac{\partial g^{ij}}{\partial y^{k}},\]
whose determinant is a constant multiple of \((y^{l})^{l-k}\). In particular, \(\eta\) is nondegenerate on \(M:=\{(y^{1},\ldots,y^{l})\in\mathbb{C}^{l}\mid y^{l}\neq 0\}\). It follows immediately that \(\eta\) is block-diagonal with the \(k\times k\) and \((l-k)\times(l-k)\) blocks.
**Convention 2.1**.: _In what follows \(g^{\bullet,\bullet}\) will stand for the components of the metric \(g\) in the \(dt^{\bullet}\) basis if the indices are Greek and in the \(dy^{\bullet}\) basis if the indices are Latin._
The following theorem summarizes Sections 2 and 3 of [Z07].
**Theorem 2.1** ([Z07]).: _The metrics \(\eta\) and \(g\) form a flat pencil. In particular, \(M\) carries the flat coordinates \(t^{1},\dots,t^{l}\), s.t. \(\eta\) is constant in these coordinates with the components given by Eq. (1.1) and the components of \(g\) can be integrated:_
\[\frac{g^{\alpha\beta}}{\widetilde{d}_{\alpha}+\widetilde{d}_{\beta}}=\eta^{ \alpha\gamma}\eta^{\beta\delta}\frac{\partial^{2}\mathcal{F}_{k,l}}{\partial t ^{\gamma}\partial t^{\delta}},\]
_with_
\[\widetilde{d}_{\alpha}=\frac{2\alpha-1}{2k},\ \widetilde{d}_{\gamma}=\frac{2(l+1- \gamma)-1}{2(l-k)},\quad\alpha\leq k,\ \gamma>k\]
_The potential \(\mathcal{F}_{k,l}\) depends polynomially on \(t^{1},\dots,t^{l-1}\) and is a Laurent polynomial in \(t^{l}\). The variable \(t^{k}\) is distinguished by_
\[\eta_{\alpha\beta}=\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial t^{k} \partial t^{\alpha}\partial t^{\beta}}.\]
The construction of B.Dubrovin (cf. [D1, D2]) is obtained by taking \(k=l\). In this case the determinant of \(\eta\) is a non-zero constant and \(M\) can be taken to be \(\mathbb{C}^{l}\).
It follows immediately from the block-diagonal form of \(\eta\) and also from the quasihomogeneity of the whole construction that the flat coordinates \(t^{\bullet}\) above are expressed via the coordinates \(y^{\bullet}\) so that
\[\begin{split}& t^{\alpha}=t^{\alpha}(y^{1},\dots,y^{\alpha}) \quad\text{ if }\quad\alpha\leq k,\\ & t^{\beta}=t^{\gamma}(y^{\gamma},\dots,y^{l})\quad\text{ if }\quad\gamma\geq k+1.\end{split} \tag{2.2}\]
Moreover the coordinates \(t^{1},\dots,t^{k}\) are expressed via \(y^{1},\dots,y^{k}\) by the same formulae as in Dubrovin's \(B_{k}\) construction.
### Examples
Let the subscript \((l)\) indicate the rank of the group to which \(g\) corresponds. In the \(dy\) basis we have
\[g_{(4)}=\left(\begin{array}{cccc}4y^{1}&8y^{2}&12y^{3}&16y^{4}\\ 8y^{2}&4y^{1}y^{2}+12y^{3}&8y^{1}y^{3}+16y^{4}&12y^{1}y^{4}\\ 12y^{3}&8y^{1}y^{3}+16y^{4}&4y^{2}y^{3}+12y^{1}y^{4}&8y^{2}y^{4}\\ 16y^{4}&12y^{1}y^{4}&8y^{2}y^{4}&4y^{3}y^{4}\end{array}\right),\]
The flat coordinates are given by
\[\begin{split}& y^{1}=t^{1},\ y^{2}=t^{2}+\frac{(t^{1})^{2}}{4},\ y^{3}=t^{3}+\frac{(t^{1})^{3}}{108}+\frac{t^{1}t^{2}}{6}, \quad y^{4}=(t^{4})^{2}&k=3,\\ & y^{1}=t^{1},\ y^{2}=t^{2}+\frac{(t^{1})^{2}}{8},\quad y^{3}=t^{3}t^{4},\ y^{4}=(t^{4})^{4},&k=2,\\ & y^{1}=t^{1},\quad y^{2}=t^{2}t^{4}+\frac{1}{12}(t^{3})^{2},\ y^{3}=t^{3}(t^{ 4})^{3},\ y^{4}=(t^{4})^{6},&k=1.\end{split}\]
\[g_{(5)}=\left(\begin{array}{cccc}4y^{1}&8y^{2}&12y^{3}&16y^{4}&20y^{5}\\ 8y^{2}&4y^{1}y^{2}+12y^{3}&8y^{1}y^{3}+16y^{4}&12y^{1}y^{4}+20y^{5}&16y^{1}y^{5} \\ 12y^{3}&8y^{1}y^{3}+16y^{4}&4y^{2}y^{3}+12y^{1}y^{4}+20y^{5}&8y^{2}y^{4}+16y^{ 1}y^{5}&12y^{2}y^{5}\\ 16y^{4}&12y^{1}y^{4}+20y^{5}&8y^{2}y^{4}+16y^{1}y^{5}&4y^{3}y^{4}+12y^{2}y^{5}&8y ^{3}y^{5}\\ 20y^{5}&16y^{1}y^{5}&12y^{2}y^{5}&8y^{3}y^{5}&4y^{4}y^{5}\end{array}\right).\]
The flat coordinates are given by
\[y^{1}=t^{1},\ y^{2}=t^{2}+\frac{5(t^{1})^{2}}{16},\ y^{3}=t^{3}+\frac{(t^{1})^{ 3}}{32}+\frac{3t^{1}t^{2}}{8},\]
\[y^{4}=t^{4}+\frac{(t^{1})^{4}}{2048}+\frac{(t^{1})^{2}t^{2}}{64}+\frac{t^{1}t^{3}}{ 8}+\frac{(t^{2})^{2}}{16},\quad y^{5}=(t^{5})^{2}, k=4,\]
\[y^{1}=t^{1},\ y^{2}=t^{2}+\frac{(t^{1})^{2}}{4},\ y^{3}=t^{3}+\frac{(t^{1})^{3}}{ 108}+\frac{t^{2}t^{1}}{6},\qquad y^{4}=t^{4}t^{5},\ y^{5}=(t^{5})^{4} k=3,\]
\[y^{1}=t^{1},\ y^{2}=t^{2}+\frac{(t^{1})^{2}}{8},\qquad y^{3}=t^{3}t^{5}+\frac{ (t^{4})^{2}}{12},\ y^{4}=t^{4}(t^{5})^{3},\ y^{5}=(t^{5})^{6}, k=2,\]
\[y^{1}=t^{1}\qquad y^{2}=t^{2}t^{5}+\frac{t^{3}t^{4}}{8},\ y^{3}=t^{3}(t^{5})^{3 }+\frac{3}{16}(t^{4})^{2}(t^{5})^{2},\ y^{4}=t^{4}(t^{5})^{5},\ y^{5}=(t^{5})^ {8}, k=1.\]
Some examples of the potentials \(\mathcal{F}_{k,l}\) are the following1:
Footnote 1: we follow Dubrovin’s convention using only lower indices in the examples
\[\mathcal{F}_{2,5}=\frac{t_{5}^{6}}{10}+\frac{t_{1}t_{4}t_{5}^{3}} {6}+\frac{t_{1}^{2}t_{3}t_{5}}{16}+\frac{t_{2}t_{3}t_{5}}{2}+\frac{t_{1}^{5}} {7680}+\frac{t_{1}t_{2}^{2}}{16}+\frac{t_{1}^{2}t_{4}^{2}}{16}+\frac{t_{2}t_{ 4}^{2}}{24}+\frac{t_{3}^{2}t_{4}}{24t_{5}}-\frac{t_{3}t_{4}^{3}}{216t_{5}^{2}} +\frac{t_{4}^{5}}{4320t_{5}^{3}}\] \[\mathcal{F}_{3,6}=\frac{t_{1}^{7}}{3265920}+\frac{t_{2}^{2}t_{1}^ {3}}{2592}+\frac{t_{2}^{2}t_{1}^{3}}{2592}+\frac{t_{4}t_{6}t_{1}^{3}}{216}+ \frac{t_{5}t_{6}^{3}t_{2}^{2}}{24}+\frac{t_{6}^{6}t_{1}}{10}-\frac{t_{2}^{3}t_ {1}}{432}+\frac{t_{3}^{2}t_{1}}{24}\] \[\quad+\frac{t_{2}t_{5}^{2}t_{1}}{144}+\frac{t_{2}t_{4}t_{6}t_{1}} {12}+\frac{t_{2}t_{5}t_{6}^{3}}{6}+\frac{t_{3}t_{5}^{2}}{24}+\frac{t_{2}^{2}t_ {3}}{24}+\frac{t_{3}t_{4}t_{6}}{2}+\frac{t_{4}^{2}t_{5}}{24t_{6}}-\frac{t_{4}t _{5}^{4}}{216t_{5}^{2}}+\frac{t_{5}^{5}}{4320t_{6}^{3}}\]
### Some zeros
The following proposition gives some insight on the structure of \(B_{k,l}\) Dubrovin-Frobenius manifold.
**Proposition 2.2**.: _Let \(k\geq 2\). For any \(\gamma,\delta,\alpha,\beta\), s.t. \(k+1\leq\gamma,\delta\leq l\) and \(1\leq\alpha,\beta\leq k\) we have_
* _if_ \(\gamma+\delta\leq k+l\)_, then_ \[\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial t^{\alpha}\partial t^{\gamma} \partial t^{\delta}}=0,\]
* _if_ \(\alpha+\beta\geq k+1\)_, then_ \[\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial t^{\gamma}\partial t^{\alpha} \partial t^{\beta}}=0.\]
Proof.: For case (a) we have to prove that the following expression vanishes.
\[\frac{\partial}{\partial t^{\alpha}}g^{k+l+1-\gamma,k+l+1-\delta}=\sum\frac{ \partial y^{c}}{\partial t^{\alpha}}\frac{\partial}{\partial y^{c}}\left(\frac {\partial t^{k+l+1-\gamma}}{\partial y^{a}}\frac{\partial t^{k+l+1-\delta}}{ \partial y^{b}}g^{ab}\right),\]
where the summation is taken over \(1\leq c\leq\alpha\) and \(l\geq a\geq k+l+1-\gamma\), \(l\geq b\geq k+l+1-\delta\) by Eq. (2.2). By using these equalities again we get
\[\frac{\partial}{\partial t^{\alpha}}g^{k+l+1-\gamma,k+l+1-\delta}=\sum\frac{ \partial y^{c}}{\partial t^{\alpha}}\frac{\partial t^{k+l+1-\gamma}}{\partial y ^{a}}\frac{\partial t^{k+l+1-\delta}}{\partial y^{b}}\frac{\partial g^{ab}}{ \partial y^{c}}.\]
To prove the proposition we show that \(\partial g^{ab}/\partial y^{c}=0\) under the conditions on the indices \(a,b,c\) above.
It follows from Eq. (2.1) that \(g^{ab}\) is at most quadratic in \(y^{\bullet}\). Moreover if one assigns the grading to \(y^{p}\) by \(\deg y^{p}=p\) then \(\deg g^{ab}=a+b-1\). We have
\[\deg\frac{\partial g^{ab}}{\partial y^{c}}\geq 2(k+l+1)-\gamma-\delta-1-c\geq k+l+1-c.\]
If \(g^{ab}\) is quadratic and the partial derivative above is non-zero, then its degree should be the degree of some \(y^{\bullet}\). With the bounds on \(c\) above this can never be reached.
The proof of claim (b) is completely similar and therefore skipped.
**Remark 2.3**.: _One notes immediately by looking at \(\mathcal{F}_{2,5}\) and \(\mathcal{F}_{3,6}\) above that the conditions on \(\gamma+\delta\) in (a) and \(\alpha+\beta\) in (b) above are strict._
**Corollary 2.4**.: _For \(\alpha+\beta\leq k+1\) we have_
\[\frac{\partial^{2}\mathcal{F}_{k,l}}{\partial v^{\alpha}\partial v^{\beta}}= \frac{\partial^{2}\mathcal{F}_{k,k}}{\partial v^{\alpha}\partial v^{\beta}}.\]
Proof.: This follows immediately from the fact that for \(1\leq\alpha,\beta\leq k\), in the flat basis, the components \(g^{\alpha,\beta}\) for \(B_{k,l}\) coincide with the components \(g^{\alpha,\beta}\) for \(B_{k}\) up to a function depending on \(t^{\gamma}\) with \(\gamma\geq k+1\). The proposition above shows that this dependence is trivial if \(\alpha+\beta\leq k+1\).
## 3. Proofs
### Proof of Theorem 1.1
Let \(\alpha,\beta\in\mathbb{Z}\). We divide the proof into three parts: 1st: \(\alpha,\beta\geq 1\), 2nd: \(\alpha,\beta\leq 0\), and 3rd: \(\alpha\leq 0\) and \(\beta\geq 0\). We call these parts the PP, NN and PN sectors respectively, with P standing for positive and N for negative.
In what follows we rewrite the stabilization statement of the theorem in the coordinates \(t^{\bullet}\). In particular, we will have no negative indices.
### Sector PP
This case follows from Corollary 2.4 and Proposition 4.3 of [BDbN].
### Sector NN
Let \(1\leq\gamma,\delta\leq l-k\) s.t. \(\gamma+\delta\leq l-k\). Stabilization in NN sector is equivalent to
\[\frac{\partial^{2}\mathcal{F}_{k,l}}{\partial t^{k+\gamma}\partial t^{k+ \delta}}(\widetilde{t})=\frac{\partial^{2}\mathcal{F}_{k+r,l+r}}{\partial t^{ k+r+\gamma}\partial t^{k+r+\delta}}(\widetilde{t})\]
where \(\widetilde{t}=\widetilde{t}(t)\) is the change of the variables given by \(t^{\epsilon}=4k\cdot\widetilde{t}^{\epsilon}\) if \(1\leq\epsilon\leq k\) and \(t^{l}=2\cdot\widetilde{t}^{k+1-l}\), \(t^{k+1}=2\cdot\widetilde{t}^{0}\), \(t^{\nu}=4(l-k)\cdot\widetilde{t}^{k+1-\nu}\) for \(\nu\geq k+1\).
The equation above is equivalent to
\[\frac{g^{l+1-\gamma,l+1-\delta}_{(l)}(\widetilde{t})}{\widetilde{d}_{l+1- \gamma}+\widetilde{d}_{l+1-\delta}}=\frac{g^{l+r+1-\gamma,l+r+1-\delta}_{(l+r) }(\widetilde{t})}{\widetilde{d}_{l+r+1-\gamma}+\widetilde{d}_{l+r+1-\delta}}.\]
One notes immediately that the denominators on the both sides coincide. The numerator on the left hand side is expressed by
\[g^{l+1-\gamma,l+1-\delta}_{(l)}=\sum_{i=1}^{\gamma}\sum_{j=1}^{\delta}\frac{ \partial t^{l+1-\gamma}}{\partial y^{l+1-i}}\frac{\partial t^{l+1-\delta}}{ \partial y^{l+1-j}}g^{l+1-i,l+1-j}_{(l)}, \tag{3.1}\]
where we follow Convention 2.1.
The following lemma is the key to the stabilization in the NN sector. Let \(\widetilde{y}\) be the new coordinates s.t. \(\widetilde{y}^{i}=y^{k+1-i}\). Due to Proposition 2.2, in the NN sector the change of variables \(t\to\widetilde{t}\) is equivalent to the change of variables \(y\to\widetilde{y}\).
**Lemma 3.1**.: _For any \(r\geq 1\) and all \(i,j\), s.t. \(i+j\leq l\)_
\[g^{l+1-i,l+1-j}_{(l)}(\widetilde{y})=g^{l+r+1-i,l+r+1-j}_{(l+r)}(\widetilde{y }).\]
Proof.: Denote by \(W_{l}\) the left hand side generating function of Eq (2.1) written in coordinates \(\widetilde{y}^{\bullet}\). We are to show that the coefficient of \(u^{2(i-1)}v^{2(j-1)}\) in \(W_{l}\) equals the coefficient of \(u^{2(i+r-1)}v^{2(j+r-1)}\) in \(W_{l+r}\). This is equivalent to
\[W_{l}=W_{l+r}\quad\text{modulo}\ u^{2a}v^{2b}\ \text{ s.t. }a+b\geq 2l. \tag{3.2}\]
However \(P_{l}=u^{2l}+u^{2(l-1)}\widetilde{y}^{k}+\cdots+u^{2}\widetilde{y}^{k-l}+ \widetilde{y}^{k+1-l}\) and therefore
\[P_{l}=P_{l+r}\mid_{\widetilde{y}^{\bullet}=0,\ a>k}.\] \[\Leftrightarrow\ P_{l}-P_{l+r}\equiv 0\quad\text{modulo}\ u^{2l-1}.\]
The lemma follows now from Eq. (2.1).
**Corollary 3.2**.: _Under the conditions of the lemma above we have_
\[\eta^{l+1-i,l+1-j}_{(k,l)}(\widetilde{y})=\eta^{l+r+1-i,l+r+1-j}_{(k+r,l+r)}( \widetilde{y}).\]
Proof.: This follows by differentiating the statement of the lemma above w.r.t. \(\widetilde{y}^{1}\). The metric \(\eta_{(k,l)}\) is obtained from \(g_{(l)}\) by differentiating w.r.t. \(y^{k}=\widetilde{y}^{1}\).
The conditions of the lemma above hold true for Eq. (3.1) because of the bounds on \(\gamma+\delta\) and \(k\).
It follows from the corollary above that the matrices \(\{\partial t^{l+1-\gamma}/\partial y^{l+1-i}\}_{\gamma,i=1}^{l-k}\) also stabilize. In particular, the flat coordinates \(t^{k+1},\ldots,t^{l}\), expressed via \(\widetilde{y}\) coincide for \(B_{k,l}\) and \(B_{k+r,l+r}\).
Summing up, we see that all three factors in the sum in Eq. (3.1) stabilize. This completes the proof.
### Sector PN
Assume \(\alpha,\mu,\nu\) are s.t. \(1\leq\alpha\leq k\) and \(0\leq\mu,\nu\leq l-k-1\). The potential \(\mathcal{F}_{k,l}\) is at least cubic in its variables and the stabilization in the PN sector is equivalent to
\[\left[\frac{\partial}{\partial t^{k+1+\mu}}\frac{\partial^{2} \mathcal{F}_{k,l}}{\partial t^{k+1-\alpha}\partial t^{k+1+\nu}}\right]( \widetilde{t}) =\left[\frac{\partial}{\partial t^{r+k+1+\mu}}\frac{\partial^{2 }\mathcal{F}_{k+r,l+r}}{\partial t^{k+r+1-\alpha}\partial t^{k+r+1+\nu}} \right](\widetilde{t}) \tag{3.4}\] \[\left[\frac{\partial}{\partial t^{k+1+\mu}}\frac{\partial^{2} \mathcal{F}_{k,l}}{\partial t^{k+1-\alpha}\partial t^{k+1+\nu}}\right]( \widetilde{t}) =\left[\frac{\partial}{\partial t^{r+k+1+\mu}}\frac{\partial^{2 }\mathcal{F}_{k+r,l+r}}{\partial t^{k+r+1-\alpha}\partial t^{k+r+1+\nu}} \right](\widetilde{t}) \tag{3.3}\]
where \(\widetilde{t}=\widetilde{t}(t)\) is the change of the variables given by \(t^{\epsilon}=4k\cdot\widetilde{t}^{\epsilon}\) if \(1\leq\epsilon\leq k\) and \(t^{l}=2\cdot\widetilde{t}^{k+1-l}\), \(t^{k+1}=2\cdot\widetilde{t}^{0}\), \(t^{\nu}=4(l-k)\cdot\widetilde{t}^{k+1-\nu}\) for \(\nu\geq k+1\).
The first equality above is equivalent to
\[\left[\frac{\partial}{\partial t^{k+1-\alpha}}\frac{g_{(l)}^{l-\mu,l-\nu}}{ \widetilde{d}_{l-\mu}+\widetilde{d}_{l-\nu}}\right](\widetilde{t})=\left[ \frac{\partial}{\partial t^{k+r+1-\alpha}}\frac{g_{(l+r)}^{l+r-\mu,l+r-\nu}} {\widetilde{d}_{l+r-\mu}+\widetilde{d}_{l+r-\nu}}\right](\widetilde{t})\]
For the denominator we have
\[\widetilde{d}_{l-\mu}+\widetilde{d}_{l-\nu}=\frac{1+\mu+\nu}{l-k}=\widetilde{ d}_{l+r-\mu}+\widetilde{d}_{l+r-\nu}\]
and we consider the numerator in details.
Recall Convention 2.1. Compute
\[\frac{\partial}{\partial t^{k+1-\alpha}}g_{(l)}^{l-\mu,l-\nu}=\sum_{p=1}^{\nu +1}\sum_{q=1}^{\mu+1}\sum_{d=l-k+\alpha}^{l}\frac{\partial y^{l+1-d}}{ \partial t^{k+1-\alpha}}\frac{\partial t^{l-\nu}}{\partial y^{l+1-p}}\frac{ \partial t^{l-\mu}}{\partial y^{l+1-q}}\frac{\partial g_{(l)}^{l+1-p,l+1-q}}{ \partial y^{l+1-d}}. \tag{3.5}\]
Due to the quasihomogeneity of \(g\) all the summands above with \(d\geq p+q-(l+1)\) are zero. It was proved in Section 3.3 that \(\partial t^{l-\nu}/\partial y^{l+1-p}\) and \(\partial t^{l-\mu}/\partial y^{l+1-q}\) stabilize. It was proved in [BDbN] that \(\partial y^{l+1-d}/\partial t^{k+1-\alpha}\) stabilizes too. It remains to consider the fourth factor above.
The expression of Eq. (3.5) is zero by Proposition 2.2 unless \(l-\nu+l-\mu\leq k+l+1\). For this range we have \(p+q\leq l+1-k\).
**Lemma 3.3**.: _Let \(\widetilde{y}\) be as in Section 3.3 and \(p,q\) be s.t. \(p+q<l+1\) then_
\[\frac{\partial g_{(l)}^{l+1-p,l+1-q}}{\partial y^{l+1-d}}(\widetilde{y})= \frac{\partial g_{(l+r)}^{l+r+1-p,l+r+1-q}}{\partial y^{l+r+1-d}}(\widetilde{y})\]
_as the functions of \(\widetilde{y}\)._
Proof.: This follows immediately from Eq. (2.1).
The partial derivatives in the lemma above are linear functions of \(y^{k+1},\ldots,y^{l}\). The change of variables \(y\to\widetilde{y}\) in this range is induced by \(t\to\widetilde{t}\). This lemma shows that all four factors in Eq. (3.5) stabilize in the \(\widetilde{t}\) coordinates.
The proof of Eq. (3.4) is skipped because it goes completely parallel.
### Proof of Theorem 1.2
Part (a) follows immediately from [BDbN][Theorem 6.3] and Corollary 2.4.
To prove part (b) we have to show that \(\partial_{\gamma}(\partial_{\alpha}\partial_{\beta}f)=\partial_{\alpha}(\partial_ {\beta}\partial_{\gamma}f)\) for any \(\alpha,\beta,\gamma\in\mathbb{Z}\).
Being the potential of a Dubrovin-Frobenius manifold, \(\mathcal{F}_{k,l}\) satisfies
\[\sum_{\nu,\mu}\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial v^{\sigma}\partial v^{\gamma}\partial v^{\nu}}\eta^{\nu\mu}\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial v^{\mu}\partial v^{\alpha}\partial v^{\beta}}=\sum_{\nu,\mu}\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial v^{\alpha}\partial v^{\gamma}\partial v^{\nu}}\eta^{\nu\mu}\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial v^{\mu}\partial v^{\sigma}\partial v^{\beta}} \tag{3.6}\]
for any fixed indices \(\alpha,\beta,\sigma,\gamma\).
For given \(\alpha,\beta,\gamma\) let \(k\) and \(l\) be s.t. the conditions of Theorem 1.2 hold true for any pair of indices from the three given. Express the partial derivative \(\partial_{\gamma}(\partial_{\alpha}\partial_{\beta}f)\):
\[\begin{split}\partial_{\gamma}(\partial_{\alpha}\partial_{\beta}f)&=\sum_{\nu,\mu}\partial_{\gamma}\partial_{1}\partial_{\nu}f\cdot\eta^{\nu\mu}\cdot\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial v^{\mu}\partial v^{\alpha}\partial v^{\beta}}\mid_{v^{\epsilon}=\eta^{\epsilon\omega}\partial_{1}\partial_{\omega}f}\\ &=\sum_{\nu,\mu}\partial_{1}\left(\frac{\partial^{2}\mathcal{F}_{k,l}}{\partial v^{\gamma}\partial v^{\nu}}\mid_{v^{\epsilon}=\eta^{\epsilon\omega}\partial_{1}\partial_{\omega}f}\right)\cdot\eta^{\nu\mu}\cdot\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial v^{\mu}\partial v^{\alpha}\partial v^{\beta}}\mid_{v^{\epsilon}=\eta^{\epsilon\omega}\partial_{1}\partial_{\omega}f}\\ &=\sum_{\delta,\sigma}\partial_{1}\partial_{1}\partial_{\delta}f\cdot\eta^{\delta\sigma}\left(\sum_{\nu,\mu}\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial v^{\sigma}\partial v^{\gamma}\partial v^{\nu}}\cdot\eta^{\nu\mu}\cdot\frac{\partial^{3}\mathcal{F}_{k,l}}{\partial v^{\mu}\partial v^{\alpha}\partial v^{\beta}}\right)\mid_{v^{\epsilon}=\eta^{\epsilon\omega}\partial_{1}\partial_{\omega}f}.\end{split}\]
The expression obtained is symmetric in \(\alpha,\beta,\gamma\) by the WDVV equation above.
For part (c) we have to show that
\[\frac{\partial^{2}\mathcal{F}_{k,l}}{\partial v^{\alpha}\partial v^{\beta}}=\frac{\partial^{2}\mathcal{F}_{k,l}}{\partial v^{\alpha}\partial v^{\beta}}\mid_{v^{\gamma}=\eta^{\gamma\delta}\partial_{1}\partial_{\delta}\mathcal{F}_{k,l}}.\]
The substitution on the RHS is
\[v^{\gamma}=\eta^{\gamma\delta}\frac{\partial^{2}\mathcal{F}_{k,l}}{\partial v^{1}\partial v^{\delta}}=v^{\gamma}\]
and the statement follows trivially.
|
2305.20075 | The Effective Field Theory of Large Scale Structures of a Fuzzy Dark
Matter Universe | Ultra-light scalar fields and their non-interacting class, the so-called
fuzzy dark matter (FDM), are candidates for dark matter, introduced to solve
the small-scale problems of the standard cold dark matter. In this paper, we
address whether the small-scale effects, specifically the quantum pressure,
could leave sizable imprints on the large-scale statistics of the matter. For
this purpose, We utilize the Effective Field Theory of Large Scale Structures
(EFT of LSS) wherein small-scale physics is integrated and represented on large
scales by only a set of free parameters. These parameters can be determined by
fitting to the cosmological simulations. We use the \textit{Gadget-2} code to
study the evolution of $512^3$ particles in a box of side length
$250\,h^{-1}\,\mathrm{Mpc}$. Fitting EFT predictions to the simulation data, we
determine the value of the speed of sound. We use the suppressed FDM initial
conditions for the FDM case, sufficient to produce accurate -- enough for our
purpose -- results on large scales. We perform three FDM simulations with
different masses and compare their sound speed with the standard cold dark
matter (CDM) simulation. We found that the FDM sound speed is slightly higher
than CDM's. The deviation of the sound speed for FDM from CDM is larger for
lower FDM masses. We conclude that the impact of the FDM is not limited to the
small scales alone, and we can search for them by studying the matter on large
scales. Though it is beyond the observations' scope today, it is possible to
discriminate it with upcoming observations. | Hamed Manouchehri Kousha, Sina Hooshangi, Aliakbar Abolhasani | 2023-05-31T17:48:46Z | http://arxiv.org/abs/2305.20075v2 | # The Effective Field Theory of Large Scale Structures of a Fuzzy Dark Matter Universe
###### Abstract
Ultra-light scalar fields and their non-interacting class, the so-called fuzzy dark matter (FDM), are candidates for dark matter, introduced to solve the small-scale problems of the standard cold dark matter. In this paper, we address whether the small-scale effects, specifically the quantum pressure, could leave sizable imprints on the large-scale statistics of the matter. For this purpose, we utilize the Effective Field Theory of Large Scale Structures (EFT of LSS), wherein small-scale physics is integrated out and represented on large scales by only a set of free parameters. These parameters can be determined by fitting to the cosmological simulations. We use the _Gadget-2_ code to study the evolution of \(512^{3}\) particles in a box of side length \(250\,h^{-1}\,\mathrm{Mpc}\). Fitting EFT predictions to the simulation data, we determine the value of the speed of sound. We use the suppressed FDM initial conditions for the FDM case, sufficient to produce results on large scales that are accurate enough for our purpose. We perform three FDM simulations with different masses and compare their sound speed with the standard cold dark matter (CDM) simulation. We found that the FDM sound speed is slightly higher than CDM's. The deviation of the sound speed for FDM from CDM is larger for lower FDM masses. We conclude that the impact of the FDM is not limited to the small scales alone, and we can search for them by studying the matter on large scales. Though it is beyond the observations' scope today, it is possible to discriminate it with upcoming observations.
## 1 Introduction
According to the standard model of cosmology, nearly 26 percent of the universe's energy content consists of some cold matter with negligible non-gravitational interaction, called _dark matter_(Planck Collaboration et al., 2020). The standard candidate for dark matter is weakly interacting massive particles (WIMPs). The theoretical predictions based on WIMPs are consistent with large-scale observational data. However, in small scales (\(\sim 10\,\mathrm{kpc}\)), some discrepancies emerge, e.g., the core-cusp problem (Moore et al., 1999), the missing satellite problem (Moore et al., 1999), and the too-big-to-fail problem (Boylan-Kolchin et al., 2011). There are two main ways people try to resolve these problems: either by exploring baryonic feedbacks such as supernova explosions that might be responsible for the disruption of small-scale structures or by proposing other dark matter candidates with new physics in small-scales.
One class of alternative candidates for dark matter is ultra-light scalar fields (ULSFs, Hu et al., 2000). With a mass of about \(10^{-22}\,\mathrm{eV}\), they have a long de-Broglie wavelength, showing quantum effects on galactic scales, which could resolve the small-scale problems (see e.g. Ferreira, 2021; Hui, 2021, for a recent review). Strictly speaking, the uncertainty principle appears as an additional pressure term in Euler's equation, the so-called _quantum pressure_ (QP). It results in smooth cores in the center of halos rather than sharp cusps, which is the prediction of the standard CDM. Furthermore, due to the QP, small-scale non-linearities are somewhat smeared. The suppression of the amplitude of the perturbations, in turn, leads to a fall-off of the matter power spectrum on small scales, which means that fewer low-mass halos and sub-halos will form, so the missing satellite problem could be resolved. This suppression, along with the lower maximum circular speed of the baryons in FDM halos, could also relieve the "too big to fail" problem. However, these are still controversial (see e.g. Deng et al., 2018; Robles et al., 2018), and there is no consensus on whether baryonic feedback or alternative candidates like FDM could resolve the small-scale problems completely in a coherent manner (see e.g. Hui et al., 2017; Del Popolo & Le Delliou, 2017; Bullock & Boylan-Kolchin, 2017, for review).
In addition to the suppression of small-scale structure formation, ULSFs have some other fingerprints, including the formation of some bound objects at the center of halos, due to the balance of gravity and QP, i.e., the so-called _solitonic cores_, or the formation of _quantum interference patterns_ of the size of the de-Broglie wavelength. In general, ULSFs could also have self-interactions. The most straightforward class of ULSF dark matter, which has zero self-interaction, is often called "Fuzzy Dark Matter" (FDM, see, e.g. Li et al., 2019, for a recent review).
In the past few years, cosmological simulations based on FDM dynamics have been widely used to study interference patterns (e.g. Schive et al., 2014; Li et al., 2019), mergers of solitonic cores (e.g. Schwabe et al., 2016; Edwards et al., 2018), suppression of the mass power spectrum (e.g. Li et al., 2019; Nori and Baldi, 2018; May and Springel, 2021), suppression of the halo mass function (e.g. Schive et al., 2016; May and Springel, 2021; Zhang et al., 2018), mixed fuzzy and cold dark matter (e.g. Schwabe et al., 2020), oscillations and random walk of the solitonic cores (e.g. Li et al., 2021; Schive et al., 2020), etc. All of these works have used relatively small-box simulations to study the small-scale new physics of FDM, so the possible back-reaction of this UV physics on the large-scale dynamics of the universe could not be seen in them. Capturing such effects requires FDM simulations with a box size of over \(\sim 200\,h^{-1}\,\mathrm{Mpc}\). The study of FDM structure formation on these large scales is the subject of this work.
The universe at its largest scales is almost homogeneous with tiny fluctuations. Hence, its dynamics are amenable to the perturbation theory. However, once we get closer to the scale of clusters and galaxies, the universe is clumpy and non-linear. The Effective Field Theory of large-scale structures (EFT of LSS) is a framework for the study of matter perturbations in linear and quasi-linear regimes (Baumann et al., 2012; Carrasco et al., 2012; Hertzberg, 2014; Carrasco et al., 2014, 2014; Senatore and Zaldarriaga, 2015; Baldauf et al., 2015; Foreman and Senatore, 2016; Foreman et al., 2016; Abolhasani et al., 2016). The primary mission of the EFT of LSS is to push the range of validity of the perturbation theory toward the non-linear regime. As long as we are interested in the dynamics of the large-scale perturbations, we can integrate out the short modes, at the expense of their appearing as effective sources on the right-hand side of the fluid equations. While the EFT of LSS fixes the general form of those source terms, it cannot say anything about the actual values of the parameters of the effective fluid, particularly the speed of sound or bulk viscosity. To determine these parameters, one has to resort to large-box computer simulations (Baumann et al., 2012). Different cosmological parameters or dynamics on large scales could change the values of these parameters; however, it is not clear whether new physics on small scales could also change them. This work aims to answer this question in the case of FDM. Specifically, we study the impact of the special UV physics of FDM, i.e., the QP, on the one-loop speed of sound parameter of the EFT of LSS, in comparison to the standard cold dark matter (CDM). This parameter has been determined using large-box CDM simulations in Senatore and Zaldarriaga (2015), Foreman and Senatore (2016) and Foreman et al. (2016). We use the same procedure for large-box CDM and FDM simulations, performed using the public _Gadget-2_1 code (Springel, 2005; Zhang et al., 2018), with the proper initial conditions for each, and compare the results.
Footnote 1: [https://wwwmpa.mpa-garching.mpg.de/gadget/](https://wwwmpa.mpa-garching.mpg.de/gadget/)
The paper is organized as follows: In Sec. 2, we briefly review the standard perturbation theory (SPT) and emphasize relevant essential points. In Sec. 3, we discuss the main ideas of FDM, including its dynamical equations and the FDM perturbation theory. The subject of Sec. 4 is to explain the details of our cosmological simulations of CDM and FDM. Finally, in Sec. 5, after explaining our procedure for determining EFT parameters of CDM and FDM using our simulations, we discuss and compare some of the main results.
## 2 Standard Perturbation Theory
Let us first briefly review the main lines of the SPT (see Bernardeau et al., 2002, for a comprehensive review). The equations governing the evolution of the matter density contrast, \(\delta\), and the velocity field of matter, \(\mathbf{v}\), within the SPT are
\[\delta^{\prime}+\theta= -\partial_{i}(\delta\ v^{i})\,, \tag{1}\] \[\mathbf{v}^{\prime}+\mathcal{H}\mathbf{v}+\nabla\phi= -\mathbf{v}\cdot\nabla\mathbf{v}, \tag{2}\]
where \(\phi\) is the gravitational field, \(\theta\) is the divergence of the velocity field, \(\theta\equiv\nabla\cdot\mathbf{v}\), the primes denote derivatives with respect to conformal time, \(\partial/\partial\eta\equiv a\,\partial/\partial t\), in which \(a\) is the scale factor, and \(\mathcal{H}\) is the conformal expansion rate. The solution to this coupled system of equations is usually presented perturbatively as a product of initial values of the fields \(\delta\) and \(\theta\) integrated against the so-called SPT kernels as
\[\delta_{n}(\mathbf{k}) =\int_{q_{1}}\cdots\int_{q_{n}}(2\pi)^{3}\delta(\mathbf{k}-\mathbf{q}_{1}- \cdots-\mathbf{q}_{n})\,F_{n}(\mathbf{q}_{1},\ldots,\mathbf{q}_{n})\,\delta_{1}(\mathbf{q}_{1}) \ldots\delta_{1}(\mathbf{q}_{n}) \tag{3}\] \[\theta_{n}(\mathbf{k}) =\int_{q_{1}}\cdots\int_{q_{n}}(2\pi)^{3}\delta(\mathbf{k}-\mathbf{q}_{1}- \cdots-\mathbf{q}_{n})\,G_{n}(\mathbf{q}_{1},\ldots,\mathbf{q}_{n})\,\delta_{1}(\mathbf{q}_{1} )\ldots\delta_{1}(\mathbf{q}_{n})\,. \tag{4}\]
The SPT kernels can be calculated through the following recursion formulas, in which \(\mathbf{k}_{1}\equiv\mathbf{q}_{1}+\cdots+\mathbf{q}_{m}\) and \(\mathbf{k}_{2}\equiv\mathbf{q}_{m+1}+\cdots+\mathbf{q}_{n}\),
\[F_{n}(\mathbf{q}_{1},\ldots,\mathbf{q}_{n}) =\sum_{m=1}^{n-1}\frac{G_{m}(\mathbf{q}_{1},\ldots,\mathbf{q}_{m})}{(2n+3 )(n-1)}\Big{[}(2n+1)\alpha(\mathbf{k}_{1},\mathbf{k}_{2})F_{n-m}(\mathbf{q}_{m+1},\ldots, \mathbf{q}_{n})\] \[+2\beta(\mathbf{k}_{1},\mathbf{k}_{2})G_{n-m}(\mathbf{q}_{m+1},\ldots,\mathbf{q}_ {n})\Big{]}, \tag{5}\] \[G_{n}(\mathbf{q}_{1},\ldots,\mathbf{q}_{n}) =\sum_{m=1}^{n-1}\frac{G_{m}(\mathbf{q}_{1},\ldots,\mathbf{q}_{m})}{(2n+3 )(n-1)}\Big{[}3\alpha(\mathbf{k}_{1},\mathbf{k}_{2})F_{n-m}(\mathbf{q}_{m+1},\ldots,\mathbf{q} _{n})\] \[+2n\beta(\mathbf{k}_{1},\mathbf{k}_{2})G_{n-m}(\mathbf{q}_{m+1},\ldots,\mathbf{q} _{n})\Big{]}. \tag{6}\]
where \(\alpha\) and \(\beta\) are vertex functions associated with the non-linear terms in the coupled equations governing the fluid dynamics
\[\alpha(\mathbf{k}_{1},\mathbf{k}_{2})\equiv\frac{\mathbf{k}_{12}\cdot\mathbf{k}_{1}}{k_{1}^{2 }},\ \ \ \ \ \beta(\mathbf{k}_{1},\mathbf{k}_{2})\equiv\frac{k_{12}^{2}(\mathbf{k}_{1}\cdot\mathbf{k}_{2})} {2k_{1}^{2}k_{2}^{2}}\,, \tag{7}\]
and we have defined, \(\mathbf{k}_{12}\equiv\mathbf{k}_{1}+\mathbf{k}_{2}\). In particular, the first non-trivial kernels are
\[F_{2}(\mathbf{q}_{1},\mathbf{q}_{2}) =\frac{5}{7}+\frac{1}{2}\frac{\mathbf{q}_{1}\cdot\mathbf{q}_{2}}{q_{1}q_{ 2}}(\frac{q_{1}}{q_{2}}+\frac{q_{2}}{q_{1}})+\frac{2}{7}\frac{(\mathbf{q}_{1}\cdot \mathbf{q}_{2})^{2}}{q_{1}^{2}q_{2}^{2}}, \tag{8}\] \[G_{2}(\mathbf{q}_{1},\mathbf{q}_{2}) =\frac{3}{7}+\frac{1}{2}\frac{\mathbf{q}_{1}\cdot\mathbf{q}_{2}}{q_{1}q_{ 2}}(\frac{q_{1}}{q_{2}}+\frac{q_{2}}{q_{1}})+\frac{4}{7}\frac{(\mathbf{q}_{1}\cdot \mathbf{q}_{2})^{2}}{q_{1}^{2}q_{2}^{2}}. \tag{9}\]
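As an illustration (not part of the original text), the lowest-order kernels are easy to evaluate numerically; the following minimal Python sketch implements Eqs. (8)-(9) for arbitrary wavevectors, with function names of our own choosing.

```python
import numpy as np

def F2(q1, q2):
    """Second-order SPT density kernel F_2(q1, q2) of Eq. (8)."""
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    a1, a2 = np.linalg.norm(q1), np.linalg.norm(q2)
    mu = np.dot(q1, q2) / (a1 * a2)  # cosine of the angle between q1 and q2
    return 5/7 + 0.5 * mu * (a1/a2 + a2/a1) + (2/7) * mu**2

def G2(q1, q2):
    """Second-order SPT velocity-divergence kernel G_2(q1, q2) of Eq. (9)."""
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    a1, a2 = np.linalg.norm(q1), np.linalg.norm(q2)
    mu = np.dot(q1, q2) / (a1 * a2)
    return 3/7 + 0.5 * mu * (a1/a2 + a2/a1) + (4/7) * mu**2

# Example: two orthogonal modes of equal length give F2 = 5/7 and G2 = 3/7
print(F2([0.1, 0, 0], [0, 0.1, 0]), G2([0.1, 0, 0], [0, 0.1, 0]))
```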
The perturbation theory can be organized into Feynman diagrams. For this purpose, as is customary, we depict these relations as
\[\delta_{n}(\mathbf{k})=\;\big[\text{tree-level Feynman diagram: $n$ dashed external lines joined at a single $F_{n}$ (or $G_{n}$) vertex; figure not reproduced}\big] \tag{10}\]
where every dashed line must be read as a linear density perturbation \(\delta_{1}(\mathbf{q})\), and the vertices are given by appropriate kernels \(F_{n}\) or \(G_{n}\).
However, there is an alternative way of organizing the perturbation theory, which is sometimes more elucidating. Following Crocce and Scoccimarro (2006); Bernardeau (2013), we introduce a doublet field
\[\Psi_{a}=\left(\delta,-\frac{1}{\mathcal{H}}\theta\right)\,. \tag{11}\]
The equations of motion are
\[\frac{\partial}{\partial\eta}\Psi_{a}(\mathbf{k},\eta)+\Omega_{ab}(\eta)\Psi_{b}( \mathbf{k},\eta)=\int\frac{d^{3}\mathbf{k}_{1}}{(2\pi)^{3}}\frac{d^{3}\mathbf{k}_{2}}{(2 \pi)^{3}}\;\gamma_{abc}^{(s)}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2})\,\Psi_{b}(\mathbf{k}_{ 1},\eta)\Psi_{c}(\mathbf{k}_{2},\eta), \tag{12}\]
in which, \(\eta=\log a\). Besides, the vertices matrix \(\Omega_{a\,b}\) is
\[\Omega_{a\,b}=\left[\begin{array}{cc}0&-1\\ -3/2&1/2\end{array}\right]. \tag{13}\]
The non-vanishing components of the symmetrized vertex functions \(\gamma^{(s)}\) are
\[\gamma_{121}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}) = \delta^{3}(\mathbf{k}-\mathbf{k}_{1}-\mathbf{k}_{2})\,\alpha(\mathbf{k}_{1},\mathbf{k}_{2})/2\,, \tag{14}\] \[\gamma_{112}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}) = \delta^{3}(\mathbf{k}-\mathbf{k}_{1}-\mathbf{k}_{2})\,\alpha(\mathbf{k}_{2},\mathbf{k}_{1})/2\,,\] (15) \[\gamma_{222}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}) = \delta^{3}(\mathbf{k}-\mathbf{k}_{1}-\mathbf{k}_{2})\beta(\mathbf{k}_{1},\mathbf{k}_{2})\,. \tag{16}\]
for the \(\alpha\) and \(\beta\) defined in Eq. (7). A couple of field perturbations, lower-order in perturbation theory, can be mixed via the above vertices to build up a higher-order one.
### IR limit of the Perturbations
One knows that the perturbation theory equations hold for the scales well above the non-linear scale, \(k_{NL}\), or equivalently for the wave-numbers well below some cut-off \(\Lambda\). Moreover, any higher-order term is a convolution of linear perturbations with some kernels, so the momenta of the initial fields can be "hard," in the sense that they can lie close to this cut-off. We are generally interested in the correlation function of a set of "soft" modes, while in a complex Feynman diagram, some internal lines can be "hard." For the sake of clarity, we present these hard momenta with thick lines.
For a time-evolution diagram associated with a field perturbation, if all of the hard lines can be paired and contracted with each other, it is called a non-stochastic contribution. For instance, \(\delta_{3}\) with the following Feynman diagram
\[\text{non-stochastic diagram: }\big[\delta_{3}\text{ with a pair of hard internal lines of momenta }\mathbf{q},\,-\mathbf{q}\text{ contracted with each other; figure not reproduced}\big] \tag{17}\]
is a non-stochastic term in the field perturbation expansion. These contributions reflect how short-scale perturbations respond to the presence of long modes; they can be interpreted as the response, either linear or higher order, of short-scale physics to the large-scale fluctuations.
On the other hand, if the above does not occur, it corresponds to the short-scale perturbations coincidentally aligned together to make a long mode. For example, \(\delta_{2}\) given by the following diagram
\[\text{stochastic diagram: }\big[\delta_{2}\text{ built from two hard initial lines whose momenta sum to a soft mode; figure not reproduced}\big] \tag{18}\]
is the simplest example. Note that one can contract _all_ initial hard lines in a non-stochastic diagram and subsequently integrate over their momenta. These diagrams contribute a deterministic value to the higher-order matter perturbations. However, for the stochastic diagrams, one has to take expectation values involving many such diagrams to pair the initial lines. In this sense, these diagrams lead to non-deterministic contributions to the current value of the field fluctuations.
In particular for the standard dark matter scenario, when the absolute value of a pair of momenta is much larger than that of the rest, we have
\[F_{n}^{(s)}(\mathbf{k}_{1},\dots,\mathbf{k}_{n-2},\mathbf{q},-\mathbf{q})\propto k^{2}/q^{2} \tag{19}\]
where \(F_{n}^{(s)}\) is the \(F_{n}\) kernel symmetrized over its arguments, namely the incoming momenta, and \(k^{2}\equiv k_{1}^{2}+\dots+k_{n}^{2}\). It must be noted that \(G_{n}^{(s)}\) obeys the same scaling. For a non-stochastic field perturbation, correlated with a first-order field perturbation \(\delta_{1}\), we get
\[P_{\rm non-stochastic}(k)\propto k^{2} \tag{20}\]
Besides, in the case of the stochastic field perturbation, correlating two such terms, the scaling of the power with the soft external momentum is \(P_{\rm stochastic}\propto k^{4}\) (Abolhasani et al., 2016).
## 3 Fuzzy dark matter
Let us consider the following action for a real scalar field minimally coupled to the metric with canonical kinetic term and without self-interaction as below (see, Hui et al., 2017, for a discussion):
\[S=\int\frac{d^{4}x}{\hbar c^{2}}\sqrt{-g}\left[\frac{1}{2}g^{\mu\nu}\partial_ {\mu}\phi\partial_{\nu}\phi-\frac{1}{2}\frac{m^{2}c^{2}}{\hbar^{2}}\phi^{2} \right]\,. \tag{21}\]
Coherent oscillations of this field around the minimum of its potential will play the role of the dark matter in the Universe, where the \(m\) is the mass of FDM particles. In the non-relativistic limit, one can express \(\phi\) in terms of a complex field \(\psi\)
\[\phi=\sqrt{\frac{\hbar^{3}c}{2m}}\left(\psi^{*}e^{-imc^{2}t/\hbar}+\psi e^{+ imc^{2}t/\hbar}\right). \tag{22}\]
Now we substitute this definition into the Klein-Gordon equation for \(\phi\) and use the perturbed Friedmann-Robertson-Walker metric
\[ds^{2}=\left(1+\frac{2\Phi}{c^{2}}\right)c^{2}dt^{2}-a^{2}(t)\left(1-\frac{2 \Phi}{c^{2}}\right)d\mathbf{r}^{2}\,, \tag{23}\]
to arrive at the Schrodinger equation in an expanding universe
\[i\hbar\left(\dot{\psi}+\frac{3}{2}H\psi\right)=\left(-\frac{\hbar^{2}}{2m\,a ^{2}}\nabla^{2}+m\Phi\right)\psi\,. \tag{24}\]
Note that in finding the above equation, considering the non-relativistic limit, we assumed \(\dot{\psi}\ll mc^{2}|\psi|/\hbar\) and \(\ddot{\psi}\ll mc^{2}|\dot{\psi}|/\hbar\). Here, \(a\) is the scale factor, \(H\) is the Hubble parameter, and \(\Phi\) is the gravitational potential satisfying the Poisson's equation
\[\nabla^{2}\Phi=4\pi Ga^{2}\left(\rho-\bar{\rho}\right), \tag{25}\]
where \(\rho\) is the energy density of the scalar field, which in the non-relativistic limit is related to \(\psi\) by
\[\rho=m|\psi|^{2}, \tag{26}\]
and \(\bar{\rho}\) is its mean value. Hence, the Schrodinger equation combined with Poisson's equation ultimately determines the dynamics of FDM in the non-relativistic limit, called the _wave formulation_ of the FDM dynamics.
Sometimes it is convenient to use another formulation for describing the FDM dynamics, namely _the fluid formulation_; for example, when one is interested in the perturbation theory of FDM (Li et al., 2019). To this end, one can use the so-called Madelung transformations:
\[\psi\equiv\sqrt{\frac{\rho}{m}}e^{i\theta}\quad,\quad\mathbf{v}\equiv\frac{\hbar}{m\,a }\nabla\theta=\frac{\hbar}{2ima}\left(\frac{\nabla\psi}{\psi}-\frac{\nabla\psi^ {*}}{\psi^{*}}\right). \tag{27}\]
By the above substitution, the imaginary and real parts of the Schrodinger equation take the form of continuity and Euler equations, respectively
\[\dot{\rho}+3H\rho+\frac{1}{a}\nabla\cdot(\rho\mathbf{v}) =0 \tag{28}\] \[\dot{\mathbf{v}}+H\mathbf{v}+\frac{1}{a}(\mathbf{v}\cdot\nabla)\mathbf{v} =-\frac{1}{a}\nabla\Phi-\frac{\hbar^{2}}{2m^{2}\,a^{3}}\nabla p_{Q}, \tag{29}\]
where \(p_{Q}\) is the so-called quantum pressure that is responsible for the suppression of small-scale structures, given by
\[p_{Q}=-\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}}. \tag{30}\]
The continuity and Euler's equations, (28) - (29), together with the Poisson's equation,(25), determine the dynamics of FDM in the fluid formulation. But it should be noted that this formulation breaks down in the regions where FDM multi-streams occur, because in these regions the _single_ velocity in Eq. (27) is no longer well-defined (Uhlemann et al., 2014; Mocz et al., 2018).
### Perturbation Theory
Assuming the velocity field is irrotational, we rewrite Eqs. (28) and (29) in terms of the velocity divergence, \(\theta\), and the density contrast, \(\delta\); at the linear order, we find
\[\delta^{\prime}+\theta =0 \tag{31}\] \[\theta^{\prime}+\mathcal{H}\theta +\frac{3}{2}\mathcal{H}^{2}\delta =\frac{\hbar^{2}}{4m^{2}\,a^{2}}\nabla^{2}\nabla^{2}\delta \tag{32}\]
Figure 1: Slice-Projection plots of (a) CDM and (b) FDM (\(m_{22}=0.1\)) simulations, at \(z=0\), were performed using the Gadget-2 code with the proper initial conditions for each of them. They both have \(512^{3}\) particles and a \(250\,h^{-1}\,\mathrm{Mpc}\) box length. The projection slices’ thickness is \(1.25\,h^{-1}\,\mathrm{Mpc}\). To start with the _same_ realizations, we use the same random seed number to generate initial conditions. Consequently, only tiny visual differences can be found between the figures, resulting from the suppressed FDM transfer function. However, EFT could systematically parameterize these tiny differences on large scales.
We used the Poisson equation (25) to eliminate the gravitational potential \(\Phi\) in the second line. The coupled equations above are the same as the equations governing the dynamics of CDM perturbations, except for a pressure-like term in the Euler equation. At the linear order, the suppression of the FDM power spectrum relative to the CDM one, due to the so-called quantum pressure, can be characterized by a transfer function, shown below (Hu et al., 2000)
\[P_{F}(k,z)=\left[\frac{\mathcal{T}_{FDM}(k,z)}{\mathcal{T}_{CDM}(k,z)}\right]^{ 2}P_{C}(k,z)=\mathcal{T}^{2}(k,z)P_{C}(k,z). \tag{33}\]
The transfer function \(\mathcal{T}(k,z)\) is well approximated by a redshift-independent expression; in other words, it can be factorized into a growth function depending only on time and a time-independent transfer function \(\mathcal{T}(k)\).
\[\mathcal{T}(k)=\frac{\cos x^{3}}{1+x^{8}},\quad\text{where:}\quad x=1.61\times \left(\frac{m_{f}}{10^{-22}\,\text{eV}}\right)^{1/18}\times\frac{k}{k_{J}}\,. \tag{34}\]
in which the parameter \(k_{J}=9\times(m_{f}/10^{-22}\,\text{eV})^{1/2}\,\text{Mpc}^{-1}\) is the critical scale of Jeans wavenumber at matter-radiation equality.
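For concreteness, a minimal Python sketch of Eqs. (33)-(34) is given below; the function and variable names are ours, and the wavenumber is assumed to be in \(\mathrm{Mpc}^{-1}\) (without \(h\)), matching the units of \(k_{J}\) above.

```python
import numpy as np

def fdm_transfer(k, m22):
    """FDM transfer function T(k) of Eq. (34); k in Mpc^-1, m22 = m_FDM / 1e-22 eV."""
    k_J = 9.0 * np.sqrt(m22)              # Jeans wavenumber at equality, Mpc^-1
    x = 1.61 * m22**(1.0 / 18.0) * k / k_J
    return np.cos(x**3) / (1.0 + x**8)

def fdm_linear_power(k, P_cdm, m22):
    """Suppressed FDM linear power spectrum of Eq. (33): P_F(k) = T(k)^2 P_C(k)."""
    return fdm_transfer(k, m22)**2 * P_cdm
```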
Going to the Fourier space is more convenient when going beyond the linear perturbation theory. We find the continuity equation in the Fourier space as
\[\delta^{\prime}(\mathbf{k},\eta)+\theta(\mathbf{k},\eta)=\int\frac{d^{3}\mathbf{k}_{1}}{(2 \pi)^{3}}\frac{d^{3}\mathbf{k}_{2}}{(2\pi)^{3}}\,\alpha(\mathbf{k}_{1},\mathbf{k}_{2})\, \theta(\mathbf{k}_{1},\eta)\delta(\mathbf{k}_{2},\eta), \tag{35}\]
and the Euler's equation is
\[\theta^{\prime}(\mathbf{k},\eta)+ \mathcal{H}(\eta)\theta(\mathbf{k},\eta)+\frac{3}{2}\Omega_{m}(\eta) \mathcal{H}^{2}(\eta)\delta(\mathbf{k},\eta)=\] \[\int\frac{d^{3}\mathbf{k}_{1}}{(2\pi)^{3}}\frac{d^{3}\mathbf{k}_{2}}{(2 \pi)^{3}}\,\beta(\mathbf{k}_{1},\mathbf{k}_{2})\,\theta(\mathbf{k}_{1},\eta)\theta(\mathbf{k}_ {2},\eta)+\frac{\hbar^{2}}{2m^{2}a^{2}}\left[\nabla^{2}\left(\frac{\nabla^{2} \sqrt{1+\delta}}{\sqrt{1+\delta}}\right)\right]_{\mathbf{k}}. \tag{36}\]
To simplify the quantum pressure, after some algebra, for an arbitrary function \(f\), we find the following identity
\[\nabla^{2}\left(\frac{\nabla^{2}f}{f}\right)=\frac{\nabla^{2}(\nabla^{2}f)}{ f}-\frac{2\nabla f.\nabla(\nabla^{2}f)}{f^{2}}-\frac{(\nabla^{2}f)(\nabla^{2}f)}{f ^{2}}+\frac{2|\nabla f|^{2}\nabla^{2}f}{f^{3}} \tag{37}\]
Now, using the above equation for \(f=\sqrt{1+\delta}\), we get
\[\nabla^{2}\left(\frac{\nabla^{2}\sqrt{1+\delta}}{\sqrt{1+\delta}}\right)=\frac{1}{2}\nabla^{2}\nabla^{2}\delta-\frac{1}{2}\delta\,\nabla^{2}\nabla^{2}\delta-\frac{3}{2}(\nabla\delta)\cdot\nabla(\nabla^{2}\delta)-\frac{1}{2}(\nabla^{2}\delta)(\nabla^{2}\delta)-\frac{1}{2}(\nabla\nabla\delta)\cdot(\nabla\nabla\delta)+\mathcal{O}(\delta^{3}) \tag{38}\]
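This expansion can be checked symbolically; the following sketch (ours, in one spatial dimension with a test profile \(\delta(x)=\epsilon\sin x\)) verifies Eq. (38), including the linear \(\tfrac{1}{2}\nabla^{2}\nabla^{2}\delta\) piece, up to \(\mathcal{O}(\delta^{3})\).

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')
d = sp.sin(x)                      # test density-contrast profile
f = sp.sqrt(1 + eps * d)

# Left-hand side of Eq. (38) in 1D
lhs = sp.diff(sp.diff(f, x, 2) / f, x, 2)

# Right-hand side of Eq. (38) in 1D; the two (nabla^2 delta)^2-type terms coincide in 1D
rhs = (sp.Rational(1, 2) * eps * sp.diff(d, x, 4)
       - sp.Rational(1, 2) * eps**2 * d * sp.diff(d, x, 4)
       - sp.Rational(3, 2) * eps**2 * sp.diff(d, x) * sp.diff(d, x, 3)
       - eps**2 * sp.diff(d, x, 2)**2)

difference = sp.series(lhs, eps, 0, 3).removeO() - rhs
print(sp.simplify(difference))     # prints 0: the two sides agree up to O(delta^3)
```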
Again, the perturbation theory organized in doublet representation reads as
\[\frac{\partial}{\partial\eta}\Psi_{a}(\mathbf{k},\eta)+\Omega_{ab}( \eta)\Psi_{b}(\mathbf{k},\eta)=\int\frac{d^{3}\mathbf{k}_{1}}{(2\pi)^{3}}\frac{d^{3}\bm {k}_{2}}{(2\pi)^{3}}\,\gamma^{(s)}_{abc}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2})\,\Psi_{b }(\mathbf{k}_{1},\eta)\Psi_{c}(\mathbf{k}_{2},\eta), \tag{39}\] \[+\delta_{a\,2}\,\sum_{n=1}^{\infty}\int\left(\prod_{i=1}^{n}\frac {d^{3}\mathbf{k}_{i}}{(2\pi)^{3}}\right)\,\Gamma^{(s)}_{n}(\mathbf{k},\mathbf{k}_{1},.., \mathbf{k}_{n})\,\left(\prod_{i=1}^{n}\Psi_{1}(\mathbf{k}_{i},\eta)\right) \tag{40}\]
The vertices matrix \(\Omega_{a\,b}\) is slightly different, particularly due to a term representing the quantum pressure
\[\Omega_{a\,b}=\left[\begin{array}{cc}0&-1\\ -\frac{3}{2}\Omega_{m}(\eta)+\frac{\hbar^{2}\,k^{4}}{2m^{2}a^{2}(\eta) \mathcal{H}^{2}(\eta)}&1+\frac{\mathcal{H}^{\prime}(\eta)}{\mathcal{H}(\eta )}\end{array}\right]. \tag{41}\]
The non-vanishing components of the symmetrized vertex functions \(\gamma^{(s)}\) are
\[\gamma_{121}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}) =\delta^{3}(\mathbf{k}-\mathbf{k}_{1}-\mathbf{k}_{2})\,\alpha(\mathbf{k}_{1},\mathbf{k }_{2})/2\,, \tag{42}\] \[\gamma_{112}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}) =\delta^{3}(\mathbf{k}-\mathbf{k}_{1}-\mathbf{k}_{2})\,\alpha(\mathbf{k}_{2},\mathbf{ k}_{1})/2\,,\] (43) \[\gamma_{222}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2}) =\delta^{3}(\mathbf{k}-\mathbf{k}_{1}-\mathbf{k}_{2})\beta(\mathbf{k}_{1},\mathbf{k}_{ 2})\,. \tag{44}\]
for the \(\alpha\) and \(\beta\) defined in Eq. (7). As can be seen, the quantum pressure leads to an infinite number of new vertices, \(\Gamma_{n}(\mathbf{k},\mathbf{k}_{1},..\mathbf{k}_{n})\), which combine \(n\) density fields to an \(n\)-th order velocity field. The first few \(\Gamma_{n}\)s are
\[\Gamma_{2}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2})=\frac{-\hbar^{2}}{8m^{2}a^{2}H^{2}} \left[(k_{1}^{2}+k_{2}^{2})^{2}+3\,(\mathbf{k}_{1}.\mathbf{k}_{2})(k_{1}^{2}+k_{2}^{2 })+2\,(\mathbf{k}_{1}.\mathbf{k}_{2})^{2}\right] \tag{45}\]
After some algebra, we find that this new vertex modifies the \(F_{2}\) by the following amount
\[\tilde{F}_{2}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2})=\frac{-\hbar^{2}}{132m^{2}\,H^{2}} \left[(k_{1}^{2}+k_{2}^{2})^{2}+3\,(\mathbf{k}_{1}.\mathbf{k}_{2})(k_{1}^{2}+k_{2}^{2 })+2\,(\mathbf{k}_{1}.\mathbf{k}_{2})^{2}\right] \tag{46}\]
The above kernel must satisfy our expectation for double-softness in external momentum. The double softness directly results from local interaction and momentum conservation. Double softness is crucial if we ignore the UV details and study the problem within the EFT framework. In the limit of soft external momentum \(\mathbf{k}\) we find
\[\tilde{F}_{2}(\mathbf{k},\mathbf{k}_{1},\mathbf{k}_{2})\sim\frac{(q/a)^{4}}{132m^{2}\,H^{ 2}}\left(\frac{k^{2}}{q^{2}}\right) \tag{47}\]
One may worry that our argument fails since the factor in front of \((k/q)^{2}\) can grow arbitrarily large for large enough \(q\). However, note that this can happen only for \(q\) exceeding the Jeans momentum, for which the power spectrum is exponentially suppressed. As a matter of principle, there is no need to know about the UV in the EFT approach to the large-scale structure. In this sense, it is not necessary to use the "modified" SPT kernels, so we can ignore the UV details of FDM. The above argument justifies using the SPT kernels for the loop calculations.
## 4 The Cosmological Simulations
We compare the predictions for the matter power spectrum from the EFT of LSS in the cases of CDM and FDM. In particular, we use cosmological simulations to determine the EFT parameters of CDM and FDM at 1-loop order. These simulations must be performed on a large enough box to encompass the quasi-linear regime, namely \(k\sim 0.1\,h\,\mathrm{Mpc}^{-1}\). Available N-body codes, in which classical Newtonian forces determine the dynamics of particles, cannot be used as they stand for FDM simulation. Recently, several FDM cosmological simulation codes have been developed that use different approaches to follow the FDM dynamics (see Zhang et al., 2019). A primary class of these simulations solves the Schrodinger-Poisson equations for an expanding universe. Several works use the wave formulation to perform cosmological simulations (e.g. Schive et al., 2014; Li et al., 2019; Schwabe et al., 2020; May and Springel, 2021). In this sort of simulation, the small-scale fingerprints of FDM, like the solitonic cores and interference patterns, are neatly captured, but such simulations cannot reach large scales. Since the velocity defined in Eq. (27) is given by the gradient of the wave function's phase, it cannot exceed some maximum value, because the difference between the phase values of two neighboring grid points in the simulation is at most \(2\pi\). Accordingly, for simulations based on the wave formulation of the FDM, the grid sizes should not exceed the de Broglie wavelength (Li et al., 2019; May and Springel, 2021). Hence, large-box simulations require more and more computational resources. Today, the largest FDM simulations ever performed using the wave formulation have a box size of order \(\sim 10\,h^{-1}\,\mathrm{Mpc}\) and are reliable only down to \(z\sim 3\) (see, e.g. May and Springel, 2021). Therefore, the reachable box size of this sort of simulation is not yet sufficient to study quasi-linear scales, i.e., boxes of \(\gtrsim 200\,h^{-1}\,\mathrm{Mpc}\).
The second approach is to use the fluid formulation of the FDM dynamics. In this approach, employing Smoothed-Particle-Hydrodynamics (SPH) methods (Veltmaat and Niemeyer, 2016; Mocz and Succi, 2015; Nori and Baldi, 2018), one calculates the extra force due to the gradient of the QP term in Eq. (29) on the FDM particles in an N-body simulation. Although this approach cannot reproduce the interference patterns and has some intrinsic inaccuracies on small scales (see, e.g., Zhang et al., 2019), it is still suitable for studying large-scale structure formation. For instance, simulations with a box size of \(50\,h^{-1}\,\mathrm{Mpc}\) are performed in Zhang et al. (2018). However, larger simulations that could encompass quasi-linear scales are still beyond the reliability scope of these codes (Zhang et al., 2019).
Another alternative is to use the FDM initial conditions as the initial conditions for ordinary CDM cosmological simulation codes. It has been shown that this is a good approximation if we are interested in large-scale structure formation: the difference between the mass power spectra of a full FDM simulation and a simulation with only FDM initial conditions is well below the percent level (see, e.g. Nori and Baldi, 2018). As one approaches the scales \(\sim 1\,h\,\mathrm{Mpc}^{-1}\) and smaller, the difference becomes utterly negligible. This fact, along with CDM codes' ability to successfully simulate quasi-linear scales, makes this approach suitable for our current purpose.
We performed simulations using the publicly available _Gadget-2_ code. The initial conditions are generated with _N-GenIC_2, in which we have also implemented Eq. (34) to produce the suppressed FDM initial conditions. The cosmological parameters used in the simulations are \(\{\Omega_{m},\Omega_{b},\Omega_{\Lambda},h,n_{s},\sigma_{8}\}=\{0.295,0.0468,0.705,0.688,0.9676,0.835\}\). Simulations have \(512^{3}\) particles and a box size of \(250\,h^{-1}\,\mathrm{Mpc}\). We performed three FDM simulations with three different masses, namely \(m_{22}=0.1,\,0.4\,\mathrm{and}\,1.6\), where \(m_{22}\) is defined as \(m_{22}\equiv m_{\mathrm{FDM}}/10^{-22}\,\mathrm{eV}\). One needs about 28,000 core hours of computing resources to perform these simulations. For this purpose, we used the High-Performance Computing Center (HPC) machine at Sharif University of Technology.
Footnote 2: [https://gitlab.Mpcdf.mpg.de/rwein/ngenic](https://gitlab.Mpcdf.mpg.de/rwein/ngenic)
In Fig. (1), we compare the slice-projection plots of the CDM and FDM simulations with the smallest mass, at \(z=0\). Due to the same random seed number used to generate the initial conditions, the plots appear superficially similar, ensuring that we started with the _same_ realizations. Nevertheless, the tiny visual differences, which are encoded in the speed-of-sound parameter, are rooted in the FDM initial power spectrum and are further mixed by the subsequent non-linear dynamics.
Fig. (2) shows the matter power spectra of the FDM simulations at the initial redshift, i.e., \(z=99\). As expected, the FDM power spectrum deviates from CDM at larger scales for smaller masses. As discussed in Sec. 2.1, the suppression of the power spectrum on small scales at the initial redshift can slightly leak to larger scales at lower redshifts and change the speed of sound parameter.
## 5 Power Spectrum: Effective Field Theory
We adopt the effective field theory approach to give a theoretical prediction for the large-scale perturbations of the Universe's large-scale structure. Momentum conservation and locality of the short-scale dynamics guarantee that the short fluctuations can only affect longer-wavelength perturbations at order \(k^{2}\), regardless of whatever physics holds on the UV scale (Abolhasani et al., 2016). That is crucial for the effective field theory to give a viable description at large scales when either we do not precisely know the physics governing the UV scales or the UV physics is too complicated to be tracked. In this approach, the momentum integrals in stochastic and non-stochastic diagrams should be cut at a scale \(\Lambda\), so that higher-order field fluctuations become cut-off dependent. However, the observed physical quantities
Figure 2: The power spectra of FDM Simulations with different masses at the initial redshift, i.e., \(z=99\), normalized by CDM. The wave number in which FDM deviates from CDM is smaller for lower FDM masses.
do not depend on the cut-off we choose. These dependencies must be exactly canceled by the appropriate counterterms coming from integrating out the UV physics (namely, our ignorance of it) within the context of the effective field theory.
Without going much to the details, we begin by the one-loop EFT formula for the power spectrum (Foreman et al., 2016)
\[P_{\text{EFT-1-loop}}(k,z)=[D_{1}(z)]^{2}P_{11}(k)+[D_{1}(z)]^{4}P_{\text{1-loop }}(k)+P_{\text{tree}}^{(c_{s})}(k,z)\, \tag{48}\]
where
\[P_{\text{tree}}^{(c_{s})}(k,z)=-2(2\pi)c_{s(1)}^{2}(z)[D_{1}(z)]^{2}\frac{k^{2}}{k_{\rm NL}^{2}}P_{11}(k)\, \tag{49}\]
\(P_{\text{1-loop}}(k)\) is the one-loop correction to the linear power spectrum in SPT and \(c_{s(1)}^{2}\) is the so-called speed of sound which determines the magnitude of the counterterm introduced by EFT of LSS at the one-loop level. \(P_{\text{1-loop}}(k)\) is given by (Hertzberg, 2014)
\[P_{\text{1-loop}}(k)=P_{22}(k)+P_{13}(k)\,, \tag{50}\]
where
\[P_{22}(k)=\frac{k^{3}}{2\pi^{2}}\int\mathrm{d}r\,r^{2}\int\mathrm{d}xP_{11}(rk )P_{11}(k\sqrt{r^{2}-2rx+1})\left(\frac{7x+(3-10x^{2})r}{14r(r^{2}-2rx+1)} \right)^{2}\,, \tag{51}\]
and
\[\begin{split} P_{13}(k)=\frac{k^{3}}{252(2\pi)^{2}}P_{11}(k)\int \mathrm{d}r\,r^{2}P_{11}(kr)\Bigg{[}&\frac{12}{r^{4}}-\frac{158}{ r^{2}}+100-42r^{2}\\ &+\frac{3}{r^{5}}\left(7r^{2}+2\right)\left(r^{2}-1\right)^{3} \ln\left|\frac{1+r}{1-r}\right|\Bigg{]}\,.\end{split} \tag{52}\]
We calculated the linear power spectrum, \(P_{11}(k)\), using the CAMB code with the same cosmological parameters as our simulations.
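As an illustration of how Eq. (52) can be evaluated from such tabulated data, the following sketch (with names of our own choosing, assuming arrays `k_lin` and `P_lin` from a Boltzmann code) performs the one-dimensional \(P_{13}\) integral numerically.

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import interp1d

def make_P13(k_lin, P_lin):
    """Return a function P13(k) evaluating Eq. (52) for a tabulated linear spectrum."""
    P11 = interp1d(k_lin, P_lin, bounds_error=False, fill_value=0.0)

    def bracket(r):
        # Bracketed kernel of Eq. (52); the logarithmic piece vanishes smoothly at r = 1
        if abs(r - 1.0) < 1e-8:
            log_piece = 0.0
        else:
            log_piece = (3.0 / r**5) * (7.0*r**2 + 2.0) * (r**2 - 1.0)**3 \
                        * np.log(abs((1.0 + r) / (1.0 - r)))
        return 12.0/r**4 - 158.0/r**2 + 100.0 - 42.0*r**2 + log_piece

    def P13(k):
        integrand = lambda r: r**2 * P11(k * r) * bracket(r)
        val, _ = quad(integrand, k_lin[0]/k, k_lin[-1]/k, limit=200)
        return k**3 / (252.0 * (2.0*np.pi)**2) * P11(k) * val

    return P13
```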
Figure 3: The value of the one-loop speed of sound parameter obtained from our CDM simulation, when using different upper bounds for the fitting interval (\(k_{max}\)). The shaded blue region depicts the 2-\(\sigma\) error bars. As discussed in the text, the value of \(k_{max}\) at which \(c_{s(1)}^{2}\) first exits the error bars of the previous fits should be chosen as the appropriate upper bound (\(k_{\rm fit}\)).
### Comparing the speed of sound for CDM and FDM
In order to determine the best-fit values of \(c_{s(1)}^{2}\), we fit Eq. (48) to the non-linear power spectrum obtained from either simulations or observations. However, one does not know the best "fitting interval" from the theory a priori. Foreman et al. (2016) propose a systematic way to find the appropriate maximum wavenumber of the fitting interval, namely \(k_{\rm fit}\). In this procedure, by gradually increasing the maximum wavenumber of the fitting interval, \(k_{max}\), we get to a wave number at which the best-fit value of \(c_{s(1)}^{2}\) falls outside the error bars of the previous fits obtained for smaller \(k_{max}\). We use this wave number as \(k_{\rm fit}\). As shown in Fig. (3), following this procedure using the power spectrum of our simulations, we arrive at the value \(k_{\rm fit}=0.28\,h\,{\rm Mpc}^{-1}\). So, we use this value as the upper bound of the fitting intervals in the following calculations.
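A minimal sketch of a single fitting step is given below (the function and argument names are ours); at \(z=0\) the growth factor is normalised to \(D_{1}=1\), and the counterterm template follows Eq. (49).

```python
import numpy as np

def fit_cs2(k, P_sim, P11, P_1loop, k_max, D1=1.0, k_NL=2.0):
    """Best-fit one-loop sound speed c_s(1)^2 from Eqs. (48)-(49) on k <= k_max.

    P_sim, P11, P_1loop are evaluated on the same k grid (in h/Mpc); the result is
    in units of (k_NL / 2 h Mpc^-1)^2.
    """
    m = k <= k_max
    # Residual that the tree-level counterterm has to absorb
    y = P_sim[m] - D1**2 * P11[m] - D1**4 * P_1loop[m]
    # Counterterm template: -2 (2 pi) D1^2 (k / k_NL)^2 P11(k)
    x = -2.0 * (2.0*np.pi) * D1**2 * (k[m] / k_NL)**2 * P11[m]
    return np.sum(x * y) / np.sum(x * x)   # one-parameter linear least squares

# Scanning k_max and recording where the best-fit value first leaves the previous
# error bars reproduces the k_fit selection of Fig. 3.
```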
We determine the value of \(c_{s(1)}^{2}\) using our CDM and FDM N-body simulations and then compare them. The simulations' box size is \(L=250\,h^{-1}\,{\rm Mpc}\) with \(512^{3}\) particles. Since we perform finite-box, finite-resolution simulations, only a finite number of modes are at hand, so to calculate the integrals in Eqs. (51)-(52), we use the linear CAMB data for \(P_{11}(k)\) as an approximation. By choosing the fitting interval to be \(k\in[2\pi/L,0.28]\)\(h\,{\rm Mpc}^{-1}\), we get the best-fit value for the effective sound speed for the CDM simulation to be
\[c_{s(1)}^{2}=1.14\pm 0.15\left(k_{\rm NL}/(2\,h\,{\rm Mpc}^{-1})\right)^{2}. \tag{53}\]
This result agrees very well with that of previous studies (see, e.g. Senatore & Zaldarriaga, 2015; Carrasco et al., 2014; Foreman & Senatore, 2016). For instance, Foreman & Senatore (2016) found \(c_{s(1)}^{2}=1.05^{+0.05}_{-0.27}\left(k_{\rm NL}/(2\,h\,{\rm Mpc}^{-1})\right)^{2}\) for a universe with \(\sigma_{8}=0.81\). If we use the scaling relation \(c_{s(1)}^{2}\propto\sigma_{8}^{3.5}\) suggested in that work, this sound speed translates to \(c_{s(1)}^{2}=1.17^{+0.06}_{-0.30}\left(k_{\rm NL}/(2\,h\,{\rm Mpc}^{-1})\right)^{2}\) for a universe with the same \(\sigma_{8}\) as ours, showing complete agreement between the results.
Now we repeat the same procedure for our FDM simulations. Note that the linear power spectrum, \(P_{11}\), should be modified for the FDM case via Eq. (33); this is a good approximation for the scales in which we are interested. As argued in Sec. 3.1, on large scales one can use the CDM SPT kernels, Eqs. (51)-(52), for the FDM as well. The FDM one-loop power spectrum is shown in Fig. (4). The different contributions are depicted separately to compare their magnitudes on different scales. One can see that the contribution of the one-loop corrections dominates over that of the linear term at scales \(k\approx 0.3\,h\,{\rm Mpc}^{-1}\).
Figure 4: One-loop power spectrum for FDM and the comparison of its different contributing terms. We use the ordinary CDM SPT kernels, as justified in the text, and the Eq. (33) for the linear power spectrum, \(P_{11}(k)\).
Repeating the fitting procedure for the FDM simulation with mass \(m_{22}=0.1\), the best-fit speed of sound is found to be \(c_{s(1)}^{2}=1.18\pm 0.15\left(k_{\rm NL}/(2\,h\,{\rm Mpc}^{-1})\right)^{2}\). This result is 4% higher than the value we obtained for the CDM. Though within a confidence interval consistent with the CDM value, it suggests a slightly higher speed of sound, in line with our physical expectation.
Using FDM simulations with masses of \(m_{22}=0.4\) and \(m_{22}=1.6\), we get the values \(c_{s(1)}^{2}=1.16\pm 0.15\left(k_{\rm NL}/(2\,h\,{\rm Mpc}^{-1})\right)^{2}\) and \(c_{s(1)}^{2}=1.15\pm 0.15\left(k_{\rm NL}/(2\,h\,{\rm Mpc}^{-1})\right)^{2}\), respectively, exhibiting a decreasing trend with increasing FDM mass. By increasing the FDM mass, the speed of sound of the FDM simulations tends to that of the CDM simulation, since the higher the FDM mass, the smaller the scales at which structure formation is suppressed. In this sense, one expects the FDM matter power spectrum to be more and more similar to the CDM one on large scales. The results are listed in Table 1 and depicted in Fig. 5.
Fig. 6 compares the SPT and EFT predictions for the matter power spectrum, normalized by the power calculated from the simulation. As expected, one can push the theory's validity toward small scales using the EFT of LSS. While the SPT power spectrum deviates from the simulation by more than 2-\(\sigma\) at \(k\approx 0.20\,h\,{\rm Mpc}^{-1}\), the EFT predictions are consistent with the full non-linear simulation at 1-\(\sigma\) even at \(k\approx 0.54\,h\,{\rm Mpc}^{-1}\).
Figure 5: The value of the sound speed for variant FDM masses is shown here. The black line and the shaded area are the parameter value for the CDM simulation; 1-\(\sigma\) intervals are shown with bars.
## 6 Summary and Discussion
This paper compares the dark matter power spectrum prediction from the EFT of LSS with nonlinear measurements from an FDM simulation with a box size of \(250\,h^{-1}\,\mathrm{Mpc}\) and \(512^{3}\) particles. We focused on the effective sound speed to discriminate the CDM and FDM predictions in the quasi-linear regime. The difference is due to the back-reaction of the small-scale perturbations, which are sensitive to quantum effects, on the quasi-linear perturbations.
We found the speed of sound by fitting the EFT formula at one-loop order, Eq. (48), to the power spectrum of the CDM/FDM simulations. Table 1 lists the values of \(c_{s(1)}^{2}\) derived from the different simulations. The values of \(c_{s(1)}^{2}\) for the FDM simulations are a few percent higher than the CDM ones. This difference is more considerable for smaller FDM masses, as expected. These results imply the possibility of different large-scale statistics for CDM and FDM universes, which should be taken seriously.
Moreover, this suggests that other alternative dark matter models that predict suppression of small-scale structures, such as warm dark matter, may also show the same effect of increasing the EFT sound speed. This claim could be the subject of future studies.
Nevertheless, our findings inevitably suffer from some uncertainties. Firstly, we have not performed simulations that solve the FDM dynamical equations, since no FDM code is available for a large-box (cosmological) simulation. Instead, we have used only the FDM initial conditions, which, as discussed before, is not such a severe worry for large scales. In this work, we found an increase in the speed of sound parameter solely by changing the initial conditions, and we expect that turning on the FDM dynamics must lead to higher, if not the same, sound speeds.
Figure 6: EFT prediction of matter power spectrum at one-loop order for the CDM and FDM simulations with three different masses at \(z=0\). The grey-shaded regions depict the standard error of the simulations, and EFT error bars are shaded in blue. The fractional difference of SPT is also shown for comparison. As discussed in the text, the linear power spectrum used in SPT and EFT formula is the CAMB linear data. The plots are superficially identical since the differences are at the percent level.
As far as we know, this is the first attempt to examine the EFT of LSS for the FDM scenario. To compare CDM and FDM matter power at quasi-linear scales with high precision, one needs a simulation performed in a large-volume box with more particles, to give many more modes in k-space in the quasi-linear regime. That can reduce the uncertainties in the EFT predictions needed to discriminate FDM and CDM scenarios by studying the matter power at these scales. Nevertheless, our results for different FDM masses suggest that the difference between CDM and FDM sound speeds is physically meaningful, and so worth further investigation.
We would like to express our gratitude to Mohammad Hossein Namjoo for his valuable contributions and thought-provoking discussions throughout the project. We thank Leonardo Senatore, Mehrdad Mirbabayi, and Hossein Mos'hafi for valuable and insightful conversations. We acknowledge using _Wolfram Mathematica_ and the _YT-toolkit_ for the data analysis and making plots. The simulations were performed using the computational resources of the High-Performance Computing Center (HPC) at the Sharif University of Technology. We also thank Alireza Baraani, who kindly helped us use the Sharif HPC machine.
2309.16612 | Curvature of positive relative line modules over the quantum projective spaces | We show that the curvature of a positive relative line module over quantum projective space is given by a $q$-integer deformation of its classical curvature. This generalises a result of Majid for the Podle\'s sphere. | Andrey O. Krutov, Réamonn Ó Buachalla | 2023-09-28T17:12:33Z | http://arxiv.org/abs/2309.16612v1

# Curvature of positive relative line modules over the quantum projective spaces
###### Abstract.
We show that the curvature of a positive relative line module over quantum projective space is given by a \(q\)-integer deformation of its classical curvature. This generalises a result of Majid for the Podles sphere.
Key words and phrases: quantum groups, noncommutative geometry, quantum flag manifolds, complex geometry.

2020 Mathematics Subject Classification: 46L87, 81R60, 81R50, 17B37, 16T05.

A.K. was supported by the GACR Grant EXPRO 19-28628X. R.ÓB. was supported by the Charles University PRIMUS grant _Spectral Noncommutative Geometry of Quantum Flag Manifolds_ PRIMUS/21/SCI/026. This article is based upon work from COST Action CaLISTA CA21109 supported by COST (European Cooperation in Science and Technology, www.cost.eu).
## 1. Introduction
Let \(B\) be a \(*\)-algebra and \((\Omega^{\bullet},\mathrm{d})\) a differential \(*\)-calculus over \(B\). For a left \(B\)-module \(\mathcal{F}\), a _connection_ on \(\mathcal{F}\) is a linear map \(\nabla:\mathcal{F}\to\Omega^{1}\otimes_{B}\mathcal{F}\) satisfying
\[\nabla(bf)={\rm d}b\otimes f+b\nabla f, \text{for all }b\in B,f\in\mathcal{F}. \tag{1}\]
Any connection can be extended to a map \(\nabla:\Omega^{\bullet}\otimes_{B}\mathcal{F}\to\Omega^{\bullet}\otimes_{B} \mathcal{F}\) uniquely defined by
\[\nabla(\omega\otimes f)={\rm d}\omega\otimes f+(-1)^{|\omega|}\,\omega\wedge \nabla f,\]
where \(f\in\mathcal{F}\), and \(\omega\) is a homogeneous element of \(\Omega^{\bullet}\) of degree \(|\omega|\). The _curvature_ of a connection is the left \(B\)-module map \(\nabla^{2}:\mathcal{F}\to\Omega^{2}\otimes_{B}\mathcal{F}\). A connection is said to be _flat_ if \(\nabla^{2}=0\).
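For completeness, note that left \(B\)-linearity of the curvature follows directly from (1) and the extension formula above: for \(b\in B\) and \(f\in\mathcal{F}\),

\[\nabla^{2}(bf)=\nabla\big{(}\mathrm{d}b\otimes f+b\,\nabla f\big{)}=\mathrm{d}(\mathrm{d}b)\otimes f-\mathrm{d}b\wedge\nabla f+\mathrm{d}b\wedge\nabla f+b\,\nabla^{2}f=b\,\nabla^{2}f,\]

where the middle terms come from applying the extension formula to \(\mathrm{d}b\otimes f\) and to \(b\,\nabla f\) respectively, and \(\mathrm{d}^{2}=0\).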
#### 2.2.2. Complex Structures
In this subsection we recall the definition of a complex structure for a differential calculus, as introduced in [KLvS, BS1], see also [BM1]. This gives an abstract characterisation of the properties of the de Rham complex of a classical complex manifold [Huy].
**Definition 2.1**.: A _complex structure_\(\Omega^{(\bullet,\bullet)}\), for a differential \(*\)-calculus \((\Omega^{\bullet},{\rm d})\), is an \(\mathbb{N}_{0}^{2}\)-algebra grading \(\bigoplus_{(a,b)\in\mathbb{N}_{0}^{2}}\Omega^{(a,b)}\) for \(\Omega^{\bullet}\) such that, for all \((a,b)\in\mathbb{N}_{0}^{2}\):
\[\Omega^{k}=\bigoplus_{a+b=k}\Omega^{(a,b)},\qquad\big{(}\Omega^{(a,b)}\big{)}^ {*}=\Omega^{(b,a)},\qquad{\rm d}\Omega^{(a,b)}\subseteq\Omega^{(a+1,b)} \oplus\Omega^{(a,b+1)}.\]
An element of \(\Omega^{(a,b)}\) is called an \((a,b)\)_-form_. For \({\rm proj}_{\Omega^{(a+1,b)}}\), and \({\rm proj}_{\Omega^{(a,b+1)}}\), the projections from \(\Omega^{a+b+1}\) to \(\Omega^{(a+1,b)}\), and \(\Omega^{(a,b+1)}\) respectively, we write
\[\partial|_{\Omega^{(a,b)}}:={\rm proj}_{\Omega^{(a+1,b)}}\circ{ \rm d}, \overline{\partial}|_{\Omega^{(a,b)}}:={\rm proj}_{\Omega^{(a,b+1)}} \circ{\rm d}.\]
It follows from Definition 2.1 that for any complex structure,
\[{\rm d}=\partial+\overline{\partial}, \overline{\partial}\circ\partial=-\,\partial\circ\overline{ \partial}, \partial^{2}=\overline{\partial}^{2}=0.\]
Thus \(\big{(}\bigoplus_{(a,b)\in\mathbb{N}_{0}^{2}}\Omega^{(a,b)},\partial, \overline{\partial}\big{)}\) is a double complex. Both \(\partial\) and \(\overline{\partial}\) satisfy the graded Leibniz rule. Moreover,
\[\partial(\omega^{*})=\big{(}\overline{\partial}\omega\big{)}^{*}, \overline{\partial}(\omega^{*})=\big{(}\partial\omega\big{)}^{*}, \text{for all }\omega\in\Omega^{\bullet}.\]
Associated with any complex structure \(\Omega^{(\bullet,\bullet)}\) we have a second complex structure, called its _opposite complex structure_, defined as
\[\overline{\Omega}^{(\bullet,\bullet)}:=\bigoplus_{(a,b)\in\mathbb{N}_{0}^{2}} \overline{\Omega}^{(a,b)}, \text{where }\overline{\Omega}^{(a,b)}:=\Omega^{(b,a)}.\]
See [BM1, SS1] or [OB2] for a more detailed discussion of complex structures.
#### 2.2.3. Holomorphic Modules
In this subsection we present the notion a holomorphic left \(B\)-module for an algebra \(B\). Such a module should be thought of as a noncommutative holomorphic vector bundle, as has been considered in a number of previous papers, see for example [BS1], [PS], [KLvS], [DKOSS1], and [DKOSS2]. Indeed, the definition for holomorphic modules is motivated by the classical Koszul-Malgrange characterisation of holomorphic bundles [KM]. See [OBSvR] for a more detailed discussion.
With respect to a choice \(\Omega^{(\bullet,\bullet)}\) of complex structure on \(\Omega^{\bullet}\), a _\((0,1)\)-connection on \(\mathcal{F}\)_ is a connection with respect to the differential calculus \((\Omega^{(0,\bullet)},\overline{\partial})\).
**Definition 2.2**.: Let \((\Omega^{\bullet},\mathrm{d})\) be a differential \(*\)-calculus over a \(*\)-algebra \(B\), equipped with a complex structure \(\Omega^{(\bullet,\bullet)}\). A _holomorphic_ left \(B\)-module is a pair \((\mathcal{F},\overline{\partial}_{\mathcal{F}})\), where \(\mathcal{F}\) is a finitely generated projective left \(B\)-module, and \(\overline{\partial}_{\mathcal{F}}:\mathcal{F}\to\Omega^{(0,1)}\otimes_{B} \mathcal{F}\) is a flat \((0,1)\)-connection. We call \(\overline{\partial}_{\mathcal{F}}\) the _holomorphic structure_ of the holomorphic left \(B\)-module.
### Quantum Principal Bundles and Principal Connections
We say that a right \(H\)-comodule algebra \((P,\Delta_{R})\) is a _Hopf-Galois extension_ of \(B:=P^{\mathrm{co}(H)}\) if an isomorphism is given by
\[\mathrm{can}:=(m_{P}\otimes\mathrm{id})\circ(\mathrm{id}\otimes\Delta_{R}):P \otimes_{B}P\to P\otimes H,\qquad r\otimes s\mapsto rs_{(1)}\otimes s_{(2)},\]
where \(m_{P}\) denotes the multiplication in \(P\). It was shown [Brz, Proposition 3.6] that \(P\) is a Hopf-Galois extension of \(B=P^{\mathrm{co}(H)}\) if and only if an exact sequence is given by
\[0\longrightarrow P\Omega^{1}_{u}(B)P\smash{\mathop{\longrightarrow}\limits^ {\iota}}\Omega^{1}_{u}(P)\smash{\mathop{\longrightarrow}\limits^{\overline{ \mathrm{can}}}}P\otimes H^{+}\longrightarrow 0, \tag{2}\]
where \(\Omega^{1}_{u}(B)\) is the restriction of \(\Omega^{1}_{u}(P)\) to \(B\), we denote by \(\iota\) the inclusion, and \(\overline{\mathrm{can}}\) is the restriction to \(\Omega^{1}_{u}(P)\) of the map
\[(m_{P}\otimes\mathrm{id})\circ(\mathrm{id}\otimes\Delta_{R}):P\otimes P \to P\otimes H.\]
(Note that the map's domain of definition is \(P\otimes P\), rather than \(P\otimes_{B}P\).) The following definition, due to Brzezinski and Majid [BM2, BM3], presents sufficient criteria for the existence of a non-universal version of this sequence. A non-universal calculus on \(P\) is said to be _right \(H\)-covariant_ if the following, necessarily unique, map is well defined
\[\Delta_{R}:\Omega^{1}(P)\to\Omega^{1}(P)\otimes H,\qquad\qquad r\mathrm{d}s \mapsto r_{(0)}\mathrm{d}s_{(0)}\otimes r_{(1)}s_{(1)}.\]
**Definition 2.3**.: Let \(H\) be a Hopf algebra. A _quantum principal \(H\)-bundle_ is a pair \((P,\Omega^{1}(P))\), consisting of a right \(H\)-comodule algebra \((P,\Delta_{R})\), such that \(P\) is a Hopf-Galois extension of \(B=P^{\mathrm{co}(H)}\), together with a choice of right-\(H\)-covariant calculus \(\Omega^{1}(P)\), such that for \(N\subseteq\Omega^{1}_{u}(P)\) the corresponding sub-bimodule of the universal calculus, we have \(\overline{\mathrm{can}}(N)=P\otimes I\), for some Ad-sub-comodule right ideal
\[I\subseteq H^{+}:=\ker(\varepsilon:H\to\mathbb{C}),\]
where \(\operatorname{Ad}:H\to H\otimes H\) is defined by \(\operatorname{Ad}(h):=h_{(2)}\otimes S(h_{(1)})h_{(3)}\).
Denoting by \(\Omega^{1}(B)\) the restriction of \(\Omega^{1}(P)\) to \(B\), and \(\Lambda^{1}_{H}:=H^{+}/I\), the quantum principal bundle definition implies that an exact sequence is given by
\[0\longrightarrow P\Omega^{1}(B)P\smash{\mathop{\longrightarrow}\limits^{ \iota}}\Omega^{1}(P)\smash{\mathop{\longrightarrow}\limits^{\operatorname{ \overline{can}}}\limits}P\otimes\Lambda^{1}_{H}\longrightarrow 0, \tag{3}\]
where by abuse of notation, \(\operatorname{\overline{can}}\) denotes the map induced on \(\Omega^{1}(P)\) by identifying \(\Omega^{1}(P)\) as a quotient of \(\Omega^{1}_{u}(P)\) (for details see [Haj]).
A _principal connection_ for a quantum principal \(H\)-bundle \((P,\Omega^{1}(P))\) is a right \(H\)-comodule, left \(P\)-module, projection \(\Pi:\Omega^{1}(P)\to\Omega^{1}(P)\) satisfying
\[\ker(\Pi)=P\Omega^{1}(B)P.\]
The existence of a principal connection is equivalent to the existence of a left \(P\)-module, right \(H\)-comodule, splitting of the exact sequence given in (3). A principal connection \(\Pi\) is called _strong_ if \((\operatorname{id}-\Pi)\bigl{(}\operatorname{d}P\bigr{)}\subseteq\Omega^{1}( B)P\).
### Quantum Principal Bundles and Quantum Homogeneous Spaces
We now restrict to the case of a homogeneous quantum principal bundle, which is to say, a quantum principal bundle whose composite \(H\)-comodule algebra is a _quantum homogeneous space_\(B:=A^{\operatorname{co}(H)}\), given by a surjective Hopf algebra map \(\pi:A\to H\). For this special case, it is natural to restrict to calculi on \(A\) which are left \(A\)-covariant. Any such calculus \(\Omega^{1}(A)\) is an object in \({}^{A}_{A}\mathsf{Mod}_{A}\), and so, by the fundamental theorem of two-sided Hopf modules (Appendix A.2), we have the isomorphism
\[\operatorname{U}:\Omega^{1}(A)\simeq A\otimes F\bigl{(}\Omega^{1}(A)\bigr{)}.\]
As a direct calculation will verify, with respect to the right \(H\)-coaction
\[F(\Omega^{1}(A))\to F(\Omega^{1}(A))\otimes H,\qquad\qquad[\omega]\mapsto[ \omega_{(0)}]\otimes\pi(S(\omega_{(-1)}))\omega_{(1)},\]
the unit \(\operatorname{U}\) of the equivalence is a right \(H\)-comodule map. (Here the right \(H\)-coaction on \(A\otimes F(\Omega^{1}(A))\) is the usual tensor product coaction.) Thus a left \(A\)-covariant principal connection is equivalent to a choice of right \(H\)-comodule decomposition
\[F(\Omega^{1}(A))\simeq F(A\Omega^{1}(B)A)\oplus F(A\otimes\Lambda^{1}_{H}) \simeq F(A\Omega^{1}(B)A)\oplus\Lambda^{1}_{H}.\]
As established in [CMO], for a homogeneous quantum principal bundle with cosemisimple composite Hopf algebras, all principal connections are strong.
Next, we come to connections on any \(\mathcal{F}\in{}^{A}_{B}\mathsf{mod}_{0}\). Note first that we have a natural embedding
\[j:\Omega^{1}(B)\otimes_{B}\mathcal{F}\hookrightarrow\Omega^{1}(B)A\,\square_ {H}\Phi(\mathcal{F}),\qquad\qquad\omega\otimes f\mapsto\omega f_{(-1)}\otimes[ f_{(0)}].\]
We claim that a strong principal connection \(\Pi\) defines a connection \(\nabla\) on \(\mathcal{F}\) by
\[\nabla:\mathcal{F}\to\Omega^{1}(B)\otimes_{B}\mathcal{F},\qquad\qquad f\mapsto j ^{-1}\bigl{(}(\operatorname{id}-\Pi)(\operatorname{d}f_{(-1)})\otimes[f_{(0)} ]\bigr{)}.\]
Indeed, since \(\operatorname{d}\) and the projection \(\Pi\) are both right \(H\)-comodule maps, their composition \((\operatorname{id}-\Pi)\circ\operatorname{d}\) is a right \(H\)-comodule map. Hence the image of \((\operatorname{id}-\Pi)\circ\operatorname{d}\)
is contained in \(j\left(\Omega^{1}(B)\otimes_{B}\mathcal{F}\right)\), and \(\nabla\) defines a connection. Moreover, if the principal connection \(\Pi\) is a left \(A\)-comodule map, then the connection \(\nabla\) is also a left \(A\)-comodule map.
## 3. Curvature of Line Bundles over Quantum Projective Spaces
In this section we compute curvature of positive homogeneous line bundles over quantum projective space using the framework of quantum principal bundles.
### Quantum Projective Spaces
In this subsection we recall the definition of _quantum projective space_, which is to say the \(A\)-series irreducible quantum flag manifold \(\mathcal{O}_{q}(\mathbb{CP}^{n})\). For details and notation see [1, 1] or [1].
Fix \(q\in\mathbb{C}\) such that \(q\neq 0\) and \(q^{n}\neq 1\) for any integer \(n\geq 1\). The _quantised universal enveloping algebra_\(U_{q}(\mathfrak{sl}_{n+1})\) is the associative algebra generated by the elements \(E_{i}\), \(F_{i}\), \(K_{i}\), and \(K_{i}^{-1}\), for \(i=1,\ldots,n\), subject to the relations
\[K_{i}E_{j}K_{i}^{-1}=q^{2\delta_{i,j}-\delta_{i,j-1}-\delta_{i,j+1}}E_{j},\qquad K_{i}F_{j}K_{i}^{-1}=q^{-2\delta_{i,j}+\delta_{i,j-1}+\delta_{i,j+1}}F_{j},\qquad K_{i}K_{j}=K_{j}K_{i},\]
\[K_{i}^{-1}K_{i}=K_{i}K_{i}^{-1}=1,\qquad E_{i}F_{j}-F_{j}E_{i}=\delta_{i,j} \frac{K_{i}-K_{i}^{-1}}{q-q^{-1}},\]
together with the _quantum Serre relations_
\[E_{i}E_{j}=E_{j}E_{i},\qquad F_{i}F_{j}=F_{j}F_{i},\quad|i-j|\geq 2,\] \[E_{i}^{2}E_{i\pm 1}-(q+q^{-1})E_{i}E_{i\pm 1}E_{i}+E_{i\pm 1}E_{i}^ {2}=0,\] \[F_{i}^{2}F_{i\pm 1}-(q+q^{-1})F_{i}F_{i\pm 1}F_{i}+F_{i\pm 1}F_{i}^ {2}=0.\]
The algebra \(U_{q}(\mathfrak{sl}_{n+1})\) is a Hopf algebra with structure maps given by
\[\Delta E_{i}=E_{i}\otimes K_{i}+1\otimes E_{i},\quad\Delta F_{i}=F_{i}\otimes 1 +K_{i}^{-1}\otimes F_{i},\quad\Delta K_{i}^{\pm}=K_{i}^{\pm}\otimes K_{i}^{\pm},\]
\[S(E_{i})=-E_{i}K_{i}^{-1},\quad S(F_{i})=-K_{i}F_{i},\quad S(K_{i}^{\pm})=K_{ i}^{\mp},\]
\[\varepsilon(K_{i})=1,\quad\varepsilon(E_{i})=\varepsilon(F_{i})=0.\]
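As a consistency check of these structure maps (a routine computation, included here for convenience and not taken from the references above), the coproduct respects the relation \(E_{i}F_{i}-F_{i}E_{i}=(K_{i}-K_{i}^{-1})/(q-q^{-1})\): since \(E_{i}K_{i}^{-1}=q^{2}K_{i}^{-1}E_{i}\) and \(K_{i}F_{i}=q^{-2}F_{i}K_{i}\), the cross terms in \(\Delta(E_{i})\Delta(F_{i})-\Delta(F_{i})\Delta(E_{i})\) cancel, leaving

\[\Delta(E_{i})\Delta(F_{i})-\Delta(F_{i})\Delta(E_{i})=[E_{i},F_{i}]\otimes K_{i}+K_{i}^{-1}\otimes[E_{i},F_{i}]=\frac{K_{i}\otimes K_{i}-K_{i}^{-1}\otimes K_{i}^{-1}}{q-q^{-1}}=\Delta\Big{(}\frac{K_{i}-K_{i}^{-1}}{q-q^{-1}}\Big{)}.\]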
For \(q\in\mathbb{R}\), a Hopf \(*\)-algebra structure, called the _compact real form_ of \(U_{q}(\mathfrak{sl}_{n+1})\), is defined by
\[E_{i}^{*}:=K_{i}F_{i},\qquad F_{i}^{*}:=E_{i}K_{i}^{-1},\qquad K_{i}^{*}:=K_{ i}.\]
and denoted by \(U_{q}(\mathfrak{su}_{n+1})\).
Let \(V\) be a finite-dimensional left \(U_{q}(\mathfrak{sl}_{n+1})\)-module, \(v\in V\), \(f\in V^{*}\). Consider the function \(c_{f,v}^{V}\colon U_{q}(\mathfrak{sl}_{n+1})\to\mathbb{C}\) defined by \(c_{f,v}^{V}(X):=f(Xv)\), for \(X\in U_{q}(\mathfrak{sl}_{n+1})\). The _coordinate space_ is the subspace
\[C(V):=\operatorname{span}_{\mathbb{C}}(c_{f,v}^{V}\mid v\in V,f\in V^{*}) \subseteq U_{q}(\mathfrak{sl}_{n+1})^{\circ},\]
where \(U_{q}(\mathfrak{sl}_{n+1})^{\circ}\) denote the dual Hopf algebra of \(U_{q}(\mathfrak{sl}_{n+1})\). A \(U_{q}(\mathfrak{sl}_{n+1})\)-bimodule structure on \(C(V)\) is given by
\[(Yc_{f,v}^{V}Z)(X):=f(YXZv)=c_{fZ,Yv}^{V}(X)\]
Let \(\mathcal{P}_{+}\) be the set of dominant integral weights of \(\mathfrak{sl}_{n+1}\). For \(\lambda\in\mathcal{P}_{+}\), denote by \(V_{\lambda}\) the irreducible \(U_{q}(\mathfrak{sl}_{n+1})\)-module with highest weight \(\lambda\). It is easily checked that the subspace
\[\mathcal{O}_{q}(SU_{n+1}):=\bigoplus_{\mu\in\mathcal{P}_{+}}C(V_{\mu})\]
is a Hopf \(*\)-subalgebra of \(U_{q}(\mathfrak{su}_{n+1})^{\circ}\).
The quantum Levi subalgebra corresponding to the quantum projective space is defined by
\[U_{q}(\mathfrak{l}):=\left\langle K_{i},E_{j},F_{j}\,|\,i=1,\ldots,n;\;j=2,\ldots,n\right\rangle\subseteq U_{q}(\mathfrak{sl}_{n+1}).\]
The Hopf \(*\)-algebra embedding \(\iota\colon U_{q}(\mathfrak{l})\to U_{q}(\mathfrak{su}_{n+1})\) induces a dual Hopf algebra map \(\iota^{\circ}\colon U_{q}(\mathfrak{su}_{n+1})^{\circ}\to U_{q}( \mathfrak{l})^{\circ}\). By construction \(\mathcal{O}_{q}(SU_{n+1})\subset U_{q}(\mathfrak{su}_{n+1})^{\circ}\), so we can define the restriction Hopf algebra map
\[\pi:=\iota^{\circ}|_{\mathcal{O}_{q}(SU_{n+1})}:\mathcal{O}_{q}(SU_{n+1})\to U_{q}(\mathfrak{l})^{\circ}.\]
Motivated by the classical situation, we denote \(\mathcal{O}_{q}(U_{n}):=\pi\left(\mathcal{O}_{q}(SU_{n+1})\right)\) (see [11, OB1]). Clearly, \(\mathcal{O}_{q}(U_{n})\) is a Hopf \(*\)-algebra. The quantum homogeneous space
\[\mathcal{O}_{q}(\mathbb{CP}^{n}):=\mathcal{O}_{q}(SU_{n+1})^{\mathrm{co} \mathcal{O}_{q}(U_{n})}\]
corresponding to the surjective Hopf \(*\)-algebra map \(\pi\) is called _quantum projective space_.
### Line Bundles over Quantum Projective Space
The quantum homogeneous space \(\mathcal{O}_{q}(SU_{n+1}/SU_{n})\) is the invariant subspace of \(\mathcal{O}_{q}(SU_{n+1})\) with respect to the action of the Hopf subalgebra
\[U_{q}(\mathfrak{sl}_{n}):=\left\langle K_{i},E_{i},F_{i}\,|\,i=2,\ldots,n\right\rangle\subseteq U_{q}(\mathfrak{sl}_{n+1}).\]
In this special case, the quantum space is usually denoted by \(\mathcal{O}_{q}(S^{2n+1})\) and called the _\((2n+1)\)-dimensional quantum sphere_.
Since every finite-dimensional representation of \(\mathfrak{sl}_{n+1}\) is contained in some tensor power of the first fundamental representation \(V_{\varpi_{1}}\) of \(\mathfrak{sl}_{n+1}\), the matrix coefficients of \(V_{\varpi_{1}}\) generate \(\mathcal{O}_{q}(SU_{n+1})\) as an algebra. In particular, we can choose a weight basis \(\{v_{j}\}_{j=1}^{n+1}\) of \(V_{\varpi_{1}}\) such that the matrix coefficients \(u_{j}^{i}:=c_{f_{i},v_{j}}^{\varpi_{1}}\), for \(i,j=1,\ldots,n+1\), coincide with the well-known FRT-presentation of \(\mathcal{O}_{q}(SU_{n+1})\), see [12] or [13, §9] for details.
With respect to this presentation, the quantum sphere \(\mathcal{O}_{q}(S^{2n+1})\) is generated as an algebra by the elements
\[z_{i}:=u_{1}^{i},\qquad\text{and}\qquad\overline{z}_{i}:=S(u_{i}^{1}),\qquad \text{for }i=1,\ldots,n.\]
The central element \(Z\) in \(U_{q}(\mathfrak{l})\), see the discussion in [1, §4.4], is explicitly given by
\[Z=K_{1}^{n}K_{2}^{n-1}\cdot\ldots\cdot K_{n}.\]
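As a quick check (a standard weight computation, spelled out here for convenience), centrality of \(Z\) in \(U_{q}(\mathfrak{l})\) follows from the \(K_{i}E_{j}K_{i}^{-1}\) relations above: writing \(Z=\prod_{i=1}^{n}K_{i}^{\,n+1-i}\), one finds, for \(j=2,\ldots,n\),

\[ZE_{j}Z^{-1}=q^{\,2(n+1-j)-(n+2-j)-(n-j)}E_{j}=E_{j},\qquad ZF_{j}Z^{-1}=F_{j},\]

while \(ZE_{1}Z^{-1}=q^{\,n+1}E_{1}\) and \(ZF_{1}Z^{-1}=q^{-(n+1)}F_{1}\). Thus \(Z\) commutes with all the generators of \(U_{q}(\mathfrak{l})\) but acts non-trivially on \(E_{1}\) and \(F_{1}\).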
In terms of the \(\mathbb{Z}\)-grading induced by the action of \(Z\),
\[\mathcal{O}_{q}(S^{2n+1})=\bigoplus_{k\in\mathbb{Z}}\mathcal{E}_{k},\]
the elements \(z_{i}\) have degree \(1\), while the elements \(\bar{z}_{j}\) have degree \(-1\). Thus, for every \(k\in\mathbb{N}_{0}\), as objects
\[\mathcal{E}_{k},\,\mathcal{E}_{-k}\in\overset{\mathcal{O}_{q}(SU_{n+1})}{ \mathcal{O}_{q}(\mathbb{CP}^{n})}\mathsf{mod}_{0}\]
the line bundle \(\mathcal{E}_{k}\) is generated by the element \(z_{1}^{k}\), while \(\mathcal{E}_{-k}\) is generated by the element \(\overline{z}_{1}^{k}\).
Finally, for any \(k\in\mathbb{N}\), we find it convenient to consider the element \(v_{\pm k}\in\Phi(\mathcal{E}_{\pm k})\), uniquely defined by
\[\mathrm{U}(e)=e\otimes v_{\pm k},\qquad\qquad\qquad\text{ for all }e\in\mathcal{E}_{\pm k}. \tag{4}\]
As is readily confirmed, \(v_{k}=[z_{1}^{k}]\), for all \(k\in\mathbb{N}\).
The irreducible quantum flag manifolds (and, in particular, quantum projective spaces) are distinguished by the existence of an essentially unique \(q\)-deformation of their classical de Rham complexes. The existence of such a canonical deformation is one of the most important results in the noncommutative geometry of quantum groups, establishing it as a solid base from which to investigate more general classes of quantum spaces. The following theorem is a direct consequence of results established in [11], [12].
**Theorem 3.1**.: _Over any quantum projective space \(\mathcal{O}_{q}(\mathbb{CP}^{n})\), there exists a unique finite-dimensional left \(\mathcal{O}_{q}(SU_{n+1})\)-covariant differential \(*\)-calculus_

\[\Omega_{q}^{\bullet}(\mathbb{CP}^{n})\in\overset{\mathcal{O}_{q}(SU_{n+1})}{\mathcal{O}_{q}(\mathbb{CP}^{n})}\mathsf{mod}_{0},\]
_of classical dimension, that is to say, satisfying_
\[\dim\Phi\big{(}\Omega_{q}^{k}(\mathbb{CP}^{n})\big{)}=\binom{2M}{k},\qquad \qquad\text{ for all }\,k=0,\dots,2M,\]
_where \(M\) is the complex dimension of the corresponding classical manifold \(\mathbb{CP}^{n}\)._
The calculus \(\Omega_{q}^{\bullet}(\mathbb{CP}^{n})\), which we refer to as the _Heckenberger-Kolb calculus_ of \(\mathcal{O}_{q}(\mathbb{CP}^{n})\), has many remarkable properties. We recall here only the existence of a unique covariant complex structure, following from the results of [11], [12], and [13].
### A Quantum Principal Bundle Presentation of the Chern Connection of \(\mathcal{O}_{q}(\mathbb{CP}^{n})\)
In this subsection we recall the quantum principal bundle description of the Heckenberger-Kolb calculus introduced in [13]. The constituent calculus \(\Omega_{q}^{1}(SU_{n+1})\) of the quantum principal bundle was originally constructed as a distinguished quotient of the standard bicovariant calculus on \(\mathcal{O}_{q}(SU_{n+1})\); see [14, Maj1]. Here we confine ourselves to those properties of the calculus which are relevant to our calculations below, and refer the interested reader to [13, §4].
We stress that the calculus is far from being a natural \(q\)-deformation of the space of differential \(1\)-forms of \(SU_{n+1}\); instead, it should be considered as a convenient tool for performing explicit calculations.
The calculus is left \(\mathcal{O}_{q}(SU_{n+1})\)-covariant, right \(\mathcal{O}_{q}(U_{n})\)-covariant, and restricts to the Heckenberger-Kolb calculus \(\Omega^{1}_{q}(\mathbb{CP}^{n})\). Thus it gives us a quantum principal bundle presentation of \(\Omega^{1}_{q}(\mathbb{CP}^{n})\), with associated short exact sequence
\[0\to\mathcal{O}_{q}(SU_{n+1})\Omega^{1}_{q}(\mathbb{CP}^{n})\mathcal{O}_{q}(SU_{n+1})\xrightarrow{\;\iota\;}\Omega^{1}_{q}(SU_{n+1})\xrightarrow{\;\overline{\mathrm{can}}\;}\mathcal{O}_{q}(SU_{n+1})\otimes\Lambda^{1}_{\mathcal{O}_{q}(U_{n})}\to 0.\]
Since the calculus is left \(\mathcal{O}_{q}(SU_{n+1})\)-covariant, it is an \(\mathcal{O}_{q}(SU_{n+1})\)-Hopf module. A basis of \(F(\Omega^{1}_{q}(SU_{n+1}))\) is given by
\[e^{+}_{i}:=[\mathrm{d}u^{i+1}_{1}],\qquad e^{0}:=[\mathrm{d}u^{1}_{1}],\qquad e^{-}_{i}:=[\mathrm{d}u^{1}_{i+1}],\qquad\text{ for }i=1,\dots,n.\]
Moreover, \([\mathrm{d}u^{i}_{j}]=0\), if both \(i,j\neq 1\). Let us now denote
\[\Lambda^{(1,0)}:=\text{span}_{\mathbb{C}}\{e^{+}_{i}\,|\,i=1,\dots,n\},\qquad \Lambda^{(0,1)}:=\text{span}_{\mathbb{C}}\{e^{-}_{i}\,|\,i=1,\dots,n\}.\]
The space \(\Lambda^{(1,0)}\oplus\Lambda^{(0,1)}\) is a right \(\mathcal{O}_{q}(SU_{n+1})\)-sub-module of \(F\big{(}\Omega^{1}_{q}(SU_{n+1})\big{)}\). Explicitly, its right \(\mathcal{O}_{q}(SU_{n+1})\)-module structure is given by
\[e^{\pm}_{i}\triangleleft u^{k}_{k}=q^{\delta_{i+1,k}+\delta_{1k}-2/(n+1)}e^{\pm }_{i},\qquad\qquad e^{\pm}_{i}\triangleleft u^{k}_{l}=0,\quad\text{ for all }k\neq l. \tag{5}\]
It is important to note that the subspace \(\mathbb{C}e^{0}\) is _not_ an \(\mathcal{O}_{q}(SU_{n+1})\)-sub-module of \(F\big{(}\Omega^{1}_{q}(SU_{n+1})\big{)}\), nor is it even an \(\mathcal{O}_{q}(S^{2n+1})\)-sub-module. However, as shown in [1, Proposition 6.2], it _is_ a sub-module over \(\mathbb{C}\langle z_{1}\rangle\), the \(*\)-sub-algebra of \(\mathcal{O}_{q}(S^{2n+1})\) generated by \(z_{1}\).
It follows from the results of [1, SS5] that
\[F\left(\mathcal{O}_{q}(SU_{n+1})\Omega^{1}_{q}(\mathbb{CP}^{n})\mathcal{O}_{q} (SU_{n+1})\right)=\Lambda^{(1,0)}\oplus\Lambda^{(0,1)}.\]
Moreover, a decomposition of right \(\mathcal{O}_{q}(U_{n})\)-comodules is given by
\[F\left(\Omega^{1}_{q}(SU_{n+1})\right)=\Lambda^{(1,0)}\oplus\Lambda^{(0,1)} \oplus\mathbb{C}e^{0}.\]
Thus we have a left \(\mathcal{O}_{q}(SU_{n+1})\)-covariant strong principal connection \(\Pi\), uniquely defined by
\[F(\Pi):F\big{(}\Omega^{1}_{q}(SU_{n+1})\big{)}\to\mathbb{C}e^{0}.\]
For an arbitrary covariant vector bundle \(\mathcal{F}\), let us now look at the associated connection
\[\nabla:\mathcal{F}\to\Omega^{1}_{q}(\mathbb{CP}^{n})\otimes_{\mathcal{O}_{q} (\mathbb{CP}^{n})}\mathcal{F}\]
associated to \(\Pi\). The linear map
\[\overline{\partial}_{\mathcal{F}}:=(\text{proj}_{\Omega^{(0,1)}}\otimes\text {id})\circ\nabla:\mathcal{F}\to\Omega^{(0,1)}\otimes_{\mathcal{O}_{q}(\mathbb{ CP}^{n})}\mathcal{F}\]
is a \((0,1)\)-connection. Moreover, we have an analogously defined \((1,0)\)-connection for \(\mathcal{F}\), which we denote by \(\partial_{\mathcal{F}}\). Consider next the obvious linear projections
\[\Pi^{(1,0)}:F\big{(}\Omega^{1}_{q}(SU_{n+1})\big{)}\to\Lambda^{(1,0)},\qquad \quad\Pi^{(0,1)}:F\big{(}\Omega^{1}_{q}(SU_{n+1})\big{)}\to\Lambda^{(0,1)}. \tag{6}\]
In terms of these operators, we have the following useful formulae:
\[\partial_{\mathcal{F}} =j^{-1}\circ((\Pi^{(1,0)}\circ\mathrm{d})\otimes\mathrm{id})\circ \mathrm{U},\] \[\overline{\partial}_{\mathcal{F}} =j^{-1}\circ((\Pi^{(0,1)}\circ\mathrm{d})\otimes\mathrm{id})\circ \mathrm{U}.\]
For the special case of the covariant line bundles, it follows from the uniqueness of \((0,1)\)-connections, presented in [1, Theorem 4.5], that \(\overline{\partial}_{\mathcal{E}_{k}}\) is equal to the holomorphic structure of \(\mathcal{E}_{k}\), justifying the choice of notation. We have an analogous result for the \((1,0)\)-connection \(\partial_{\mathcal{E}_{k}}\). Thus \(\nabla=\partial_{\mathcal{E}_{k}}+\overline{\partial}_{\mathcal{E}_{k}}\) is equal to the Chern connection of \(\mathcal{E}_{k}\).
### Chern Curvature of the Positive Line Bundles of \(\mathcal{O}_{q}(\mathbb{CP}^{n})\)
In this subsection we explicitly calculate the curvature of the positive line bundles over quantum projective space. We begin with the following technical lemma.
**Lemma 3.2**.: _It holds that, for all \(k\in\mathbb{N}\),_
\[\Pi^{(1,0)}\circ\mathrm{d}(z_{1}^{k})=(k)_{q^{2/(n+1)}}\big{(}\Pi^{(1,0)} \circ\mathrm{d}(z_{1})\big{)}z_{1}^{k-1}.\]
Proof.: We will prove the formula using induction. For \(k=1\), the formula is trivially satisfied. For \(k=2\), we see that
\[\mathrm{U}\big{(}\Pi^{(1,0)}\circ\mathrm{d}(z_{1})z_{1}\big{)} =\,\mathrm{U}\big{(}\Pi^{(1,0)}\circ\mathrm{d}(u_{1}^{1})\big{)}u _{1}^{1}\] \[=\,\left(\sum_{a=2}^{n}u_{a}^{1}\otimes[\mathrm{d}(u_{1}^{a})] \right)u_{1}^{1}\] \[=\,\sum_{b=1}^{n}\sum_{a=2}^{n}u_{a}^{1}u_{1}^{b}\otimes[\mathrm{ d}(u_{1}^{a})u_{b}^{1}].\]
Recalling the identities in (5) and the definition of \(\Pi^{(1,0)}\) given in (6), we see that
\[\sum_{b=1}^{n}\sum_{a=2}^{n}u_{a}^{1}u_{1}^{b}\otimes[\mathrm{d}(u_{1}^{a})u_ {b}^{1}]=\sum_{a=2}^{n}u_{a}^{1}u_{1}^{1}\otimes[\mathrm{d}(u_{1}^{a})u_{1}^{ 1}]=\,\,q^{1-\frac{2}{n+1}}\sum_{a=2}^{n}u_{a}^{1}u_{1}^{1}\otimes[\mathrm{d}( u_{1}^{a})].\]
The commutation relations of \(\mathcal{O}_{q}(SU_{n+1})\) tell us that \(u_{a}^{1}u_{1}^{1}=q^{-1}u_{1}^{1}u_{a}^{1}\) (see, for example, [2, SS1] or [1, SS9.2] for details). Thus
\[q^{1-\frac{2}{n+1}}\sum_{a=2}^{n}u_{a}^{1}u_{1}^{1}\otimes[\mathrm{ d}u_{1}^{a}] =q^{-\frac{2}{n+1}}u_{1}^{1}\sum_{a=2}^{n}u_{a}^{1}\otimes[\mathrm{ d}u_{1}^{a}]\] \[=q^{-\frac{2}{n+1}}\mathrm{U}\,\big{(}z_{1}\,\Pi^{(1,0)}\circ \mathrm{d}(z_{1})\big{)}.\]
Hence we see that \(z_{1}\big{(}\Pi^{(1,0)}\circ\mathrm{d}(z_{1})\big{)}=q^{\frac{2}{n+1}}\left( \Pi^{(1,0)}\circ\mathrm{d}(z_{1})\right)z_{1}\).
Let us now assume that the formula holds for \(k-1\). By the Leibniz rule
\[\Pi^{(1,0)}\circ\mathrm{d}(z_{1}^{k})=\Pi^{(1,0)}\big{(}(\mathrm{d}z_{1}^{k-1})z_{1}+z_{1}^{k-1}\mathrm{d}z_{1}\big{)}\,.\]
Since \(\Pi^{(1,0)}\) is a left \(\mathcal{O}_{q}(SU_{n+1})\)-module map, it must hold that
\[\Pi^{(1,0)}\big{(}(\mathrm{d}z_{1}^{k-1})z_{1}+z_{1}^{k-1}\mathrm{d}z_{1}\big{)}= \Pi^{(1,0)}\big{(}\mathrm{d}z_{1}^{k-1}z_{1}\big{)}+z_{1}^{k-1}\Pi^{(1,0)}\,( \mathrm{d}z_{1})\,.\]
Moreover, since \(\mathbb{C}e^{0}\) is a \(\mathbb{C}\langle z_{1}\rangle\)-sub-module of \(F\big{(}\Omega_{q}^{1}(SU_{n+1})\big{)}\), the projection \(\Pi^{(1,0)}\) must be a right \(\mathbb{C}\langle z_{1}\rangle\)-module map. Thus we see that
\[\Pi^{(1,0)}\big{(}\mathrm{d}z_{1}^{k-1}z_{1}\big{)}+z_{1}^{k-1}\Pi^{(1,0)}\,( \mathrm{d}z_{1})=\Pi^{(1,0)}\big{(}\mathrm{d}z_{1}^{k-1}\big{)}\,z_{1}+z_{1}^ {k-1}\Pi^{(1,0)}\,(\mathrm{d}z_{1})\,.\]
Using our inductive assumption, we can reduce this expression to
\[(k-1)_{q^{2/(n+1)}}\Pi^{(1,0)}(\mathrm{d}z_{1})\,z_{1}^{k-1}+q^{2(k-1)/(n+1)} \Pi^{(1,0)}(\mathrm{d}z_{1})z_{1}^{k-1}.\]
By the definition of the quantum integer, this in turn reduces to
\[(k)_{q^{2/(n+1)}}\Pi^{(1,0)}(\mathrm{d}z_{1})\,z_{1}^{k-1}.\]
The claimed formula now follows by induction.
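Since the inductive step rests entirely on the elementary quantum-integer recursion \((k)_{b}=(k-1)_{b}+b^{k-1}\), with \(b\) playing the role of \(q^{2/(n+1)}\), the following short sympy sketch (purely illustrative, not part of the original argument) verifies that recursion symbolically for the first few values of \(k\).

```python
# Illustrative sympy check of the recursion (k)_b = (k-1)_b + b**(k-1)
# used in the inductive step of Lemma 3.2, with b standing for q^{2/(n+1)}.
import sympy as sp

b = sp.symbols('b', positive=True)

def qint(k, b):
    """Quantum integer (k)_b = 1 + b + ... + b**(k-1)."""
    return sum(b**j for j in range(k))

for k in range(1, 8):
    lhs = qint(k, b)
    rhs = qint(k - 1, b) + b**(k - 1)
    assert sp.simplify(lhs - rhs) == 0
print("recursion (k)_b = (k-1)_b + b^(k-1) verified for k = 1,...,7")
```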
**Theorem 3.3**.: _For any positive line bundle \(\mathcal{E}_{k}\) over quantum projective space \(\mathcal{O}_{q}(\mathbb{CP}^{n})\), it holds that_
\[\nabla^{2}(e)=-(k)_{q^{-2/(n+1)}}\,\mathbf{i}\kappa\otimes e,\qquad\text{ for all }e\in\mathcal{E}_{k},\]
_where we have chosen the unique Kähler form \(\kappa\) satisfying_
\[\nabla^{2}(e)=-\mathbf{i}\kappa\otimes e,\qquad\text{ for all }e\in\mathcal{E}_{1}. \tag{7}\]
Proof.: We are free to calculate the curvature of \(\mathcal{E}_{k}\) by letting \(\nabla^{2}\) act on any non-zero element of \(\mathcal{E}_{k}\). The element \(z_{1}^{k}\) presents itself as a convenient choice since \(\overline{\partial}_{\mathcal{E}_{k}}(z_{1}^{k})=0\), as proved in [1, SS???]. In particular, it holds that
\[\nabla^{2}(z_{1}^{k})=(\overline{\partial}_{\mathcal{E}_{k}}\circ\partial_{ \mathcal{E}_{k}}+\partial_{\mathcal{E}_{k}}\circ\overline{\partial}_{ \mathcal{E}_{k}})(z_{1}^{k})=\overline{\partial}_{\mathcal{E}_{k}}\circ \partial_{\mathcal{E}_{k}}(z_{1}^{k}).\]
For convenience, let us now denote \(\alpha:=q^{-2/(n+1)}\). From the quantum principal bundle presentation of \(\partial_{\mathcal{E}_{k}}\) given in the previous subsection, together with Lemma 3.2, we see that
\[\partial_{\mathcal{E}_{k}}(z_{1}^{k}) =j^{-1}\big{(}\Pi^{(1,0)}\circ\mathrm{d}(z_{1}^{k})\otimes v_{k} \big{)}\] \[=(k)_{\alpha}\,j^{-1}\big{(}\big{(}\Pi^{(1,0)}\circ\mathrm{d}(z_{1})\big{)}z_{1}^{k-1}\otimes v_{k}\big{)},\]
where in the first identity we have used (4). We now present this expression as an element in \(\Omega^{(1,0)}\otimes_{B}(A\square_{H}\Phi(\mathcal{E}_{k}))\):
\[(k)_{\alpha}j^{-1}\big{(}\big{(}\Pi^{(1,0)}\circ\mathrm{d}(z_{1 })\big{)}z_{1}^{k-1}\otimes v_{k}\big{)}= \sum_{i=1}^{n+1}(k)_{\alpha}j^{-1}\,\big{(}\Pi^{(1,0)}\circ \mathrm{d}(u_{1}^{1})S(u_{i}^{1})u_{1}^{i}z_{1}^{k-1}\otimes v_{k}\big{)}\] \[= \sum_{i=1}^{n+1}(k)_{\alpha}\partial(u_{1}^{1}S(u_{i}^{1})) \otimes(z_{i}z_{1}^{k-1}\otimes v_{k}).\]
Acting on this element by \(\overline{\partial}_{\mathcal{E}_{k}}\), and recalling that \(\overline{\partial}z_{i}=0\), for all \(i=1,\ldots,n\), gives us the identity
\[\sum_{i=1}^{n+1}\overline{\partial}_{\mathcal{E}_{k}}\Big{(}(k)_{\alpha} \partial(u_{1}^{1}S(u_{i}^{1}))\otimes(z_{i}z_{1}^{k-1}\otimes v_{k})\Big{)}= \,\sum_{i=1}^{n+1}(k)_{\alpha}\overline{\partial}\partial(u_{1}^{1}S(u_{i}^{1} ))\otimes(z_{i}z_{1}^{k-1}\otimes v_{k}).\]
Operating by \(\mathrm{id}\otimes\mathrm{U}^{-1}\) produces the expression
\[(k)_{\alpha}\sum_{i=1}^{n+1}\overline{\partial}\partial(u_{1}^{1}S(u_{i}^{1}))\otimes z_{i}z_{1}^{k-1}=(k)_{\alpha}\nabla^{2}(z_{1})z_{1}^{k-1},\]
where the right multiplication of \(\nabla^{2}(z_{1})\) by \(z_{1}^{k-1}\) is defined with respect to the canonical embeddings of \(\Omega^{(1,1)}\otimes_{B}\mathcal{E}_{1}\) and \(\Omega^{(1,1)}\otimes_{B}\mathcal{E}_{k}\) into \(\Omega^{(1,1)}\otimes_{B}\mathcal{O}_{q}(S^{2n+1})\). Finally, recalling that we have chosen a scaling of our Kähler form to satisfy (7), we have
\[\nabla^{2}(z_{1}^{k})=(k)_{\alpha}(-\mathbf{i}\,\kappa\otimes z_{1})z_{1}^{k- 1}=-(k)_{\alpha}\mathbf{i}\kappa\otimes z_{1}^{k},\]
which gives us the claimed identity.
**Remark 3.4**.: It is worth noting that the Chern curvature is clearly independent of any quantum principal bundle presentation of the calculus \(\Omega^{1}_{q}(\mathbb{CP}^{n})\). However, the quantum bundle presentation allows us to calculate the curvature in a systematic manner, and provides us with concrete insight into why the curvature undergoes a \(q\)-integer deformation.
## Appendix A Some Categorical Equivalences
In this appendix we present a number of categorical equivalences, all ultimately derived from Takeuchi's equivalence [Tak]. (We note that similar results hold under much weaker assumptions, see [Skr].) These equivalences play a prominent role in the paper, giving us a formal framework in which to understand covariant differential calculi as noncommutative homogeneous vector bundles.
### Takeuchi's Bimodule Equivalence
Let \(A\) and \(H\) be Hopf algebras, and \(B=A^{\mathrm{co}(H)}\) the quantum homogeneous space associated to a surjective Hopf algebra map \(\pi:A\to H\). We define \({}^{A}_{B}\mathsf{Mod}_{B}\) to be the category whose objects are left \(A\)-comodules \(\Delta_{L}:\mathcal{F}\to A\otimes\mathcal{F}\), endowed with a \(B\)-bimodule structure, such that
\[\Delta_{L}(bfc)=\Delta_{L}(b)\Delta_{L}(f)\Delta_{L}(c),\qquad\qquad\text{ for all }f\in\mathcal{F},b,c\in B, \tag{8}\]
and whose morphisms are left \(A\)-comodule, \(B\)-bimodule, maps. Let \({}^{H}\mathsf{Mod}\) denote the category whose objects are left \(H\)-comodules, and whose morphisms are left \(H\)-comodule maps.
If \(\mathcal{F}\in{}^{A}_{B}\mathsf{Mod}_{B}\), and \(B^{+}:=B\cap\ker(\varepsilon:A\to\mathbb{C})\), then \(\mathcal{F}/(B^{+}\mathcal{F})\) becomes an object in \({}^{H}\mathsf{Mod}_{B}\) with the obvious right \(B\)-action, and left \(H\)-coaction given by
\[\Delta_{L}[f]=\pi(f_{(-1)})\otimes[f_{(0)}],\qquad\qquad\qquad\text{ for }f\in\mathcal{F}, \tag{9}\]
where \([f]\) denotes the coset of \(f\) in \(\mathcal{F}/(B^{+}\mathcal{F})\). A functor
\[\Phi:{}^{A}_{B}\mathsf{Mod}_{B}\to{}^{H}\mathsf{Mod}_{B} \tag{10}\]
is now defined as follows: \(\Phi(\mathcal{F}):=\mathcal{F}/(B^{+}\mathcal{F})\), and if \(g:\mathcal{F}\to\mathcal{D}\) is a morphism in \({}^{A}_{B}\mathsf{Mod}_{B}\), then \(\Phi(g):\Phi(\mathcal{F})\to\Phi(\mathcal{D})\) is the map uniquely defined by \(\Phi(g)[f]:=[g(f)]\).
If \(V\in{}^{H}\mathsf{Mod}_{B}\) with coaction \(\Delta_{L}:V\to H\otimes V\), then the _cotensor product_ of \(A\) and \(V\) is defined by
\[A\,\square_{H}V:=\ker(\Delta_{R}\otimes\operatorname{id}-\operatorname{id} \otimes\Delta_{L}:A\otimes V\to A\otimes H\otimes V),\]
where \(\Delta_{R}:A\to A\otimes H\) denotes the homogeneous right \(H\)-coaction on \(A\). The cotensor product becomes an object in \({}^{A}_{B}\mathsf{Mod}_{B}\) by defining a left \(B\)-module structure, and left \(A\)-comodule structure, on the first tensor factor in the obvious way, and defining a right \(B\)-module structure by
\[\left(\sum a_{i}\otimes v_{i}\right)b:=\sum a_{i}b_{(1)}\otimes\left(v_{i} \triangleleft b_{(2)}\right),\]
for any \(b\in B\), and any \(\sum a_{i}\otimes v_{i}\in A\square_{H}V\). A functor
\[\Psi:{}^{H}\mathsf{Mod}_{B}\to{}^{A}_{B}\mathsf{Mod}_{B}\]
is now defined as follows: \(\Psi(V):=A\square_{H}V,\) and if \(\gamma\) is a morphism in \({}^{H}\mathsf{Mod}_{B}\), then \(\Psi(\gamma):=\operatorname{id}\otimes\gamma\).
For a quantum homogeneous space \(B=A^{\operatorname{co}(H)}\), the algebra \(A\) is said to be _faithfully flat_ as a right \(B\)-module if the functor \(A\otimes_{B}-:{}_{B}\mathsf{Mod}\to{}_{\mathbb{C}}\mathsf{Mod}\), from the category of left \(B\)-modules to the category of complex vector spaces, preserves and reflects exact sequences. As shown in [Chi, Corollary 3.4.5], for any coideal \(*\)-subalgebra of a CQGA faithful flatness is automatic. For example, \(\mathcal{O}_{q}(G)\) is faithfully flat as a left module over any quantum flag manifold \(\mathcal{O}_{q}(G/L_{S})\). The following equivalence was established in [Tak, Theorem 1].
**Theorem A.1** (Takeuchi's Equivalence).: _Let \(B=A^{\operatorname{co}(H)}\) be a quantum homogeneous space such that \(A\) is faithfully flat as a right \(B\)-module. An adjoint equivalence of categories between \({}^{A}_{B}\mathsf{Mod}_{B}\) and \({}^{H}\mathsf{Mod}_{B}\) is given by the functors \(\Phi\) and \(\Psi\) and unit, and counit, natural isomorphisms_
\[\operatorname{U} :\mathcal{F}\to\Psi\circ\Phi(\mathcal{F}), f\mapsto f_{(-1)}\otimes[f_{(0)}],\] \[\operatorname{C} :\Phi\circ\Psi(V)\to V, \Big{[}\sum_{i}a^{i}\otimes v^{i}\Big{]}\mapsto\sum_{i}\varepsilon( a^{i})v^{i}.\]
As observed in [OB2, Corollary 2.7], the inverse of the unit \(\operatorname{U}\) of the equivalence admits a useful explicit description:
\[\operatorname{U}^{-1}\!\left(\sum_{i}f_{i}\otimes[g_{i}]\right)=\sum_{i}f_{i} S\big{(}(g_{i})_{(-1)}\big{)}(g_{i})_{(0)}. \tag{11}\]
### The Fundamental Theorem of Two-Sided Hopf Modules
In this subsection we consider a special case of Takeuchi's equivalence, namely the fundamental theorem of two-sided Hopf modules. (This equivalence was originally considered in [12] using a parallel but equivalent formulation, see also [13].) For a Hopf algebra \(A\), the counit \(\varepsilon:A\to\mathbb{C}\) is a Hopf algebra map. The associated quantum homogeneous space is given by \(A=A^{\mathrm{co}(\mathbb{C})}\), the category \({}^{A}_{B}\mathsf{Mod}_{B}\) specialises to \({}^{A}_{A}\mathsf{Mod}_{A}\), and the category \({}^{H}\mathsf{Mod}_{B}\) reduces to the category of right \(A\)-modules \(\mathsf{Mod}_{A}\). In this special case we find it useful to denote the functor \(\Phi\) as
\[F:{}^{A}_{A}\mathsf{Mod}_{A}\to\mathsf{Mod}_{A}, \mathcal{F}\mapsto\mathcal{F}/A^{+}\mathcal{F},\]
Moreover, since the cotensor product over \(\mathbb{C}\) is just the usual tensor product \(\otimes\), we see that the functor \(\Psi\) reduces to
\[A\otimes-:\mathsf{Mod}_{A}\to{}^{A}_{A}\mathsf{Mod}_{A}, V\mapsto A\otimes V.\]
Since faithful flatness is trivially satisfied in this case, we have the following corollary of Takeuchi's equivalence.
**Theorem A.2** (Fundamental Theorem of Two-Sided Hopf Modules).: _An adjoint equivalence of categories between \({}^{A}_{A}\mathsf{Mod}_{A}\) and \(\mathsf{Mod}_{A}\) is given by the functors \(F\) and \(A\otimes-\), and the unit, and counit, natural isomorphisms_
\[\mathrm{U}:\mathcal{F}\to A\otimes F(\mathcal{F}), f\mapsto f_{(-1)}\otimes[f_{(0)}],\] \[\mathrm{C}:F(A\otimes V)\to V, [a\otimes v]\mapsto\varepsilon(a)v.\]
### Some Monoidal Equivalences
In this subsection we recall two monoidal equivalences induced by Takeuchi's equivalence. Denote by \({}^{A}_{B}\mathsf{Mod}_{0}\) the full subcategory of \({}^{A}_{B}\mathsf{Mod}_{B}\) whose objects \(\mathcal{F}\) satisfy the identity \(\mathcal{F}B^{+}=B^{+}\mathcal{F}.\) Consider also the full sub-category of \({}^{H}\mathsf{Mod}_{B}\) consisting of those objects endowed with the trivial right \(B\)-action, which is to say, those objects \(V\) for which \(v\triangleleft b=\varepsilon(b)v\), for all \(v\in V\), and \(b\in B\). This category is clearly isomorphic to \({}^{H}\mathsf{Mod}\), the category of left \(H\)-comodules, and as such, Takeuchi's equivalence induces an equivalence between \({}^{A}_{B}\mathsf{Mod}_{0}\) and \({}^{H}\mathsf{Mod}\), for details see [1, Lemma 2.8].
For \(\mathcal{F},\mathcal{D}\) two objects in \({}^{A}_{B}\mathsf{Mod}_{0}\), we denote by \(\mathcal{F}\otimes_{B}\mathcal{D}\) the usual bimodule tensor product endowed with the standard left \(A\)-comodule structure. This gives \({}^{A}_{B}\mathsf{Mod}_{B}\) the structure of a monoidal category. If \(\mathcal{F},\mathcal{D}\) are contained in the subcategory \({}^{A}_{B}\mathsf{Mod}_{0}\), then it is easily checked that \(\mathcal{F}\otimes_{B}\mathcal{D}\) is again an object in \({}^{A}_{B}\mathsf{Mod}_{0}\). Thus \({}^{A}_{B}\mathsf{Mod}_{0}\) is a monoidal subcategory of \({}^{A}_{B}\mathsf{Mod}_{B}\). With respect to the usual tensor product of comodules in \({}^{H}\mathsf{Mod}\), Takeuchi's equivalence is given the structure of a monoidal equivalence (see [1, SS4] for details) by the morphisms
\[\mu_{\mathcal{F},\mathcal{D}}:\Phi(\mathcal{F})\otimes\Phi(\mathcal{D})\to \Phi(\mathcal{F}\otimes_{B}\mathcal{D}),\ \ [f]\otimes[d]\mapsto[f\otimes d],\ \ \ \ \text{ for any }\mathcal{F},\mathcal{D}\in{}^{A}_{B}\mathsf{Mod}_{0}.\]
This monoidal equivalence will be tacitly assumed throughout the paper, along with the implied monoid structure on \(\Phi(\mathcal{N})\), for any monoid object \(\mathcal{N}\in{}^{A}_{B}\mathsf{Mod}_{0}\).
Consider now the category \({}^{A}_{B}\mathsf{Mod}\), whose objects are left \(A\)-comodules, and left \(B\)-modules, satisfying the obvious analogue of (8), and whose morphisms are left \(A\)-comodule, right \(B\)-module maps. We can endow any object \(\mathcal{F}\in{}^{A}_{B}\mathsf{Mod}\) with a right \(B\)-action uniquely defined by
\[f\triangleleft b:=f_{(-2)}bS(f_{(-1)})f_{(0)}.\]
Since \(f_{(-2)}bS(f_{(-1)})f_{(0)}\in B^{+}\mathcal{F}\), for all \(b\in B^{+}\), this new right module structure satisfies the defining conditions of \({}^{A}_{B}\mathsf{Mod}_{0}\), giving us an obvious equivalence between \({}^{A}_{B}\mathsf{Mod}\) and \({}^{A}_{B}\mathsf{Mod}_{0}\). In particular, we see that any left \(A\)-comodule, left \(B\)-module map between two objects in \({}^{A}_{B}\mathsf{Mod}_{0}\) is automatically a morphism. (We should note that the implied equivalence between \({}^{A}_{B}\mathsf{Mod}\) and \({}^{H}\mathsf{Mod}\) is the original form of Takeuchi's equivalence [Tak], the bimodule form presented above being an easy consequence.)
Next we examine \({}^{A}_{B}\mathsf{mod}_{0}\), the full sub-category of \({}^{A}_{B}\mathsf{Mod}_{B}\) whose objects \(\mathcal{F}\) are finitely generated as left \(B\)-modules, and \({}^{H}\mathsf{mod}_{B}\), the full sub-category of \({}^{H}\mathsf{Mod}_{B}\) whose objects are finite-dimensional as complex vector spaces. As established in [1, Corollary 2.5], Takeuchi's equivalence induces an equivalence between these two sub-categories. We define the _dimension_ of an object \(\mathcal{F}\in{}^{A}_{B}\mathsf{mod}_{B}\) to be the dimension of \(\Phi(\mathcal{F})\) as a vector space.
## Appendix B Quantum Integers
Quantum integers are ubiquitous in the study of quantum groups. For this paper in particular, they arise in the defining relations of the Drinfeld-Jimbo quantum groups, and in the calculation of the curvature of the positive line bundles over quantum projective space \(\mathcal{O}_{q}(\mathbb{CP}^{n})\). In each case we use different but related formulations for quantum integers. Thus we take care here to clarify our choice of conventions. We begin with the version of quantum integer used in the definition of the Drinfeld-Jimbo quantum groups. For \(q\in\mathbb{C}\), the _quantum integer_\([m]_{q}\) is the complex number
\[[m]_{q}:=q^{-m+1}+q^{-m+3}+\cdots+q^{m-3}+q^{m-1}.\]
Note that when \(q\notin\{-1,0,1\}\), we have the identity
\[[m]_{q}=\frac{q^{m}-q^{-m}}{q-q^{-1}}.\]
We next recall the definition of the quantum binomials, which arise in the quantum Serre relations of the Drinfeld-Jimbo quantum groups. For any \(n\in\mathbb{N}\), we denote
\[[n]_{q}!=[n]_{q}[n-1]_{q}\cdots[2]_{q}[1]_{q},\]
and moreover, we denote \([0]_{q}!=1\). For any non-zero \(q\in\mathbb{C}\), and any \(n,r\in\mathbb{N}_{0}\), the associated \(q\)-binomial coefficient is the complex number
\[\genfrac{[}{]}{0.0pt}{}{n}{r}_{q}:=\frac{[n]_{q}!}{[r]_{q}!\,[n-r]_{q}!}.\]
By contrast, the form of quantum integer arising in curvature calculations is defined as follows: For \(q\in\mathbb{C}\setminus\{1\}\), the _quantum integer_\((m)_{q}\) is the complex number
\[(m)_{q}=\frac{1-q^{m}}{1-q}.\]
When \(m>0\), we have
\[(m)_{q}:=1+q+q^{2}+\cdots+q^{m-1}.\]
The definition of quantum binomial also makes sense for this version of quantum integer, although we will not use it in this paper. Finally, it is instructive to note that the two conventions are related by the identity
\[[m]_{q}=q^{1-m}(m)_{q^{2}}.\]
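The relation above is elementary but easy to get wrong; the following sympy sketch (illustrative only) verifies the stated identity \([m]_{q}=q^{1-m}(m)_{q^{2}}\) between the two quantum-integer conventions for the first few values of \(m\).

```python
# Illustrative sympy verification of [m]_q = q^(1-m) * (m)_{q^2}.
import sympy as sp

q = sp.symbols('q', positive=True)

def bracket(m, q):
    """Symmetric quantum integer [m]_q = q^(-m+1) + q^(-m+3) + ... + q^(m-1)."""
    return sum(q**(-m + 1 + 2*j) for j in range(m))

def round_qint(m, q):
    """Quantum integer (m)_q = 1 + q + ... + q^(m-1)."""
    return sum(q**j for j in range(m))

for m in range(1, 10):
    assert sp.simplify(bracket(m, q) - q**(1 - m)*round_qint(m, q**2)) == 0
print("identity [m]_q = q^(1-m) (m)_{q^2} verified for m = 1,...,9")
```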
|
2306.17389 | The K-essence flow seen from the preferred frame $S_{V}$. A scalar field
theory with Landau superfluid structure | We study the hypothesis of deformation of the invariance of Lorentz
transformations produced by the introduction of a universal minimum velocity
relative to a preferred frame. Our goal with this job is to apply this
hypothesis to superfluids and study its consequences relating the minimum
velocity to the idea of a fluid, with superfluid properties. In previous works
we related the minimum velocity to the cosmological constant and even to cosmic
inflation. Soon we could generate a hypothetical superfluid capable of modeling
with characteristics of a cosmological fluid with dark energy properties. The
first excited state of this universal superfluid would be a preferred frame
from which all other excited states are observed and then we would have a
preferred frame $S_{V}$ associated with the critical Landau velocity, thus
implying that the universal minimum velocity coincides with the critical Landau
velocity, and the objects observed by the preferred frame are excited states of
the superfluid. This coincidence between the concepts of minimum velocity and
Landau's critical velocity makes Landau's critical velocity a type of limit
velocity, modifying the usual causal structure of restricted relativity.
Formulating the phenomena in this preferred frame would have the advantage of
providing a simple explanation for astrophysical and cosmological phenomena
linked to a causal structure, which emerges from this construction and is very
similar to causal structures linked to Gordon geometry and acoustic tachyons.
We build a deformed relativistic Lagrangian, demonstrate its relation with a
K-essence Lagrangian and calculate the quantities associated with that
Lagrangian. We also studied an irrotational fluid and verified the role of
enthalpy associated with the minimum velocity structure. | Rodrigo Francisco dos Santos, Luis Gustavo de Almeida, A. C. Amaro de Faria Jr | 2023-06-30T03:44:03Z | http://arxiv.org/abs/2306.17389v1 | The K-essence flow seen from the preferred frame \(S_{v}\). A scalar field theory with Landau superfluid structure
###### Abstract
We study the hypothesis of a deformation of Lorentz invariance produced by the introduction of a universal minimum velocity relative to a preferred frame. Our goal in this work is to apply this hypothesis to superfluids and to study its consequences, relating the minimum velocity to the idea of a fluid with superfluid properties. In previous works we related the minimum velocity to the cosmological constant and even to cosmic inflation, so we could generate a hypothetical superfluid with the characteristics of a cosmological fluid with dark-energy properties. The first excited state of this universal superfluid would be a preferred frame from which all other excited states are observed; we would then have a preferred frame \(S_{V}\) associated with the critical Landau velocity, implying that the universal minimum velocity coincides with the critical Landau velocity, and that the objects observed from the preferred frame are excited states of the superfluid. This coincidence between the concepts of minimum velocity and Landau's critical velocity makes Landau's critical velocity a type of limit velocity, modifying the usual causal structure of special relativity. Formulating the phenomena in this preferred frame would have the advantage of providing a simple explanation for astrophysical and cosmological phenomena linked to a causal structure, which emerges from this construction and is very similar to causal structures linked to Gordon geometry and acoustic tachyons. We build a deformed relativistic Lagrangian, demonstrate its relation with a \(k\)-essence Lagrangian and calculate the quantities associated with that Lagrangian. We also study an irrotational fluid and verify the role of enthalpy associated with the minimum velocity structure.
pacs: 11.30.Qc
## I Introduction
An important line of research in cosmology has been the model known as \(k\)-essence [1]. The \(k\)-essence models are scalar field models [2; 3; 4; 5] that, alongside the quintessence models [6; 7], appear as alternatives to describe barotropic fluids with \(w=p/\rho<0\). Therefore, one of the most important guidelines in the study of \(k\)-essence is the study of causal structures and their pathologies [8]. Recent developments [9] point to an approximation between the \(k\)-essence hypothesis and superfluid models [11; 62]. The line of gravitational superfluid models has received a lot of attention in recent years, in particular using the framework of Carter [12; 13], known as Carter's multifluid theory [14], and its relation to gravity [15; 16]. This line is very similar to the one developed by Visser [49]. Along these lines, Bacetti [18] developed a treatment for dispersion relations deformed by a Lorentz symmetry breaking, which is very similar to the Landau formalism used in superfluids and is also applicable to Planck-scale dissipation [19]. In 2017 Nassif _et al_ [20], in an exotic but promising approach, started the study of the effects of a deformation of Lorentz invariance introduced by a universal minimum velocity relative to a
preferred reference frame in an inflationary scenario [21; 22] and in gravitational collapse [23]. Nassif's works [24; 25; 26; 27; 28; 29; 30] have suggested a dispersion relation deformed by the invariant minimal speed, associated with a preferred frame \(S_{V}\). However, in 2022 Santos _et al_ [61] proposed a totally new perspective for the deformation introduced in Nassif's early works, looking for a hydrodynamic interpretation of the minimal speed [32; 33]. This new interpretation aims to adjust the idea of a minimum velocity to a relativistic hydrodynamics scenario that brings the gravitational interaction close to phase transition phenomena [34; 35; 36], metamaterials [37; 38] and relativistic Bose-Einstein condensates [39]. That said, let us explicitly enumerate the objectives of our work:
1. The goal of this paper is to find a physical justification for the introduction of the minimum velocity [24]; in this case the justification is the relationship of Nassif's minimum velocity to the Landau critical velocity. This coincidence is very promising, since superfluids have properties analogous to anti-gravity; we can therefore use them as analogues of the gravitational vacuum.
2. Demonstrate that the introduction of the minimum velocity is equivalent to a violation of the Lorentz symmetry in an acoustic geometry
3. To revisit Nassif's uncertainty principle proposal [26], but with a description that makes it clear that the quantum character of this treatment is analogous to that of a superfluid [11].
4. Demonstrate the Lorentz Invariance Violation in the same terms as Zloschatiev [35]
5. To identify points of falsifiability of the existence of the preferred frame \(S_{V}\) [24], we study the relation between the Einstein-Euler formulation and the preferred frame \(S_{V}\), as we try to understand the relevance of the \(S_{V}\) frame to other proposals of fluids intended to model dark energy [21]; in this paper the fluid chosen is \(k\)-essence.
6. To perform the first steps towards understanding the energy mechanism that would make each excited state appear in the hypothetical fluid; such excited states can be compared, by analogy, with subatomic matter appearing in the gravitational vacuum, and on a cosmological scale they could correspond to more complex structures, since we can hypothetically extrapolate the Jeans mechanism [40].
This work is divided as follows:
In Section 2 we briefly review the \(k\)-essence model and specify the kinetic \(k\)-essence.
In Section 3, we revisit the concepts associated with the introduction of minimum velocity in Lorentz transformations.
Section 4 shows how the minimum speed corresponds to a critical speed in the Landau criterion and its relation to the Einstein-Euler formulation, helping to clarify the role of the preferred frame \(S_{V}\). We also construct a \(k\)-essence Lagrangian, taking into account the scalar product deformation, which introduces the reciprocal velocity.
Section 5 presents an approach where we investigate some thermodynamic properties of an irrotational fluid using the Schutz formalism [41; 42], and we sketch a discussion of the jumps in energy levels required for the appearance of each excited state as the velocity \(v\) increases.
In section 7 we present our conclusions and future perspectives.
## II \(k\)-essence framework
The \(k\)-essence model is a theoretical framework used in hydrodynamics to describe the dynamics of a scalar field with a non-canonical kinetic term. This model introduces a new degree of freedom, which modifies the equation of state of the fluid and allows for the possibility of an accelerated expansion of the universe. The \(k\)-essence model has been used to study a wide range of phenomena, including cosmic inflation [43; 48], dark energy [45; 46; 47; 48], and the dynamics of cosmic structure formation.
We can define a \(k\)-essence model [2; 3] with the action
\[\mathcal{S}:=\int d^{4}x\sqrt{g}\mathcal{L}(\mathcal{X},\Phi). \tag{1}\]
The action (1) is associated to the lagrangian density \(\mathcal{L}(\mathcal{X},\Phi)\), where \(\mathcal{X}\) is the scalar product defined as
\[\mathcal{X}:=g^{\mu\nu}\nabla_{\mu}\Phi\nabla_{\nu}\Phi. \tag{2}\]
Here \(\Phi\) is a scalar field and \(\nabla\) is the covariant derivative operator associated with \(g^{\mu\nu}\). It is very important to notice that \(\mathcal{X}\) is a scalar product between two non-normalized four-vectors, built from the non-normalized gradient \(\nabla_{\mu}\Phi\). The conserved current associated to (1) is obtained in [4],
\[\left(\mathcal{L}_{,\mathcal{X}}g^{\alpha\beta}-\mathcal{L}_{,\mathcal{X} \mathcal{X}}\nabla^{\alpha}\Phi\nabla^{\beta}\Phi\right)\nabla_{\alpha}\nabla _{\beta}\Phi+\mathcal{L}_{,\Phi\mathcal{X}}g^{\alpha\beta}\nabla_{\alpha}\Phi \nabla_{\beta}\Phi=-\mathcal{L}_{\Phi}, \tag{3}\]
where \(\mathcal{L}_{,\mathcal{X}}\) is the derivative with respect to \(\mathcal{X}\) and \(\mathcal{L}_{,\mathcal{X}\mathcal{X}}\) is the second derivative with respect to \(\mathcal{X}\). Likewise, \(\mathcal{L}_{,\Phi}\) is the derivative with respect to \(\Phi\) and \(\mathcal{L}_{,\Phi\mathcal{X}}\) is the corresponding mixed second derivative. In the spatially homogeneous case, with \(\mathcal{X}>0\), it reduces to
\[\left(2\mathcal{X}\mathcal{L}_{,\mathcal{X}\mathcal{X}}+\mathcal{L}_{, \mathcal{X}}\right)\ddot{\Phi}+\mathcal{L}_{,\mathcal{X}}(3H\dot{\Phi})+ \mathcal{L}_{,\Phi\mathcal{X}}\dot{\Phi}^{2}-\mathcal{L}_{,\Phi}=0. \tag{4}\]
To ensure solvability, we take \(\left(2\mathcal{X}\mathcal{L}_{,\mathcal{X}\mathcal{X}}+\mathcal{L}_{,\mathcal{X}}\right)\neq 0\). We can further restrict this condition, calling it the "hyperbolicity" condition,
\[1+2\frac{\mathcal{X}\mathcal{L}_{,\mathcal{X}\mathcal{X}}}{\mathcal{L}_{, \mathcal{X}}}\geq 0, \tag{5}\]
thus imposing a condition on \(\mathcal{L}_{,\mathcal{X}}\) and \(\mathcal{L}_{,\mathcal{X}\mathcal{X}}\).
### _K_-essence purely kinetic: Effective Hydrodynamics Approach
The Lagrangian \(\mathcal{L}(\mathcal{X},\Phi)\) may eventually depend only on \(\mathcal{X}\), so we can write
\[\mathcal{L}(\mathcal{X},\Phi)\equiv\mathcal{L}(\mathcal{X}). \tag{6}\]
This is a particular case called "purely kinetic \(k\)-essence" and implies an isotropic and homogeneous preferred-frame background [2; 3]. In this particular case we have \(\mathcal{L}_{\Phi\mathcal{X}}=\mathcal{L}_{\Phi}=0\). The action (1) allows the evaluation of the energy-momentum tensor, \(T_{\mu\nu}:=\frac{2}{\sqrt{g}}\frac{\delta\mathcal{L}}{\delta g_{\mu\nu}}\). We can also write the energy-momentum tensor as we did in [21]:
\[T_{\mu\nu}=\mathcal{L}_{,\mathcal{X}}\nabla_{\mu}\Phi\nabla_{\nu}\Phi- \mathcal{L}(\mathcal{X})g_{\mu\nu}. \tag{7}\]
A simple comparison of equation (7) with the energy-momentum tensor of a perfect fluid suggests the identification of several hydrodynamic variables. The pressure can be written as
\[p=\mathcal{L}(\mathcal{X}). \tag{8}\]
The kinetic term requires more work. We define the four-velocity
\[u_{\mu}:=\frac{\nabla_{\mu}\Phi}{\sqrt{2\mathcal{X}}}, \tag{9}\]
Replacing (8) and (9) in (7), we obtain the energy density
\[\rho=\mathcal{X}\mathcal{L}_{,\mathcal{X}}-\mathcal{L}(\mathcal{X}). \tag{10}\]
We write now the energy-momentum tensor with \(k\)-essence as source:
\[T_{\mu\nu}=2\mathcal{X}\mathcal{L}_{,\mathcal{X}}u_{\mu}u_{\nu}-\mathcal{L}( \mathcal{X})g_{\mu\nu}. \tag{11}\]
For (11) there exists a rest frame, \(u_{i}=0\), in which the scalar field is locally isotropic.
Now we are going to discuss the energy conditions in a way similar to the work [4]. The _null energy condition_ (NEC) is respected provided
\[\mathcal{L}_{,\mathcal{X}}>0, \tag{12}\]
the _weak energy condition_ (WEC), establishes
\[\mathcal{X}\mathcal{L}_{,\mathcal{X}}-\mathcal{L}(\mathcal{X})>0, \tag{13}\]
together with (12) and \(\mathcal{L}>0\). Both conditions in force guarantee that we deal only with positive-energy states. In the event of a violation, negative-energy states arise.
The expression
\[\frac{\mathcal{X}\mathcal{L}_{,\mathcal{X}}-\mathcal{L}(\mathcal{X})}{ \mathcal{L}(\mathcal{X})}>0, \tag{14}\]
is associated with the _dominant energy condition_ (DEC), and (14) together with (12) generates the _null dominant energy condition_ (NDEC). The violation of these conditions can introduce tachyons.
We define the barotropic parameter \(w\) according to Visser _et al_ [49], and establish criteria for a barotropic fluid:
* Any zero-temperature fluid is automatically barotropic;
* Any non-zero temperature but isothermal fluid is automatically barotropic;
* Any zero-entropy fluid (superfluid) is automatically barotropic;
* Any isentropic fluid is automatically barotropic.
We write, then
\[w:=\frac{p}{\rho}=\frac{\mathcal{L}(\mathcal{X})}{\mathcal{X}\mathcal{L}_{, \mathcal{X}}-\mathcal{L}}, \tag{15}\]
and obtain the speed of sound propagation, according to [5],
\[c_{s}^{2}:=\left(1+2\frac{\mathcal{X}\mathcal{L}_{,\mathcal{X},\mathcal{X}}}{ \mathcal{L}_{,\mathcal{X}}}\right)^{-1}, \tag{16}\]
which is the inverse of the expression shown in (5).
Two other hydrodynamic quantities can still be defined: the concentration of particles
\[n\equiv\exp\left[\int\frac{d\rho}{\rho+p(\rho)}\right]=\sqrt{\mathcal{X}} \mathcal{L}_{,\mathcal{X}}, \tag{17}\]
and enthalpy
\[h:=2\sqrt{\mathcal{X}}. \tag{18}\]
We then link the kinetic structure to enthalpy, which is a quantity associated with the capacity of absorption and emission of energy. Enthalpy is a crucial element in theoretical modeling and numerical simulation of astrophysical systems such as neutron stars and black holes, allowing for a deeper understanding of the physical processes that occur in these extreme environments.
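To make the above dictionary concrete, the following sympy sketch collects the purely kinetic expressions (8), (10) and (15)-(18) for a single illustrative Lagrangian, \(\mathcal{L}(\mathcal{X})=\mathcal{X}^{2}\); this choice is an assumption made only for the example and is not a model advocated in the text.

```python
# Minimal sympy sketch of the hydrodynamic quantities of a purely kinetic
# k-essence model; L(X) = X**2 is an illustrative choice only.
import sympy as sp

X = sp.symbols('X', positive=True)
L = X**2                       # illustrative Lagrangian L(X)

L_X  = sp.diff(L, X)           # L_{,X}
L_XX = sp.diff(L, X, 2)        # L_{,XX}

p   = L                                    # pressure, Eq. (8)
rho = X*L_X - L                            # energy density, Eq. (10)
w   = sp.simplify(p/rho)                   # barotropic parameter, Eq. (15)
cs2 = sp.simplify(1/(1 + 2*X*L_XX/L_X))    # sound speed squared, Eq. (16)
n   = sp.sqrt(X)*L_X                       # particle concentration, Eq. (17)
h   = 2*sp.sqrt(X)                         # enthalpy, Eq. (18)

print("p   =", p)
print("rho =", sp.simplify(rho))
print("w   =", w)     # -> 1 for L = X**2
print("cs2 =", cs2)   # -> 1/3 for L = X**2
print("n   =", n)
print("h   =", h)
```

For this toy choice the NEC (12) and the hyperbolicity condition (5) are both satisfied, and \(w=1\), \(c_{s}^{2}=1/3\).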
### _K_-essence purely kinetic: Effective Geometric Approach
In many situations in astrophysics and cosmology, the complete spacetime metrics are extremely complex and difficult to manipulate, which makes it difficult to understand and analyze the physical properties of the system. In such cases, it can be useful to use a simplified space-time description, which can be derived from the full metric, but which takes into account only the properties most relevant to the situation at hand.
The conserved current (3) allows us to define an effective metric \(\tilde{\mathcal{G}}^{\mu\nu}\),
\[\tilde{\mathcal{G}}^{\mu\nu}=\mathcal{L}_{,\mathcal{X}}g^{\mu\nu}+\mathcal{L}_{,\mathcal{X}\mathcal{X}}\nabla^{\mu}\Phi\nabla^{\nu}\Phi. \tag{19}\]
The effective metric (19) has Lorentzian signature, and hence describes the time evolution of the system, provided it obeys the hyperbolicity condition (5). If we have \(\mathcal{L}_{,\mathcal{X}\mathcal{X}}=0\), then (19) reduces to \(\tilde{\mathcal{G}}^{\mu\nu}=\mathcal{L}_{,\mathcal{X}}g^{\mu\nu}\), which is very similar to a conformal transformation. We therefore establish the criteria for the existence of a transformation governed by \(\omega^{2}\). It is a scale transformation of the metric of a metric space, which generates a new metric, associated with another metric space, as \(\mathcal{G}_{\mu\nu}=\omega^{2}g_{\mu\nu}\), and which obeys the following requirements:
* \(\omega\) has an inverse \(\omega^{-1}\) and is smooth, possessing derivatives of all orders. In a topological sense, it maps a neighbourhood of a point \(p\) to a neighbourhood of a point \(p^{\prime}\) in the transformed metric space, which is loosely equivalent to the properties of an isometry. The expression (5) obeys this requirement;
* The \(\omega\) transformation respects the null geodesics, \[g_{\mu\nu}X^{\mu}X^{\nu}=g_{\mu\nu}\omega x^{\mu}\omega x^{\nu}=\omega^{2}g_{\mu\nu}x^{\mu}x^{\nu}=\mathcal{G}_{\mu\nu}x^{\mu}x^{\nu}=0, \tag{20}\] which implies that the transformation \(X^{\mu}=\omega x^{\mu}\) preserves the vector type (timelike, spacelike and null). This also implies that \(\omega>0\). By simple inspection we notice that \(\mathcal{L}\) and (12) ensure this requirement;
* \(\omega\) respects the angles, \[\frac{1}{\sqrt{|v||u|}}g_{\mu\nu}v^{\mu}u^{\nu}=\frac{1}{\sqrt{|V||U|}}\mathcal{G}_{\mu\nu}V^{\mu}U^{\nu}. \tag{21}\] The construction of Equation (2) agrees with this requirement;
* Since \(\omega(a)\) depends on the parameter \(a\), this parameter must belong to the original metric space. The scalar product (2) defines a manifold \(\mathcal{M}\) with metric \(g_{\mu\nu}\).
The criteria established in the works [20; 21] were respected in this case. A second criterion, deduced by Kühnel and Rademacher [50], gives a relation between the metrics and the Ricci tensor of a space-time, as follows
\[\tilde{R}_{ab}-R_{ab}=\frac{1}{\Omega^{2}}[2\Omega\partial_{\mu}\partial_{\nu }\Omega+(\Omega\partial^{\mu}\partial_{\nu}\Omega-3\partial^{\mu}\Omega \partial_{\mu}\Omega)g_{ab}], \tag{22}\]
where
\[\tilde{g}_{ab}=\omega^{2}g_{ab}=\Omega^{-2}g_{ab},\]
so that \(\omega=\Omega^{-1}\). We can see here the difference between the conformal Ricci tensor \(\tilde{R}_{\mu\nu}\) and the usual one \(R_{\mu\nu}\), related to \(\partial_{\mu}\Omega\), \(\partial^{\mu}\partial_{\mu}\Omega\) and \(\partial_{\mu}\partial_{\nu}\Omega\). Those operators are Gradient, Laplacian and Hessian of \(\Omega\) respectively. The result is that the difference between Ricci tensors after and before the conformal transformation is conserved and proportional to the metric. We write explicitly
\[\tilde{R}_{\mu\nu}-R_{\mu\nu}\propto g_{\mu\nu}\propto\tilde{g}_{\mu\nu}, \tag{23}\]
if, and only if,
\[\partial_{\mu}\partial_{\nu}\Omega=(\partial^{\alpha}\partial_{\alpha}\Omega) g_{\mu\nu}, \tag{24}\]
where \(g_{\mu\nu}\) has zero curvature.
Equation (24) introduces optimization properties for \(\mathcal{L}_{,\mathcal{X}}\). A very interesting approach can be seen in the works [27], which discuss fluctuations in a \(k\)-essence background, where the eikonal is written as
\[G^{\mu\nu}=\left(\frac{c_{s}}{\mathcal{L}_{,\mathcal{X}}}\right)^{\frac{1}{ \mathcal{L}_{,\mathcal{X}}}}g^{\mu\nu}. \tag{25}\]
We consider this approach as a good perspective for future research.
### The metric of Refractive index perturbation (R.I.P.) frame
We are going to use works [51; 52] as a guide to write this section. As a direct application of acoustic formalism, the R.I.P. is a treatment for small fluctuations propagated in a Kerr medium or a dielectric medium. The fluctuations can be characteristic of locally superluminal propagation and can be described by an eikonal approximation in a stationary metric. We introduce the specific analogue model that we have analyzed. As we need to work both in the laboratory frame as in an inertial reference frame, which is moving at relativistic speed relative to the original frame, it is convenient to adopt a covariant formalism. Then consider a reference frame in which the dielectric medium is moving with four-velocity \(u^{\mu}\)[51]. The medium in question can have permittivity and permeability given respectively by
\[\epsilon^{\alpha\beta} :=\epsilon(E)\left(\eta^{\alpha\beta}-u^{\alpha}u^{\beta}\right), \tag{26}\]
\[\mu^{\alpha\beta} :=\mu_{0}\mu_{r}\left(\eta^{\alpha\beta}-u^{\alpha}u^{\beta} \right). \tag{27}\]
The variable \(E:=\sqrt{E_{\mu}E^{\mu}}\) is the modulus of the electric field, \(\mu_{r}\) is the relative permeability and \(\mu_{0}\) is the vacuum permeability. Likewise we have
\[\epsilon(E):=\epsilon_{0}\left(1+\chi^{(1)}+\chi^{(3)}E^{2}\right). \tag{28}\]
The small fluctuation "fills" an effective space-time with two polarization possibilities
\[g^{+}_{\mu\nu}=\eta_{\mu\nu}-u_{\mu}u_{\nu}\left(1-\frac{1}{n}\right), \tag{29}\]
\[g^{-}_{\mu\nu}=\eta_{\mu\nu}-u_{\mu}u_{\nu}\left(1-\frac{1}{n^{2}(1+\chi)}\right)+\frac{\chi}{\chi+1}l_{\mu}l_{\nu}, \tag{30}\]
where the vector \(l_{\mu}:=\frac{E_{\mu}}{E}\) is the unit vector in the direction of \(E_{\mu}\). The parameter \(\chi\) plays an important role here, since the usual electromagnetism does not generate space-time curvature, because it is a linear field theory. The departure from linearity is measured by the parameter \(\chi\), called the _nonlinearity parameter_,
\[\chi:=\frac{E}{\epsilon}\frac{\partial\epsilon}{\partial E}. \tag{31}\]
Therefore we have a link with non-linear electrodynamics. When \(\chi\equiv 0\), we recover the linear electrodynamics regime, and the two polarizations (29) and (30) coincide.
After defining those tools, we have to define the refractive index perturbation.
\[n:=n_{0}+\delta n(x^{\prime},\rho), \tag{32}\]
and the line element measured in the co-moving frame,
\[ds^{2}:= c^{2}\frac{\gamma^{2}}{n^{2}}\left(1+\frac{nv}{c}\right)\left(1-\frac{nv}{c}\right)dt^{\prime 2}+2\gamma^{2}\frac{v}{n^{2}}\left(1-n^{2}\right)dt^{\prime}dx^{\prime}+\] \[-\gamma^{2}\left(1+\frac{v}{nc}\right)\left(1-\frac{v}{nc}\right)\left(dx^{\prime}\right)^{2}-d\rho^{2}-\rho^{2}d\varphi^{2}. \tag{33}\]
The line element (33) has cylindrical symmetry, \(\rho=\sqrt{y^{2}+z^{2}}\).
Any specific choice for the function \(\delta n\) gives rise to a specific metric in the aforementioned class. We point out that an isotropic refractive index in the laboratory frame corresponds to an anisotropic refractive index in the pulse frame, due to the length contraction associated with a boost in the x-direction. If the refractive index depends explicitly on \(\rho\), the metric is stationary but not static. This implies the emergence of the ergoregion,
the ergosurface and the event horizon. In this last case we have a more complex relation: the event horizon is characterized by \(g_{00}=0\), with \(v=\frac{c}{n}\). The event horizon under these conditions admits solutions in the range
\[\frac{1}{n_{0}+n}<\frac{v}{c}<\frac{1}{n_{0}}. \tag{34}\]
It is necessary to highlight a very important point: in a stationary but not static geometry there is a difference between the event horizon and the ergo-surface, i.e. they do not coincide. The event horizon in such a geometry lies at the antipodal points which solve
\[\delta n(x^{\prime},\rho=0):=\frac{c-n_{0}v}{v}, \tag{35}\]
while the ergo-surface is associated with the vanishing of the norm of the Killing vector \(\partial_{t}\).
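The horizon condition above is easy to probe numerically. The following sketch is purely illustrative and not taken from the paper: the Gaussian profile \(\delta n(x^{\prime})=\eta\,e^{-x^{\prime 2}}\) and the values \(n_{0}=1.45\), \(\eta=0.1\) are assumptions chosen by hand, with the peak value \(\eta\) playing the role of the quantity added to \(n_{0}\) in (34).

```python
# Illustrative sketch: locate the points on the axis rho = 0 where the
# horizon condition (35), delta_n = (c - n0*v)/v, is met for an assumed
# Gaussian refractive index perturbation delta_n(x') = eta*exp(-x'**2).
import numpy as np

c, n0, eta = 1.0, 1.45, 0.1
x = np.linspace(-4, 4, 2001)
delta_n = eta*np.exp(-x**2)

print("horizon window (34):", 1/(n0 + eta), "< v/c <", 1/n0)

v = 0.5*(1/(n0 + eta) + 1/n0)                 # a velocity inside the window
target = (c - n0*v)/v                         # Eq. (35)
crossings = x[np.where(np.diff(np.sign(delta_n - target)))[0]]
print("horizon locations x' ~", crossings)    # two antipodal crossings
```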
### Formulation of Einstein-Euler equations
Following Rezzolla [53], we shall consider a foliation of spacetime into hypersurfaces \(\Sigma_{t}\) of constant time, i.e. a foliation on which we can define a perpendicular unit vector, given as follows:
\[\Omega_{\mu}:=\nabla_{\mu}t. \tag{36}\]
We then can write the vector defined in (36) as proportional to another vector
\[\eta_{\mu}=A\nabla_{\mu}t, \tag{37}\]
and we can write the covector \(\eta_{\mu}=(A,0,0,0)\), so we calculate the scalar product \(\eta_{\mu}\eta^{\mu}\), and we find
\[\eta_{\mu}\eta^{\mu}=g^{\mu\nu}\eta_{\mu}\eta_{\nu}=g^{tt}A^{2}. \tag{38}\]
The vector \(\eta_{\mu}\) corresponds to the four-velocity measured by an observer; the timelike condition requires \(\eta_{\mu}\eta^{\mu}=-1\), therefore a normalization is required. This condition, when applied to (38), gives us
\[g^{tt}A^{2}=-1, \tag{39}\]
where we choose \(\alpha^{2}:=-\frac{1}{g^{tt}}\), and for a future-pointing \(\eta^{\mu}\) we have \(A=-\alpha\). Once the vector \(\eta_{\mu}\) is specified, we can define a metric on each hypersurface
\[\gamma_{\mu\nu}=g_{\mu\nu}+\eta_{\mu}\eta_{\nu} \tag{40}\]
and
\[\gamma^{\mu\nu}=g^{\mu\nu}+\eta^{\mu}\eta^{\nu}. \tag{41}\]
The metric \(\gamma_{ij}\) corresponds to the spatial part of \(\gamma_{\mu\nu}\), we require that \(\gamma^{ik}\gamma_{kj}=\delta^{i}_{j}\) to ensure that \(\gamma_{\mu\nu}\) and \(\gamma^{\mu\nu}\) correspond to inverse metrics of each other. This construction allows \(\eta_{\mu}\) and \(\gamma_{\mu\nu}\) to be two useful tools in the description of any four-vector.
Now we are going to set the projectors
\[\gamma^{\mu}_{\nu}:=g^{\mu\alpha}\gamma_{\alpha\nu}. \tag{42}\]
Using (42) and the definition (40), we can write \(\gamma^{\mu}_{\nu}=g^{\mu}_{\nu}+\mathcal{N}^{\mu}_{\nu}\). A simple inspection ensures that \(g^{\mu}_{\nu}+\mathcal{N}^{\mu}_{\nu}=\delta^{\mu}_{\nu}+\mathcal{N}^{\mu}_{\nu}\), where we set
\[\mathcal{N}^{\mu}_{\nu}:=\eta^{\mu}\eta_{\nu}, \tag{43}\]
and the projector acting on (43)
\[\gamma^{\alpha}_{\mu}\mathcal{N}^{\mu}_{\nu}=0, \tag{44}\]
where a generic four-vector can be written as
\[U^{\mu}=\gamma^{\mu}_{\nu}U^{\nu}+\mathcal{N}^{\mu}_{\nu}U^{\nu}. \tag{45}\]
One can easily notice that \(\gamma^{\mu}_{\nu}U^{\nu}=V^{\mu}\) has vanishing contravariant component \(V^{t}=0\). On the other hand \(V_{t}=g_{t\mu}V^{\mu}\neq 0\), and the scalar product is given by \(\eta^{\mu}\Omega_{\mu}=\frac{1}{A}\eta^{\mu}\eta_{\mu}=\alpha^{-1}\neq 1\), showing that the vector defined in (36) is not unitary. Although \(\eta^{\mu}\) is perpendicular to the spacelike hypersurface \(\Sigma_{t}\), it does not represent changes along the temporal coordinate, and it is not the direction of the time derivative. The vector that represents the temporal direction is
\[\mathbf{t}=\mathbf{e}_{t}=\alpha\vec{\eta}+\vec{\beta}, \tag{46}\]
where the vector \(\vec{\beta}\) is purely spatial, commonly called the shift vector, and satisfies
\[t^{\mu}\Omega_{\mu}=\alpha\eta^{\mu}\Omega_{\mu}+\beta^{\mu}\Omega_{\mu}= \frac{\alpha}{\alpha}=1. \tag{47}\]
The relation above implies the explicit construction of the Eulerian base
\[\eta_{\mu}=(-\alpha,0,0,0) \tag{48}\]
and
\[\eta^{\mu}=\frac{1}{\alpha}\left(1,\beta^{i}\right). \tag{49}\]
By Rezzolla [53] we write the acoustic line element
\[ds^{2}=-\left(\alpha^{2}-\beta_{i}\beta^{i}\right)dt^{2}+2\beta_{i}dx^{i}dt+ \gamma_{ij}dx^{i}dx^{j}, \tag{50}\]
which allows us to identify the vector \(\vec{\beta}\) with the velocity of a fluid. Calculating the scalar products with equations (48) and (49),
\[\eta_{\mu}\eta_{\nu}\eta^{\mu\nu}=\alpha^{2} \tag{51}\]
\[\eta^{\mu}\eta^{\nu}\eta_{\mu\nu}=\frac{1-\vec{\beta}\cdot\vec{\beta}}{\alpha ^{2}}. \tag{52}\]
We then realize that \(\alpha\) is associated with the propagation of signals and \(\vec{\beta}\) with the flow of a fluid. Finally we decompose a four-velocity vector using the Eulerian elements written above, (42) and (48):
\[\gamma^{i}_{\mu}u^{\mu}=u^{i}, \tag{53}\]
where we have the spatial component of a four-speed and
\[-\eta_{\mu}u^{\mu}=\alpha u^{t}, \tag{54}\]
the temporal component, which can be written, in the case \(\beta_{i}=0\), as
\[d\tau^{2}=\alpha^{2}dt^{2}. \tag{55}\]
which allows us to call the function \(\alpha\) the lapse function [53].
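The 3+1 quantities introduced above can be checked with a few lines of numpy. The sketch below is illustrative only (the numbers for the lapse and shift are arbitrary assumptions); the unit normal is obtained by raising the index of (48) with the full metric, and the normalization and the relation (47) are then verified numerically.

```python
# Illustrative numpy sketch of Eqs. (36), (46), (48) and (50): build the ADM
# metric from a lapse alpha, a shift beta_i and a flat spatial metric, then
# check eta_mu eta^mu = -1 and t^mu Omega_mu = 1.
import numpy as np

alpha = 1.3
beta_d = np.array([0.2, -0.1, 0.05])      # shift covector beta_i (illustrative)
gamma = np.eye(3)                         # flat spatial metric gamma_ij
beta_u = np.linalg.inv(gamma) @ beta_d    # beta^i

g = np.zeros((4, 4))                      # ADM line element, Eq. (50)
g[0, 0] = -(alpha**2 - beta_d @ beta_u)
g[0, 1:] = g[1:, 0] = beta_d
g[1:, 1:] = gamma
g_inv = np.linalg.inv(g)

Omega = np.array([1.0, 0, 0, 0])          # Omega_mu = grad_mu t, Eq. (36)
eta_d = np.array([-alpha, 0, 0, 0])       # Eq. (48)
eta_u = g_inv @ eta_d                     # unit normal with the index raised
t_vec = alpha*eta_u + np.concatenate(([0.0], beta_u))   # Eq. (46)

print("eta_mu eta^mu =", eta_d @ eta_u)   # -> -1
print("t^mu          =", t_vec)           # -> (1, 0, 0, 0)
print("t^mu Omega_mu =", t_vec @ Omega)   # -> 1
```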
## III The acoustic stationary but non static geometry linked to a preferred frame \(S_{v}\)
### An acoustic geometry
Now we are going to analyze Figure 1 from a kinematic point of view. It was shown in [26] that, from a point \(A\) located on the axis \(y^{\prime}\) of the referential frame \(S^{\prime}\), a photon is emitted, in principle, in the direction \(\overline{AO^{\prime}}\), as expected for those in the \(S^{\prime}\) referential frame. However, to our surprise the photon deviates in the direction \(\overline{AC}\). This occurs due to the existence of an unreachable minimum speed \(V\), relative to the ultra-reference frame \(S_{V}\). The electron emitting the photon has an uncertainty in its position in \(S^{\prime}\) due to this minimum velocity, since we cannot say that it is at complete rest. This uncertainty generates a "delocalization" \(\Delta x^{\prime}=\overline{O^{\prime}C}\). Hence, instead of only the segment \(\overline{AO^{\prime}}\), a right triangle \(AO^{\prime}C\) is formed in the proper frame \(S^{\prime}\), where it is not possible to find a set of points at rest. For more details of this formalism see also Refs. [24; 25; 27; 28; 29; 30].
From the hydrodynamic perspective we also observe that the trajectory (direction) of the excited state seen from \(S^{\prime}\) is deviated, because in fact there is no possibility of centering a rigid \(S^{\prime}\) coordinate system whose origin \(O^{\prime}\) is fixed precisely on the quasi-particle, since it cannot be localized at the origin. Therefore, there is a "delocalization" \(\Delta x^{\prime}=\overline{O^{\prime}C}>0\) from the origin, and this delocalization depends on the speed of \(S^{\prime}\) relative to the referential frame \(S_{V}\), i.e., \(\Delta x^{\prime}=\Delta x^{\prime}(v_{S^{\prime}/S_{V}})=\Delta x^{\prime}(v_{S^{\prime}})=\Delta x^{\prime}_{v}\). There is no possibility to undo \(\Delta x^{\prime}_{v}\). We can show that \(\Delta x^{\prime}\rightarrow\Delta x^{\prime}_{min}\) when \(v\to c\) and \(\Delta P\) increases, since \(\Delta v\rightarrow\Delta v_{max}=c\), and that \(\Delta x^{\prime}\rightarrow\Delta x^{\prime}_{max}\), the maximum delocalization, when \(v\to V\) (\(\Delta v\rightarrow\Delta v_{min}=V\)), in such a way that a certain uncertainty relation \(\Delta x^{\prime}_{v}\Delta v^{\prime}\) is maintained for the particle. Thus, it is clear that this space-time in the hydrodynamic perspective already has the fundamental ingredients to propose a relation with a quantum uncertainty, within an objective framework of reality that is, in the hydrodynamic perspective, essentially quantum.
In Figure 1, as we have \(\Delta x^{\prime}_{v}>0\), we will see two right triangles of a cone, as in relativity. They are:
1. \(\blacktriangle BO^{\prime}A\), seen from the external \(S_{0}\) referential frame, which is already separated;
2. \(\blacktriangle BO^{\prime}C\), obtained in the particle's own \(S^{\prime}\) referential frame, \(O^{\prime}C\) would be an uncertainty in location
Figure 1: The deviation of a photon in reference frames \(S^{\prime}\) and \(S_{0}\) with non null velocities relative to a preferred frame \(S_{V}\), measured by three clocks.
(delocation) of it in its own referential \(S^{\prime}\). As already said, we have \(O^{\prime}C=\Delta x^{\prime}(v)\), where
\[v=v_{S^{\prime}/S_{V}}. \tag{56}\]
Equation (56) represents the speed of an ordinary reference frame, measured from the preferred frame \(S_{V}\).
Now, let us extract the following relations from the two right triangles:
1. \(\blacktriangle BO^{\prime}A:(AB)^{2}=(O^{\prime}B)^{2}+(O^{\prime}A)^{2} \Rightarrow c^{2}\Delta t_{0}^{2}=(O^{\prime}B)^{2}+v^{2}\Delta t_{0}^{2}\Rightarrow\)
\[(O^{\prime}B)^{2}=c^{2}\Delta t_{0}^{2}-v^{2}\Delta t_{0}^{2}; \tag{57}\]
2. \(\blacktriangle BO^{\prime}C:(CB)^{2}=(O^{\prime}B)^{2}+(O^{\prime}C)^{2} \Rightarrow c^{2}\Delta{t^{\prime}}^{2}=(O^{\prime}B)^{2}+[\Delta x^{\prime}(v )]^{2}\Rightarrow\)
\[O^{\prime}B^{2}=(c\Delta t^{\prime})^{2}-[\Delta x^{\prime}_{v}]^{2}. \tag{58}\]
Comparing the relations (57) and (58), it follows that
\[c^{2}\Delta t^{2}-v^{2}\Delta t^{2}=c^{2}\Delta t^{\prime 2}-\Delta x^{ \prime 2}_{v}. \tag{59}\]
where \(\Delta\tau=\Delta t^{\prime},t_{0}=t\), we have
\[c^{2}\Delta t^{2}-v^{2}\Delta t^{2}=c^{2}\Delta\tau^{2}-\Delta x^{\prime 2} _{v}. \tag{60}\]
In Lorentz space-time, where we sit (proper) at the origin \(O^{\prime}\) of \(S^{\prime}\), we have \(\Delta x^{\prime}=0\), the delocalization loses its meaning, and we recover the usual frames \(S\) and \(S^{\prime}\). Therefore, it follows that
\[c^{2}\Delta t^{2}-v^{2}\Delta t^{2}=c^{2}\Delta\tau^{2}=\Delta S^{\prime 2}, \tag{61}\]
where \(\Delta\tau\) is the proper time and \(\Delta t\) is the improper one. In this sense, we define this fundamental parameter, this speed \(v\), expressed in the equation (56).
\[v:=\sqrt{c^{2}-\frac{c^{2}(\Delta\tau)^{2}-(\Delta x^{\prime})^{2}}{(\Delta t )^{2}}}. \tag{62}\]
We take (60) again and notice that
\[\left(c^{2}-v^{2}\right)\left(\frac{\Delta t}{\Delta\tau}\right)^{2}-c^{2}+ \left(\frac{\Delta x^{\prime}}{\Delta\tau}\right)^{2}=0, \tag{63}\]
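Before comparing (63) with the acoustic geodesic below, a quick numerical consistency check is useful. The sketch is illustrative only (\(c=1\) and the intervals are arbitrary assumptions): the velocity built from (62) satisfies the quadratic relation (63), as it must by construction.

```python
# Illustrative consistency check of Eqs. (62) and (63), with c = 1.
import numpy as np

c = 1.0
dt, dtau, dx = 2.0, 1.2, 0.7        # illustrative intervals with c*dtau > dx

v = np.sqrt(c**2 - (c**2*dtau**2 - dx**2)/dt**2)              # Eq. (62)
residual = (c**2 - v**2)*(dt/dtau)**2 - c**2 + (dx/dtau)**2   # Eq. (63)
print("v =", v, "  residual of (63):", residual)              # residual ~ 0
```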
The equation (63) is very similar to an acoustic geodesic [65? ]
\[\left(c_{s}^{2}-v^{2}\right)\left(\frac{dt}{ds}\right)^{2}-2v_{i}\left(\frac{ dx^{i}}{ds}\right)\left(\frac{dt}{ds}\right)+\left(\frac{dx}{ds}\right)^{2}=0. \tag{64}\]
When we compare (63) with (64), we get the following coincidence
\[2v\frac{\Delta x^{\prime}}{\Delta\tau}\frac{\Delta t}{\Delta\tau}\equiv c^{2}. \tag{65}\]
Returning to Equation (63)
\[\left(c^{2}-v^{2}\right)\left(\frac{\Delta t}{\Delta\tau}\right)^{2}-2v\frac {\Delta x^{\prime}}{\Delta\tau}\frac{\Delta t}{\Delta\tau}+\left(\frac{\Delta x ^{\prime}}{\Delta\tau}\right)^{2}=0. \tag{66}\]
The Equations (66) and (64) have identical formats, then we deduce for relation (65)
\[c^{2}=\left(\frac{c^{2}}{v}\frac{\Delta\tau}{\Delta t}-v\right)^{2}, \tag{67}\]
which can be considered a restatement of equation (62), but one that allows a better interpretation. Following References [55? ], we interpret equations (66) and (67) as the eikonal of a particle: the speed \(v\) plays the role of the fluid flow, while the ratios \(\frac{\Delta t}{\Delta\tau}\) and \(\frac{\Delta x^{\prime}}{\Delta\tau}\) play the role of the components of the tangent vector of the geodesic. In this way, the flow and the geometry remain connected.
The next step in the procedure is to consider the four-vector
\[\frac{\Delta{x^{\prime}}^{\mu}}{\Delta\tau}=\left(-c\frac{\Delta t}{\Delta \tau},\frac{c^{2}}{v}\frac{\Delta\tau}{\Delta t}\right). \tag{68}\]
Thus we have an apparent violation of Lorentz symmetry, which is actually a deformation of Lorentz invariance; the usual invariance is recovered by setting \(\Delta x^{\prime}=0\). Equations (67) and (68) indicate a prohibition: the speed \(v\) cannot be null, otherwise we would have a singularity. There is also an inconsistency: the speed \(\frac{\Delta x^{\prime}}{\Delta\tau}\) may become greater than the speed of light for certain values of \(v\), which forces the speed to assume a minimum value under certain conditions. This binding between the two speeds \(\frac{\Delta x^{\prime}}{\Delta\tau}\) and \(v\) is a real novelty.
To establish a causal relation between the two speeds, it is necessary to resort to the concept of the R.I.P. (32), where Equation (35) represents the relation between the event horizon and the R.I.P.
Substituting (35) into (65),
\[\frac{\Delta x^{\prime}}{\Delta\tau}=\frac{c}{2}\frac{\Delta\tau}{\Delta t} \left(\eta(x^{\prime},\rho=0)-n_{0}\right). \tag{69}\]
Therefore, if the R.I.P. at the origin of the cylindrical coordinates is equal to the refractive index of the medium, we have \(\frac{\Delta x^{\prime}}{\Delta\tau}=0\) and we return to a Lorentzian acoustic geometry. This result allows us to interpret the geometric structure we are dealing with as a kind of granular geometric structure, something similar to a vortex, since we have cylindrical geometry. We are thus dealing not with particles but with quasi-particles, and we can understand these little fluctuations as phonons. A causal link between the two speeds is established, because we associate the discrepancy with a quantity that involves an event horizon.
We write the new eikonal
\[h^{\mu\nu}\frac{\Delta{x^{\prime}}_{\mu}}{\Delta\tau}\frac{\Delta{x^{\prime}} _{\nu}}{\Delta\tau}=0, \tag{70}\]
In particular, we have an acoustic geometry associated with a tachyonic causal structure.
### Two-fluid Acoustic Geometry
Consider Equations (64) and (63). We now subtract (64) from (63):
\[\left(1-\frac{v^{2}}{c^{2}}\right)\left(\frac{\Delta t}{\Delta\tau}\right)^{ 2}+1-\frac{1}{c^{2}}\left(\frac{\Delta x^{\prime}}{\Delta\tau}\right)^{2}- \frac{c_{s}^{2}}{c^{2}}=\frac{1}{c^{2}}||\frac{d\vec{x}}{dt}-\vec{u}||^{2} \tag{71}\]
Here the speed \(\vec{v}\) is the velocity of the hypothetical superfluid, measured with respect to the preferred frame \(S_{V}\); the speed \(\vec{u}\) is measured in the laboratory; \(\frac{d\vec{x}}{dt}\) is the vector parallel to the flow line; and \(c_{s}\) is the speed of sound in an acoustic geometry that respects Lorentz symmetry. The left side thus contains the sound modified by the quantities measured in the preferred frame, while the right side contains the quantities measured in the laboratory; the left side describes a sound propagating in this exotic medium. If on the right side we set \(\frac{d\vec{x}}{dt}=\vec{u}\), it results in
\[\left(1-\frac{v^{2}}{c^{2}}\right)\left(\frac{\Delta t}{\Delta\tau}\right)^{ 2}+1-\frac{1}{c^{2}}\left(\frac{\Delta x^{\prime}}{\Delta\tau}\right)^{2}= \frac{c_{s}^{2}}{c^{2}} \tag{72}\]
The presence of the factor \(\frac{\Delta x^{\prime}}{\Delta\tau}\) induces sound propagation in a naturally flowing medium, in which case \(c_{s}<c\); if \(\frac{\Delta x^{\prime}}{\Delta\tau}=0\), then \(c_{s}=c\) and the waves are electromagnetic. Substituting (69) into (72), we have
\[\left(1-\frac{v^{2}}{c^{2}}\right)\left(\frac{\Delta t}{\Delta\tau}\right)^{2}+ 1-\frac{1}{c}\left(\frac{\Delta\tau}{\Delta t}\right)^{2}\left(n(x^{\prime}, \rho=0)-n_{0}\right)^{2}=\frac{c_{s}^{2}}{c^{2}}. \tag{73}\]
If \(c_{s}=c\), we have \(n(x^{\prime},\rho=0)=n_{0}\), which implies that the medium does not change: we have a continuous medium with the same refractive index value \(n_{0}\), i.e., laminar flow, and Minkowski space is reestablished. Substituting (35) into (73), we obtain
\[\left(1-\frac{v^{2}}{c^{2}}\right)\left(\frac{\Delta t}{\Delta\tau}\right)^{2} +1-\frac{c^{2}}{v^{2}}\left(\frac{\Delta\tau}{\Delta t}\right)^{2}=\frac{c_{s }^{2}}{c^{2}}. \tag{74}\]
Equation (74) tells us that the value \(v=0\) can be problematic, because we would have a singularity and, at the same time, a break in causality. We then understand that the term associated with \(\frac{\Delta x^{\prime}}{\Delta\tau}\) needs to have material characteristics, specific ones that can be related to a refractive index. Another consequence to be emphasized is that low velocity values imply negative values of \(c_{s}^{2}\), which implies the Jeans collapse condition [40].
### The Lorentz symmetry violation in an acoustic geometry
A tachyonic causal structure usually corresponds to particles faster than light. Here, however, we intend to generate a causal structure of acoustic tachyons, that is, particles faster than sound. In (63) we have that
\[\Delta x^{\prime}(v)=[w(v)]\Delta\tau. \tag{75}\]
The function \(w(v)\), introduced in Ref. [26], has the dimension of speed. We interpret \(\Delta x^{\prime}{}_{v}\) as an internal degree of freedom, so the object is better understood as a quasi-particle. The function \(w(v)\) must satisfy some properties:
1. dimension of speed;
2. \(w(v)<c\);
3. \(w(v)\) is linked to equation (69).
In physics, the search for fundamental principles, such as conservation laws and symmetries, is essential. Reciprocity [54; 55; 56] is a fundamental symmetry for random systems and for seismology. Surveys in seismology define a quantity called _slowness_ \(\left(\frac{1}{v}\right)\), see Ref. [56], associated with the granularity of the medium through which a p-wave propagates. In quantum turbulence [11] there exists a similar conception: the reconnection of two vortices is associated with a flux-line speed \(\delta(v)\approx\frac{1}{v}\). But \(w(v)\) has the dimension of speed, so a better proposal is \(w(v)=\frac{a}{v}\). In this case \(a\) has the dimension of a squared speed, and we write \(a=v_{0}^{2}\). Furthermore, \(w(v)\) is an opportunity to introduce the reciprocal symmetry \(w(v)=\frac{v_{0}^{2}}{v}\); to obey the causal structure we take \(v_{0}^{2}=bc\), where \(b\) is a speed. We choose \(b=V\) according to Equation (63), so we have
\[w(v)=\frac{cV}{v}. \tag{76}\]
We now substitute (76) back into (71):
\[\left(1-\frac{v^{2}}{c^{2}}\right)\left(\frac{\Delta t}{\Delta\tau}\right)^{2 }+1-\frac{V^{2}}{v^{2}}-\frac{c_{s}^{2}}{c^{2}}=\frac{1}{c^{2}}||\frac{d\vec{ x}}{dt}-\vec{u}||^{2} \tag{77}\]
Considering once again \(\frac{d\vec{x}}{dt}=\vec{u}\), we must now interpret each of the terms of the acoustic geometry presented in Equation (77).
1. The term \(\left(1-\frac{v^{2}}{c^{2}}\right)\left(\frac{\Delta t}{\Delta\tau}\right)^{2}+1\) corresponds to the usual Lorentz factor; the wave speed represented there is the speed of light \(c\).
2. The term \(\frac{c_{s}^{2}}{c^{2}}=\frac{1}{c^{2}}||\frac{d\vec{x}}{dt}-\vec{u}||^{2}\) corresponds to an acoustic Lorentz symmetry, in the sense proposed by Unruh [57] and Visser [58]. In this geometry we have acoustic tachyons, but these acoustic tachyons still correspond to time-like displacements of the usual Lorentz geometry. So we have a double-cone causal structure: an acoustic cone and a light cone.
3. The third term, \(1-\frac{V^{2}}{v^{2}}\), was introduced in Equation (76) and is associated with the speed parameter (62), so (62) is the velocity seen from the frame \(S_{V}\), which corresponds to a frame attached to the first excited state of a superfluid at the moment this excited state appears; in this sense, the velocity \(v\) is the flow velocity of the hypothetical superfluid. A superfluid is characterized by maintaining a flow of energy in the direction of the capillary, but when we deal with the universe there is no capillary, so this velocity corresponds to a rate of departure between the hypothetical states that would arise with increasing \(v\). We can thus consider the recession velocity of galaxies deduced by Hubble, \(v=HX\), where \(X\) would be the galaxies' position [59]. Therefore, when we apply this set of ideas, we imagine that galaxies are analogous to excited states and that the vacuum is a superfluid.
4. A totally new and different causal structure emerges here, just as particles faster than light describe space-like vectors. Here we have a situation where two events need to be connected by a vector corresponding to a velocity \(v>V\), so we have a new range of validity for the causal structure. For velocities less than \(V\), the concept of an event is not defined by the simple laminar flow of this cosmological fluid. We therefore write the validity range of the causal structure as \[V<v<c.\] (78) It is necessary to comment on the similarity of this causal structure with the idea of the R.I.P. described in Equation (32). By pointing out that space-time can go through phase transitions like a fluid, we are not bringing back the idea of a luminiferous aether, but rather conceiving a model of space-time that would allow different material properties throughout the cosmological ages. Indeed, such a hypothesis is supported by the studies of running constants [60].
5. The equation allows us to calculate a new time-lapse function, considering the speed of sound of the acoustic causal structure, \(c_{s}\), and the granular structure associated with \(V\): \[\frac{\Delta t}{\Delta\tau}=\frac{1}{c}\sqrt{\frac{c^{2}\left(1-\frac{V^{2}}{v^{2}}\right)-c_{s}^{2}}{1-\frac{v^{2}}{c^{2}}}}\] (79) This result (79) indicates the interaction of three geometries: the usual geometry of Minkowski space, but compared with the aether flow of velocity \(v\); a Lorentzian acoustic geometry in the sense of the works [57] and [58], represented by \(c_{s}\); and the granular term represented by the critical velocity, which is also compared with the aether flow of velocity \(v\). In the limit where the aether flows with \(v=V\), we have a situation where \(c^{2}\left(\Delta t\right)^{2}\equiv-\frac{c_{s}^{2}}{1-\xi^{2}}\left(\Delta\tau\right)^{2}\). It must be said that this result generalizes the works of Unruh [57] and Visser [58], constituting a more complex acoustic geometry than the one proposed in [57; 58]. So here we have the Lorentz symmetry violation.
The introduction of the term
\[\theta(v):=\sqrt{1-\frac{V^{2}}{v^{2}}} \tag{80}\]
modifies the geometry by introducing granularity, which is understood as the first excited state; hence a very slow particle, one that resisted the drag of the cosmic fluid and became very slow, would be dissolved.
When \(c_{s}=0\), from Equation (79) we obtain the factor
\[\psi(v)=\sqrt{\frac{1-\frac{V^{2}}{v^{2}}}{1-\frac{v^{2}}{c^{2}}}}, \tag{81}\]
\(\psi(v)\) is a deformed Lorentz factor, taking the fluid flow as the velocity parameter \(v\). So when we eliminate the critical velocity, we recover the usual Lorentz factor; see Refs. [24; 25; 26; 27; 28].
\[\psi(v)=\frac{\Delta t}{\Delta\tau}. \tag{82}\]
We get:
1. \(v\to c\Rightarrow\Delta t_{0}\gg\Delta\tau\) (time dilation: the time march in \(S^{\prime}\) is very slow compared to the time march in \(S_{0}\));
2. \(v=v_{0}\Rightarrow\Delta t_{0}=\Delta\tau\), here \[v_{0}=\sqrt{Vc}\] (83)
3. **Unprecedented:** \(v\to V\Rightarrow\Delta\tau\gg\Delta t_{0}\) (time contraction: the time march in \(S^{\prime}\) is much faster compared to the march in \(S_{0}\)); see the numerical sketch below.
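For concreteness, here is a small numerical sketch evaluating the deformed factor \(\psi(v)\) of Equation (81) in the three regimes listed above. The ratio \(V/c\) used here is an illustrative assumption, chosen much larger than the estimate quoted later in Equation (150) purely so that all three regimes are visible at double precision.

```python
import numpy as np

c = 1.0
V = 1.0e-6 * c               # illustrative minimum speed (not the paper's value)
v0 = np.sqrt(V * c)          # Eq. (83)

def psi(v):
    """Deformed Lorentz factor of Eq. (81): Delta_t / Delta_tau."""
    return np.sqrt((1.0 - V**2 / v**2) / (1.0 - v**2 / c**2))

for label, v in [("v -> V", 1.000001 * V), ("v = v0", v0), ("v -> c", 0.999999 * c)]:
    print(f"{label}: psi(v) = {psi(v):.3e}")
```

The output shows \(\psi\ll 1\) near \(v=V\) (time contraction), \(\psi=1\) at \(v=v_{0}\), and \(\psi\gg 1\) near \(v=c\) (the usual time dilation).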
### A clock hypothesis associated with the introduction of the privileged reference \(S_{v}\)
Once we have deduced (81), we can introduce a deformation of the Lorentz factor (82) and thus view the space-time interval in light of the existence of the privileged reference frame of this deformation. Introducing (80), we rewrite the space-time interval:
\[c^{2}\Delta\tau^{2}=\frac{1}{1-\frac{V^{2}}{v^{2}}}\left[c^{2}\Delta t^{2}-v^{2}\Delta t^{2}\right]. \tag{84}\]
In Ref. [26; 29] we see the space-time interval considering the existence of the preferred frame \(S_{V}\).
Manipulating (84) and following [26], we get
\[c^{2}\left(1-\frac{V^{2}}{v^{2}}\right)\left(\frac{d\tau}{dt}\right)^{2}+v^{2 }=c^{2}. \tag{85}\]
We assume then that the Equation (85) represents the march of time, where the parameter \(v\) is the measured speed relative to the preferred frame \(S_{V}\).
Figure 2: We have \(c=\sqrt{v_{t}^{2}+v^{2}}\) – see (77) and (85) – which represents the space-temporal speed of any particle (hypotenuse of the triangle \(=c\)). The novelty here is that such a structure of space-time implements the preferred frame \(S_{V}\) from Ref. [26].
We rearrange Equation (84):
\[c^{2}\Delta t^{2}-v^{2}\Delta t^{2}=c^{2}\Delta\tau^{2}-\frac{c^{2}V^{2}}{v^{2}} \Delta\tau^{2} \tag{86}\]
where we have used Equation (83). We can then invert (86), writing it as
\[c^{2}\left(1-\frac{v^{2}}{c^{2}}\right)\left(\frac{dt}{d\tau}\right)^{2}+\frac {v_{0}^{4}}{v^{2}}=c^{2} \tag{87}\]
Here we define a speed which we call the _march of space_, as opposed to the march of time \(v_{t}\) in Equation (87)
\[v_{trec}=c\sqrt{1-\frac{v^{2}}{c^{2}}}\frac{dt}{d\tau}. \tag{88}\]
Although the name 'march of space' may seem unusual, the idea is quite understandable when we think that, in our case, space corresponds to a fluid. We understand that Equation (88) can be interpreted as the propagation speed of a fluid. We can imagine that (87) gives the modulus of the tangent vector to the flow line, parameterized by \(\tau\); Equation (88) would then describe a fluid complementary to it, analogous to what is done in superfluids. We then write
\[v_{trec}^{2}+\frac{v_{0}^{4}}{v^{2}}=c^{2}. \tag{89}\]
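A short symbolic sketch (SymPy assumed) confirms that, with \(dt/d\tau=\psi(v)\) from (82) and \(v_{0}^{2}=Vc\) from (83), relation (89) holds identically.

```python
import sympy as sp

c, V, v = sp.symbols('c V v', positive=True)
v0 = sp.sqrt(V * c)                                   # Eq. (83)
psi = sp.sqrt((1 - V**2/v**2) / (1 - v**2/c**2))      # Eqs. (81)-(82): dt/dtau

v_trec = c * sp.sqrt(1 - v**2/c**2) * psi             # Eq. (88)
print(sp.simplify(v_trec**2 + v0**4/v**2 - c**2))     # Eq. (89); expected output: 0
```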
Therefore, associating the Equations (85) and (87), we can write a new equation
\[v_{trec}^{2}+v_{rec}^{2}=c^{2}. \tag{90}\]
Figure 3: The four cones proposed by Ref. [26] illustrate a unification of the Landau superfluid critical speed (167), the uncertainty principle and an acoustic causal structure. We see in these figures the causal structures associated with momentum-position and energy-time uncertainty.
Now we get a line element, which at the same time is related to \(d\tau\) and to \(dt\),
\[ds_{5}=cd\tau\sqrt{1-\frac{V^{2}}{v^{2}}}=cdt\sqrt{1-\frac{v^{2}}{c^{2}}}. \tag{91}\]
The Equation (91) is a new hypothesis of the deformed clock. Starting from (91) we can write
\[\frac{ds_{5}}{d\tau}=c\sqrt{1-\frac{V^{2}}{v^{2}}}, \tag{92}\]
and
\[\frac{ds_{5}}{dt}=c\sqrt{1-\frac{v^{2}}{c^{2}}}. \tag{93}\]
The interpretation suggested in our previous work [61] for (92) and (93) discusses the possibility of the two line elements corresponding to a Weyl geometry, addressing the clock hypothesis of Weyl from a new perspective [61]. This kinematics, proposed originally in Ref. [24], is totally new, because it binds the causal sectors so as to be reciprocal to each other. This reciprocity establishes a structure very similar to wave-particle duality (see Figure 3).
If we do \(\frac{ds_{5}}{dt}\frac{d\tau}{ds_{5}}\), we get
\[\psi=\frac{dt}{d\tau}=\frac{v_{trec}}{v_{t}}\propto E. \tag{94}\]
In this case \(v=\frac{dx}{dt}\) in Equations (85) and (76) corresponds to an internal structure of the quasi-particle. Such an internal structure, like the external one, can be understood as a fluid, also endowed with a causal structure, just like the outer fluid. In practice, the external and internal causal structures are associated by (92) and (93). We call (76) the reciprocal speed. Alternatively, if we choose to take \(\frac{ds_{5}}{d\tau}\frac{dt}{ds_{5}}\), we get:
\[\psi^{-1}=\frac{d\tau}{dt}=\frac{v_{t}}{v_{trec}}\propto time. \tag{95}\]
In this case \(v_{t}\) is given by (85) and \(v_{trec}\) by (88). Equations (94) and (95) indicate the presence of a reciprocal symmetry between the dispersion relation and the principle of energy-time uncertainty. We will study this issue further in the next section.
### Principle of Uncertainty in the presence of a hypothetical superfluid
The momentum of a massive quasi-particle with respect to \(S_{V}\), along the world line of the particle parameterized by \(\tau\), is given by:
\[P:=m_{0}c\left[\frac{dt}{d\tau}+\Sigma_{i=1}^{3}\frac{dx_{i}^{\prime}}{d\tau }\right]. \tag{96}\]
Here \(P\) is the momentum relative to the preferred frame \(S_{V}\). Equation (96), when compared to Equation (82), suggests a dispersion relation
\[P^{2}=\frac{m_{0}^{2}v^{2}-m_{0}^{2}V^{2}}{1-\frac{v^{2}}{c^{2}}}, \tag{97}\]
very similar to a roton [11]. As (94) already indicates, the temporal component of the momentum is related by reciprocal symmetry to the energy [55], so we have a strong indication that we are dealing with excited states of a superfluid. That indication gives a new meaning to the privileged reference frame \(S_{V}\). Therefore, the relation (97) is associated with a momentum difference between the first excited state \(P_{V}=m_{0}V\) of a superfluid
and the other states \(P_{+}=m_{0}v\). This perspective constitutes a different interpretation of Nassif's works [25; 26]. We consider that the frame \(S_{V}\) is a frame associated with the speed of the first excited state, which is being dragged by the superfluid. When this frame arises, the fluid itself moves with the minimal velocity; the next states will arise when the fluid exceeds other speed limits. Thus the difference \(\Delta P:=\sqrt{P_{+}^{2}-P_{V}^{2}}\) corresponds to the variation of the velocity of the fluid itself with respect to the critical velocity of the fluid, which generates the first excited state. In this perspective, we write:
\[P^{2}=\frac{\Delta P^{2}}{1-\frac{v^{2}}{c^{2}}}. \tag{98}\]
As an example, \(v{\rightarrow}V\) implies \(P\to 0\), and then we have \(\Delta x^{\prime}_{v}\rightarrow\infty\). The new hydrodynamic interpretation makes it possible to understand the delocalization of the quasi-particle: when the quasi-particle is slow, it simply dissolves into the superfluid. This makes the understanding of the preferred reference frame \(S_{V}\) more palpable, because the frame \(S_{V}\) actually becomes a conceptual reference frame. It is physical in the sense of corresponding to a frame moving with the speed at which the first excited state arises, hence to a virtual observer that propagates with the emergence speed of the first excited state even if this state is not present in the system.
Now we can calculate the following quantity
\[\Delta x^{\prime}_{v}P=\frac{v_{0}^{2}}{v}\Delta t\Psi^{-1}m_{0}v=(m_{0}v_{0}) (v_{0}\Delta t). \tag{99}\]
We can now consider the relations (76), (82) and (96), in addition to \(v_{0}=\sqrt{cV}\). Naturally, we have \(\Delta x^{\prime}_{v}P=0\) in special relativity: in Minkowski space-time \(V=0\), so \(v_{0}=0\) too. As an analogy, a non-inertial reference frame has the same state of motion for any acceleration relative to a Galilean reference frame; likewise, here we define the flux-line of the hypothetical superfluid (78) relative to a preferred frame \(S_{V}\).
In other words, we say that every flux-line is a frame system, and these frame systems agree with each other as to the given speed of the hypothetical superfluid with respect to the preferred frame \(S_{V}\). In this sense, the critical speed of the superfluid is an invariant. Since the speed \(V\) is inaccessible to any other flux-line in Minkowski space-time, \(\Delta v\) is the width of a quasi-particle moving with speed \(v\) as measured from the reference frame associated with the fluid, \(S_{V}\).
Let us consider a particle that has wavelength \(\lambda\) relative to a preferred frame \(S_{V}\) associated with a hypothetical fluid. It would be natural to think that the delocalization \((\Delta x^{\prime}_{v})_{0}\) would be given by
\[\lambda\sim(\Delta x^{\prime}_{v})_{0}=\frac{v_{0}^{2}}{v}(\Delta\tau)_{0}=\frac{v_{0}^{2}}{v}(\Delta t)_{0}\Psi^{-1}, \tag{100}\]
where \((\Delta t)_{0}\) will be calculated. Using the wavelength definition together with the momentum (97), we have
\[\lambda=\frac{h}{P}=\frac{h}{m_{0}v}\sqrt{\frac{1-\frac{v^{2}}{c^{2}}}{1- \frac{V^{2}}{v^{2}}}} \tag{101}\]
and with the Equation (96) we write:
\[\lambda=\frac{h}{m_{0}v}\Psi^{-1}\sim\frac{v_{0}^{2}}{v}(\Delta t)_{0}\Psi^{- 1}. \tag{102}\]
From that we get
\[m_{0}v_{0}^{2}(\Delta t)_{0}{\sim}h. \tag{103}\]
Finally, comparing the equation above, (103), with Equation (99), we have
\[(\Delta x^{\prime}_{v})_{0}P=m_{0}v_{0}^{2}(\Delta t)_{0}{\sim}h. \tag{104}\]
Alternatively
\[(\Delta x^{\prime}_{v})_{0}\Delta P=m_{0}v_{0}\lambda_{0}{\sim}h, \tag{105}\]
where we have used (98) for any flux-line. Equation (103) is the momentum uncertainty relation emerging from a space-time with the properties of a hypothetical superfluid.
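The product identity (99), and hence (104), can be checked symbolically; the sketch below (SymPy assumed) uses \(\Delta x^{\prime}_{v}\) from (75)-(76) and takes \(P=m_{0}v\Psi\), the positive root of (97).

```python
import sympy as sp

m0, c, V, v, dt = sp.symbols('m_0 c V v Delta_t', positive=True)
v0 = sp.sqrt(V * c)
Psi = sp.sqrt((1 - V**2/v**2) / (1 - v**2/c**2))     # Eq. (82)

dx_v = (v0**2 / v) * (dt / Psi)    # Eqs. (75)-(76) with Delta_tau = Delta_t / Psi
P = m0 * v * Psi                   # positive root of Eq. (97)

print(sp.simplify(dx_v * P - m0 * v0**2 * dt))   # Eq. (99); expected output: 0
```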
Now, in order to obtain the uncertainty relation for the energy, we write
\[m_{0}cV(\Delta t)_{0}{\sim}h. \tag{106}\]
We can write Equation (103) as
\[m_{0}c^{2}\Psi(\Delta t)\Psi^{-1}{\sim}h. \tag{107}\]
Inserting Equation (107) into Equation (95), we identify
\[E=m_{0}c^{2}\Psi. \tag{108}\]
Thus we write the relation of energy-time uncertainty
\[E(\Delta\tau)_{c}=m_{0}c^{2}(\Delta t){\sim}h, \tag{109}\]
or alternatively
\[E(\Delta\tau)_{c}=m_{0}c\lambda{\sim}h, \tag{110}\]
where \(\lambda=c(\Delta t)\).
We have considered (98) for any flux line. Considering the energy \(E\) relative to a preferred frame \(S_{V}\), we expect an uncertainty \(\Delta E\) for any flux line, with \(E\equiv\Delta E\), in the same sense as (98).
According to the Equation (110), which is the energy-time uncertainty principle, we rewrite the Equation (95) as
\[\psi^{-1}(v)=\frac{\Delta\tau}{\Delta t}, \tag{111}\]
therefore a relation of reciprocity between the clock hypothesis and the principle of energy-time uncertainty is established. The uncertainty relation becomes an interpretation associated with causality; this causality, which is associated with the emergence of a first excited state, breaks the laminar flow of a superfluid in the limit where we are close to the critical speed. Thus the first excited state breaks the laminar flow, but there is still a drag effect associated with the original laminar flow. We can say that the limit \(v{\to}V\) implies \(\Delta E\to 0\), where
\[\psi^{-1}_{v=V}(v)\rightarrow\infty. \tag{112}\]
The relation present in Equation (110) is a time-energy uncertainty emerging from space-time. In this case, space-time corresponds to a hypothetical superfluid [62].
### Dispersion Relation in the presence of the preferred frame \(S_{v}\)
Revisiting the Equations (82) and (95), we notice an interesting symmetry
\[\psi^{-1}(v)=\psi(w), \tag{113}\]
where \(w(v)\) is given by Equation (76); this is an indication that \(w(v)\) plays the role of the counterflow velocity [62]. From this perspective, Equation (87) also gains a hydrodynamic interpretation, as a typical superfluid structure in which the superfluid is modeled as two fluids. However, here the two fluids are complementary in terms of their causal structures.
The appearance of this new structure and its effects on the causal structure have implications, one of which concerns Lorentz symmetry. Revisiting (108), we have a relation between energy and causal
structure; this relation was addressed initially by Equations (95) and (94), knowing that \(\Delta E\equiv E|_{S_{V}}\). Thus we can write
\[\frac{\Delta t}{\Delta\tau}=\frac{E|_{S_{V}}-E_{0}}{m_{0}c^{2}}=\frac{E|_{S_{V}}- E_{0}}{E_{0}}. \tag{114}\]
The meaning of Equation (114) is very important because it is virtually identical to a result obtained by Zloshchatiev [63]. According to Zloshchatiev's work [63] we have a Lorentz invariance violation (LIV), expressed by the line element being multiplied by a function of the energy in Equations (92) and (93). The difference from Zloshchatiev's work is that he considers \(E_{0}\) as the energy scale of quantum-gravitational interactions, while we consider \(E_{0}\) as the rest energy. It has been demonstrated that the introduction of a minimal speed breaks Lorentz symmetry [20; 21; 22; 23]. However, we have not yet established the size of the Lorentz symmetry shift; certainly the energy function (108) is a conformal function, as already shown in [20; 22].
Now, we consider the Equations (97) and (108) and we write
\[P^{2}-\frac{E^{2}}{c^{2}}=\frac{p^{2}-m_{0}^{2}V^{2}}{1-\frac{v^{2}}{c^{2}}}- m_{0}^{2}c^{2}\frac{1-\frac{V^{2}}{v^{2}}}{1-\frac{v^{2}}{c^{2}}}. \tag{115}\]
A transformation of the relativistic energy \(E\) and relativistic momentum \(P\) can be made, \(E\rightarrow\hbar\omega\) and \(P\rightarrow\hbar\kappa\). We then rewrite (115) as
\[P^{2}-\frac{E^{2}}{c^{2}}=\hbar^{2}\kappa^{2}-\hbar^{2}\omega^{2}\left(1+\frac {c^{2}V^{2}}{v^{2}}\right)-\frac{m_{0}^{2}V^{2}}{1-\frac{v^{2}}{c^{2}}}, \tag{116}\]
where (32) is a refractive index perturbation [51].
We recall Equation (69) and relate it to (32):
\[\frac{cV}{v}=\frac{c}{2}\frac{\Delta\tau}{\Delta t}\left(\eta(x^{\prime},\rho =0)-n_{0}\right). \tag{117}\]
This establishes a lump: a sector where small fluctuations differentiate the fluid inside the well from the outer fluid.
The term \(-\frac{m_{0}^{2}V^{2}}{1-\frac{v^{2}}{c^{2}}}\) is a part of the maxon (97); the important point to be highlighted is that \(\frac{m_{0}^{2}V^{2}}{1-\frac{v^{2}}{c^{2}}}\) can be infinite. Baccetti _et al._ [54] survey a dispersion relation and build a kinematic relation which they call an _isotropic Aether-Frame_. We have
\[E^{2}-||\vec{p}||^{2}c^{2}=E_{0}^{2}(\tilde{F}), \tag{118}\]
here \(\tilde{F}\) is an inertial frame. For Baccetti _et al._ we are dealing with an "absolute motion", where the right-hand side of (118) is given by \(E_{0}^{2}(v)\) and \(v\) is measured from the aether frame. The energy \(E(v)\) is a usual function \(\eta(V_{aether},V)\). This constitutes a minimalist violation of Lorentz symmetry, actually a "deformation" of Lorentz symmetry.
We compare (118) and (116) and identify the equivalence
\[E_{0}^{2}\equiv\frac{m_{0}^{2}V^{2}}{1-\frac{v^{2}}{c^{2}}}. \tag{119}\]
Therefore, we understand the sector associated with (119) as a kind of aether: a kind of energy for a privileged reference frame, like what we have been calling the reference frame \(S_{V}\) in the sections above.
## IV A two-fluid model in the presence of the preferred frame \(S_{v}\)
### The Lorentz invariance violation and deformed kinematic transformations
After the study of the previous sections and the deduction of the deformed Lorentz factor (82), we are allowed to write the kinematic transformations, similar to Lorentz transformations, which relate the references as follows
\(S^{\prime}\to S_{V}\):
\[dt^{\prime}=\psi(v)\left[dt+\left(V-v\right)\frac{dx}{c^{2}}\right], \tag{120}\]
and
\[dx^{\prime}=\Psi(v)\left[dx+\left(V-v\right)dt\right]. \tag{121}\]
Here \(S^{\prime}\) is any reference frame and \(S_{V}\) is the privileged frame proposed by Nassif in a series of works that begins with [24], the latest being [30], all of them already cited in the sections above. We also have the inverse transformations \(S_{V}\to S^{\prime}\):
\[dx=\Psi(v)\left[dx^{\prime}+\left(v-V\right)dt^{\prime}\right], \tag{122}\]
\[dt=\Psi(v)\left[dt^{\prime}+\left(v-V\right)\frac{dx^{\prime}}{c^{2}}\right]. \tag{123}\]
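As a sanity check, the sketch below (SymPy assumed) verifies that in the limit \(V\to 0\), where \(\psi(v)\) reduces to the usual Lorentz factor \(\gamma\), the deformed transformations (120) and (121) reduce to the standard Lorentz boost.

```python
import sympy as sp

c, V, v, dt, dx = sp.symbols('c V v dt dx', positive=True)
psi = sp.sqrt((1 - V**2/v**2) / (1 - v**2/c**2))     # Eq. (82)

dt_p = psi * (dt + (V - v) * dx / c**2)    # Eq. (120)
dx_p = psi * (dx + (V - v) * dt)           # Eq. (121)

gamma = sp.sqrt(1 / (1 - v**2/c**2))       # standard Lorentz factor
print(sp.simplify(dt_p.subs(V, 0) - gamma * (dt - v*dx/c**2)))  # expected: 0
print(sp.simplify(dx_p.subs(V, 0) - gamma * (dx - v*dt)))       # expected: 0
```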
The transformations (120), (121), (122) and (123) do not contemplate the existence of a perpendicular direction. In order to proceed with our study of superfluids, we need to write broader transformations involving a larger number of dimensions. The transformations deduced in [28] satisfy this need.
Let's therefore define the following perpendicular transformations, the one that relates the references \(S^{\prime}\to S_{V}\)
\[dx_{\perp}=\Psi^{-1}(v)dx^{\prime}, \tag{124}\]
and the reverse transformation \(S_{V}\to S^{\prime}\)
\[dx^{\prime}_{\perp}=\Psi(v)Vdt. \tag{125}\]
Having written Equations (120), (121), (124) and (125), we write the matrices associated with the transformations:
\[\begin{bmatrix}x^{\prime}_{\perp}\\ x^{\prime}_{\parallel}\\ x^{\prime}_{0}\end{bmatrix}=M_{3\times 3}(S_{V}\to S^{\prime})\begin{bmatrix}x_{ \perp}\\ x_{\parallel}\\ x_{0}\end{bmatrix}^{S_{V}}. \tag{126}\]
We conceive that the relation (126) can be undone by means of a matrix such that
\[\begin{bmatrix}x_{\perp}\\ x_{\parallel}\\ x_{0}\end{bmatrix}^{S_{V}}=M_{3\times 3}(S^{\prime}\to S_{V})\begin{bmatrix}x^{ \prime}_{\perp}\\ x^{\prime}_{\parallel}\\ x^{\prime}_{0}\end{bmatrix}. \tag{127}\]
The matrix of transformations written by Nassif [28]
\[M_{2\times 2}(S_{V}\to S^{\prime})=\Psi(v)\begin{bmatrix}1&i\frac{v}{c}\left(1- \frac{V}{v}\right)\\ -i\frac{v}{c}\left(1-\frac{V}{v}\right)&1\end{bmatrix}, \tag{128}\]
provides an idea of what the matrix \(M_{3\times 3}(S_{V}\to S^{\prime})\) might be, as the same algebraic and kinematic properties need to be respected. Our goal here is to include an element perpendicular to the translation, hence a non-translational term. In this sense, we have no terms outside the diagonal apart from those already present in (128), because there is no need to represent translations beyond the off-diagonal terms \(M_{12}=-M_{21}=i\frac{v}{c}\left(1-\frac{V}{v}\right)\) of (128). Thus we can set \(M_{13}=M_{31}=M_{12}=M_{21}=0\) and write the matrix
\[M_{3\times 3}(S_{V}\to S^{\prime})=\Psi(v)\begin{bmatrix}1&0&0\\ 0&1&i\frac{v}{c}\left(1-\frac{V}{v}\right)\\ 0&-i\frac{v}{c}\left(1-\frac{V}{v}\right)&1\end{bmatrix}. \tag{129}\]
It is trivial to find the inverse of \(M_{3\times 3}(S_{V}\to S^{\prime})\), we write
\[M_{3\times 3}(S^{\prime}\to S_{V})=\Psi^{-1}(v)\begin{bmatrix}1&0&0\\ 0&\frac{1}{1-\frac{v^{2}}{c^{2}}\left(1-\frac{V}{v}\right)^{2}}&-i\frac{\frac{v}{c}\left(1-\frac{V}{v}\right)}{1-\frac{v^{2}}{c^{2}}\left(1-\frac{V}{v}\right)^{2}}\\ 0&i\frac{\frac{v}{c}\left(1-\frac{V}{v}\right)}{1-\frac{v^{2}}{c^{2}}\left(1-\frac{V}{v}\right)^{2}}&\frac{1}{1-\frac{v^{2}}{c^{2}}\left(1-\frac{V}{v}\right)^{2}}\end{bmatrix}. \tag{130}\]
The composition of relations (126) and (127) implies that
\[M_{3\times 3}(S_{V}\to S^{\prime})M_{3\times 3}(S^{\prime}\to S_{V})=\mathbf{I}_{3\times 3} \tag{131}\]
where \(\mathbf{I}_{3\times 3}\) is the identity matrix. This implies that the set of matrices
\[\left[\mathbf{I}_{3\times 3},M_{3\times 3}(S_{V}\to S^{\prime}),M_{3\times 3}(S^{ \prime}\to S_{V})\right], \tag{132}\]
forms a group, where \(\mathbf{I}_{3\times 3}\) is the neutral element and the two matrices \(M_{3\times 3}(S_{V}\to S^{\prime})\) and \(M_{3\times 3}(S^{\prime}\to S_{V})\) are inverse to each other. Given the simplicity of this group, it is trivial to note that it is Abelian. However, it is necessary to comment that the frame \(S_{V}\) is unique; this implies that \(X^{\prime}=M_{3\times 3}(S_{V}\to S^{\prime})X_{V}\) is an operation defined in the group, but \(M_{3\times 3}(S_{V}\to S)X^{\prime}=M_{3\times 3}(S_{V}\to S)M_{3\times 3}(S_{V}\to S^{\prime})X_{V}\) is not an operation defined in the group. The existence of another equivalent frame would imply the destruction of this structure. The same applies to a transformation that would do \(S^{\prime}\to S_{V}\to S_{V}\); the only possible operation in this case is the identity \(I_{3\times 3}\), which does \(S_{V}\to S_{V}\). In addition, the set of matrices (132) does not answer the question of transformations between two distinct reference frames \(S\) and \(S^{\prime}\). As Nassif proposed in [27] and [28], the set of kinematic transformations would not correspond to a group. This conclusion of Nassif is based on the studies of Baccetti _et al._ [55; 64].
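The inverse pair (129)-(130) and the closure property (131) can be verified directly; the sketch below (SymPy assumed) builds both matrices with \(a=\frac{v}{c}\left(1-\frac{V}{v}\right)\) and checks that their product is the identity.

```python
import sympy as sp

v, c, V = sp.symbols('v c V', positive=True)
Psi = sp.sqrt((1 - V**2/v**2) / (1 - v**2/c**2))     # Eq. (82)
a = (v/c) * (1 - V/v)

M_fw = Psi * sp.Matrix([[1, 0, 0],
                        [0, 1, sp.I*a],
                        [0, -sp.I*a, 1]])            # Eq. (129)

D = 1 - a**2
M_bw = (1/Psi) * sp.Matrix([[1, 0, 0],
                            [0, 1/D, -sp.I*a/D],
                            [0, sp.I*a/D, 1/D]])     # Eq. (130)

print(sp.simplify(M_fw * M_bw - sp.eye(3)))          # expected: zero matrix, Eq. (131)
```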
Relations (120), (121) and (124) allow us to write compositions of velocities between the references \(S_{V}\to S^{\prime}\), so we write the parallel component
\[v^{\prime}_{\parallel}=\frac{v_{\parallel}-u+V}{1+\frac{v_{\parallel}}{c^{2}} (V-u)}. \tag{133}\]
From now on we will just write \(v_{\parallel}\) as \(v\).
The perpendicular component is
\[v^{\prime}_{\perp}=\frac{V}{1+\frac{v}{c^{2}}(V-u)}. \tag{134}\]
Another possible simplification is to adopt \(u=v\), which is equivalent to considering only the two reference frames: \(S^{\prime}\), the reference frame on the quasi-particle world line; and, the preferred frame, \(S_{V}\). Then we rewrite (133)
\[v^{\prime}_{\parallel}=\frac{V}{1+\frac{(V-v)v}{c^{2}}}, \tag{135}\]
and (134)
\[v^{\prime}_{\perp}=\frac{V}{1+\frac{(V-v)v}{c^{2}}}. \tag{136}\]
Equations (135) and (136) are associated with the propagation velocity of any excited quasi-particle in the privileged reference frame, and we fall into the first excited state when we set \(v=V\), which means the quasi-particle is being dragged by this exotic fluid (a reduction checked symbolically in the sketch below). This also makes it clear that this exotic fluid is isotropic: the components of the velocity are identical in all directions, as for a wave, a fluctuation that propagates in the fluid which serves as the privileged reference and which already shows a property of superfluidity, associated with the term
\[V-v. \tag{137}\]
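The reduction mentioned above can be confirmed with a one-line symbolic check (SymPy assumed), substituting \(u=v\) into the composition law (133) and comparing with (135).

```python
import sympy as sp

v, u, V, c = sp.symbols('v u V c', positive=True)

v_par = (v - u + V) / (1 + v*(V - u)/c**2)                      # Eq. (133)
print(sp.simplify(v_par.subs(u, v) - V/(1 + (V - v)*v/c**2)))   # Eq. (135); expected: 0
```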
We write the vector \(\vec{v}^{\prime}\), seen from the referential \(S^{\prime}\)
\[\vec{v}^{\prime}=\begin{bmatrix}v^{\prime}_{\parallel}\\ v^{\prime}_{\perp}\end{bmatrix}. \tag{138}\]
We can also find the parallel and perpendicular velocities as viewed from \(S_{V}\); these transformations are important because they are based on the reference frame of the superfluid in which the currents are constructed [11; 36; 62; 65]. We write below the speeds seen in the frame \(S_{V}\):
\[v_{\parallel}=\frac{v+u-V}{1+\frac{v}{c^{2}}(u-V)}, \tag{139}\]
\[v_{\perp}=\frac{\left[1-\frac{v^{2}}{c^{2}}\left(1-\frac{V^{2}}{v^{2}}\right)^ {2}\right]V}{1+\frac{v}{c^{2}}(u-V)}, \tag{140}\]
where
\[u=v_{S/S_{V}}. \tag{141}\]
Now we write the vector
\[\vec{v}_{S_{V}}=\begin{bmatrix}v_{\parallel}\\ v_{\perp}\end{bmatrix}_{S_{V}}. \tag{142}\]
### A concrete example: Vortex
Visser's work [67] proposes a flow with a vortex profile:
\[\vec{v}=\frac{A\hat{r}+B\hat{\theta}}{r}. \tag{143}\]
We consider (143) in relation to the preferred frame \(S_{V}\), starting from (139):
\[\vec{v}_{\parallel}(S_{V})=\frac{\sqrt{\frac{A^{2}+B^{2}}{r^{2}}}+u-V}{1+\sqrt {A^{2}+B^{2}}\frac{(u-V)}{rc^{2}}}\mathbf{e} \tag{144}\]
where \(\mathbf{e}\) is related to Equation (143) through \(\mathbf{e}=\frac{\vec{v}}{||\vec{v}||}\). Likewise, for (140) we write
\[v_{\perp}(S_{V})=\left[\frac{1-\frac{A^{2}+B^{2}}{r^{2}c^{2}}\left(1-\frac{V^{ 2}r^{2}}{A^{2}+B^{2}}\right)}{1+\sqrt{\frac{A^{2}+B^{2}}{r^{2}}}\frac{u-V}{c^ {2}}}\right]V\mathbf{e}_{\perp} \tag{145}\]
when the flux line is given by (143). The line element associated with (143) is
\[ds^{2}=-c_{s}^{2}dt^{2}+\left(dr-\frac{A}{r}dt\right)^{2}+\left(rd\theta-\frac{B}{r}dt\right)^{2}. \tag{146}\]
The limits of (144) and (145) are
\[\lim_{r\to 0}v_{\parallel}\rightarrow\frac{c^{2}}{u-V} \tag{147}\]
and
\[\lim_{r\to 0}v_{\perp}\rightarrow\frac{c^{2}}{u-V}. \tag{148}\]
Without incorporating the effects of the preferred frame \(S_{V}\), the kinematic transformations do not clearly express the kinematic and geometric role of the first excited state of the superfluid. This is made clear in Equations (144) and (145): both contain the drag effect of the vortex by the fluid in laminar flow with minimal speed \(V\). The two equations present a second observer \(S_{u}\), expressed by the speed \(u\); for simplicity we assume a single excited state, in which case \(u\equiv v\). We then rewrite (144) and (145):
\[\vec{v}_{\parallel}(S_{V})=\frac{2\sqrt{\frac{A^{2}+B^{2}}{r^{2}}}-V}{1+\frac {\sqrt{A^{2}+B^{2}}}{rc^{2}}\left(\frac{\sqrt{A^{2}+B^{2}}}{r}-V\right)}\mathbf{ e}. \tag{149}\]
We see that in the vicinity of the center of the vortex the speed has an isotropic behavior. The other limit, \(r\rightarrow\infty\), gives \(v_{\parallel}\to u-V\), so the parallel component carries the offset related to \(S_{u}\), while the other component gives \(v_{\perp}\rightarrow\xi^{2}V\). The dimensionless constant \(\xi\) is defined, and its physical meaning better explained, in Ref. [23] as
\[\xi=\frac{V}{c}=\sqrt{\frac{Gm_{p}m_{e}}{4\pi\epsilon_{0}}}\frac{q_{e}}{\hbar c}, \tag{150}\]
where \(G\) is the gravitational constant, \(m_{p}\) is the proton mass, \(m_{e}\) and \(q_{e}\) are mass and charge of electron. In the same work [23] it is estimated as \(\xi=1.5302\times 10^{-22}\).
It is remarkable that for small values of \(r\) the denominator goes to infinity faster than the numerator, bringing the expression to zero and thus generating a natural cutoff. The parameters \(A\) and \(B\) are responsible for the causal structure, so let us look for a relation giving the radius of the ergoregion, better said, the distance out to which the effects of aether drag are felt:
\[r_{0}=2\frac{\sqrt{A^{2}+B^{2}}}{V}, \tag{151}\]
it is notable that this effect has a much greater range than the one predicted by Visser [67], \(2\frac{\sqrt{A^{2}+B^{2}}}{c}\); curiously, the ratio between the two effects is of the order of the constant \(\xi\).
\[v_{\perp}(S_{V})=\left[\frac{1-\frac{A^{2}+B^{2}}{r^{2}c^{2}}\left(1-\frac{V^ {2}r^{2}}{A^{2}+B^{2}}\right)}{1+\frac{\sqrt{A^{2}+B^{2}}}{rc^{2}}\left(\frac {\sqrt{A^{2}+B^{2}}}{r}-V\right)}\right]V\mathbf{e}_{\perp} \tag{152}\]
Similarly to (149), we see a cutoff for \(r\to 0\), which indicates that this would be a serene region, like the eye of a hurricane; but the existing velocity would be isotropic, providing a type of expansion with velocity linear in \(r\), so we have an effect similar to the Hubble parameter [66].
Taking \(\vec{v}_{\parallel}-\vec{v}_{\perp}\) and considering the limit, we have
\[\lim_{r\rightarrow\infty}||\vec{v}_{\parallel}(S_{V})-\vec{v}_{\perp}(S_{V})|| \to V, \tag{153}\]
consisting of a region where the vortex influence ceases and we have only the laminar flow of the fluid.
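To give a feeling for the scales involved, the sketch below evaluates the aether-drag radius (151) and Visser's corresponding radius for illustrative (assumed) vortex parameters \(A\) and \(B\); only the value of \(\xi\) is taken from Equation (150).

```python
import numpy as np

c = 3.0e8                      # m/s
xi = 1.5302e-22                # Eq. (150)
V = xi * c                     # minimum speed
A, B = 1.0e3, 2.0e3            # illustrative vortex parameters, m^2/s

r0_drag = 2.0 * np.sqrt(A**2 + B**2) / V     # Eq. (151)
r0_visser = 2.0 * np.sqrt(A**2 + B**2) / c   # radius quoted from Visser [67]

print(f"r0 (aether drag, Eq. 151): {r0_drag:.3e} m")
print(f"r0 (Visser):               {r0_visser:.3e} m")
print(f"ratio r0_visser / r0_drag: {r0_visser / r0_drag:.3e}  (= xi)")
```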
### The usual Lorentz symmetry in a superfluid
We now write the usual Lorentz transformations for a superfluid:
\[v^{\mu}=(\tilde{\mu},\vec{\nabla}\sigma)=\tilde{\mu}\left(1,\vec{v}_{s}\right), \tag{154}\]
and a perpendicular fluid
\[v^{\mu}_{\parallel}=\tilde{s}\left(1,\vec{v}_{n}\right). \tag{155}\]
We have established a pattern
\[\tilde{\mu}=\gamma_{s}\mu, \tag{156}\]
implying in a Lorentz factor
\[\gamma_{s}=\frac{1}{\sqrt{1-\frac{v_{s}^{2}}{c^{2}}}}. \tag{157}\]
The same for \(\tilde{s}\)
\[\tilde{s}=\gamma_{n}s, \tag{158}\]
\[\gamma_{n}=\frac{1}{\sqrt{1-\frac{v_{n}^{2}}{c^{2}}}}. \tag{159}\]
The scalar product between (154) and (155)
\[v_{s}^{\mu}v_{\mu}^{n}=-c^{2}y=-c^{2}\tilde{\mu}\tilde{s}\left(1-\frac{ \vec{v}_{s}\cdot\vec{v}_{n}}{c^{2}}\right)=-c^{2}\mu s\gamma_{n}\gamma_{s} \left(1-\frac{\vec{v}_{s}\cdot\vec{v}_{n}}{c^{2}}\right), \tag{160}\]
with Lorentz transformations for the addition of \(\vec{v}_{s}\) and \(\vec{v}_{n}\)
\[\vec{v}_{ns}=\frac{\vec{v}_{n}-\vec{v}_{s}}{1-\frac{\vec{v}_{s}\cdot\vec{v}_{ n}}{c^{2}}}, \tag{161}\]
Thus the squared modulus \(||\vec{v}_{ns}||^{2}\) of the relative velocity \(\vec{v}_{n}-\vec{v}_{s}\) is determined:
\[\frac{v_{ns}^{2}}{c^{2}}=1-\frac{\mu^{2}s^{2}}{y^{4}}. \tag{162}\]
Figure 4: The Vortex Geometry and Ergoregions [67].
Here a ratio is established between the squared modulus of the difference between the perpendicular and circulating speeds, \(||\vec{v}_{s}-\vec{v}_{n}||^{2}\), and the entropy of the system, as a Lorentzian invariant; but there is no lower limit for the speeds of the excited states, and there is no information on how the excited states behave in the range where the superfluid begins to generate its first turbulent states. Here lies the advantage of the treatment exposed in Equations (139) and (140), since the transformations (139) and (140) are more complete than the transformation (162). In future work we may integrate the conceptions set out in (162) with thermodynamic concepts, thus obtaining a thermodynamics that takes into account the existence of this superfluid. We also add that Equation (162) is a frame transformation, a composition of velocities, as is (153), whereas (153) implies a Lorentz invariance violation (114).
### Kinematic symmetry compatible with the Landau criterion
Since we have the dispersion relation (108), we can consider the critical speed generated by it, applying the Landau criterion [68; 11; 62]. So we rewrite (108):
\[E(v)=m_{0}c^{2}\sqrt{\frac{1-\frac{V^{2}}{v^{2}}}{1-\frac{v^{2}}{c^{2}}}}. \tag{163}\]
Following [62], we consider (108), which we rewrite
\[E(v)=m_{0}c^{2}\sqrt{\frac{1-\frac{V^{2}}{v^{2}}}{1-\frac{v^{2}}{c^{2}}}}+ \vec{p}\cdot\vec{v}_{n}. \tag{164}\]
In (164), \(\vec{v}_{s},\vec{v}_{n}\) are defined as in (139), and \(v_{n}\) is defined as in [62], like a perpendicular wind, although here it is more like an induced velocity; the expression "induced" reflects an analogy with the Faraday-Lenz effect. The momentum \(\vec{p}\) is given by the relation (97). We will define another dispersion relation:
\[\epsilon(v,p)=m_{0}c^{2}\sqrt{\frac{1-\frac{V^{2}}{v^{2}}}{1-\frac{v^{2}}{c^{2 }}}}+\vec{p}\cdot(\vec{v}_{n}-\vec{v}_{s})\,. \tag{165}\]
Equation (164) is associated with its excitations. Thinking of a dispersion relation linear in the momentum, \(\epsilon(p)\propto p\), we consider the dispersion relation (165) in the limit where \(p\to 0\) in (96), so that \(v\to V\). This implies that \(\vec{v}_{n}+\vec{v}_{s}\to V\), so (165) will tend to
\[\epsilon(p)\to Vp. \tag{166}\]
This represents the structure of a phonon; as this phonon is associated with the causal structure (85), we can call it an acoustic tachyon [69]. This indicates that, by minimizing (165), we get
\[min\frac{\epsilon(p)}{p}=V, \tag{167}\]
and we find a critical speed [11; 62] for the hypothetical gravitational superfluid. Nassif, in previous works [27; 28], had already reached a result associating the limit (166) with the cosmological constant. We find, within this new kinematic structure established by Nassif in [25; 26] and other works, that the cosmological constant corresponds to the first excited state of a gravitational vacuum, corresponding directly to Nassif's result in [28].
### Conversion between a Eulerian observer and a preferred observer \(S_{v}\)
Comparing with the equations of the preferred frame \(S_{V}\), the first conclusion we get is
\[\alpha\equiv\psi(v), \tag{168}\]
the second is the immediate comparison between (54) and (85), which reaffirms (168). Understanding the role of \(\psi(v)\) as a lapse function, we can write the Eulerian basis in terms of this new lapse function, via Equations (48) and (49):
\[\tilde{\eta}_{\mu}=\sqrt{\frac{1-\frac{V^{2}}{v^{2}}}{1-\frac{v^{2}}{c^{2}}}} \left(-1,0,0,0\right), \tag{169}\]
and
\[\tilde{\eta}^{\mu}=\sqrt{\frac{1-\frac{v^{2}}{c^{2}}}{1-\frac{V^{2}}{v^{2}}}} \left(1,\beta^{i}\right). \tag{170}\]
It is easy to notice that the scalar product \(\tilde{\eta}_{\mu}\tilde{\eta}^{\mu}=-1\) implies that \(\tilde{\eta}_{\mu}\) and \(\tilde{\eta}^{\mu}\) remain unit vectors, thereby giving
\[g^{tt}=\psi(v). \tag{171}\]
We now postulate
\[\phi=c^{2}\left(\psi(v)-1\right). \tag{172}\]
Building projectors
\[\tilde{\gamma}_{\mu\nu}=\eta_{\mu\nu}+\tilde{\eta}_{\mu}\tilde{\eta}_{\nu}, \tag{173}\]
and
\[\tilde{\gamma}^{\mu\nu}=\eta^{\mu\nu}+\tilde{\eta}^{\mu}\tilde{\eta}^{\nu}, \tag{174}\]
we have established that \(\tilde{\gamma}^{ik}\tilde{\gamma}_{kj}=\delta^{i}_{\ j}\).
We write the new projectors
\[\tilde{\gamma}^{\mu}_{\ \nu}:=g^{\mu\alpha}\tilde{\gamma}_{\alpha\nu}. \tag{175}\]
To determine the four-velocity, we apply (169) to (54) and obtain
\[-\tilde{\eta}_{\mu}u^{\mu}=c\sqrt{1-\frac{V^{2}}{v^{2}}}. \tag{176}\]
Applying \(\tilde{\eta}^{\mu}\) on both sides
\[u^{\mu}=-c\sqrt{1-\frac{V^{2}}{v^{2}}}\tilde{\eta}^{\mu}, \tag{177}\]
remembering that \(\tilde{\eta}^{\mu}\) is given by (170).
Having written Equation (177), we can determine, following [53], the speeds \(v^{i}:=\frac{\gamma^{i}_{\mu}u^{\mu}}{\alpha u^{t}}\) and \(v_{i}:=\frac{\gamma_{i\mu}u^{\mu}}{\alpha u^{t}}\). We then obtain
\[\tilde{v}^{i}=-2c\psi^{-2}(v)\beta^{i}, \tag{178}\]
and
\[\tilde{v}_{i}=-2c\eta_{i}=0. \tag{179}\]
Returning again to Rezzolla [53], using the Equations (177) and (170), we write \(\beta^{i}\), so we get the expression
\[\frac{\beta^{i}}{c}=\left(\psi(v)-1\right)\eta^{i}, \tag{180}\]
bringing back the relation (172).
We also study the normalization condition \(u^{\mu}u_{\mu}=-1\) to find \(u_{\mu}\); we take \(u_{\mu}=a(v)\tilde{\eta}_{\mu}\). Substituting this together with (177), we find the value \(a=\frac{1}{c\sqrt{1-\frac{V^{2}}{v^{2}}}}\). Thus
\[u_{\mu}=\frac{1}{c\sqrt{1-\frac{V^{2}}{v^{2}}}}\tilde{\eta}_{\mu}. \tag{181}\]
It then follows that
\[\eta^{\mu\nu}u_{\mu}u_{\nu}=\frac{1}{c^{2}}\frac{1}{1-\frac{V^{2}}{v^{2}}}, \tag{182}\]
alternatively
\[\eta_{\mu\nu}u^{\mu}u^{\nu}=c^{2}\left(1-\frac{V^{2}}{v^{2}}\right). \tag{183}\]
Equation (183) suggests a relation with the \(k\)-essence models, in the sense in which they are defined in Equation (2).
### Superfluids and \(k\)-essence: a unified description via the preferred frame \(S_{v}\)
Considering the work of Nassif _et al._ [22], we have the Lagrangian for a relativistic free particle:
\[\mathcal{L}=-m_{0}c^{2}\sqrt{1-\frac{v^{2}}{c^{2}}}, \tag{184}\]
However, if the particle suffers the effect of a conservative force, independent of the speed, we have \(\mathcal{L}=-m_{0}c^{2}\sqrt{1-\beta^{2}}-U\), where \(U=U(r)\). If the Lagrangian \(\mathcal{L}\) does not depend explicitly on time, there is a constant of the motion.
We write the Hamiltonian
\[h=\dot{q}_{j}p_{j}-\mathcal{L}=\frac{m_{0}v_{j}v_{j}}{\sqrt{1-\frac{v^{2}}{c^{ 2}}}}+m_{0}c^{2}\sqrt{1-\frac{v^{2}}{c^{2}}}+U, \tag{185}\]
also
\[h=\frac{m_{0}c^{2}}{\sqrt{1-\frac{v^{2}}{c^{2}}}}+U=E, \tag{186}\]
where \(h\) is the total energy.
If \(U=0\), then \(E=\gamma m_{0}c^{2}\) is the constant of the motion of the free particle. We simply write \(v^{2}=v_{j}v^{j}\) for a single particle.
Now we use an analogous procedure to obtain a relativistic Lagrangian affected by the existence of a preferred frame \(S_{V}\) associated with the critical velocity of a superfluid (167). We first consider the Lagrangian of an ideal flow. With \(h=E\), where \(E\) is obtained from (95), as the constant of the motion for a flow-line, and knowing that the momentum (97) is the momentum of the hypothetical superfluid, we write
\[E=m_{0}c^{2}\sqrt{\frac{1-\frac{V^{2}}{v^{2}}}{1-\frac{v^{2}}{c^{2}}}}=h=m_{0} v^{2}\sqrt{\frac{1-\frac{V^{2}}{v^{2}}}{1-\frac{v^{2}}{c^{2}}}}-\mathcal{L}, \tag{187}\]
where \(v^{2}=v_{j}v^{j}\) and \(\mathcal{L}\) is the free particle Lagrangian.
Starting from the Equation (187), we extract:
\[{\cal L}=-m_{0}c^{2}\theta\sqrt{1-\frac{v^{2}}{c^{2}}}=-m_{0}c^{2}\sqrt{\left(1- \frac{V^{2}}{v^{2}}\right)\left(1-\frac{v^{2}}{c^{2}}\right)}. \tag{188}\]
If we take \(V\to 0\), or equivalently \(v\gg V\), in Equation (188), we recover the Lagrangian (184).
If we take Equation (183) and compare it with (2), we recover Equation (80). Making a change of variable in (183), we get
\[{\cal K}=1-\frac{V^{2}}{v^{2}}, \tag{189}\]
it is necessary to comment on the range of values that the variable \({\cal K}\) can assume
\[0<{\cal K}<1-\xi^{2}, \tag{190}\]
A small manipulation of (189) gives the relation between (62) and (189):
\[v^{2}=\frac{V^{2}}{1-{\cal K}}. \tag{191}\]
Equation (191) represents the relation between a preferred frame \(S_{V}\) and the typical \(k\)-essence parameter \({\cal K}\), generating an observer coupled to the ground state. It is an imposition on the \(k\)-essence parameter, which is limited to vary within this range, thus constituting a timelike region associated with the causal structure with minimal speed \(V\), which is simultaneously the critical speed according to the Landau criterion (167).
We introduce (191) in (188) and we obtain
\[{\cal L}({\cal K})=-m_{0}c^{2}\sqrt{{\cal K}\left(1-\frac{\xi^{2}}{1-{\cal K }}\right)}. \tag{192}\]
The Lagrangian (192) is a purely kinetic \(k\)-essence Lagrangian, just like (6), and the same holds for the Lagrangian of a superfluid. The consequences and implications of (188) are those of generating this unified scenario for cosmology and astrophysics. The \(k\)-essence was proposed as a hydrodynamic description of the cosmological constant [4; 8], and pathological causal structures in scalar models have been mapped in several works [9]. But (192) is a different approach: the causal structure of the scalar model matches the critical speed, building a preferred frame. We write
\[{\cal L}_{\cal K}=-\frac{m_{0}c^{2}}{2\sqrt{{\cal K}(1-\frac{\xi^{2}}{1-{\cal K }})}}\left(1-\frac{\xi^{2}}{1-{\cal K}}-\frac{\xi^{2}{\cal K}}{(1-{\cal K})^{ 2}}\right), \tag{193}\]
and
\[{\cal L}_{,{\cal K}{\cal K}}= -\frac{3m_{0}c^{2}}{4\left[{\cal K}\left(1-\frac{\xi^{2}}{1-{\cal K }}\right)\right]^{\frac{3}{2}}}\left[1-\frac{\xi^{2}}{1-{\cal K}}-\frac{\xi^{ 2}{\cal K}}{(1-{\cal K})^{2}}\right]+\] \[-\frac{m_{0}c^{2}}{2\sqrt{{\cal K}(1-\frac{\xi^{2}}{1-{\cal K}}) }}\left(-\frac{\xi^{2}}{(1-{\cal K})^{2}}-\frac{1-2{\cal K}}{(1-{\cal K})^{3} }\right). \tag{194}\]
These equations permit a hydrodynamic approach. The pressure is
\[{\cal P}={\cal L}({\cal K}), \tag{195}\]
and the density (10), using (189), (192) and (193), is
\[\rho={\cal K}{\cal L}_{,{\cal K}}-{\cal L}({\cal K}), \tag{196}\]
or even
\[\rho=-m_{0}c^{2}\sqrt{\mathcal{K}}\left[\frac{1-\frac{\xi^{2}}{1-\mathcal{K}} \left(1-\frac{\mathcal{K}}{1-\mathcal{K}}\right)}{2\sqrt{1-\frac{\xi^{2}}{1- \mathcal{K}}}}-\sqrt{\left(1-\frac{\xi}{1-\mathcal{K}}\right)}\right]. \tag{197}\]
Therefore, Equation (197) reproduces a wide variety of equations of state, with the barotropic parameter
\[w=\frac{\sqrt{\left(1-\frac{\xi}{1-\mathcal{K}}\right)}}{\frac{1-\frac{\xi^{2 }}{1-\mathcal{K}}\left(1-\frac{\mathcal{K}}{1-\mathcal{K}}\right)}{2\sqrt{1- \frac{\xi^{2}}{1-\mathcal{K}}}}-\sqrt{\left(1-\frac{\xi}{1-\mathcal{K}} \right)}}. \tag{198}\]
The speed of sound is calculated as
\[c_{s}^{2}=\left[1-\frac{-\frac{3}{4}\left[1-\frac{\xi^{2}}{1-\mathcal{K}}\left(1-\frac{\mathcal{K}}{1-\mathcal{K}}\right)^{2}\right]+\frac{1}{2(1-\mathcal{K})^{2}}\left(\xi^{2}-\frac{1-2\mathcal{K}}{1-\mathcal{K}}\right)}{\frac{1}{2\sqrt{1-\frac{\xi^{2}}{1-\mathcal{K}}}}\left[1-\frac{\xi^{2}}{1-\mathcal{K}}\left(1-\frac{\mathcal{K}}{1-\mathcal{K}}\right)\right]}\right]^{-1}. \tag{199}\]
The Equations (198) and (199) correspond to the values of the barotropic factor and the sound propagation within the range of values that \(\mathcal{K}\) can assume.
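The derivative (193) and the hydrodynamic quantities can be checked directly; the sketch below (SymPy assumed) verifies symbolically that (193) is the \(\mathcal{K}\)-derivative of the Lagrangian (192) and then evaluates the pressure (195), the density (196) and the ratio \(w=\mathcal{P}/\rho\) (taken here as the definition of the barotropic parameter) for sample values of \(\mathcal{K}\) and \(\xi\); the closed forms (197)-(199) are not re-derived here.

```python
import sympy as sp

K, xi, m0, c = sp.symbols('K xi m_0 c', positive=True)

L = -m0*c**2 * sp.sqrt(K*(1 - xi**2/(1 - K)))                     # Eq. (192)
L_K_paper = -m0*c**2/(2*sp.sqrt(K*(1 - xi**2/(1 - K)))) \
            * (1 - xi**2/(1 - K) - xi**2*K/(1 - K)**2)            # Eq. (193)
print(sp.simplify(sp.diff(L, K) - L_K_paper))                     # expected: 0

P = L                                   # Eq. (195)
rho = K*sp.diff(L, K) - L               # Eq. (196)
w = sp.simplify(P / rho)
print(w.subs({xi: sp.Rational(1, 10**6), K: sp.Rational(1, 2), m0: 1, c: 1}).evalf())
```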
## V A velocity-potential approach
Inspired by Rezzolla (see details in Section 3.9 of Ref. [53]), we are going to give a very brief account of the works [41; 42]. The velocity-potential approach allows the definition of some thermodynamic quantities associated with the kinematics at large. In this work we will not define the four-velocity in terms of the six scalar potentials of the Schutz formalism [41; 42], because we intend to address the temperature problem associated with event horizons in future work. But the discussion about enthalpy clarifies the role of the preferred frame \(S_{V}\).
### The Irrotational perfect fluid
The hydrodynamic four-velocity (9) is
\[\mathcal{U}_{\mu}:=\frac{\partial_{\mu}\Phi}{\sqrt{2\left(1-\frac{V^{2}}{v^{2 }}\right)}}. \tag{200}\]
The quantity \(\mathcal{U}_{\mu}\) represents an irrotational fluid written in Schutz formalism, [41; 42], with normalization
\[\mathcal{U}^{\mu}\mathcal{U}_{\mu}=-1, \tag{201}\]
which implies
\[\mathcal{U}^{\mu}:=\sqrt{2\left(1-\frac{V^{2}}{v^{2}}\right)}\partial^{\mu}\Phi. \tag{202}\]
Performing a contraction with the Minkowski metric,
\[\eta_{\mu\nu}\mathcal{U}^{\mu}\mathcal{U}^{\nu}=2\left(1-\frac{V^{2}}{v^{2}} \right), \tag{203}\]
as in the Schutz formalism [41], the enthalpy is given by
\[h(v)=\sqrt{2\left(1-\frac{V^{2}}{v^{2}}\right)}, \tag{204}\]
a thermodynamic quantity associated with the capacity to absorb or emit heat; the simplest substance has enthalpy \(h=0\). In this case, the simplest state has \(h(v=V)=0\). This result confirms a property of the ground state: the enthalpy is related to the pressure by \(h=\frac{\rho+P}{n}\), so \(h=0\) implies
\[P=-\rho. \tag{205}\]
We thus have an indication that the cosmological constant is the ground state of this cosmological superfluid. The particle density is related to the other quantities of this description. The concept of the preferred frame \(S_{V}\) then makes it possible to introduce the cosmological constant naturally into a hydrodynamic structure:
\[n=\sqrt{1-\frac{V^{2}}{v^{2}}}\mathcal{P}_{,\mathcal{K}} \tag{206}\]
where \(\mathcal{P}_{,\mathcal{K}}=\mathcal{L}_{,\mathcal{K}}\). Using (193), we rewrite this as
\[n(\mathcal{K})=-\frac{m_{0}c^{2}}{2\sqrt{(1-\frac{\xi^{2}}{1- \mathcal{K}})}}\left(1-\frac{\xi^{2}}{1-\mathcal{K}}-\frac{\xi^{2}\mathcal{K} }{(1-\mathcal{K})^{2}}\right). \tag{207}\]
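Consistency between (206) and (207) can likewise be confirmed symbolically (SymPy assumed), using \(\sqrt{1-\frac{V^{2}}{v^{2}}}=\sqrt{\mathcal{K}}\) from (189).

```python
import sympy as sp

K, xi, m0, c = sp.symbols('K xi m_0 c', positive=True)

L = -m0*c**2 * sp.sqrt(K*(1 - xi**2/(1 - K)))                     # Eq. (192)
n_206 = sp.sqrt(K) * sp.diff(L, K)                                # Eq. (206)
n_207 = -m0*c**2/(2*sp.sqrt(1 - xi**2/(1 - K))) \
        * (1 - xi**2/(1 - K) - xi**2*K/(1 - K)**2)                # Eq. (207)

print((n_206 - n_207).equals(0))   # expected output: True
```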
We have the limits \(n(\mathcal{K}\to 0)\to m_{0}c^{2}\) and \(n(\mathcal{K}\to 1-\xi^{2})\rightarrow\infty\). In the first we recover the classical rest energy; in the second the \(k\)-essence predominates, tending to the ground state. The energy-momentum tensor (7) is written in terms of thermodynamic quantities as
\[T_{\mu\nu}=nh\mathcal{U}_{\mu}\mathcal{U}_{\nu}-P(\mathcal{K}) \eta_{\mu\nu} \tag{208}\]
Thus we also have a hydrodynamic description of the \(k\)-essence superfluid, and Equation (202) is an enthalpy current.
### Conserved current of enthalpy
We reconsider the Equation (202), and we study its conservation
\[\partial_{\mu}\mathcal{U}^{\mu}=\frac{\sqrt{2}}{\sqrt{1-\frac{V^ {2}}{v^{2}}}}\frac{V^{2}}{v^{3}}\partial_{\mu}v\partial^{\mu}\Phi+\sqrt{2 \left(1-\frac{V^{2}}{v^{2}}\right)}\partial_{\mu}\partial^{\mu}\Phi \tag{209}\]
Following Rezzolla [53], we consider
\[U=U^{\mu}e_{\mu} \tag{210}\]
where \(e_{\mu}\) is a vector basis. Let us now differentiate (210):
\[\partial_{\nu}U=\left(\partial_{\nu}U^{\mu}\right)e_{\mu}+U^{\mu}\partial_{\nu}e_{\mu} \tag{211}\]
When we compare (211) with (209), we identify
\[\partial_{\nu}e_{\mu}\mathcal{U}^{\mu}\equiv\sqrt{\frac{2}{1- \frac{V^{2}}{v^{2}}}}\frac{V^{2}}{v^{3}}\partial_{\mu}v\partial^{\mu}\Phi. \tag{212}\]
Again following Rezzolla [53], we write
\[\partial_{\nu}e_{\mu}=\Gamma^{\kappa}_{\mu\nu}e_{\kappa}. \tag{213}\]
Therefore
\[\Gamma^{\alpha}_{\alpha\nu}=\sqrt{\frac{2}{1-\frac{V^{2}}{v^{2}}}}\frac{V^{2}}{v^{3 }}\partial_{\nu}v, \tag{214}\]
we thus obtain the hydrodynamic equivalent of a connection. The consequences regarding curvature will be investigated in another work; for now, the term is immediately identified with some type of shear between flow lines. The appearance of this term in this hypothetical superfluid model is quite interesting, as it allows connections between vortices and intense gravitational fields, a very interesting line of research with many open questions.
When we consider the conservation of current (209), we obtain
\[\sqrt{\frac{2}{1-\frac{V^{2}}{v^{2}}}}\frac{V^{2}}{v^{3}}\partial_{\mu}v \partial^{\mu}\Phi+\sqrt{2\left(1-\frac{V^{2}}{v^{2}}\right)}\partial_{\mu} \partial^{\mu}\Phi=0 \tag{215}\]
we see a wave equation for the scalar field, but the wave equation is augmented by the term in (214); in classical terms, this term can be compared to shear, and in quantum terms to the reconnection between vortices. The hypothesis of a type of sink or source is also not ruled out and should be investigated in detail in future works.
## VI Conclusions and perspectives
We revisited the basics of the \(k\)-essence fluid formalism and built a relation between the proposal of a universal minimum velocity, under a preferred reference frame \(S_{V}\), and the Landau criterion for superfluids. We performed this unification of disjoint concepts within the structure of a \(k\)-essence fluid. We also identified, with the enthalpy of a fluid in the Schutz formalism, the function that deforms the Lorentz transformations by introducing the concept of a minimum velocity. In this way, we were able to construct a \(k\)-essence fluid observed from the first excited state of a superfluid. This implies a reinterpretation of the \(k\)-essence term, thus allowing us to understand a causal structure in the presence of sonic tachyons.
We have found the hydrodynamic equivalent of a relativistic connection. This could shed light on constraints between the hydrodynamics of dark superfluids and space-time curvature. The calculation of the curvature tensor in a preferred reference frame is something new within the \(k\)-essence formalism of superfluids.
In future work we also intend to address the problem of constraining the thermodynamics with the preferred frame and of associating its quantities with event horizons. The discussion about enthalpy made here clarifies the role of the preferred frame \(S_{V}\).
This theoretical construction allows an association of thermodynamic concepts with causal structures in a very simple way, using mathematical concepts common in the academic community, such as conformal transformations.
|
2309.03579 | DTW+S: Shape-based Comparison of Time-series with Ordered Local Trend | Measuring distance or similarity between time-series data is a fundamental
aspect of many applications including classification, clustering, and
ensembling/alignment. Existing measures may fail to capture similarities among
local trends (shapes) and may even produce misleading results. Our goal is to
develop a measure that looks for similar trends occurring around similar times
and is easily interpretable for researchers in applied domains. This is
particularly useful for applications where time-series have a sequence of
meaningful local trends that are ordered, such as in epidemics (a surge to an
increase to a peak to a decrease). We propose a novel measure, DTW+S, which
creates an interpretable "closeness-preserving" matrix representation of the
time-series, where each column represents local trends, and then it applies
Dynamic Time Warping to compute distances between these matrices. We present a
theoretical analysis that supports the choice of this representation. We
demonstrate the utility of DTW+S in several tasks. For the clustering of
epidemic curves, we show that DTW+S is the only measure able to produce good
clustering compared to the baselines. For ensemble building, we propose a
combination of DTW+S and barycenter averaging that results in the best
preservation of characteristics of the underlying trajectories. We also
demonstrate that our approach results in better classification compared to
Dynamic Time Warping for a class of datasets, particularly when local trends
rather than scale play a decisive role. | Ajitesh Srivastava | 2023-09-07T09:18:12Z | http://arxiv.org/abs/2309.03579v2 | # DTW+S: Shape-based Comparison of Time-series with Ordered Local Trends
###### Abstract
Measuring distance or similarity between time-series data is a fundamental aspect of many applications including classification, clustering, and ensembling/alignment. Existing measures may fail to capture similarities among local trends (shapes) and may even produce misleading results. Our goal is to develop a measure that looks for similar trends occurring around similar times and is easily interpretable for researchers in applied domains. This is particularly useful for applications where time-series have a sequence of meaningful local trends that are ordered, such as in epidemics (a surge to an increase to a peak to a decrease). We propose a novel measure, DTW+S, which creates an interpretable "closeness-preserving" matrix representation of the time-series, where each column represents local trends, and then it applies Dynamic Time Warping to compute distances between these matrices. We present a theoretical analysis that supports the choice of this representation. We demonstrate the utility of DTW+S in several tasks. For the clustering of epidemic curves, we show that DTW+S is the only measure able to produce good clustering compared to the baselines. For ensemble building, we propose a combination of DTW+S and barycenter averaging that results in the best preservation of characteristics of the underlying trajectories. We also demonstrate that our approach results in better classification compared to Dynamic Time Warping for a class of datasets, particularly when local trends rather than scale play a decisive role.
## I Introduction
The distance between two time-series is a fundamental measure used in many applications, including classification, clustering, and evaluation. In classification and clustering, we want two "similar" time-series to have a low distance between them so that they can be grouped together or classified with the same label. In evaluation, the setting could be that we are generating projections (long-term forecasts) of time-series, and retrospectively, we wish to measure how close we are to the ground truth.
While many measures exist for these purposes, including Euclidean distance, correlation, and dynamic time-warping (DTW) [1], the choice of the similarity measure can depend on the domain. Further, existing similarity measures may fail to capture the desired properties of the task at hand, for instance, when we wish to capture the similarity in trends over time. For example, consider the scenario presented in Figure 1(a). Two models perform a projection to estimate the time-series given by the ground truth. Model 1 produces a pattern that is similar to the ground truth, while Model 2 produces a flat line. If we use mean absolute error to assess which model performed better, Model 2 (the flat line) will receive a better score. Although Model 1 produces identical trends and correctly predicts the peak timing, it loses to Model 2, which conveys no information. Now, consider the scenario presented in Figure 1(b). Model 1 predicts the exact pattern but is slightly shifted in time. Again, Model 2, a flat line, produces a lower error. Finally, in Figure 1(c), Model 1 predicts the overall pattern well; it only misjudges the height of the peaks. Yet, Model 2, a straight line, is considered closer to the ground truth. Some form of range normalization could have addressed the issue in Figure 1(a), and Dynamic Time Warping (DTW) [1], which allows stretching the time dimension to best match two time-series, can address the issue raised in Figure 1(b). However, DTW and/or any normalization of scale cannot address the issue presented in Figure 1(c).
Our goal is to develop a measure of distance such that two time-series are considered similar if and only if they have a similar sequence of trends and similar trends occur around similar times. This is particularly useful in public health where the time-series may represent meaningful local trends that are ordered, e.g., a surge, followed by an increase, then a peak, and finally a decrease. We define a trend as the local shape of a time-series. Further, we wish this measure to be easily **interpretable** so that it can be adopted by researchers in applied domains, such as epidemiology. To achieve this, we propose a novel distance measure DTW+S that (i) produces a matrix representation where each column encodes local trends, and (ii) uses Dynamic Time Warping on these matrices to compute distances between a pair of time-series. With this measure, we are able to perform better clustering and ensembling [2, 3, 4] of epidemic curves where local interpretable trends are of interest. While we do not intend to develop a new time-series classification algorithm, we believe a good distance measure should improve a simple distance-based classifier such as \(k\)-nearest neighbors [5] on a class of tasks. Therefore, we also demonstrate the success of 1-nearest neighbor using our measure on classification.
**Contributions.** (1) We _prove necessary and sufficient conditions_ for the shapelet space representation [6] to be closeness-preserving - two local trends are similar if and only if they are mapped to nearby points (Section III-B). (2) We propose a novel distance measure for time-series DTW+S that leverages this representation along with DTW to find if similar trends
occur around similar times (Section III-D). (3) We develop an ensembling technique using DTW+S combined with barycenter averaging [7] that can simultaneously summarize time-series in scale and time (Section III-F), and we demonstrate its utility on epidemic curves (Section IV-B). (4) We demonstrate that DTW+S results in more sensible clustering of epidemic curves (Section IV-A). (5) Also, DTW+S outperforms Dynamic Time Warping in classifying time-series on a subset of datasets, particularly, those where local trends play a key role in classification (Section IV-C).
## II Related Work
### _Background_
#### Ii-A1 Shapelet Space Representation
In [6], the idea of the shapelet space representation is introduced to compare short-term forecasts of epidemics. The motivation is to compare the shape of the forecasts rather than exact numerical values. Further, they wish to make the representation interpretable. Each dimension represents the similarity of the vector with one of the chosen shapes of interest, such as an increase \((1,2,3,4)\) and peak \((1,2,2,1)\). These shapes of interest are termed Shapelets.
**Definition 1** (Shapelet).: _A shapelet \(\mathbf{s}=[s_{1},\ldots,s_{w}]\in\mathbb{R}^{w}\) is a vector that represents a shape of interest._
**Definition 2** (Shapelet-space Representation).: _Given \(d\) shapelets \(\{\mathbf{s_{1}},\ldots\mathbf{s_{d}}\}\), a Shapelet-space Representation of a vector \(\mathbf{x}\) is a \(d\)-dimensional point \(P_{x}=(p_{1},p_{2},\ldots,p_{d})\) capturing the shape of \(\mathbf{x}\in\mathbb{R}^{w}\), where the co-ordinate \(p_{i}=sim(\mathbf{x},\mathbf{s_{i}})\) for some measure of similarity. The function \(f:\mathbb{R}^{w}\rightarrow\mathbb{R}^{d}\) is the Shapelet-space Transformation._
The similarity function is to be chosen in such a way that two shapes are considered similar if and only if one shape can be approximated by translation and scaling of the other. However, this may cause an issue - when the shape is close to a "flat", small noise can cause it to become similar to other shapes. It is argued that there is an inherent concept of flatness in the domain of interest. For instance, in influenza when the number of hospitalizations is stable at a very low value, that shape is to be considered flat and not to be considered similar to any other shape when hospitalizations are higher. Therefore, a desirable property is the following.
**Property 1** (Closeness Preservation).: _Two vectors have similar representation, if and only if (i) none of the vectors are "almost flat" and one can be approximately obtained by scaling and translating the other, or (ii) both vectors are "almost flat"._
They propose an approach that first identifies how similar a shape is to what we could consider "flat", and then updates the similarities of the shape with respect to other shapelets. For some constants \(m_{0},\beta\geq 0\), define "flatness" as \(\phi=\exp(-\beta(m-m_{0})),\) if \(m>m_{0}\), otherwise \(\phi=1\). Here \(m\) is the average absolute slope of the vector \(\mathbf{x}\) whose shapelet-space representation is desired, i.e., if \(\mathbf{x}=(x_{1},x_{2},x_{3},x_{4})\), then \(m=(|x_{2}-x_{1}|+|x_{3}-x_{2}|+|x_{4}-x_{3}|)/3\). The constant \(m_{0}\) enforces that a vector with a very small average absolute slope is considered flat and receives a \(0\) similarity in all other dimensions. The constant \(\beta\) represents how quickly above the threshold \(m_{0}\), the "flatness" should reduce. Now, the co-ordinates of shapelet-space representation are defined as
\[sim(\mathbf{x},\mathbf{s_{i}})=\left\{\begin{matrix}2\phi-1,&\text{if } \mathbf{s_{i}}\text{ is ``flat"},\\ (1-\phi)corr(\mathbf{x},\mathbf{s_{i}}),&\text{otherwise}.\end{matrix}\right.\]
It is shown that this definition satisfies Closeness Preservation (Property 1) with \(w\) or more shapelets including the "flat" shapelet. _We prove that \(w\) shapelets are not only sufficient but necessary to satisfy this property (Theorem 2)._ We use Shapelet-space Representations of moving windows on the given time-series to capture local trends over time.
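For concreteness, a minimal Python sketch of this transformation is shown below (the published code is in MATLAB; the function name, the defaults for \(m_{0}\) and \(\beta\), and the use of Pearson correlation as the similarity are illustrative assumptions):

```python
import numpy as np

def shapelet_space_representation(x, shapelets, m0=0.0, beta=1.0):
    """Map a length-w window x to its shapelet-space representation (sketch).

    shapelets: list of length-w vectors; a constant vector is treated as the
    'flat' shapelet. Pearson correlation is assumed as the similarity measure.
    """
    x = np.asarray(x, dtype=float)
    m = np.mean(np.abs(np.diff(x)))                      # average absolute slope
    phi = 1.0 if m <= m0 else np.exp(-beta * (m - m0))   # flatness
    rep = []
    for s in shapelets:
        s = np.asarray(s, dtype=float)
        if np.ptp(s) == 0:                               # the 'flat' shapelet
            rep.append(2.0 * phi - 1.0)
        else:
            # guard against a constant window before computing the correlation
            corr = np.corrcoef(x, s)[0, 1] if np.ptp(x) > 0 else 0.0
            rep.append((1.0 - phi) * corr)
    return np.array(rep)
```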
#### Ii-A2 Dynamic Time Warping
Dynamic Time Warping (DTW) is a distance measure between two time-series that allows warping (local stretching and compressing) of the time component so that the two time-series are optimally aligned. Given two time-series \(\mathbf{a}=[a(1),a(2),\ldots]\) and \(\mathbf{b}=[b(1),b(2),\ldots]\), the objective of DTW is to minimize \(\sum_{i\leftrightarrow j}\mathcal{D}(a(i),b(j))\), for some distance measure \(\mathcal{D}\), and where \(i\leftrightarrow j\) represents aligning index \(i\) of \(\mathbf{a}\) with index \(j\) of \(\mathbf{b}\). The alignment is done under some constraints - (1) if \(a(i)\) and \(b(j)\) are aligned then \(a(i+1)\) cannot be aligned with \(b(j^{\prime})\) for some \(j^{\prime}<j\). (2) Every index is present in at least one alignment. (3) The first index of both time-series are aligned
Fig. 1: Simple measures like Mean Absolute Error can be deceiving. In the three scenarios, Model 1 seems to be closer to the Ground truth, but receives a higher distance compared to a straight line.
with each other. (4) The last index of both time-series are aligned with each other. Further, a window constraint can be added [8] suggesting that indices \(i\) and \(j\) can only be aligned if \(|i-j|\leq w\), for some non-negative integer \(w\).
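A straightforward dynamic-programming sketch of DTW with this window constraint is shown below (the function name, the default squared-difference cost, and the edge handling are illustrative choices, not tied to any specific library):

```python
import numpy as np

def dtw_distance(a, b, window=None,
                 dist=lambda u, v: float(np.sum((np.asarray(u) - np.asarray(v)) ** 2))):
    """DTW between sequences a and b with an optional constraint |i - j| <= window."""
    n, m = len(a), len(b)
    w = max(n, m) if window is None else max(int(window), abs(n - m))
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = dist(a[i - 1], b[j - 1])
            # constraints (1)-(4) above are enforced by this recurrence
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

The same routine can be reused later for DTW+S by passing columns of the SSR matrices as the sequence elements.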
#### Ii-A3 Time-series Ensemble
In applications like epidemic projection, multiple trajectories are generated using different methods or different initializations. Then, an ensemble is created which is then communicated to the public and policy makers [2]. These ensembles are designed to capture the mean value at time \(t\), i.e., for \(n\) trajectories \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\), where \(\mathbf{a}_{i}=[a_{i}(1),\ldots,a_{i}(T)]\), the ensemble is \(\bar{a}(t)=\sum_{i=1}^{n}a_{i}(t)/n\). As an unintended consequence, informative aspects of individual trajectories may be lost. As an example, consider Figure 2. Two models produce almost identical projections, but they are shifted in time, and they have the same peak. The ensemble produces a trajectory whose peak is significantly lower and wider than that of the individual models. In public health communication, this can cause a misjudgment of the severity of the epidemic. While the ensemble correctly summarizes the expected outcome at time \(t\), the reader tends to infer other information such as peak timing and severity. Our similarity measure can be used to build ensembles that better preserve the properties of the individual models.
#### Ii-A4 Shapelets
In time-series literature, "shapelets" have been used to refer to informative motifs that occur in time-series [9]. A feature vector for time-series can then be constructed by similarity of the best matching subsequence of the time-series to these motifs. The motifs are selected based on their representativeness of a class. In contrast, we use the term shapelet as _a pre-determined shape of interest coming from the domain_. We prove that a specific class of shapelet sets and similarity measures is needed to develop a representation to satisfy the closeness-preserving property. We encode all local trends of the time-series into a matrix representation, which is compared using DTW.
### _Related Similarity Measures_
While many similarity measures for time-series have been proposed in the literature [10, 11, 12], the closest work to our proposed measure DTW+S is ShapeDTW [13]. It was developed as an alignment algorithm that captures point-wise local structures and preferentially aligns similarly-shaped structures. It does so by generating shape descriptors that include the raw sequence, piece-wise aggregation, Discrete Wavelet Transform [14], slopes, and derivatives. A key distinction from our approach is that we utilize a set of shapelets that are shapes of interest in the desired application, and hence our representation is directly interpretable. Further, we present theoretical results on the closeness-preserving characteristics of our approach. While shapeDTW is designed to be general-purpose by constructing a variety of shape descriptors, our approach is particularly designed for applications where the trend is more important than the scale. We later demonstrate that our approach still outperforms shapeDTW for classification on almost half of the 64 datasets considered (Section IV-C).
## III Methodology
### _Definitions and Overview_
First, we define some terms used throughout the text. We start with the idea of a "trend descriptor" that formally defines the idea of assigning arbitrary label (e.g., increase, decrease, peak, etc.) to a part of a time-series. This is intended to emulate a human using categories (implicit or explicit) to interpret a pattern in the time-series. This concept will help us define interpretability of a representation and similarity measure.
**Definition 3** (Trend Descriptor).: _A trend descriptor is a function \(\mathcal{L}\) that maps any vector \(\mathbf{x}\in\mathbb{R}^{w}\) to a label in set \(L\) denoting a shape description of \(\mathbf{x}\)._
For instance, a trend descriptor may map any given 4-element vector to one of slow increase, rapid increase, exponential (convex) increase, going towards a peak, going past a peak, rapid decrease, flat, and unknown. Some of these labels may be more similar to each other than others, e.g., slow and rapid increases are more similar to each other than a rapid increase and a decrease.
**Definition 4** (Local Trend).: _For a time-series \((a_{1},a_{2},\ldots,a_{T})\), we define the local trend at location \(i\) as \(\mathcal{L}(a_{i},\ldots,a_{i+w-1})\)._
**Definition 5** (Interpretable).: _We say a representation is interpretable if it is possible to identify the local trends based on the values in each dimension of the representation. We say a distance measure is interpretable if it assigns low distance to two time-series if and only if they have the same local trend._
**Definition 6** (Ordered Local Trend).: _A class of time-series has ordered local trends if the similarity between two time-series implies similarity between the sequences of local trends._
In Figure 1(c), both the orange and blue curves have the same sequence of local trends, which can be described as a sequence of increases, then a peak, followed by a decrease, then a surge, an increase, another peak, and a decrease, while the gray line is only a sequence of increases. Such characterization is important in understanding and communicating long-term
Fig. 2: Failure of the mean ensemble in capturing the properties of individual time-series – much lower peak.
projections of epidemics as they represent specific events of interest [15, 16].
Our approach consists of two major steps. First, we capture local trends (shapes) of the time-series. To do so, we extend the notion of Shapelet-space Representation to a time-series with a sliding window.
**Definition 7** (Shapelet-space Representation - Time-series).: _Given a time-series \(\mathbf{a}\in\mathbb{R}^{T_{1}}\), a window \(w\), and a Shapelet-space Transformation \(f:\mathbb{R}^{w}\rightarrow\mathbb{R}^{d}\), the Shapelet-space Representation of \(\mathbf{a}\) is the matrix \(\mathbf{A}\in\mathbb{R}^{d\times(T_{1}-w+1)}\) whose \(i^{th}\) column is the Shapelet-space Representation of the vector \((a_{i},\ldots,a_{i+w-1})\)._
This matrix encodes how the time-series changes over time in an interpretable manner. Given two time-series, \(\mathbf{a}\in\mathbb{R}^{T_{1}}\) and \(\mathbf{b}\in\mathbb{R}^{T_{2}}\), we first find their Shapelet-space Representations - the matrices \(\mathbf{A}\in\mathbb{R}^{d\times(T_{1}-w+1)}\), and \(\mathbf{B}\in\mathbb{R}^{d\times(T_{2}-w+1)}\). Each column (of size \(d\)) of these matrices is obtained by sliding a \(w\)-length window on the respective time-series and obtaining its \(d\)-dimensional Shapelet-space Representation (SSR). Figure 3 shows the SSR obtained from a time-series. The SSR is built using four dimensions representing "increase", "peak", "surge", and "flat". A yellow color represents a high positive value and a blue represents a high negative value (e.g., a negative increase is a decrease). Also note that "surge" and "increase" are similar shapes and hence seem to have a high correlation. The representation tells us that this time series has a sequence of surges/increases leading to a small peak (green in "peak" and "flat") around time step \(5\), followed by stability, then increase, leading to a sharp peak (bright yellow around time-step \(13\)), followed by rapid decline (dark blue in "inc") and then stability (flatness).
Finally, we use Dynamic Time Warping with a suitable window. The distance \(\mathcal{D}\) is defined as the Euclidean distance between aligned columns of these matrices. Next, we discuss how to choose a good Shapelet-space Representation.
### _Choosing the Shapelet-Space_
While Srivastava et al. provide some indication of how to choose the set of shapelets, they only prove that for vectors with \(w\) elements, \(w\) shapelets are sufficient. While one may choose more shapelets, e.g., one for each trend descriptor in mind, having a large number (\(d\)) of shapelets also impacts the space and time complexities, as both scale linearly with \(d\). What is the minimum number of shapelets needed? Here, we show that \(w\) shapelets are not only sufficient but also necessary to satisfy the closeness-preserving property. Let \(f:\mathbb{R}^{w}\rightarrow\mathbb{R}^{d}\) be a shapelet transformation obtained by a set of linearly independent vectors \(\mathbf{s}_{1},\ldots,\mathbf{s}_{d-1}\) and the flat vector \(\mathbf{s}_{0}\). Consider two vectors \(\mathbf{x}\) and \(\mathbf{y}\) of length \(w\). Suppose \(\mathbf{x}^{\prime}\) and \(\mathbf{y}^{\prime}\) represent the corresponding normalized vectors obtained as: \(\mathbf{x}^{\prime}=\frac{\mathbf{x}-\mu_{\mathbf{x}}}{\|\mathbf{x}\|}\) and \(\mathbf{y}^{\prime}=\frac{\mathbf{y}-\mu_{\mathbf{y}}}{\|\mathbf{y}\|}\), where \(\mu_{\mathbf{x}}\) and \(\mu_{\mathbf{y}}\) are the means of the elements in the vectors \(\mathbf{x}\) and \(\mathbf{y}\), respectively.
**Theorem 1**.: _Property 1 is satisfied with any set of \(w-1\) linearly independent shapelets and the "flat" shapelet, i.e., with this choice \(\|f(\mathbf{x})-f(\mathbf{y})\|\leq\epsilon\) iff (i) both \(\mathbf{x}\) and \(\mathbf{y}\) are "almost" flat, or (ii) \(\|\mathbf{x}^{\prime}-\mathbf{y}^{\prime}\|\leq\delta\), for some small \(\epsilon\) and \(\delta\)._
Proof.: Suppose \(\|f(\mathbf{x})-f(\mathbf{y})\|\leq\epsilon\).
If both vectors are approximately flat then, without loss of generality, we can assume that \(\phi_{x}\geq\phi_{y}\geq 1-\varepsilon\), for some small \(\varepsilon\). Then, along the flat dimension, \(|2(\phi_{x}-1)-2(\phi_{y}-1)|^{2}\leq 4(1-(1-\varepsilon))^{2}\leq 4\varepsilon^{2}\). And, along any other dimension,
\[|(1-\phi_{x})\mathbf{s}^{\prime T}\mathbf{x}^{\prime}-(1-\phi_{y})\mathbf{s}^{\prime T}\mathbf{y}^{\prime}|^{2}\leq|2-(\phi_{x}+\phi_{y})|^{2}\leq|2-2(1-\varepsilon)|^{2}\leq 4\varepsilon^{2}.\]
So, \(\|f(\mathbf{x})-f(\mathbf{y})\|^{2}\leq 4(w-1)\varepsilon^{2}+4\varepsilon^{2}=4w\varepsilon^{2}=\epsilon^{2}\), where \(\varepsilon=\epsilon/(2\sqrt{w})\).
Now, suppose that both vectors are not "almost" flat and \(\phi_{x}\geq\phi_{y}\). Let \(\epsilon^{2}=\sum_{i}\epsilon_{i}^{2}\), where \(\epsilon_{i}\) is the difference in the \(i^{th}\) dimension of \(f(\mathbf{x})-f(\mathbf{y})\). Then, along the dimension corresponding to \(\mathbf{s}_{0}\):
\[|(2\phi_{x}-1)-(2\phi_{y}-1)|\leq\epsilon_{0}\implies\phi_{\mathbf{x}}\leq\phi _{\mathbf{y}}+\epsilon_{0}/2. \tag{1}\]
Now, since \(\phi_{y}\leq\phi_{x}\leq\phi_{y}+\epsilon_{0}/2\), then a small \(\phi_{y}\) would imply that \(\phi_{x}\) is also small. Therefore, both \(\phi_{x}\) and \(\phi_{y}\) are not small.
Now, along any other dimension \(i\),
\[\epsilon_{i} =|(1-\phi_{x})\mathbf{s}^{\prime T}\mathbf{x}^{\prime}-(1-\phi_{ y})\mathbf{s}^{\prime T}\mathbf{y}^{\prime}|\] \[=|(1-\phi_{x})\mathbf{s}^{\prime T}(\mathbf{x}^{\prime}-\mathbf{y} ^{\prime})+(1-\phi_{x})\mathbf{s}^{\prime T}\mathbf{y}^{\prime}-(1-\phi_{y}) \mathbf{s}^{\prime T}\mathbf{y}^{\prime}|\] \[=|(1-\phi_{x})\mathbf{s}^{\prime T}(\mathbf{x}^{\prime}-\mathbf{y} ^{\prime})+(\phi_{y}-\phi_{x})\mathbf{s}^{\prime T}\mathbf{y}^{\prime}|\] \[\geq|(1-\phi_{x})\mathbf{s}^{\prime T}(\mathbf{x}^{\prime}-\mathbf{y} ^{\prime})|-\epsilon_{0}/2\,.\]
Fig. 3: Shapelet-space Representation of a time-series.
And thus,
\[\left|(1-\phi_{x}){\mathbf{s^{\prime}}}^{T}({\mathbf{x^{\prime}}}-{ \mathbf{y^{\prime}}})\right| \leq\epsilon_{i}+\epsilon_{0}/2\] \[\implies\left|{\mathbf{s^{\prime}}}^{T}({\mathbf{x^{\prime}}}-{ \mathbf{y^{\prime}}})\right| \leq\frac{\epsilon_{i}+\epsilon_{0}/2}{1-\phi_{x}}\,,\forall i \in\left\{1\ldots w-1\right\}. \tag{2}\]
Additionally, we note that due to the normalization,
\[{\mathbf{1}}^{T}{\mathbf{y^{\prime}}}={\mathbf{1}}^{T}{\mathbf{x^{\prime}}}=0 \implies{\mathbf{1}}^{T}({\mathbf{y^{\prime}}}-{\mathbf{x^{\prime}}})=0\,, \tag{3}\]
where \({\mathbf{1}}\) is the vector of all ones. Now consider a matrix \(C_{w}\) with \(w\) rows whose \(i^{th}\) row is given by \({\mathbf{s^{\prime}}}_{i}\), for \(i\leq w-1\) and \(w^{th}\) row is \({\mathbf{1}}\).
\[C_{w}({\mathbf{x^{\prime}}}-{\mathbf{y^{\prime}}})={\mathbf{e}}\,, \tag{4}\]
where \(i^{th}\) row of \({\mathbf{e}}\) for \(i<w\) is given by some \(|e_{i}|\leq\frac{\epsilon_{i}+\epsilon_{0}/2}{1-\phi_{x}}\) and the \(w^{th}\) entry is 0. We show that \({\mathbf{1}}\) is linearly independent of all the \(w-1\) other rows. Recall that \({\mathbf{s^{\prime}}}_{i}\) are all \(\mu\)-normalized, and so \({\mathbf{1}}^{T}{\mathbf{s^{\prime}}}_{i}=0,\forall i\). Therefore, all \(w\) rows of \(C_{w}\) are linearly independent, i.e., \(C_{w}\) is full rank and invertible. So, we have
\[{\mathbf{x^{\prime}}}-{\mathbf{y^{\prime}}} =C_{w}^{-1}{\mathbf{e}}\implies\|{\mathbf{x^{\prime}}}-{ \mathbf{y^{\prime}}}\|=\|C_{w}^{-1}{\mathbf{e}}\|\] \[\leq\|C_{w}^{-1}\|\|{\mathbf{e}}\|\leq\|C_{w}^{-1}\|\sqrt{\sum_{ i=1}^{w-1}\left(\frac{\epsilon_{i}+\epsilon_{0}/2}{1-\phi_{x}}\right)^{2}}=\delta\,, \tag{5}\]
which is small for finite \(\|C_{w}^{-1}\|\) and small \(\epsilon_{i},\forall i\geq 0\).
**Theorem 2**.: _At least \(w-1\) linearly independent shapelets are necessary with the "flat" shapelet to satisfy property 1._
Proof.: We will show that choosing \(w-2\) independent shapelets results in at least one \({\mathbf{y}}\neq{\mathbf{x}}\), such that \(f({\mathbf{x}})=f({\mathbf{y}})\) and none of them are flat, i.e., \(\phi_{x},\phi_{y}<1\). Since the vectors are equal across all dimensions, \(2\phi_{x}-1=2\phi_{y}-1\implies\phi_{x}=\phi_{y}\). Therefore, for all other dimensions:
\[(1-\phi_{x}){\mathbf{s^{\prime}}}^{T}_{i}{\mathbf{x^{\prime}}}=(1-\phi_{y}){ \mathbf{s^{\prime}}}^{T}_{i}{\mathbf{y^{\prime}}}\implies{\mathbf{s^{\prime} }}^{T}_{i}({\mathbf{y^{\prime}}}-{\mathbf{x^{\prime}}})=0\,. \tag{6}\]
Consider the matrix \(C_{w-1}\) whose \(i^{th}\) row for \(i\leq w-2\) is \({\mathbf{s^{\prime}}}_{i}\) and \((w-1)^{th}\) row is \({\mathbf{1}}\). Since all of its rows are independent, its rank is \(w-1\). Using rank-nullity theorem [17], its nullity is 1. Therefore, \(\exists\) a vector \({\mathbf{u}}\in\mathbb{R}^{w}\), with \(\|{\mathbf{u}}\|=1\) such that,
\[{\mathbf{y^{\prime}}}-{\mathbf{x^{\prime}}}=p{\mathbf{u}}\implies{\mathbf{y^{\prime}}}={\mathbf{x^{\prime}}}+p{\mathbf{u}} \tag{7}\]
We will prove that there exists a solution to the above other than the trivial solution \(p=0\). First, taking the square of the norm of both sides of Equation 7
\[\|{\mathbf{y^{\prime}}}\|^{2}=\|{\mathbf{x^{\prime}}}+p{\mathbf{u}}\|^{2}\implies 1=\|{\mathbf{x^{\prime}}}\|^{2}+p^{2}\|{\mathbf{u}}\|^{2}+2p{\mathbf{u}}^{T}{\mathbf{x^{\prime}}}\]\[\implies 1=1+p(p+2{\mathbf{x^{\prime}}}^{T}{\mathbf{u}})\implies p=0\text{ or }p=-2{\mathbf{u}}^{T}{\mathbf{x^{\prime}}}\,. \tag{8}\]
Therefore, for any given \({\mathbf{x^{\prime}}}\), if \({\mathbf{u}}^{T}{\mathbf{x^{\prime}}}\neq 0\), there exists \({\mathbf{y^{\prime}}}\neq{\mathbf{x^{\prime}}}\) which has the same shapelet space representation. In fact, there are infinitely many such \({\mathbf{x^{\prime}}}\) for which this holds. As a demonstration, pick any \({\mathbf{x^{\prime}}}\) which is linearly independent of all the \(w-1\) rows in \(C_{w-1}\) and is not a zero vector. To see that this choice works, note that if \({\mathbf{x^{\prime}}}^{T}{\mathbf{u}}=0\), then the nullity of the matrix \(C_{w}^{\prime}\) formed by appending \({\mathbf{x^{\prime}}}\) as a row to matrix \(C_{w-1}\) is 1 (i.e., \({\mathbf{u}}\) spans the null space of \(C_{w}^{\prime}\)). However, as all \(w\) rows of \(C_{w}^{\prime}\) are linearly independent, its rank is \(w\), which violates the rank-nullity theorem. As a result, there exists a \({\mathbf{y^{\prime}}}\neq{\mathbf{x^{\prime}}}\) such that \(f({\mathbf{x^{\prime}}})=f({\mathbf{y^{\prime}}})\).
**Discriminating any \({\mathcal{L}}\):** Due to the closeness preserving property, it follows that with \(w\) shapelets as described above, we can distinguish between any two local trends taken from an arbitrary choice of scale-free trend descriptor \({\mathcal{L}}\). By scale-free we mean a trend descriptor that does not distinguish based on scale, e.g., "high increase" vs "very high increase". On the other hand, it should be noted that we do not completely ignore the scale information. The flat dimension can be rewritten as \(sim({\mathbf{x}},\text{flat})=2\phi-1=2\exp(-\beta m)-1\), choosing \(m_{0}=0\). Given the value in the flat dimension, we can uniquely recover the average absolute slope \(m\), and thus we are also able to discriminate scale-based trend descriptors. Therefore, by choosing the \(w\) shapelets appropriately, we can discriminate any two local trends taken from an arbitrary choice of \({\mathcal{L}}\).
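To make the last point concrete, with \(m_{0}=0\) the average absolute slope can be recovered from the flat coordinate alone:

\[v=sim(\mathbf{x},\text{flat})=2\phi-1=2e^{-\beta m}-1\quad\Longrightarrow\quad m=-\frac{1}{\beta}\ln\!\left(\frac{v+1}{2}\right),\]

so the representation retains enough scale information to distinguish, e.g., a "high increase" from a "very high increase".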
### _Algorithm_
Here we present the pseudocode for DTW+S. Algorithm 1 implements our approach. It calls the function _TimeSeriesShape_ to compute SSR of the input time-series, which is presented in Algorithm 2. The function ShapeletSpaceTransform uses the similarity and shapelets described in the paper.
```
procedure DTW+S(\(\mathbf{a},\mathbf{b},\mathbf{S},\tau\))
    \(\mathbf{A}=\) TimeSeriesShape(\(\mathbf{a},\mathbf{S}\))
    \(\mathbf{B}=\) TimeSeriesShape(\(\mathbf{b},\mathbf{S}\))
    return \(DTW(\mathbf{A},\mathbf{B},\tau)\)
end procedure
```
**Algorithm 1** Dynamic time warping with shapes

```
procedure TimeSeriesShape(\(\mathbf{a},\mathbf{S}\))
    \(w\leftarrow\) length of each shapelet in \(\mathbf{S}\)
    \(T\leftarrow\) length of the time-series \(\mathbf{a}\)
    for \(i=1\) to \(T-w+1\) do
        \(\mathbf{A}[:,i]\leftarrow\) ShapeletSpaceTransform(\((a_{i},\ldots,a_{i+w-1}),\mathbf{S}\))
    end for
    return \(\mathbf{A}\)
end procedure
```
**Algorithm 2** Shapelet-space representation of a time-series

In Algorithm 1, DTW operates on a cost matrix whose entry comparing column
\(i\) with column \(j\) is given by the square of the Euclidean distance between the \(i^{th}\) column of \(\mathbf{A}\) and the \(j^{th}\) column of \(\mathbf{B}\), i.e., \(\|\mathbf{A}[:,i]-\mathbf{B}[:,j]\|^{2}\). Figure 11 provides a visualization of SSR matrices. Details of the interpretation are provided in Section V. The choice of warping window \(\tau\) depends on the application. For a classification task, it can be treated as a hyper-parameter and identified through validation on a held-out set. For epidemics, suppose two models generate projections under the same assumptions; they may be predicting multiple peaks. Far-away peaks may refer to different events (e.g., two different variants causing two waves). However, peaks that occur 4-5 weeks apart across models may be referring to the same event. Therefore, \(\tau=5\) weeks would be a reasonable choice.
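Putting the pieces together, the two procedures above can be sketched in a few lines of Python, reusing the illustrative helpers `shapelet_space_representation` and `dtw_distance` introduced earlier (the column-wise squared Euclidean cost follows the description of the cost matrix; the function names are ours):

```python
import numpy as np

def ssr_matrix(a, shapelets):
    """Shapelet-space representation of a time-series: one column per sliding window."""
    w = len(shapelets[0])
    cols = [shapelet_space_representation(a[i:i + w], shapelets)
            for i in range(len(a) - w + 1)]
    return np.stack(cols, axis=1)            # shape: d x (T - w + 1)

def dtw_plus_s(a, b, shapelets, tau=None):
    """DTW+S distance: DTW over SSR columns with a squared Euclidean column cost."""
    A, B = ssr_matrix(a, shapelets), ssr_matrix(b, shapelets)
    return dtw_distance(list(A.T), list(B.T), window=tau)
```

In practice, the \(m_{0}\) and \(\beta\) used by the transformation would be set as described in Section IV and passed through to `shapelet_space_representation`.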
### _Clustering and Classification_
We can use DTW+S to cluster time-series with any clustering or classification algorithm that allows customizable distance measures. As a demonstration, we use agglomerative clustering [18] on the distance matrix where each entry \(\mathcal{D}(\mathbf{a}_{i_{1}},\mathbf{a}_{i_{2}})\) is the DTW+S distance between time-series \(i_{1}\) and \(i_{2}\). The algorithm starts with each time-series as its own cluster, and then recursively merges clusters greedily based on the distances. We use the Silhouette Coefficient [19] to decide the optimal number of clusters. For classification, we use the \(1\)-nearest neighbor method [5]. This choice was made so that the decisive factor in correct classification is the distance measure, that is, whether two time-series that are closest to each other belong to the same class. This is also a popular way of evaluating distance measures between time-series [20].
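Both tasks operate on a precomputed DTW+S distance matrix; a SciPy-based sketch is given below (the 'average' linkage is an assumption, since the linkage criterion is not specified above):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def agglomerative_labels(D, k):
    """Hierarchical clustering of time-series from a symmetric DTW+S distance matrix D."""
    Z = linkage(squareform(D, checks=False), method='average')
    return fcluster(Z, t=k, criterion='maxclust')

def one_nn_predict(D_test_train, train_labels):
    """1-NN classification: each test series takes the label of its nearest training series."""
    return np.asarray(train_labels)[np.argmin(D_test_train, axis=1)]
```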
### _Ensemble Generation_
The existing ensemble methods are designed to aggregate individual projections over time, thus measuring the scale (e.g., number of hospitalizations) at time \(t\). They are not designed to aggregate when an event (e.g., a peak) will take place. However, a viewer tends to interpret both the scale and timing from the ensemble plot. We are interested in the following problem: given \(n\) time-series \(\mathbf{a}_{i}=[a_{i}(1),a_{i}(2),\dots,a_{i}(T)],i\in\{1,\dots,n\}\), find an "ensemble" time-series that captures the aggregate behavior in both time and scale. To address this, we assume that each trajectory tries to estimate a sequence of latent "events". With this perspective, for some sequence of events \(e_{1},e_{2},\dots\), a time-series captures the timing of \(e_{j}\) and its scale. Therefore, the time-series can be interpreted as \(\mathbf{a}_{i}=[(t_{i}(e_{1}),a_{i}(e_{1})),(t_{i}(e_{2}),a_{i}(e_{2})),\dots]\). For any given event \(e_{j}\), the aggregate time-series can be obtained by averaging both the timing and the severity dimensions of the individual time-series:
\[(\bar{t}(e_{j}),\bar{a}(e_{j}))=\left(\frac{\sum_{i}t_{i}(e_{j})}{n},\frac{ \sum_{i}a_{i}(e_{j})}{n}\right) \tag{9}\]
However, we do not observe these "events" explicitly. We define an event to be reflected in the time-series by a local trend. When two time-series are aligned by DTW+S, each alignment corresponds to an event. Formally, in the shapelet space representations \(\mathbf{A}\) and \(\mathbf{B}\), if columns \(\tau_{1}\) and \(\tau_{2}\) are aligned, then the local trend at time \([\tau_{1},\tau_{1}+w-1]\) in time-series \(\mathbf{a}\) and that at time \([\tau_{2},\tau_{2}+w-1]\) in time-series \(\mathbf{b}\) are defined to correspond to the same "event". Suppose we use DTW+S to align \(n\) projections. Then, each projection \(i\) contributes one point \((t_{i}(j),a_{i}(t_{i}(j)))\) for each alignment \(j\). Averaging these points across projections using Equation 9 yields one ensemble point per alignment \(j\). Finally, if desired, we can interpolate these points to estimate the value of \(\bar{a}(t)\) for \(t\in\{1,2,\dots,T\}\). Note that this approach is based on the following assumptions. First, each time-series has a similar sequence of shapes but may differ in timing and severity/scale. Second, the interpolation assumes smoothness in the desired ensemble.
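A sketch of this aggregation step is given below; the data layout (a list of aligned time indices per event) is a hypothetical convenience, since in practice these indices come from the DTW+S warping path:

```python
import numpy as np

def ensemble_points(trajectories, aligned_times_per_event):
    """Average timing and scale over trajectories for each aligned 'event' (cf. Eq. 9).

    trajectories: list of n arrays a_1, ..., a_n
    aligned_times_per_event: for each event j, a list of n time indices, one per trajectory
    """
    points = []
    for times in aligned_times_per_event:
        t_bar = float(np.mean(times))
        a_bar = float(np.mean([traj[t] for traj, t in zip(trajectories, times)]))
        points.append((t_bar, a_bar))
    return points  # these (time, value) pairs can then be interpolated onto a regular grid
```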
Figure 4 demonstrates this approach for two time-series (in black solid line) that have two peaks each with different scales. We compare our approach against the mean ensemble and DTW (without SSR). Note that the mean ensemble results in a small second peak which is closer to the smaller peak among the input time-series. Further, its timing is biased towards the larger peak. The DTW ensemble results in one large peak by aggregating scale and timing of the first peak of one and the second peak of the other time-series. The DTW+S ensemble results in two peaks as expected, where each peak correctly averages the corresponding timing and scale of the input time-series.
While similar aggregation has been discussed in the literature [7, 21], two key differences exist. First, our application is motivated by visual interpretation of "events", such as when the epidemic peaks. Second, our approach uses an interpretable shape-based measure instead of directly using the Euclidean distance on time-series. This allows us to define an "event".
Optimally aligning multiple time-series is NP-hard; polynomial-time approximation algorithms exist, including DTW Barycenter Averaging [7]. In this approach, the initial 'base' time-series is selected as the time-series among the set of trajectories that has the lowest total distance to the other trajectories. Then, in each iteration, we compute the pairwise alignment of all time-series with respect to the base time-series, and the result becomes the new 'base' time-series. The alignment is traditionally computed using DTW, which we replace with DTW+S. We demonstrate that the DTW+S-based Barycenter Averaging is better at capturing the properties of the constituent trajectories (Section IV-B).
Fig. 4: Applying mean, DTW, and DTW+S to develop ensemble of two time-series.
## IV Experiments
We conducted a series of experiments to demonstrate the utility of DTW+S. Specifically, we wish to demonstrate the following: (1) DTW+S results in a more reasonable clustering compared to several other approaches. (2) DTW+S leads to a more reasonable ensemble that captures the scale and timing of events. (3) DTW+S produces better classification results for many classification tasks, outperforming DTW. In all of our experiments, unless stated otherwise, we used the following set of shapelets: (i) 'increase': \([1,2,3,4]\), (ii) 'surge': \([1,2,4,8]\), (iii) 'peak': \([1,2,2,1]\), and (iv) 'flat': \([0,0,0,0]\). According to Theorems 1 and 2, these shapelets satisfy Property 1. They were chosen because they are easily interpretable, particularly in the domain of epidemics. The matrix \(C_{w}\), constructed as in Theorem 1 using this set of shapelets, has \(\|C_{w}^{-1}\|=13.1\), which is small when multiplied with functions of small \(\epsilon_{0},\epsilon_{1}\) as in Equation 5. Some other sets of shapelets that satisfy Property 1 were also tried, and their results were not significantly different.
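The conditioning check mentioned above can be reproduced along the following lines (a sketch; the exact value obtained depends on the normalization and matrix-norm conventions, which are assumptions here):

```python
import numpy as np

def cw_inverse_norm(shapelet_list):
    """Build C_w from mean-centered, unit-norm non-flat shapelets plus the all-ones row,
    and return the norm of its inverse as a rough conditioning measure."""
    rows = []
    for s in shapelet_list:
        s = np.asarray(s, dtype=float)
        c = s - s.mean()
        rows.append(c / np.linalg.norm(c))   # normalization convention is an assumption
    rows.append(np.ones(len(rows[0])))
    C = np.vstack(rows)
    return float(np.linalg.norm(np.linalg.inv(C)))

# non-flat shapelets from the experiments: increase, surge, peak
print(cw_inverse_norm([[1, 2, 3, 4], [1, 2, 4, 8], [1, 2, 2, 1]]))
```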
The flatness was calculated by setting \(m_{0}=0\) and \(\beta=-\ln 0.1/\theta\), where \(\theta\) is the median of the maximum "absolute" slope of each time-series. Recall that the "absolute" slope for a given window is calculated by averaging the absolute successive differences over the window. Choosing \(\beta\) in this way ensures that a window whose "absolute" slope equals the median maximum slope gets a low flatness of 0.1. All the code was written in MATLAB and is publicly available 1.
Footnote 1: [https://github.com/sec-usc/DTW_S_apps](https://github.com/sec-usc/DTW_S_apps)
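The selection of \(\beta\) described above can be written as a short helper (a sketch; window length 4 and the 0.1 flatness target follow the text, while the function name is ours):

```python
import numpy as np

def choose_beta(series_list, w=4, target_flatness=0.1):
    """Pick beta so that a window whose average absolute slope equals the median
    per-series maximum slope receives flatness target_flatness (with m0 = 0)."""
    max_slopes = []
    for a in series_list:
        a = np.asarray(a, dtype=float)
        slopes = [np.mean(np.abs(np.diff(a[i:i + w]))) for i in range(len(a) - w + 1)]
        max_slopes.append(max(slopes))
    theta = np.median(max_slopes)
    return -np.log(target_flatness) / theta   # beta = -ln(0.1) / theta
```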
### _Clustering: A Qualitative Evaluation_
We consider time-series projections for weekly influenza hospitalization for a US state by a model from the Influenza Scenario Modeling Hub [3]. It has 75 time-series, each corresponding to different choices of parameters and initialization. We calculated the dissimilarity matrix (all pair distances) using the following. (1) DTW+S: our method with an infinite window for alignment; (2) DTW+S (cos): same as DTW+S, except that cosine distance is used instead of Euclidean for aligning SSRs; (3) DTW, normalized: applying DTW after normalizing all time-series to zero mean and unit variance - a common normalization technique used with DTW [22]; (4) DTW: DTW without any transformation or normalization; (5) Euclidean, normalized: Euclidean distance without any time warping on standard-normalized time-series; and (6) Shapelet-only: Euclidean distance on the SSR without any time-warping. For DTW+S, we generate hierarchical clusters with the number of clusters selected using the Silhouette Coefficient as described in Section III-E. We use the same number of clusters for clustering using each of the above dissimilarity measures.
We refrain from a quantitative evaluation as the metric of evaluation would depend on the choice of a distance measure which is what we are evaluating. So, we choose to demonstrate qualitatively. Figure 5 shows the results of clustering from each of the distance measures. We observe that DTW+S
Fig. 5: Comparison of clustering results obtained from DTW+S against other distance measures. DTW+S with Euclidean (default) and cosine distance produces reasonable clustering. All other measures mix different patterns into one cluster.
produces four clusters with similarly shaped time-series. DTW with standard normalization mixes clusters 1, 2, and 4. The Shapelet-only measure mixes clusters 1 and 3. DTW+S (cos) produces the same clustering (with a different ordering). DTW without normalization does not produce any discernible pattern in the trends and instead seems to group together those time-series that have similar peak heights. Euclidean distance with standard normalization mixes clusters 1, 2, and 4. Note that the Shapelet transformation alone is not sufficient to capture similar time-series because it is not flexible across time. On the other hand, DTW with simple standard normalization cannot capture similar trends that occur at different scales. However, when they are combined in DTW+S, they produce reasonable clustering.
### _Ensembling_
We consider seven sets of trajectories, including 75 projections from an Influenza model (Set 1), and more than 1000 trajectories from multiple influenza models for each of six scenarios (Sets 2-7). We compare the following four approaches for ensembling. (i) Mean ensemble: the most popular ensemble approach that simply averages values at each time-point [2, 4]; (ii) DTW BA: barycenter averaging with DTW; (iii) DTW (z-norm) BA: barycenter averaging with DTW after applying z-normalization to all trajectories; and (iv) DTW+S BA: barycenter averaging with DTW+S. The resulting ensembles for Set 7 are shown in Figure 6. We observe that DTW+S BA produces the highest peak among all ensembles. The mean ensemble and DTW (z-norm) BA produce almost identical results (the green line overlaps with the blue line) and flatten the peak. To quantitatively compare the ensembles, we measure the fractional error of the ensembles in representing the peak size and the peak timing (i.e., the value of the peak and the time at which it occurs). The ground truth is obtained by extracting the peak values (and timing) of all trajectories and averaging them. The results are presented in Table I. Note that our approach is the only one that captures the peak timing and size of the underlying trajectories well for all sets.
Figure 7 shows an intermediate step of computing alignments for a subset of trajectories in Set 1. While DTW+S is not specifically designed to identify peaks, we observe that almost all peaks among different time-series are aligned (marked with a pink 'x'). The circle denotes the centroid, i.e., the ensemble point of these points. Thus, the ensemble is able to provide a better estimate of the average size of the peak.
### _Classification_
For classification, we use 64 datasets available at the UCR Time Series Classification 2015 Archive [22]. We removed all the datasets where the length of the time-series was more than 800 or the number of training data instances was more than 700, giving us 64 datasets out of 86. For results on the remaining 22 datasets, please see Section IV-C2. Please recall that our objective is not to create the best classifier, but to demonstrate that DTW+S is able to discriminate between time-series of different classes for many datasets and to understand a characterization of such datasets.
For each dataset, we find the 1-nearest neighbor for each instance of test time-series in the training set using DTW+S and assign its class. Then, we evaluate our approach using error defined as the fraction of misclassification, which is the commonly used evaluation method for these datasets [5]. We treat the warping window \(\tau\) as a hyperparameter. We use leave-one-out cross-validation to identify \(\tau\) as a fraction of the length of the time-series \(T\) from a set of values - \(\tau=\{0,0.01T,0.02T,\ldots,0.07T\}\). We compare our results with the reported best performance of DTW [22]. The results are presented in Figure 8. In the scatter plot, each point
Fig. 6: Results of different ensembling approaches on Set 7. The dark yellow lines represent the individual trajectories.
Fig. 7: An instance of alignment, marked by a pink ‘x’ on the individual time-series. The pink circle represents the ensemble point. Previous circles represent the ensemble points obtained from previous alignments.
represents errors on a dataset. The blue line represents the line \(y=x\), i.e., when DTW+S has the same error as that of DTW. A point lying below (and to the right of) this line indicates that the DTW+S error was lower than the DTW error. We observe that DTW+S outperforms DTW on 57.7% of the datasets. The two measures lead to similar errors for many datasets (along the \(y=x\) line). "Corr only" represents the DTW+S measure obtained by ignoring the "flat" shapelet. In this case, DTW+S reduces to the set of Pearson correlations with the three other shapelets. DTW+S (corr only) outperforms DTW on only 39.1% of the datasets. Additionally, it produces some large errors. Correlation ignores scale completely, so small fluctuations that are noise rather than meaningful patterns can cause the measure to consider a window similar to some other significant pattern. Thus, the _flatness dimension has a significant contribution_ to the performance of DTW+S.
We note that DTW+S does not outperform DTW on all datasets. This is _expected as DTW+S focuses more on the shapes rather than the scale_. Figure 9 shows examples where DTW+S and DTW have significantly different performances. For the dataset in Figure 9a, DTW+S has an error of 0.0017 while DTW has an error of 0.13. This is because DTW+S picks up small local trends (a spike around time 30) present in one class and absent in another class. This is not captured by DTW alone. For the dataset in Figure 9b, DTW+S has an error of 0.23 while DTW has a much lower error of 0.07. For this dataset, the difference between the classes is the scale rather than the shape in certain parts of the time-series. DTW is able to identify this distinction, making it a less desirable dataset for DTW+S.
#### V-B1 Smoothing
Another type of dataset that would be undesirable for DTW+S is one where the time-series have high noise (Figure 9c). This noise impacts the identification of \(\beta\) for the flatness parameter and the identification of local trends. One way to address this is to smooth the time-series before finding the Shapelet-space Transformation. While a domain expert may choose a reasonable method for smoothing, we use a moving-average method with the window size chosen with leave-one-out cross-validation from the set \(\{0,0.1T,0.2T,0.4T\}\). The results are shown in Figure 10. The first plot shows that allowing smoothing generally improves the error (many datasets fall to the bottom right). For a very small number of datasets, validation seems to pick a smoothing window that results in worse performance on the test set. In practice, this could be mitigated by having a larger training set. The second plot of Figure 10 shows that smoothing significantly brings down the error for some datasets (e.g., the three circles on the left of the plot drop close to zero). As an example, for the dataset in Figure 9c, where DTW+S was significantly worse (error of 0.35) than DTW (0.0044), smoothing drops the error to 0.0022. Finally, the third plot in Figure 10 compares our approach against shapeDTW [13]. We select the "HOG1D" version of shapeDTW as it performs the best for this collection of datasets. Without smoothing, there are a few datasets where DTW+S is much worse than shapeDTW (higher in the plot). After smoothing, most datasets accumulate around the \(y=x\) line.
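The smoothing step itself is just a moving average applied before the shapelet transform; a minimal sketch (the 'same'-mode convolution used for edge handling is an assumption):

```python
import numpy as np

def smooth(a, window):
    """Moving-average smoothing applied to a time-series before computing its SSR."""
    a = np.asarray(a, dtype=float)
    if window is None or window <= 1:
        return a
    kernel = np.ones(int(window)) / int(window)
    return np.convolve(a, kernel, mode='same')
```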
#### V-B2 Results on "Large" Datasets
In the above analysis, we ignored datasets that were "large", either because the time-series were long (over 700 time steps) or because they had a large number of training examples (over 800). One reason for ignoring large time-series is the computational cost. The complexity of performing 1-NN classification for each test case is \(\mathcal{O}(wT^{2}|\mathcal{D}_{train}|)\), where \(|\mathcal{D}_{train}|\) is the number of time-series in the training set. The second reason is interpretability. Throughout the paper, we use shapelets of length four, expecting that patterns in four consecutive points form a local trend. In time-series of thousands of points, with potentially high sampling rates, it remains
| | | Mean ensemble | DTW BA | DTW (z-norm) BA | **DTW+S BA** |
| --- | --- | --- | --- | --- | --- |
| Peak Size | Set 1 | -0.37 | -0.04 | -0.01 | -0.02 |
| | Set 2 | -0.29 | -0.33 | -0.29 | -0.01 |
| | Set 3 | -0.38 | -0.31 | -0.38 | -0.03 |
| | Set 4 | -0.28 | -0.22 | -0.28 | -0.01 |
| | Set 5 | -0.38 | -0.23 | -0.38 | -0.03 |
| | Set 6 | -0.28 | -0.21 | -0.29 | -0.02 |
| | Set 7 | -0.37 | -0.18 | -0.37 | -0.03 |
| Peak Timing | Set 1 | 0.08 | -0.05 | -0.01 | 0.08 |
| | Set 2 | -0.08 | -0.10 | -0.08 | -0.03 |
| | Set 3 | -0.15 | -0.11 | -0.15 | 0.07 |
| | Set 4 | -0.09 | -0.08 | -0.09 | -0.03 |
| | Set 5 | 0.10 | -0.09 | -0.10 | 0.07 |
| | Set 6 | -0.09 | -0.08 | -0.09 | -0.02 |
| | Set 7 | -0.10 | -0.08 | -0.10 | 0.07 |

TABLE I: Fractional error in estimation of peak size and timing (in the original, darker red shading indicates higher error).
Fig. 8: Performance of DTW+S on 64 datasets.
unclear if a set of four points can form a discriminative trend without an in-depth study of the data source. Regardless, we ran experiments on these datasets by sampling - for datasets that were ignored due to the length of the time-series, we sampled every 10th point. For those that were ignored due to having many training instances, we sampled 100 (at regular intervals). We observed that, despite data being sampled, on exactly 50% of the datasets DTW+S outperformed DTW.
## V Discussion
_Interpretability._ A key advantage of our approach is interpretability. Since the SSR is determined by the similarity of the given trend with respect to pre-defined shapes of interest, one can easily make sense of the representation. This is particularly useful for application domains where there exist certain shapes of interest (e.g., increase and peak in public health) and there is resistance towards adopting black-box approaches. Figure 11 shows the SSR of two samples each from two classes of a dataset. This is the dataset corresponding to Figure 9a, where we observed small peaks appearing in one of the classes. Based on the SSR of the samples, we observe that samples for class 1 show a bright yellow (high value) corresponding to dimension 2. On the other hand, samples from class 2 have a dull yellow (lower value) for the same dimension. This dimension corresponds to the shapelet "peak" \(=[1,2,2,1]\), thus suggesting that a peak around \(t=25\) makes sample 1 in class 1 more similar to sample 2 in class 1 than to the samples in the other class.
_Limitations._ Our approach is _not designed for general-purpose classification_ or encoding, particularly where shapes have little impact compared to the scale. Furthermore, while warping windows and smoothing parameters can be set through validation, the best utilization of DTW+S requires some domain knowledge to understand their appropriate setting and the choice of shapelets. However, Theorems 1 and 2 act as guidelines to ensure that the chosen shapelets satisfy the desired property of closeness preservation. Another limitation is the implementation - currently, we use \(\mathcal{O}(wT^{2})\) time and space for DTW on the distance matrix obtained
Fig. 11: Interpreting the Shapelet Space Representation.
Fig. 10: Results obtained by DTW+S with smoothing. (a) Improvement on many datasets where DTW+S was worse than DTW. (b) Comparison against shapeDTW.
Fig. 9: Section of time-series color-coded to show the different classes. (a) DTW+S picks up small local trends that are not captured by DTW alone. (b) The difference in the classes is the scale rather than the shape in certain parts of the time-series, making it not a desirable dataset for DTW+S. (c) The time-series possess noise preventing DTW+S from identifying trends.
from SSR. Here, the \(T^{2}\) term is contributed by the number of distances to be calculated for a pair of time-series of length \(T\), and \(w\) is the number of shapelets. In future work, we will explore existing optimizations of DTW [23] and attempt to transfer them to DTW+S.
## VI Conclusion
We have proposed a novel interpretable distance measure for time-series that looks for a sequence of similar trends occurring around the same time. It can capture local trends in a representation that is closeness-preserving. We have demonstrated that our approach DTW+S, which applies DTW on our SSR matrices, results in better clustering, which cannot be achieved by DTW or SSR alone. We have developed an ensemble method using DTW+S that captures both the aggregate scale and timing of the individual time-series significantly better than the currently used mean ensemble and DTW-based barycenter averaging. We have shown that DTW+S can result in better classification compared to other measures for a large number of datasets, particularly those where local trends play a key role.
|
2309.12474 | SAVME: Efficient Safety Validation for Autonomous Systems Using
Meta-Learning | Discovering potential failures of an autonomous system is important prior to
deployment. Falsification-based methods are often used to assess the safety of
such systems, but the cost of running many accurate simulation can be high. The
validation can be accelerated by identifying critical failure scenarios for the
system under test and by reducing the simulation runtime. We propose a Bayesian
approach that integrates meta-learning strategies with a multi-armed bandit
framework. Our method involves learning distributions over scenario parameters
that are prone to triggering failures in the system under test, as well as a
distribution over fidelity settings that enable fast and accurate simulations.
In the spirit of meta-learning, we also assess whether the learned fidelity
settings distribution facilitates faster learning of the scenario parameter
distributions for new scenarios. We showcase our methodology using a
cutting-edge 3D driving simulator, incorporating 16 fidelity settings for an
autonomous vehicle stack that includes camera and lidar sensors. We evaluate
various scenarios based on an autonomous vehicle pre-crash typology. As a
result, our approach achieves a significant speedup, up to 18 times faster
compared to traditional methods that solely rely on a high-fidelity simulator. | Marc R. Schlichting, Nina V. Boord, Anthony L. Corso, Mykel J. Kochenderfer | 2023-09-21T20:41:47Z | http://arxiv.org/abs/2309.12474v2 | # SAVME: Efficient Safety Validation for Autonomous Systems Using Meta-Learning
###### Abstract
Discovering potential failures of an autonomous system is important prior to deployment. Falsification-based methods are often used to assess the safety of such systems, but the cost of running many accurate simulation can be high. The validation can be accelerated by identifying critical failure scenarios for the system under test and by reducing the simulation runtime. We propose a Bayesian approach that integrates meta-learning strategies with a multi-armed bandit framework. Our method involves learning distributions over scenario parameters that are prone to triggering failures in the system under test, as well as a distribution over fidelity settings that enable fast and accurate simulations. In the spirit of meta-learning, we also assess whether the learned fidelity settings distribution facilitates faster learning of the scenario parameter distributions for new scenarios. We showcase our methodology using a cutting-edge 3D driving simulator, incorporating 16 fidelity settings for an autonomous vehicle stack that includes camera and lidar sensors. We evaluate various scenarios based on an autonomous vehicle pre-crash typology. As a result, our approach achieves a significant speedup, up to 18 times faster compared to traditional methods that solely rely on a high-fidelity simulator.
## I Introduction
One way of demonstrating the reliability of an autonomous system is through rigorous real-world testing, a process that requires substantial resources and is often economically infeasible [1]. For this reason, simulations have become the method of choice in both research and industry for the validation of autonomous systems. Compared to real-world testing, simulations are faster, cheaper, and safer [2].
While empirical evidence supports the advantages of simulations over real-world testing, one fundamental question remains: _How much can we trust simulations_? Safety validation relies on a vast number of simulations to find edge cases through falsification [3]. This leads to a dilemma: should the accuracy of the simulation or its runtime be prioritized? Many modern-day simulation tools give the user control over numerous settings that affect behavior, output, and compute cost. Such settings range from different equations of motion to numerical solvers and sensor models [4]. Depending on the system under test, different fidelity settings are more important than others in terms of arriving at accurate safety assessments.
There are two important components of accelerating the validation process: speeding up the runtime and efficiently finding scenarios where the system under test fails. Many different approaches for finding failure scenarios have been studied in the literature. One such approach is black-box optimization [5], which has been applied to safety validation of autonomous systems [6, 7]. Two other similar approaches are path planning [8] and reinforcement learning [9]. A detailed review of methods is provided by Corso et al. [3]. These approaches require a simulation that is assumed to model reality.
Photorealistic simulators such as Carla [10], Airsim [11], or the products by Applied Intuition have been effective tools for developing and testing autonomous driving stacks. This realism, however, comes at the cost of increased computational resources required to run safety analyses. Since most simulators expose a number of fidelity settings to the user, it is possible to adapt those settings for specific needs. For example, an autonomous system stack without cameras does not need rendering. Unfortunately, this tuning process is non-trivial and requires domain expertise. For this reason, there are frameworks that make use of different fidelity settings by combining compute-intensive results from a high-fidelity simulator with computationally cheaper results from a low-fidelity simulator [12]. This approach has been developed further to support more than two levels of fidelity and has also been applied to safety validation of autonomous systems [13]. Multi-fidelity approaches have been shown to perform well while keeping computational requirements low, but are limited to a finite number of fidelity setting combinations [14]. In reality, however, simulators often offer dozens of parameters that determine the fidelity of a simulation. Consequently, it is infeasible to consider all possible combinations of settings.
Rather than combining the results from a limited number of simulators with expert-selected fidelity settings, our approach simultaneously learns the optimal fidelity settings (for an arbitrary simulator with many fidelity settings) while concurrently learning scenario configurations where the system under test is more likely to fail. The overall goal is to maximize the number of failures found in a given time. Our approach works with a combination of mixed continuous and discrete fidelity and scenario settings while taking the uncertainty of the outcome into account. To achieve this, we use a meta-learning framework where the task is learning the scenario parameters that lead to system failure. The optimized fidelity settings are considered as a prior that facilitate finding failures for new scenarios that have yet to be encountered during the training faster. The framework requires a high-fidelity simulator--providing the ground truth--and a learned-fidelity simulator during the
training phase. Uncertainty is taken into account by framing the optimization problem as a multi-armed bandit problem using Bayesian model estimation and Thompson sampling.
We validate the feasibility of our approach using a cutting-edge 3D driving simulator. A total of 16 fidelity settings can be controlled, leveraging an autonomous vehicle stack comprising both camera and lidar sensors. The scenarios used for the experiments are derived from an autonomous vehicle-specific pre-crash typology [15]. Through our experiments, we demonstrate the capability of our framework to not only learn the probability distribution of failure likelihood across scenario parameters, but also the distribution for accurate and fast simulation results across the fidelity settings. Despite the computational overhead originating from the parallel use of the high-fidelity and learned fidelity simulator, we reach the break-even point--the time at which we found more failures compared to only using a high-fidelity simulator--before the end of the training phase. Through the parallel training, we reduce the average time to find a failure by a factor of 18. Furthermore, we demonstrate that the acquired distribution over the fidelity settings can be used as a warm start for learning the parameter distributions for a novel set of scenarios. Using this head start, we can accelerate the learning process up to two times over the parallel learning with uniform prior as described above.
This paper makes two key contributions. First, it presents a simulator-agnostic framework that enables us to find distributions over both scenario parameters that yield a high probability of failure and fidelity settings that yield a high probability of a fast runtime while maintaining accuracy. Second, these scenario-agnostic fidelity settings facilitate the accelerated learning of the distribution over scenario parameters for new scenarios, increasing the efficiency of validating new scenarios after training.
## II Methodology
Our framework is based on meta-learning and Bayesian approaches for multi-armed bandits. This section explains how we combine both techniques to maximize the number of failures we find with constrained compute time. Let \(s\) be an abstract scenario description and \(\phi\) represent the scenario configuration, which is a vector that contains all values to create a runable instance of the scenario. The learned-fidelity settings--denoted by \(\theta_{\text{LF}}\)--are the values of all the fidelity settings for the learned-fidelity simulator. The SAVME framework learns the scenario-specific distributions \(p_{\psi}(\phi\mid s)\)--with \(\psi\) as parameters--at the same time as the distribution over the scenario-agnostic fidelity settings \(p_{\omega}(\theta_{\text{LF}})\), parameterized by \(\omega\). A core component of our framework is the concurrent use of a high-fidelity simulator
Fig. 1: Meta-learning framework for efficient safety validation.
with fidelity settings \(\theta_{\mathrm{HF}}\) and a learned-fidelity simulator with fidelity settings \(\theta_{\mathrm{LF}}\). To be precise, we want to solve two constrained optimization problems:
\[\underset{\psi}{\text{maximize}}\quad\mathbb{E}_{s\sim p(s),\;\phi\sim p_{\psi}(\phi\mid s)}\big[\Pr(\text{failure of the system under test}\mid\phi,s)\big]\]

\[\underset{\omega}{\text{maximize}}\quad\mathbb{E}_{\theta_{\mathrm{LF}}\sim p_{\omega}(\theta_{\mathrm{LF}})}\big[\Pr(\mathcal{S}\mid\theta_{\mathrm{LF}})\big]\qquad\text{subject to}\quad\mathbb{E}\big[t_{\mathrm{LF}}/t_{\mathrm{HF}}\big]\leq C_{\mathrm{budget}}.\]

The first problem seeks scenario configurations \(\phi\) that are likely to make the system under test fail; the second seeks fidelity settings \(\theta_{\mathrm{LF}}\) whose simulations are accurate with respect to the high-fidelity outcome while remaining cheap enough to respect the compute budget \(C_{\mathrm{budget}}\).

### _Learning Fidelity Settings_

During training, every scenario instance is executed with both the high-fidelity settings \(\theta_{\mathrm{HF}}\) (providing the ground truth) and the sampled learned-fidelity settings \(\theta_{\mathrm{LF}}\), and the two outcomes are compared: the learned-fidelity run is a true positive (TP) or true negative (TN) when it agrees with the high-fidelity run on whether a failure occurred, and a false positive (FP) or false negative (FN) otherwise. Roughly, a success \(\mathcal{S}\) is an accurate outcome whose relative runtime \(t_{\mathrm{LF}}/t_{\mathrm{HF}}\) stays within the budget \(C_{\mathrm{budget}}\), and the remaining cases are losses \(\mathcal{L}\); the precise definitions are given in Table I. Learning \(p_{\omega}(\theta_{\mathrm{LF}})\) is framed as a multi-armed bandit problem in which each arm corresponds to one of the values that a fidelity setting
can take on. Although many fidelity settings are binary or categorical and well-suited for multi-armed bandit problems, other settings, such as distances, are continuous. By discretizing continuous fidelity settings, however, we can still use them with the multi-armed bandit framework.
The belief over the success for each possible fidelity setting value is represented by a beta distribution where \(\alpha=n_{\mathrm{prior},\mathcal{S}}+n_{\mathcal{S}}\) and \(\beta=n_{\mathrm{prior},\mathcal{L}}+n_{\mathcal{L}}\) with \(n_{\mathcal{S}}\) and \(n_{\mathcal{L}}\) being the counts of successes and losses as defined in Table I, respectively. Prior knowledge about the distribution can be incorporated by adjusting \(n_{\mathrm{prior},\mathcal{S}}\) and \(n_{\mathrm{prior},\mathcal{L}}\), whereas a uniform prior is equal to \(n_{\mathrm{prior},\mathcal{S}}=n_{\mathrm{prior},\mathcal{L}}=1\).
To extract the optimal fidelity settings, the distribution over \(\theta_{\mathrm{LF}}\) needs to be learned as accurately as possible. We use Thompson sampling [17, 18] to balance exploration and exploitation during training. For discrete fidelity settings, we can use Thompson sampling as described in the literature. For continuous fidelity settings, we use a stratified sampling scheme where the bin is sampled through Thompson sampling and the actual fidelity setting value is sampled within the bin according to either a uniform distribution (if uniform intervals were chosen) or a log-uniform distribution (if logarithmic intervals were chosen).
During the evaluation phase, there is no further need for exploration, and the _optimal_ fidelity settings can be selected using the greedy MAP estimate for each fidelity setting:
\[\theta^{*}_{LF,i}=\arg\max_{j}\frac{\alpha_{j}-1}{\alpha_{j}+\beta_{j}-2}, \tag{5}\]
where \(i\) is indexing the fidelity setting and \(j\) is indexing the possible values for each fidelity setting. For continuous variables, we determine the expected value based on either a uniform or log-uniform distribution.
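To make the bandit mechanics concrete, the following Python sketch maintains one Beta belief per candidate value of a single fidelity setting, draws a value by Thompson sampling during training, updates the counts after each run, and extracts the greedy MAP value of Eq. (5) for evaluation. It is an illustrative sketch under the independence assumption stated above, not code from the SAVME repository; the class and variable names are ours.

```python
import random

class BetaBandit:
    """One independent Beta(alpha, beta) belief per candidate value of a fidelity setting."""

    def __init__(self, values, n_prior_s=1, n_prior_l=1):
        self.values = list(values)                   # discrete (or discretized) candidate values
        self.alpha = [n_prior_s] * len(self.values)  # prior + observed successes S
        self.beta = [n_prior_l] * len(self.values)   # prior + observed losses L

    def thompson_sample(self):
        """Training: draw one success probability per value and pick the arg max (Thompson sampling)."""
        draws = [random.betavariate(a, b) for a, b in zip(self.alpha, self.beta)]
        return max(range(len(self.values)), key=draws.__getitem__)

    def update(self, j, success):
        """A success (accurate outcome within the runtime budget) increments alpha, a loss increments beta."""
        if success:
            self.alpha[j] += 1
        else:
            self.beta[j] += 1

    def map_index(self):
        """Evaluation: greedy MAP estimate, i.e. the mode (alpha-1)/(alpha+beta-2) of each belief (Eq. 5)."""
        def mode(j):
            denom = self.alpha[j] + self.beta[j] - 2
            return 0.5 if denom == 0 else (self.alpha[j] - 1) / denom
        return max(range(len(self.values)), key=mode)
```

A full configuration \(\theta_{\mathrm{LF}}\) is then obtained by querying one such bandit per fidelity setting; for a continuous setting, the sampled index selects a bin and the concrete value is drawn uniformly or log-uniformly inside that bin, as described above.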
### _Learning Failure Scenarios_
The method for learning failure scenarios--scenarios that are likely to lead to a failure of the system under test--is similar to the fidelity-learning framework described in Section II-B with three differences:
1. A success \(\mathcal{S}_{\mathrm{scenario}}\) is only dependent on the outcome and is defined when the outcome \(o\) is either TP or FN. Consequently a loss \(\mathcal{L}_{\mathrm{scenario}}\) is defined when the outcome \(o\) is either TN or FP.
2. As each scenario type can have different parameters, a distribution \(p_{\psi}(\phi)\) must be learned for each scenario \(s\) which can be written as a conditional distribution \(p_{\psi}(\phi\mid s)\).
3. Instead of using the MAP estimate of the learned distribution during evaluation, we sample the scenarios from the learned distribution to prevent running the same scenario instance repeatedly.
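Reusing the BetaBandit class from the previous sketch, the scenario side could look as follows; the parameter spaces below are placeholders, and the helper names are ours rather than the repository's.

```python
# A scenario-side "success" only asks whether the (ground-truth) high-fidelity run
# showed a failure, i.e. the outcome is TP or FN, independently of the runtime.
def scenario_success(outcome):
    return outcome in ("TP", "FN")

# One bandit per scenario parameter, kept separately for every scenario type s:
# together they realise the conditional distribution p_psi(phi | s).
scenario_parameter_space = {
    "scenario_1": {"ego_speed_mps": [5, 10, 15], "lateral_offset_m": [0.0, 0.5, 1.0]},  # illustrative
}
scenario_bandits = {
    s: {param: BetaBandit(values) for param, values in params.items()}
    for s, params in scenario_parameter_space.items()
}

def sample_configuration(s):
    # During evaluation the configuration phi is sampled from the learned belief
    # (point 3 above) instead of taking the MAP, so the same instance is not rerun repeatedly.
    return {param: b.values[b.thompson_sample()] for param, b in scenario_bandits[s].items()}
```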
## III Experiments
We demonstrate the feasibility of our approach by using a state-of-the-art 3D autonomous driving simulator by Applied Intuition,1 a widely adopted platform in the autonomous driving industry. All experiments are run on a machine with an Intel Core i9-9900KF CPU, 64GB RAM, and an NVIDIA RTX 2080 Ti GPU. SAVME is simulator-agnostic, and we provide starter code with a generic simulator and instructions.2
Footnote 1: [https://www.appliedintuition.com](https://www.appliedintuition.com)
Footnote 2: [https://github.com/sisl/SAVME.git](https://github.com/sisl/SAVME.git)
### _System under Test_
We use a stack that consists of a localization sensor, a camera sensor, and a lidar sensor to demonstrate that our approach is capable of handling complex systems with many fidelity settings. No motion planning is required because the desired lateral path is specified in the scenarios. For lateral path tracking, we use a Stanley controller [19], and for longitudinal control we use a PI controller that keeps the vehicle at a desired speed if no obstacle is detected. In the case of a predicted collision, a constant braking force is applied. Collision prediction is a two-stage process that begins with analyzing the camera image using a pre-trained YOLOv5s object detection network [20] trained on the widely-used COCO object detection dataset [21].
If an object of the category _vehicle_ or _person_ is detected, the lidar signal is filtered according to the discovered bounding box and the weighted centroid of the filtered lidar pointcloud is taken as the measurement of the obstacle's position. The weights are proportional to the intensity of the point as reported by the lidar sensor. We predict the closest distance between the ego vehicle and the obstacle based on a first-order point-mass model which is fitted to the current and previous obstacle positions. If the predicted minimal distance is less than a threshold of one car length, the brake event is triggered. We choose this safety buffer to account for noisy measurements and the inaccuracies of the first-order model. A more detailed description can be found in the repository.
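The two-stage collision check just described can be summarised by the sketch below, which assumes obstacle positions expressed in the ego frame and a constant-velocity (first-order point-mass) extrapolation; the car length, horizon, and function names are illustrative assumptions, not the exact values of the stack in the repository.

```python
import numpy as np

CAR_LENGTH_M = 4.5  # illustrative; the paper only states "one car length" as the safety buffer

def weighted_centroid(points_xyz, intensities):
    """Obstacle position estimate: intensity-weighted centroid of the lidar points
    that fall inside the camera bounding box of a detected vehicle or person."""
    w = np.asarray(intensities, dtype=float)
    return (np.asarray(points_xyz, dtype=float) * w[:, None]).sum(axis=0) / w.sum()

def predicted_min_distance(p_prev, p_curr, dt, horizon_s=3.0, step_s=0.1):
    """First-order point-mass model: fit a constant velocity to the previous and current
    obstacle positions (ego frame) and return the closest predicted approach to the ego."""
    p_prev, p_curr = np.asarray(p_prev, float), np.asarray(p_curr, float)
    velocity = (p_curr - p_prev) / dt
    times = np.arange(0.0, horizon_s, step_s)
    future_positions = p_curr + times[:, None] * velocity
    return float(np.linalg.norm(future_positions, axis=1).min())

def brake_event(p_prev, p_curr, dt):
    """Apply the constant braking force when the predicted closest distance drops below one car length."""
    return predicted_min_distance(p_prev, p_curr, dt) < CAR_LENGTH_M
```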
### _Scenarios_
Instead of the NHTSA pre-crash typology [22] which is based on crashes between human-operated cars, we use a pre-crash typology from crash reports of incidents involving autonomous vehicles [15]. A total of 10 scenario types where the autonomous vehicle plays an active role in the accident are used for our assessment. No scenarios in which the autonomous vehicle is passive are included, such as instances where it is rear-ended. Out of the 10 scenarios, we use 8 scenarios for the meta-training phase and 2 scenarios for the meta-testing phase. The scenarios are depicted in Fig. 2 where scenarios 1 through 8 are the scenarios that are used for the meta-training while scenarios 9 and 10 are used for meta-testing.
### _Fidelity Settings_
To demonstrate the feasibility of our framework in a high-dimensional, mixed categorical and continuous fidelity space, we use the 16 fidelity settings in Table II alongside the definition for \(\theta_{\mathrm{HF}}\).
### _Baseline and Experiment Goals_
As a baseline, we sample the scenario settings \(\phi\) from a uniform distribution and evaluate those using the high-fidelity settings \(\theta_{\mathrm{HF}}\) that correspond to the simulator's recommended settings. We can formulate two experimental goals based on the results from the meta-training and meta-testing phases:
1. The evaluation of the meta-training phase reveals how well the proposed framework can detect scenarios that lead to failures, i.e., learning \(p_{\psi}(\phi\mid s)\), while also making each simulation run faster by adjusting the fidelity settings, i.e., learning \(p_{\omega}(\theta_{\mathrm{LF}})\). At the end of the meta-training phase, it is possible to calculate the speedup factor, which denotes the increase in the number of detected failures within the same time span as compared to the baseline. Of further interest is the break-even point, i.e., the time at which we have detected as many failures using our dual high- and learned-fidelity simulators as we did with the baseline.
2. While \(p_{\psi}(\phi\mid s)\) is conditional on the scenario, \(p_{\omega}(\theta_{\mathrm{LF}})\) is scenario-agnostic and therefore used across all scenarios during the meta-training phase. During the meta-testing phase, we evaluate whether using the learned \(p_{\omega}(\theta_{\mathrm{LF}})\) from meta-training as a prior helps to speed up learning \(p_{\psi}(\phi\mid s)\) on unseen scenarios.
Our findings for both goals are presented in Section IV.
## IV Results
### _Meta-Training_
By isolating the results of the meta-training phase, we can understand the suitability of the multi-armed bandit approach for simultaneously learning \(p_{\psi}(\phi\mid s)\) and \(p_{\omega}(\theta_{\mathrm{LF}})\). We train both \(p_{\psi}(\phi\mid s)\) and \(p_{\omega}(\theta_{\mathrm{LF}})\) using scenarios 1 through 8 as shown in Fig. 2 for 500 iterations, where the probability \(p(s)\) is uniform. We evaluate the learned \(p_{\psi}(\phi\mid s)\) and \(p_{\omega}(\theta_{\mathrm{LF}})\) using 100 evaluations with \(\phi\sim p_{\psi}(\phi\mid s)\) and \(\theta^{\star}_{\mathrm{LF}}=\operatorname*{arg\,max}_{\theta_{\mathrm{LF}}}p_{\omega}(\theta_{\mathrm{LF}})\). For each \(C_{\mathrm{budget}}\in\{0.2,0.3,0.4\}\), we calculate the TP-rate (how many failures we would have found using only the learned-fidelity simulator), the mean relative runtime of the learned-fidelity simulator when compared against the high-fidelity simulator, and the relative speedup over the baseline. All results are shown in Table III. We first note that for all \(C_{\mathrm{budget}}\), the mean learned-fidelity cost is remarkably similar given the different bounds. Second, we note the significant performance drop for \(C_{\mathrm{budget}}=0.2\), which indicates that 0.2 approaches \(\underline{\mathsf{C}}_{\mathrm{budget}}\). Finally, an almost 18 times speedup relative to the baseline is achieved with \(C_{\mathrm{budget}}=0.3\). In other words, at evaluation, using the learned-fidelity simulator, we found almost 18 times as many failures within the same runtime.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Sensor & Fidelity Setting & Type & Values/Range \\ \hline - & simulation rate & discrete & \{2, 4, 6, **8, 10**\} Hz \\ camera & bloom level & discrete & \{**high**, low**\} \\ camera & disable bloom & discrete & \{true, **false**\} \\ camera & disable lighting & discrete & \{true, **false**\} \\ camera & disable shadows & discrete & \{true, **false**\} \\ camera & disable lens model & discrete & \{true, **false**\} \\ camera & disable depth of field & discrete & \{true, **false**\} \\ camera & disable shot noise & discrete & \{true, **false**\} \\ camera & view distance & continuous & [10, **5000**] m \\ camera & near clipping distance & continuous & [**0.2**, 20] m \\ lidar & disable shot noise & discrete & \{true, **false**\} \\ lidar & disable ambient effects & discrete & \{true, **false**\} \\ lidar & disable translucency & discrete & \{true, **false**\} \\ lidar & subsample count & discrete & \{true, **false**\} \\ lidar & raytracing bounces & discrete & \{true, 2, 3, 4, 5, \} \\ lidar & near clipping distance & continuous & [**0.2**, 20] m \\ \hline \hline \end{tabular}
\end{table} TABLE II: Fidelity settings with \(\theta_{\mathrm{HF}}\) highlighted
Fig. 2: Pre-crash scenarios. Scenarios 1 through 8 are used for meta-training, while scenarios 9 and 10 are used for meta-testing. The red vehicle represents the ego vehicle while the gray vehicle is the obstacle.
Concurrently executing the high-fidelity and learned-fidelity simulators during meta-training incurs significant computational cost. Thus, it is important to determine the runtime threshold at which our approach exhibits a higher incidence of failures compared to the baseline, also called the break-even point. The break-even point can either occur during or after training. Figure 3 shows the failures over the runtime for the baseline and for the meta-training phase with \(C_{\mathrm{budget}}=\{0.2,0.3,0.4\}\), each executed for 500 iterations. We conclude that the break-even point occurs before the end of the meta-training phase between \(50\,\mathrm{min}\) and \(80\,\mathrm{min}\) for all \(C_{\mathrm{budget}}\). In other words, despite the concurrent usage of the high and learned-fidelity simulators, we find more failures during the meta-training phase than the baseline.
Table IV shows the learned fidelity settings. While some settings differ based on \(C_{\mathrm{budget}}\), most settings remain the same. Finally, Fig. 4 shows an example of a camera image contrasting the difference between the high and learned-fidelity settings.
### _Meta-Testing_
For meta-testing, we use scenarios 9 and 10 from Fig. 2 to evaluate the effect of using the posterior distribution \(p_{\omega}(\theta_{\mathrm{LF}})\) from the meta-training phase as prior for the meta-testing phase. We compare against the case of using a uniform prior for \(p_{\omega}(\theta_{\mathrm{LF}})\) as well as the previously introduced baseline. The meta-testing phase has a duration of 200 iterations, during which we record the runtime and the failures found. The results are depicted in Fig. 5. Using the learned prior for \(p_{\omega}(\theta_{\mathrm{LF}})\) results in more failures being found for all compute budgets \(C_{\mathrm{budget}}=\{0.2,0.3,0.4\}\) during the meta-testing phase when compared to using a uniform prior. We thus conclude that utilizing the posterior \(p_{\omega}(\theta_{\mathrm{LF}})\) obtained during the meta-training phase as the prior for \(p_{\omega}(\theta_{\mathrm{LF}})\) while learning \(p_{\psi}(\phi\mid s)\) for new scenarios \(s_{n+1},\ldots,s_{m}\) has a beneficial impact on the rate of learning. For our experiments, we observe an increase of 50 to 100%. We also included the baseline that was introduced in Section III-D to illustrate the two levels of this study: the comparison of the uniform prior with the baseline corresponds to the meta-training setup, while the comparison between the uniform prior and the learned prior corresponds to the meta-testing setup. Figure 5 can be seen as the summary of this entire study: we demonstrate that even with a uniform prior on \(p_{\omega}(\theta_{\mathrm{LF}})\), the SAVME framework performs better than the baseline, and using the learned prior on \(p_{\omega}(\theta_{\mathrm{LF}})\) leads to an even greater speedup.
### _Limitations_
We acknowledge two primary limitations inherent in our present approach. First, because we are using a multi-armed bandit framework, we are restricted to discrete or discretized scenario parameters and fidelity settings. This limitation could be overcome by using a more general Dirichlet process formulation. Second, assuming independence within and between the scenario parameters and fidelity settings might be an oversimplification that can necessitate the use of
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Sensor & Fidelity Setting & \(C_{b.}=0.4\) & \(C_{b.}=0.3\) & \(C_{b.}=0.2\) \\ \hline - & simulation rate & 2 Hz & 2 Hz & 2 Hz \\ camera & bloom level & low & high & low \\ camera & disable bloom & true & true & false \\ camera & disable lighting & true & false & false \\ camera & disable shadows & true & true & true \\ camera & disable lens model & false & false & true \\ camera & disable depth of field & true & false & false \\ camera & disable shot noise & true & true & false \\ camera & view distance & 65.70 m & 285.97 m & 65.70 m \\ camera & near clipping distance & 3.67 m & 7.22 m & 0.58 m \\ lidar & disable shot noise & false & true & false \\ lidar & disable ambient effects & false & false & false \\ lidar & disable translucency & true & false & false \\ lidar & subsample count & 3 & 2 & 1 \\ lidar & raytracing bounces & 0 & 0 & 1 \\ lidar & near clipping distance & 1.68 m & 1.68 m & 13.57 m \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Learned fidelity settings
Fig. 4: Comparison of images from the camera sensor between \(\theta_{\mathrm{HF}}\) and \(\theta_{\mathrm{LF}}^{*}\) for \(C_{\mathrm{budget}}=0.3\). The differences can be seen especially well in the maximum view distance and reflections.
Fig. 3: Found failures over runtime during the meta-training phase. Each experiment is 500 iterations long. For all \(C_{\mathrm{budget}}\), the break-even point is reached before the end of the training.
more advanced techniques involving probabilistic graphical models. While it is true that many applications may not be significantly impacted by these limitations, we believe they serve as a valuable foundation for expanding the potential applications of the SAVME framework in the future.
## V Conclusion
This paper presents an efficient, falsification-based validation approach for autonomous systems using meta-learning in conjunction with a Bayesian multi-armed bandit formulation. Our framework is unique as we are approaching efficient safety validation from the falsification perspective as well as from the efficient simulation perspective. In addition, with our approach, the learned scenario-agnostic fidelity settings can be used to accelerate the falsification process for new scenarios, which represents the meta aspect of our method. The SAVME framework's source code is open-source and requires minimal effort to apply to other problems.
In our experiments we use a state-of-the-art 3D driving simulator and a driving stack with camera and lidar sensors. The scenarios come from an AV-specific pre-crash typology, while the simulation setup provides 16 fidelity settings. During the evaluation following meta-training, our framework demonstrates on average an almost 18-fold reduction in the time required to detect a failure. Furthermore, we find that despite the concurrent use of two simulators, more failures can be discovered even during the training phase when compared to the baseline that only uses one simulator. During the meta-testing phase, we observe that using the fidelity settings distribution obtained during meta-training as a prior accelerates the falsification process by up to a factor of two.
## Acknowledgements
The authors would like to express their sincere gratitude to Allstate for their generous funding support through the Stanford Center for AI Safety as well as to Applied Intuition for the software access and technical support.
|
2309.14011 | A Truly Concurrent Semantics for Reversible CCS | Reversible CCS (RCCS) is a well-established, formal model for reversible
communicating systems, which has been built on top of the classical Calculus of
Communicating Systems (CCS). In its original formulation, each CCS process is
equipped with a memory that records its performed actions, which is then used
to reverse computations. More recently, abstract models for RCCS have been
proposed in the literature, basically, by directly associating RCCS processes
with (reversible versions of) event structures. In this paper we propose a
different abstract model: starting from one of the well-known encodings of CCS
into Petri nets we apply a recently proposed approach to incorporate
causally-consistent reversibility to Petri nets, obtaining as a result the
(reversible) net counterpart of every RCCS term. | Hernán Melgratti, Claudio Antares Mezzina, G. Michele Pinna | 2023-09-25T10:25:43Z | http://arxiv.org/abs/2309.14011v3 | # A truly concurrent semantics for reversible CCS
###### Abstract.
Reversible CCS (RCCS) is a well-established, formal model for reversible communicating systems, which has been built on top of the classical Calculus of Communicating Systems (CCS). In its original formulation, each CCS process is equipped with a memory that records its performed actions, which is then used to reverse computations. More recently, abstract models for RCCS have been proposed in the literature, basically, by directly associating RCCS processes with (reversible versions of) event structures. In this paper we propose a different abstract model: starting from one of the well-known encodings of CCS into Petri nets, we apply a recently proposed approach to incorporate causally-consistent reversibility to Petri nets, obtaining as a result the (reversible) net counterpart of every RCCS term.
This work has been partially supported by the BehAPI project funded by the EU H2020 RISE under the Marie Sklodowska-Curie action (No: 778233), by the Italian PRIN 2020 project NiRvAna - Noninterference and Reversibility Analysis in Private Blockchains, the French ANR-18-CE25-0007 project DCore - Causal Debugging for Concurrent Systems, the INdAM-GNCS E53C22001930001 project RISICO - Reversibilita in Sistemi Concorrenti: Analisi Quantitative e Funzionali, and the European Union - NextGenerationEU SEcurity and RIghts in the CyberSpace (SERICS) Research and Innovation Program PE00000014, projects STRIDE and SWOP
a process that can perform actions \(a\) and \(b\) concurrently and one that sequentially executes these actions in any possible order (interleaving/schedule). To address this limitation, subsequent research aimed to equip CCS with _true concurrent_ semantics, adopting styles similar to Petri nets [10] and Event Structures [21, 22]. It has been shown that every CCS process can be associated with a corresponding Petri net that can mimic its computations. Various flavors of Petri nets have been explored in the literature, including _occurrence_ nets [11], a variant of _Conditions/Events_ nets [12], and _flow_ nets [1]. The works in [21] and [1] have additionally shown that the computation of a CCS process can be represented by using event structures.
In the last decades, many efforts were made to endow computation models with reversible semantics [A\({}^{+}\)20, M\({}^{+}\)20]. In particular, two different models have been proposed for CCS: reversible CCS (RCCS) [1, 13] and CCS with communication keys (CCSK) [23]. Both of them incorporate a logging mechanism in the operational semantics of CCS that enables the undoing of computation steps. Moreover, it has been shown that they are isomorphic [13] since they only differ in how they log information about past computations: while RCCS relies on some form of _memory/monitor_, CCSK uses _keys_. Previous approaches have also developed true concurrent semantics for reversible versions of CCS. For instance, it has been shown that CCSK can be associated with reversible bundle event structures [1, 2]. Configuration structures have also been associated with RCCS [1]. Nonetheless, we still lack a Petri net model for reversible CCS processes. We may exploit some recent results that connect reversible occurrence nets with reversible event structures [14, 15, 16] to indirectly recover a Petri net model from the reversible bundle event structures defined in [2]. However, we follow a different approach, which is somewhat more _direct_:
1. We encode CCS processes into a mild generalization of occurrence nets, namely _unravel nets_, in the vein of Boudol and Castellani [1].
2. We show that _unravel nets_ can be made _causally-consistent_ reversible by applying the approach in [15].
3. We finally show that the reversible unravel nets derived by our encoding are an interpretation of RCCS terms.
An interesting aspect of the proposed encoding is that it highlights that all the information needed for reversing an RCCS process is already _encoded_ in the structure of the net corresponding to the original CCS process, i.e., RCCS memories are represented by the structure of the net. Concretely, if an RCCS process \(R\) is a derivative of some CCS process \(P\), then the encoding of \(R\) is retrieved from the encoding of \(P\), what changes is the position of the markings. Consider the CCS process \(P=a.\texttt{0}\) that executes \(a\) and then terminates. It can be encoded as the Petri net on the left in Figure 1 (the usage of the apparently redundant places in the postset of \(a\) will be made clearer in Section 3).
The reversible version of \(P\) is \(R=\langle\rangle\triangleright a.\texttt{0}\), where \(\langle\rangle\) denotes an initially empty memory. According to RCCS semantics, \(R\) evolves to \(R^{\prime}=\langle*,a,\texttt{0}\rangle\cdot\langle\rangle\triangleright\texttt{0}\) by executing \(a\). The
memory \(\langle*,a,0\rangle\cdot\langle\rangle\) in \(R^{\prime}\) indicates that it can go back to the initial process \(R\) by undoing \(a\). Note that the net corresponding to \(P\) (on the left) contains all the necessary information to reverse the action \(a\); intuitively, the action \(a\) can be undone by firing it in the opposite direction (i.e., by consuming tokens from the postset and producing them in its preset), or equivalently, by executing a reversing transition \(\underline{a}\) as depicted in the net shown in the middle of Figure 1. Furthermore, it is important to highlight that the net on the right of Figure 1 corresponds to the derivative \(R^{\prime}\). Consequently, the encoding of a CCS term as a net already encompasses all the information required for its reversal, which stands in contrast to the additional memories needed in the case of RCCS. This observation provides a straightforward and nearly immediate true concurrent representation of RCCS processes, effectively capturing their reversible behaviour.
### Organization of the paper
The paper is structured as follows: after establishing essential notation, we provide a brief overview of CCS and RCCS in Section 2. Next, in Section 3, we present a concise summary of Petri nets and introduce the concept of _unravel_ nets, followed by their reversible counterpart. The encoding of CCS into unravel nets and the mapping of RCCS terms into reversible unravel nets, along with correspondence results, are described in Section 4. Additionally, we present a practical implementation of the encoding and simulation of the execution in Haskell in Section 5. In the final section, we draw insightful conclusions and discuss potential avenues for future developments.
A preliminary version of this work has been published as [14]. In this version we have extended the scope and applicability of the proposed approach. We move from finite processes to infinite ones (i.e., recursive processes) by considering terms defined coinductively. Secondly, in this version we provide full and rigorous proofs of the key results. Finally, we provide a Haskell implementation of the encoding that allows for the simulation of the execution of encoded CCS processes. This practical implementation further exemplifies the feasibility and effectiveness of the approach.
### Preliminaries
We recall some notation that we will use in the paper. We denote the set of natural numbers as \(\mathbb{N}\). Let \(A\) be a set, a _multiset_ of \(A\) is defined as a function \(m:A\to\mathbb{N}\). The set of multisets of \(A\) is denoted by \(\partial A\). We assume the usual operations on multisets, such as union \(+\) and difference \(-\). For multisets \(m,m^{\prime}\in\partial A\), we write \(m\subseteq m^{\prime}\) to indicate that \(m(a)\leq m^{\prime}(a)\) for all \(a\in A\). Additionally, we define \(\llbracket m\rrbracket\) as the multiset where \(\llbracket m\rrbracket(a)=1\) if \(m(a)>0\), and \(\llbracket m\rrbracket(a)=0\) otherwise. When a multiset \(m\) of \(A\) is a set, i.e., \(m=\llbracket m\rrbracket\), we write \(a\in m\) to denote that \(m(a)\neq 0\). In this case, we often confuse the multiset \(m\) with the set \(\{a\in A\mid m(a)\neq 0\}\) or a subset \(X\subseteq A\) with the multiset \(X(a)=1\) if \(a\in A\) and \(X(a)=0\) otherwise. We also employ standard set operations such as \(\cap\), \(\cup\), or \(\setminus\), and, with a slight abuse of notation, write \(\emptyset\) for the multiset \(m\) such that \(\llbracket m\rrbracket=\emptyset\).
Given a relation \(\mathcal{R}\), we indicate with \(\mathcal{R}^{*}\) its reflexive and transitive closure.
## 2. CCS and reversible CCS
Let \(\mathcal{A}\) be a set of actions, denoted as \(a,b,c,\ldots\), and let \(\overline{\mathcal{A}}=\{\overline{a}\mid a\in\mathcal{A}\}\) be the set of their corresponding co-actions. The set containing all possible actions is denoted by \(\mathtt{Act}=\mathcal{A}\cup\overline{\mathcal{A}}\). We use \(\alpha\) and \(\beta\) to represent elements from \(\mathtt{Act}_{\tau}=\mathtt{Act}\cup\{\tau\}\), where \(\tau\) is a symbol not present in \(\mathtt{Act}\), i.e., \(\tau\notin\mathtt{Act}\), and denotes a _silent_ action.
The syntax of CCS is presented in Figure 2. A prefix (or action) in CCS can take one of three forms: an input \(a\), an output \(\overline{a}\), or the silent action \(\tau\). A term of the form \(\sum_{i\in I}\alpha_{i}.P_{i}\) represents a process that non-deterministically starts by selecting and performing some action \(\alpha_{i}\) and then continues as \(P_{i}\). We use \(\mathbf{0}\), the idle process, when \(I=\emptyset\) in place of \(\sum_{i\in I}\alpha_{i}.P_{i}\). Similarly, we use \(\alpha_{i}.P\) for a unitary sum where \(I\) is the singleton \(i\). The term \(P\parallel Q\) represents the parallel composition of processes \(P\) and \(Q\). An action \(a\) can be restricted to be visible only inside process \(P\), denoted as \(P\backslash a\). Restriction is the only binder in CCS, where \(a\) is bound in \(P\backslash a\). We addressed the representation of infinite processes by adopting an approach initiated by [1]. Instead of fixing a syntactic representation of recursion, we simplified the treatment by employing infinite regular trees. Throughout this paper, in Figure 2 and beyond, we use the symbol \(::=^{\mathsf{co}}\) to indicate that the productions should be interpreted _coinductively_. As a result, the set of processes is the greatest fixed point of the (monotonic) functor over sets defined by the grammar above [1]. Consequently, a process is a potentially infinite, _regular_ term coinductively generated by the grammar in Figure 2. A term is considered regular if it consists of finitely many _distinct_ subterms. The language generated by the coinductive grammar is thus finitely representable either using the so-called \(\mu\) notation [10] or as solutions of finite sets of equations [11]. For a more comprehensive treatment, interested readers are referred to [11].
We represent the set of all CCS processes as \(\mathcal{P}\). We denote the set of names of a process \(P\) as \(\mathtt{n}(P)\), and we use \(\mathtt{fn}(P)\) and \(\mathtt{bn}(P)\) to represent the sets of free and bound names in \(P\), respectively. (These functions can be straightforwardly defined by coinduction.)
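As a concrete (finite) illustration of the grammar in Figure 2, the following Python sketch represents CCS terms as immutable values; it deliberately ignores the coinductive reading that admits infinite regular terms, and it is not the Haskell implementation discussed later in the paper.

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Sum:
    """A guarded sum  sum_{i in I} alpha_i . P_i ; the empty sum is the idle process 0."""
    branches: Tuple[Tuple[str, "Proc"], ...]  # pairs (prefix alpha_i, continuation P_i)

@dataclass(frozen=True)
class Par:
    """Parallel composition P || Q."""
    left: "Proc"
    right: "Proc"

@dataclass(frozen=True)
class Res:
    """Restriction P \\ a: the name a is bound in P."""
    proc: "Proc"
    name: str

Proc = Union[Sum, Par, Res]

NIL = Sum(())                                   # the idle process 0
P = Par(Sum((("a", NIL),)),                     # a.0 in parallel with its co-action
        Sum((("~a", NIL),)))                    # "~a" stands for the co-action of a
```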
**Definition 2.1** (CCS Semantics).: The operational semantics of CCS is defined as the LTS \((\mathcal{P},\mathtt{Act}_{\tau},\rightarrow)\) where the transition relation \(\rightarrow\) is the smallest relation induced by the rules in Figure 3.
Let us provide some comments on the rules presented in Figure 3. The act rule indicates that a non-deterministic choice proceeds by executing one of its prefixes \(\alpha_{z}\) and transitions to the corresponding continuation \(P_{z}\). The par-l and par-r rules allow the left and right processes of a parallel composition to independently execute an action while the other remains unchanged. The syn rule regulates synchronisation, allowing two processes in parallel to perform a handshake. Lastly, the hide rule restricts a certain action from being further propagated.
### Reversible CCS
Reversible CCS (RCCS) [1, 1] is a reversible variant of CCS. In RCCS, processes are equipped with a _memory_ that stores information about their past actions. The syntax of RCCS, shown in Figure 4, includes the same constructs as the original CCS formulation, but with the addition of reversible processes. A reversible process in RCCS can take one of the following forms: a _monitored_ process \(m\triangleright P\) where \(m\) represents the memory, and \(P\) is a CCS process; the parallel composition \(R\parallel S\) of the reversible processes \(R\) and \(S\); and the restriction \(R\backslash a\), where the action \(a\) is restricted to
Figure 2. CCS Syntax
the process \(R\). A _memory_ is essentially a stack of events that encodes the history of actions previously performed by a process. The top-most element in the memory corresponds to the very last action executed by the monitored process. Memories in RCCS can contain three different kinds of events1: _partial_ synchronisations \(\langle*,\alpha,Q\rangle\), _full_ synchronisations \(\langle m,\alpha,Q\rangle\), and memory _splits_\(\langle 1\rangle\) and \(\langle 2\rangle\). In a synchronisation, whether partial or full, the action \(\alpha\) and the process \(Q\) serve specific purposes in recording the selected action \(\alpha\) of a choice and the discarded branches \(Q\). The technical distinction between partial and full synchronisation will become evident when describing the semantics of RCCS. Events \(\langle 1\rangle\) and \(\langle 2\rangle\) represent the splitting of a process into two parallel ones. The empty memory is represented by \(\langle\rangle\). Let us note that in RCCS, memories also serve as unique process identifiers.
Footnote 1: In this paper, we adopt the original RCCS semantics with partial synchronisation. Later versions, such as [11], employ communication keys to uniquely identify actions.
We define the following sets: the set \(\mathcal{P}_{R}\) of all RCCS processes, the set \(\mathcal{M}\) of all possible memories, and \(\hat{\mathcal{M}}=\mathcal{M}\cup\mathcal{M}^{2}\), which includes individual as well as pairs of memories. We let \(\hat{m}\) to range over the set \(\hat{\mathcal{M}}\).
As for CCS, the only binder in RCCS is restriction, which applies at the level of both CCS and RCCS processes. Consequently, we extend the functions n, fn, and bn to RCCS processes and memories accordingly.
**Definition 2.2** (RCCS Semantics).: The operational semantics of RCCS is defined as a pair of LTSs sharing the same set of states and labels: a forward LTS \((\mathcal{P}_{R},\hat{\mathcal{M}}\times\texttt{Act}_{\tau},\rightarrow)\) and a backward LTS \((\mathcal{P}_{R},\hat{\mathcal{M}}\times\texttt{Act}_{\tau},\rightsquigarrow)\). The transition relations \(\rightarrow\) and \(\rightsquigarrow\) are the smallest relations induced by the rules in Figure 5 (left and right columns, respectively). Both relations make use of the structural congruence relation \(\equiv\), which is the smallest congruence on RCCS processes containing the rules shown in Figure 6. We define \(\hookrightarrow=\rightarrow\cup\rightsquigarrow\).
Let us provide some comments on the forward rules in Figure 5 (left column). Rule r-act allows a monitored process to perform a forward action \(\alpha_{z}\). Notably, the label of this transition pairs the executed action \(\alpha_{z}\) with the memory \(m\) of the process. At this
Figure 4. RCCS syntax
Figure 3. CCS semantics
point, we are uncertain whether the performed action will synchronise with the context or not. Consequently, a partial synchronisation event of the form \(\langle*,\alpha_{z}^{z},\sum_{i\in\Gamma\backslash z}\alpha_{i}.P_{i}\rangle\) is added on top of the memory. The '*' in the partial synchronisation event will be replaced by a memory, let's say \(m_{1}\), if the process eventually synchronises with another process monitored by \(m_{1}\). Additionally, it is essential to note that the discarded process \(Q\) is recorded in the memory. Moreover, along with the prefix, we store its position '\(z\)' within the sum. While this piece of information may be redundant for RCCS itself and was not present in the original semantics, it becomes useful when encoding an RCCS process into a net and when proving operational correspondence. This additional information enables a more straightforward representation of RCCS processes in a net-based setting and supports the validation of operational correspondence between the LTS and the net semantics. Importantly, it is worth mentioning that this straightforward modification does not alter the original semantics of RCCS, preserving its essential properties.
Rules r-par-l and r-par-r allow for the independent execution of an action in different components of a parallel composition. Rule r-syn allows two parallel processes to synchronise. For synchronisation to occur, the action \(\alpha\) in one process must match the coaction \(\overline{\alpha}\) in the other process. Once this condition is met, the two partial synchronisations are updated to two full synchronisations using the operator '@'. Let \(R\) be a monitored process, and let \(m_{1}\) and \(m_{2}\) be two memories: \(R_{m_{2}@m_{1}}\) represents the process obtained from \(R\) by substituting all occurrences of \(\langle*,\alpha,Q\rangle\cdot m_{1}\) with \(\langle m_{2},\alpha,Q\rangle\cdot m_{1}\).
Figure 5. RCCS semantics
Rule r-res propagates actions through restriction, provided that the action is not on the restricted name.
Rule r-equiv allows one to exploit the structural congruence defined in Figure 6. The structural rule split enables a monitored process with a top-level parallel composition to split into left and right branches, resulting in the duplication of the memory. The structural rule res permits pushing restrictions outside monitored processes. Lastly, the structural rule \(\alpha\) allows one to take advantage of \(\alpha\)-conversion, denoted by \(=_{\alpha}\).
Backward rules are reported in the right column of Figure 5. As one can see, for each forward rule there exists a symmetrical backward one. Rule r-act\({}^{\bullet}\) allows a monitored process to undo its last action, which coincides with the event on top of the memory stack. Since all the information is stored in the last performed event, the rule pops the last event off the memory and restores the prefix corresponding to the event together with the surrounding sum context. Rules r-par-\(\mathsf{L}^{\bullet}\) and r-par-\(\mathsf{R}^{\bullet}\) allow for the independent undoing of an action in different components of a parallel composition. Rule r-syn\({}^{\bullet}\) allows for a desynchronisation: two parallel components that participated in a synchronisation, say with labels \(\alpha\) and \(\overline{\alpha}\), can undo this synchronisation. Let us stress that two processes, say \(R\) and \(S\), can undo a synchronisation along memories \(m_{1}\) and \(m_{2}\) only if they are of the form \(R_{m_{2}@m_{1}}\) and \(S_{m_{1}@m_{2}}\). Rules r-res\({}^{\bullet}\) and r-equiv\({}^{\bullet}\) act like their forward counterparts.
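The interplay between r-act and r-act\({}^{\bullet}\) can be pictured with the following sketch, in which a memory is just a list of events with the most recent one in front; it is an illustrative model of the bookkeeping, not the RCCS semantics itself, and the tuple layout of events is our own choice.

```python
def forward_act(memory, branches, z):
    """r-act: execute branch z of a sum; the chosen prefix, its position, and the
    discarded branches are pushed on the memory as a partial synchronisation event."""
    alpha, continuation = branches[z]
    discarded = tuple(b for i, b in enumerate(branches) if i != z)
    event = ("*", alpha, z, discarded)          # <*, alpha_z, sum of the remaining branches>
    return [event] + memory, continuation

def backward_act(memory, continuation):
    """r-act (backward): undo the last action by popping the top event and rebuilding the original sum."""
    (_star, alpha, z, discarded), rest = memory[0], memory[1:]
    branches = list(discarded)
    branches.insert(z, (alpha, continuation))   # the executed branch goes back at position z
    return rest, branches
```

Applying forward_act and then backward_act to any sum returns the original memory and branches, mirroring the Loop Lemma recalled below.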
**Definition 2.3** (Initial and Coherent process).: An RCCS process of the form \(\langle\rangle\triangleright P\) is referred to as _initial_. Any process \(R\) derived from an initial process using the rules in Figure 5 is called _coherent_.
**Example 2.4**.: Let \(P=a.(b\parallel c)\parallel(\overline{a}\parallel d)\). Via two applications of the Split rule we obtain the following process
\[\langle\rangle\triangleright P\equiv \langle 1\rangle\cdot\langle\rangle\triangleright a.(b\parallel c) \parallel\langle 2\rangle\cdot\langle\rangle\triangleright(\overline{a}\parallel d)\] \[\equiv \langle 1\rangle\cdot\langle\rangle\triangleright a.(b\parallel c) \parallel\langle 1\rangle\cdot\langle 2\rangle\cdot\langle\rangle \triangleright\overline{a}\parallel\langle 2\rangle\cdot\langle 2\rangle\cdot\langle \rangle\triangleright d=R\]
Now, in the process \(R\) we have two monitored processes which can communicate on \(a\). That is
\[R\xrightarrow{m_{1},m_{2}:\tau} \langle a,m_{1},\mathbf{0}\rangle\cdot\langle 1\rangle\cdot\langle \rangle\triangleright(b\parallel c)\parallel\langle\overline{a},m_{2},\mathbf{0} \rangle\cdot\langle 1\rangle\cdot\langle 2\rangle\cdot\langle\rangle\triangleright\mathbf{0} \parallel\langle 2\rangle\cdot\langle 2\rangle\cdot\langle\rangle\triangleright d\] \[\equiv \langle 1\rangle\cdot\langle a,m_{1},\mathbf{0}\rangle\cdot\langle 1 \rangle\cdot\langle\rangle\triangleright b\parallel\langle 2\rangle\cdot \langle a,m_{1},\mathbf{0}\rangle\cdot\langle 1\rangle\cdot\langle\rangle\triangleright c\parallel\] \[\langle\overline{a},m_{2},\mathbf{0}\rangle\cdot\langle 1\rangle\cdot \langle 2\rangle\cdot\langle\rangle\triangleright\mathbf{0}\parallel\langle 2 \rangle\cdot\langle 2\rangle\cdot\langle\rangle\triangleright d\]
where \(m_{1}=\langle 1\rangle\cdot\langle\rangle\) and \(m_{2}=\langle 2\rangle\cdot\langle\rangle\).
An important property of a fully reversible calculus is the so-called Loop Lemma, stating that any action can be undone. Formally:
Figure 6. RCCS Structural laws
**Lemma 2.5** (Loop Lemma [11]).: _Let \(R\) be a coherent process. For any forward transition \(R\xrightarrow{\hat{m}:\alpha}S\) there exists a backward transition \(S\xrightarrow{\hat{m}:\alpha}R\), and conversely._
**Corollary 2.6**.: _Let \(R\) be a coherent process. If \(R\hookrightarrow^{*}R_{1}\) then \(R_{1}\hookrightarrow^{*}R\)._
RCCS is shown to be causally consistent, that is, any step can be undone provided that its consequences are undone beforehand. A consequence of causally-consistent reversibility is that any process reached by mixing computations (i.e., forward and backward transitions) can also be reached by forward computations only. That is:
**Property 2.7**.: For any initial process \(P\), if \(\langle\rangle\triangleright P\hookrightarrow^{*}R\) then \(\langle\rangle\triangleright P\rightarrow^{*}R\).
The notion of context below will be useful in the following sections.
**Definition 2.8** (Process and active contexts).: RCCS process contexts \(C\) and active contexts \(E\) are reversible processes with a hole \(\bullet\), defined by the following grammar:
\[C::=\bullet\;\mid\;m\triangleright C\;\mid\;\alpha.C\;\mid\;C+P\;\mid\;P+C\; \mid\;C\;\mid\;P\;\parallel C\;\mid\;C\;\parallel P\;\mid\;C\backslash A\]
\[E::=\bullet\;\mid\;R\;\parallel E\;\mid\;E\;\parallel R\;\mid\;E\backslash A\]
## 3. Petri nets, Unravel Nets and Reversible Unravel Nets
### Petri nets
We provide a brief overview of Petri nets, along with some related auxiliary notions.
**Definition 3.1**.: A _Petri net_ is a tuple \(N=\langle S,T,F,\mathsf{m}\rangle\), where \(S\) is a set of _places_, \(T\) is a set of _transitions_ (with \(S\cap T=\emptyset\)), \(F\subseteq(S\times T)\cup(T\times S)\) is the _flow_ relation, and \(\mathsf{m}\in\partial S\) is the _initial marking_.
Petri nets are conventionally represented with transitions depicted as boxes, places as circles, and the flow relation indicated by directed arcs. The presence of tokens in places is denoted by a number of '\(\bullet\)' symbols within the circle.
Given a net \(N=\langle S,T,F,\mathsf{m}\rangle\) and \(x\in S\cup T\), we define the following multisets: \({}^{\bullet}x=\{y\;|\;(y,x)\in F\}\) and \(x^{\bullet}=\{y\;|\;(x,y)\in F\}\). If \(x\) is a place then \({}^{\bullet}x\) and \(x^{\bullet}\) are (multisets) of transitions; analogously, if \(x\in T\) then \({}^{\bullet}x\in\partial S\) and \(x^{\bullet}\in\partial S\). The sets \({}^{\bullet}x\) and \(x^{\bullet}\) are respectively called the _pre_ and _postset_ of \(x\). A transition \(t\in T\) is enabled at a marking \(\mathsf{m}\in\partial S\), denoted by \(\mathsf{m}\left[t\right\rangle\), whenever \({}^{\bullet}t\subseteq\mathsf{m}\). A transition \(t\) enabled at a marking \(\mathsf{m}\) can _fire_ and its firing produces the marking \(\mathsf{m}^{\prime}=\mathsf{m}-{}^{\bullet}t+t^{\bullet}\). The firing of \(t\) at a marking \(\mathsf{m}\) is denoted by \(\mathsf{m}\left[t\right\rangle\mathsf{m}^{\prime}\). We assume that each transition \(t\) of a net \(N\) is such that \({}^{\bullet}t\neq\emptyset\), meaning that no transition may fire _spontaneously_. Given a generic marking \(\mathsf{m}\) (not necessarily the initial one), the _firing sequence_ (shortened as \(\mathsf{fs}\)) of \(N=\langle S,T,F,\mathsf{m}_{0}\rangle\) starting at \(\mathsf{m}\) is defined as:
\(\bullet\;\mathsf{m}\) is a firing sequence (of length \(0\)), and
\(\bullet\;\mathsf{if}\;\mathsf{m}\left[t_{1}\right\rangle\mathsf{m}_{1}\; \cdots\;\mathsf{m}_{n-1}\left[t_{n}\right)\mathsf{m}_{n}\) is a firing sequence and \(\mathsf{m}_{n}\left[t\right\rangle\mathsf{m}^{\prime}\), then also \(\mathsf{m}\left[t_{1}\right\rangle\mathsf{m}_{1}\;\cdots\;\mathsf{m}_{n-1} \left[t_{n}\right)\mathsf{m}_{n}\left[t\right]\mathsf{m}^{\prime}\) is a firing sequence.
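A minimal sketch of the token game just defined, with markings and pre/postsets represented as multisets (Python Counters); it covers only enabling and firing, and the place names loosely echo the net on the left of Figure 1, where the postset of \(a\) contains an extra place. The names are ours, not taken from the paper's Haskell implementation.

```python
from collections import Counter

def enabled(marking, pre):
    """A transition with preset `pre` is enabled at `marking` iff pre <= marking as multisets."""
    return all(marking[s] >= n for s, n in pre.items())

def fire(marking, pre, post):
    """Firing an enabled transition produces m' = m - pre + post."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = Counter(marking)
    m.subtract(pre)
    m.update(post)
    return +m                                    # drop places left with zero tokens

# One transition labelled a, with an extra place in its postset (illustrative place names).
m0   = Counter({"s0": 1})
pre  = {"a": Counter({"s0": 1})}
post = {"a": Counter({"s1": 1, "ka": 1})}
m1 = fire(m0, pre["a"], post["a"])               # m1 == Counter({"s1": 1, "ka": 1})
```

The multiset of transitions fired along such a sequence is exactly the execution \(X_{\sigma}\) introduced next.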
The set of firing sequences of a net \(N=\langle S,T,F,\mathsf{m}_{0}\rangle\) starting at a marking \(\mathsf{m}_{0}\) is denoted by \(\mathcal{R}_{\mathsf{m}_{0}}^{N}\) and it is ranged over by \(\sigma\). Given a fs \(\sigma=\mathsf{m}_{0}\left[t_{1}\right\rangle\sigma^{\prime}\left[t_{n}\right\rangle\mathsf{m}_{n}\), _start_\((\sigma)\) is the marking \(\mathsf{m}_{0}\), _lead_\((\sigma)\) is the marking \(\mathsf{m}_{n}\) and _tail_\((\sigma)\) is the fs \(\sigma^{\prime}\left[t_{n}\right\rangle\mathsf{m}_{n}\). Given a net \(N=\langle S,T,F,\mathsf{m}_{0}\rangle\), a marking \(\mathsf{m}\) is _reachable_ iff there exists a fs \(\sigma\in\mathcal{R}_{\mathsf{m}_{0}}^{N}\) such that _lead_\((\sigma)\) is \(\mathsf{m}\). The set of reachable markings of \(N\) is \(\mathcal{M}_{N}=\{\textit{lead}(\sigma)\mid\sigma\in\mathcal{R}_{\mathsf{m}_{0}}^{N}\}\).

Given a fs \(\sigma=\mathsf{m}\left[t_{1}\right\rangle\mathsf{m}_{1}\cdots\mathsf{m}_{n-1}\left[t_{n}\right\rangle\mathsf{m}^{\prime}\), we write \(X_{\sigma}=\sum_{i=1}^{n}t_{i}\) for the multiset of transitions associated to \(\sigma\), which we call an _execution_ of the net, and we write \(\mathbb{E}(N)=\{X_{\sigma}\in\partial T\mid\sigma\in\mathcal{R}_{\mathsf{m}}^{N}\}\) for the set of executions of \(N\). Observe that an execution simply says which transitions (and how many occurrences of each) have been executed, not their (partial) ordering. Given a fs \(\sigma=\mathsf{m}\left[t_{1}\right\rangle\mathsf{m}_{1}\cdots\mathsf{m}_{n-1}\left[t_{n}\right\rangle\mathsf{m}_{n}\cdots\), with \(\rho_{\sigma}\) we denote the sequence \(t_{1}t_{2}\cdots t_{n}\cdots\).
**Definition 3.2**.: A net \(N=\langle S,T,F,\mathfrak{m}\rangle\) is said to be _safe_ if each marking \(\mathfrak{m}\in\mathcal{M}_{N}\) is such that \(\mathfrak{m}=\llbracket\mathfrak{m}\rrbracket\).
The notion of subnet will be handy in the following. A subnet is obtained by restricting places and transitions, and correspondingly the flow relation and the initial marking.
**Definition 3.3**.: Let \(N=\langle S,T,F,\mathfrak{m}\rangle\) be a Petri net and let \(T^{\prime}\subseteq T\) be a subset of transitions and \(S^{\prime}=\,^{\bullet}\!T^{\prime}\cup T^{\prime\bullet}\). Then, the subnet generated by \(T^{\prime}\) is \(\left.N\right|_{T^{\prime}}=\langle S^{\prime},T^{\prime},F^{\prime},\mathfrak{m}^{\prime}\rangle\), where \(F^{\prime}\) is the restriction of \(F\) to \(S^{\prime}\) and \(T^{\prime}\), and \(\mathfrak{m}^{\prime}\) is the multiset on \(S^{\prime}\) obtained from \(\mathfrak{m}\) by restricting to the places in \(S^{\prime}\).
### Unravel nets
To define _unravel nets_ we need the notion of _causal net_.
**Definition 3.4**.: A safe Petri net \(N=\langle S,T,F,\mathfrak{m}\rangle\) is a _causal net_ (\(\mathsf{CA}\) for short) when \(\forall s\in S\). \(|\,^{\bullet}\!s|\leq 1\) and \(|s^{\bullet}|\leq 1\), \(F^{*}\) is acyclic, \(T\in\mathbb{E}(N)\), and \(\forall s\in S\,\,^{\bullet}\!s=\emptyset\,\,\Rightarrow\,\,\mathfrak{m}(s)=1\).
Requiring that \(T\in\mathbb{E}(N)\) implies that all the transitions can be executed, whereas the acyclicity of \(F^{*}\) means that dependencies among transitions are settled. Observe that a causal net has no isolated and unmarked places, as \(\forall s\in S\,\,^{\bullet}\!s=\emptyset\,\,\Rightarrow\,\,\mathfrak{m}(s)=1\).
**Definition 3.5**.: An _unravel net_ (\(\mathsf{UN}\) for short) \(N=\langle S,T,F,\mathfrak{m}\rangle\) is a safe net such that
1. for each execution \(X\in\mathbb{E}(N)\) the subnet \(N\big{|}_{X}\) is a \(\mathsf{CA}\), and
2. \(\forall t,t^{\prime}\in T\). \(\,^{\bullet}\!t=\,^{\bullet}\!t^{\prime}\,\,\wedge\,\,t^{\bullet}=t^{\prime \bullet}\,\,\Rightarrow\,\,t=t^{\prime}\).
Unravel nets describe the dependencies among transitions in the executions of a concurrent and distributed device and are similar to _flow nets_ [1, 2]. Flow nets are safe nets in which, for every possible firing sequence, each place can be marked only once. The first condition in Definition 3.5 requires that the subnet consisting of the transitions executed by a firing sequence is a causal net. The second condition, stating that two transitions with the same preset and the same postset coincide, rules out the possibility of having two different transitions that are indistinguishable because they consume and produce the same tokens (places).
In an \(\mathsf{UN}\), two transitions \(t\) and \(t^{\prime}\) are conflicting if they never appear together in an execution, _i.e._\(\forall X\in\mathbb{E}(N)\). \(\{t,t^{\prime}\}\not\subseteq X\), as formally stated below. Given a place \(s\) of an unravel net, if \(\,^{\bullet}\!s\) contains two or more transitions, then they are in conflict.
**Proposition 3.6**.: _Let \(N=\langle S,T,F,\mathfrak{m}\rangle\) be an \(\mathsf{UN}\) and \(s\in S\) be a place such that \(|\,^{\bullet}\!s|>1\). Then \(\forall t,t^{\prime}\in\,^{\bullet}\!s\). \(\forall X\in\mathbb{E}(N)\), if \(t\in X\) and \(t^{\prime}\in X\) then \(t=t^{\prime}\)._
Proof.: Take a place \(s\in S\) such that \(|\,^{\bullet}\!s|>1\) and take \(t,t^{\prime}\in\,^{\bullet}\!s\). Assume that there is an execution \(X\in\mathbb{E}(N)\) such that \(X\) contains both \(t\) and \(t^{\prime}\). As \(N\big{|}_{X}\) is a causal net, each of its places has at most one transition in its preset; since both \(t\) and \(t^{\prime}\) produce a token in \(s\), it must be that \(t=t^{\prime}\).
It is worth noting that the classical notion of an _occurrence net_[2, 3] is, in fact, a specific type of \(\mathsf{UN}\). In this context, the conflict relation is _inherited_ throughout the
transitive closure of the flow relation and can be inferred directly from the structure of the net itself. Further evidence that unravel nets generalize occurrence nets comes from the fact that flow nets generalize occurrence nets as well [1].
**Definition 3.7**.: An unravel net \(N=\langle S,T,F,\mathfrak{m}\rangle\) is _complete_ whenever \(\forall t\in T\). \(\exists s_{t}\in S\). \({}^{\bullet}s_{t}=\{t\}\ \wedge\ s_{t}{}^{\bullet}=\emptyset\), and \(|t^{\bullet}|>1\). We use \(\mathcal{K}_{T}\) to denote the subset of \(S\) of such places and we call the places in \(\mathcal{K}_{T}\)_key_-places. Furthermore we assume that \(|\mathcal{K}_{T}|=|T|\).
Thus, in a complete \(\mathsf{UN}\), the execution of a transition \(t\) is signaled by the marked place \(s_{t}\). Given an \(\mathsf{UN}\) \(N\), it can easily be turned into a complete one by adding the suitable place for each transition, without changing the executions of the net; thus we consider complete \(\mathsf{UN}\)s only. Completeness comes in handy when defining the reversible counterpart of an \(\mathsf{UN}\).
### Reversible unravel nets
The definition of _reversible unravel nets_ builds upon that of the _reversible occurrence nets_ of [13], extending the notion just as unravel nets generalise occurrence nets.
**Definition 3.8**.: A _reversible unravel net_ (\(\mathsf{rUN}\) for short) is a quintuple \(N=\langle S,T,U,F,\mathfrak{m}\rangle\) such that
1. \(U\subseteq T\) and \(\forall u\in U\). \(\exists!\ t\in T\setminus U\) such that \({}^{\bullet}u=t^{\bullet}\) and \(u^{\bullet}=\,^{\bullet}t\), and
2. \(N|_{T\setminus U}\) is a complete unravel net and \(\langle S,T,F,\mathfrak{m}\rangle\) is a safe one.
The transitions in \(U\) are the reversing ones; hence, we often say that a reversible unravel net \(N\) is _reversible with respect to \(U\)_. A reversing transition \(u\) is associated with a unique non-reversing transition \(t\) (condition 1) and its effects are intended to _undo_\(t\). This fact ensures the existence of an injective mapping \(h:U\to T\setminus U\), which consequently implies that each reversible transition is accompanied by precisely one corresponding reversing transition. The final requirement stipulates that when disregarding all reversing transitions, the resulting subnet is indeed a complete unravel net and the net itself is a safe net.
Along the lines of [13], we can prove that the set of reachable markings of a reversible unravel net is not influenced by performing a reversing transition.
**Proposition 3.9**.: _Let \(N=\langle S,T,U,F,\mathfrak{m}\rangle\) be an \(\mathsf{rUN}\). Then \(\mathcal{M}_{N}=\mathcal{M}_{N|_{T\setminus U}}\)._
Proof.: Clearly \(\mathcal{M}_{N|_{T\setminus U}}\subseteq\mathcal{M}_{N}\). For the other inclusion, we first observe that if \(\mathfrak{m}\left[t\right)\) then \(t\in T\setminus U\) as none of the transitions in \(U\) is enabled at the initial marking. Consider now an \(\mathsf{fs}\ \sigma\left[u\right]m\), with \(u\in U\), and w.l.o.g. assume that all the transitions in \(\sigma\) belong to \(T\setminus U\), i.e. \(X_{\sigma}\subseteq T\setminus U\). We construct an \(\mathsf{fs}\) leading to \(m\) which does not contain any transition in \(U\). As \(\sigma\left[u\right]\) we have that \({}^{\bullet}u\subseteq\textit{lead}(\sigma)\) and this implies that the transition \(h(u)\in X_{\sigma}\). We can then write \(\sigma\) as \(\sigma^{\prime}\left[h(u)\right)\sigma^{\prime\prime}\) and none of the transitions in \(\sigma^{\prime\prime}\) uses the tokens produced by \(h(u)\) as \(N|_{X_{\sigma}}\) is a subnet of \(N|_{T\setminus U}\), which is a complete \(\mathsf{UN}\). Therefore we have that the transitions in the \(\mathsf{fs}\ \textit{lead}(\sigma^{\prime})\left[h(u)\right)\sigma^{\prime\prime}\) can be rearranged in a \(\mathsf{fs}\ \sigma^{\prime\prime\prime}\left[h(u)\right)\textit{lead}(\sigma)\). Observing that the effects of firing \(u\) at \(\textit{lead}(\sigma)\) are producing the tokens in places \({}^{\bullet}h(u)\) we have that the \(\mathsf{fs}\) we are looking for is obtained executing the transitions in \(\sigma^{\prime}\) followed by the ones in \(\sigma^{\prime\prime\prime}\) and the reached marking is precisely \(\textit{lead}(\sigma)\). Hence also \(\mathcal{M}_{N}\subseteq\mathcal{M}_{N|_{T\setminus U}}\) holds.
A consequence of this fact is that each marking can be reached by using just _forward transitions_.
Given an unravel net and a subset of transitions to be reversed, it is straightforward to obtain a reversible unravel net.
**Proposition 3.10**.: _Let \(N=\langle S,T,F,\mathfrak{m}\rangle\) be a complete unravel net and let \(U\subseteq T\) be the set of transitions to be reversed. Define \(\overleftarrow{N}^{U}=\langle S^{\prime},T^{\prime},U^{\prime},F^{\prime}, \mathfrak{m}^{\prime}\rangle\) where \(S=S^{\prime}\), \(U^{\prime}=U\times\{\mathfrak{r}\}\), \(T^{\prime}=(T\times\{\mathfrak{f}\})\ \cup\ U^{\prime}\),_
\[\begin{array}{ll}F^{\prime}=&\{(s,(t,\mathfrak{f}))\mid(s,t)\in F\}\ \cup\ \{((t,\mathfrak{f}),s)\mid(t,s)\in F\}\ \cup\\ &\{(s,(t,\mathfrak{r}))\mid(t,s)\in F\}\ \cup\ \{((t,\mathfrak{r}),s)\mid(s,t)\in F\} \end{array}\]
_and \(\mathfrak{m}^{\prime}=\mathfrak{m}\). Then \(\overleftarrow{N}^{U}\) is a reversible unravel net._
Proof.: We check the conditions of Definition 3.8. The first condition is satisfied as we observe that for each transition \((t,\mathfrak{r})\in U^{\prime}\) there exists a unique corresponding transition \((t,\mathfrak{f})\in T\times\{\mathfrak{f}\}\); moreover, \(\ {}^{\bullet}(t,\mathfrak{r})=(t,\mathfrak{f})^{\bullet}\) and \((t,\mathfrak{r})^{\bullet}=\ ^{\bullet}(t,\mathfrak{f})\). The second one depends on the fact that \(N\) is a complete UN: indeed \(N\) is, up to the renaming of transitions, equal to \(\overleftarrow{N}^{U}\big{|}_{T^{\prime}\setminus U^{\prime}}\), which is therefore a complete unravel net. Finally, \(\overleftarrow{N}^{U}\) is trivially safe as \(N\) is safe.
The construction above simply adds as many events (transitions) as transitions to be reversed in \(U\). The preset of each added event is the postset of the corresponding event to be reversed, and its postset is the preset of the event to be reversed. We write \(\overleftarrow{N}\) instead of \(\overleftarrow{N}^{T}\) when \(N=\langle S,T,F,\mathfrak{m}\rangle\), i.e., when every transition is reversible.
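Operationally, the construction only swaps presets and postsets. The following minimal Haskell sketch, which borrows the Transition datatype introduced later in Section 5 and is not part of the paper's code, illustrates the idea; the type Dir and the function names are ours, for illustration only.

```
data Dir = Fwd | Rev deriving (Eq, Show)

-- The forward copy of a transition and its reversing companion: the reversing
-- transition consumes what the forward one produces, and vice versa.
forward, reversing :: Transition t s -> Transition (t, Dir) s
forward   t = Transition (trName t, Fwd) (trPre t)  (trPost t)
reversing t = Transition (trName t, Rev) (trPost t) (trPre t)
```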
In Figure 7a we show a non-complete unravel net, whose complete version is in Figure 7b. The reversible unravel net obtained by reversing every transition is depicted in Figure 7c.
## 4. CCS processes as unravel nets
### Encoding of CCS processes
We now recall the encoding of CCS terms into Petri nets due to Boudol and Castellani [1]. It is worth noting that the original encoding was given on _proved terms_ instead of plain CCS. The difference between proved terms and CCS is that in a proved term the labels carry the position of the process that performed the action. Hence, we will use _decorated_ versions of labels. For instance, \(\hat{a}.b\) denotes an event \(b\) that has been preceded by the occurrence of \(a\). That is, if we want to indicate the occurrence of \(b\) in a term \(a.b\) we will write \(\hat{a}.b\). Analogously, labels also carry information about the syntactical structure of a term: actions corresponding to subterms of a choice and of a parallel composition are decorated with an index \(i\) that indicates the subterm performing the action. An interesting aspect of this encoding is that this information is reflected in the names of the places and the transitions of the nets, which simplifies the formulation of the behavioural correspondence between a term and its associated net. We write \(\ell(\_)\) for the function that removes decorations from a name, e.g., \(\ell(\hat{a}.\hat{b}.c)=c\).
We are now in a position to define and comment on the encoding of a CCS term into a net. The encoding is inductively defined on the structure of the CCS process. For a CCS process \(P\), its encoded net is \(\mathcal{N}(P)=\langle S_{P},T_{P},F_{P},\mathfrak{m}_{P}\rangle\). The net corresponding to the inactive process \(\mathbf{0}\) is a net with just one marked place and no transitions, that is:
**Definition 4.1**.: The net \(\mathcal{N}(\mathbf{0})=\langle\{\mathbf{0}\},\emptyset,\emptyset,\{\mathbf{0 }\}\rangle\) is the net associated to \(\mathbf{0}\) and it is called _zero_.
To ease notation in the constructions we are going to present, we adopt the following conventions: let \(X\subseteq S\cup T\) be a set of places and transitions, we write \(\hat{\alpha}.X\) for the set \(\{\hat{\alpha}.x\ |\ x\in X\}\) containing the _decorated_ versions of places and transitions in \(X\). Analogously we lift this notation to relations: if \(R\) is a binary relation on \((S\cup T)\), then \(\hat{\alpha}.R=\{(\hat{\alpha}.x,\hat{\alpha}.y)\ |\ (x,y)\in R\}\) is a binary relation on \((\hat{\alpha}.S\cup\hat{\alpha}.T)\).
The net \(\mathcal{N}(\alpha.P)\) corresponding to a process \(\alpha.P\) extends \(\mathcal{N}(P)\) with two extra places \(\alpha.P\) and \(\hat{\alpha}.\underline{\alpha}\) and one transition \(\alpha\). The place \(\alpha.P\) stands for the process that executes the prefix \(\alpha\) and continues as \(P\). The place \(\hat{\alpha}.\underline{\alpha}\) is not in the original encoding of [1]; we have added it to ensure that the obtained net is _complete_, which is essential for the definition of the reversible net. This will become clearer when commenting on the encoding of the parallel composition. It should be noted that this addition does not interfere with the behaviour of the net, since all added places are final. A new transition, named \(\alpha\), is created and added to the net, and the flow relation is updated accordingly.
Figures 8a, 8b and 8c report respectively the encodings of the inactive process, of the process \(b.\mathbf{0}\) and of \(a.b.\mathbf{0}\). Moreover, the aforementioned figures systematically show how the prefixing operator is rendered into Petri nets. As a matter of fact, the net for \(a.b.\mathbf{0}\) is built starting from the net corresponding to \(b.\mathbf{0}\) by adding the prefix \(a\). We note that the labels of transitions are also affected, by appending the label of the new prefix at the beginning. This is rendered in Figure 8c, where the transition mimicking the action \(b\) is labeled \(\hat{a}.b\), indicating that an \(a\) was done before \(b\). In what follows we will often omit such decorations from figures.
**Definition 4.2**.: Let \(P\) be a CCS process and \(\mathcal{N}(P)=\langle S_{P},T_{P},F_{P},\mathfrak{m}_{P}\rangle\) be the associated net. Then \(\mathcal{N}(\alpha.P)\) is the net \(\langle S_{\alpha.P},T_{\alpha.P},F_{\alpha.P},\mathfrak{m}_{\alpha.P}\rangle\) where
\[\begin{array}{rcl}S_{\alpha.P}&=&\{\alpha.P,\hat{\alpha}.\underline{\alpha}\}\cup\hat{\alpha}.S_{P}\\ T_{\alpha.P}&=&\{\alpha\}\ \cup\hat{\alpha}.T_{P}\\ F_{\alpha.P}&=&\{(\alpha.P,\alpha),(\alpha,\hat{\alpha}.\underline{\alpha})\}\cup\{(\alpha,\hat{\alpha}.b)\ |\ b\in\mathfrak{m}_{P}\}\ \cup\hat{\alpha}.F_{P}\\ \mathfrak{m}_{\alpha.P}&=&\{\alpha.P\}\end{array}\]
The set of _key_-places of \(\mathcal{N}(\alpha.P)\) is \(\hat{\alpha}.\mathcal{K}_{T_{P}}\cup\{\hat{\alpha}.\underline{\alpha}\}\), where \(\mathcal{K}_{T_{P}}\) are the _key_-places of \(\mathcal{N}(P)\).
Figure 8. Example of nets corresponding to CCS processes
For a set \(X\) of transitions we write \(\|_{\mathrm{i}}X\) for \(\{\|_{\mathrm{i}}x\ |\ x\in X\}\), which straightforwardly lifts to relations.
The encoding of the parallel composition goes along the lines of the prefixing one. Also in this case we have to decorate the places (and transitions) with the position of the term in the syntax tree. To this end, each branch of the parallel is decorated with \(\|_{i}\), with \(i\) being the position of the branch. Regarding the transitions, we have to add all the possible synchronisations among the processes in parallel. This is why, along with the transitions of the branches (properly decorated with \(\|_{i}\)), we have to add extra transitions representing the possible synchronisations. Naturally, a synchronisation is possible when one label is the co-label of the other transition. Figure 9a shows the net corresponding to the process \(a.b\parallel\overline{a}.c\). As we can see, the encoding builds upon the encodings of \(a.b\) and \(\overline{a}.c\), by (i) recording in all the places and transitions whether the branch is the left one or the right one and (ii) adding an extra transition and place for the only possible synchronisation. We add an extra place (in line with the prefixes) to mark the fact that a synchronisation has taken place. Let us note that the extra places \(\underline{a}\), \(\overline{a}\) and \(\underline{\tau}\) are used to understand whether the two prefixes have been executed separately (i.e., without synchronisation) or whether they contributed to a synchronisation. Suppose, for example, that the net did not have such places, and suppose that we have two tokens in the places \(\|_{0}\ \hat{a}.b\) and \(\|_{1}\ \hat{\overline{a}}.c\). How can we tell whether these two tokens are the result of the firing sequence \(a\),\(\overline{a}\) or of the \(\tau\) transition? It is impossible; but by using the aforementioned extra places, which are instrumental to tell whether a single prefix has been executed, we can distinguish the \(\tau\) from the firing sequence \(a\),\(\overline{a}\) and then reverse accordingly.
**Definition 4.3**.: Let \(\mathcal{N}(P_{1})\) and \(\mathcal{N}(P_{2})\) be the nets associated to the processes \(P_{1}\) and \(P_{2}\). Then \(\mathcal{N}(P_{1}\|P_{2})\) is the net \(\langle S_{P_{1}\|P_{2}},T_{P_{1}\|P_{2}},F_{P_{1}\|P_{2}},\mathsf{m}_{P_{1}\|P_{2}}\rangle\) where
\[S_{P_{1}\|P_{2}} = \|_{0}S_{P_{1}}\cup\|_{1}S_{P_{2}}\cup\{s_{\{t,t^{\prime}\}}\ |\ t\in T_{P_{1}}\wedge t^{\prime}\in T_{P_{2}}\wedge\overline{\ell(t)}=\ell(t^ {\prime})\}\] \[T_{P_{1}\|P_{2}} = \|_{0}T_{P_{1}}\cup\|_{1}T_{P_{2}}\cup\{\{t,t^{\prime}\}\ |\ t\in T_{P_{1}}\wedge t^{\prime}\in T_{P_{2}}\wedge\overline{\ell(t)}=\ell(t^ {\prime})\}\] \[F_{P_{1}\|P_{2}} = \|_{0}F_{P_{1}}\cup\|_{1}F_{P_{2}}\cup\{(\{t,t^{\prime}\},s_{\{t, t^{\prime}\}})\ |\ t\in T_{P_{1}}\wedge t^{\prime}\in T_{P_{2}}\wedge\overline{\ell(t)}=\ell(t^ {\prime})\}\] \[\cup\{(\|_{i}s,\{t_{1},t_{2}\})\ |\ (s,t_{i})\in F_{P_{i}}\}\cup\{(\{t_{1},t_{2}\}, \|_{i}s)\ |\ (t_{i},s)\in F_{P_{i}}\ \wedge\ s\not\in\mathcal{K}_{T_{P_{i}}}\}\] \[\mathsf{m}_{P_{1}\|P_{2}} = \|_{0}\mathsf{m}_{P_{1}}\cup\|_{1}\mathsf{m}_{P_{2}}\]
The _key_-places of the resulting net are the following.
\[\|_{0}\mathcal{K}_{T_{P_{1}}}\cup\|_{1}\mathcal{K}_{T_{P_{2}}}\cup\{s_{\{t,t^{ \prime}\}}\ |\ t\in T_{P_{1}}\wedge t^{\prime}\in T_{P_{2}}\wedge\overline{\ell(t)}=\ell(t^ {\prime})\}\]
They are obtained by properly renaming the ones arising from the encoding of the branches and those corresponding to the synchronisations of the components.
The encoding of the choice operator is similar to the parallel one. The only difference is that we do not have to deal with possible synchronisations since the branches of a choice are mutually exclusive. Figure 9b illustrates the net corresponding to the process \(a.b+\bar{a}.c\). As in the previous examples, the net is built upon the subnets representing \(a.b\) and \(\bar{a}.c\).
**Definition 4.4**.: Let \(\mathcal{N}(P_{i})\) be the net associated to the processes \(P_{i}\) for \(i\in I\). Then \(+_{i\in I}P_{i}\) is the net \(\langle S_{+_{i\in I}P_{i}},T_{+_{i\in I}P_{i}},F_{+_{i\in I}P_{i}},\mathsf{m}_ {+_{i\in I}P_{i}}\rangle\) where:
\[\begin{array}{rcl}S_{+_{i\in I}P_{i}}&=&\cup_{i\in I}+_{i}S_{P_{i}}\\ T_{+_{i\in I}P_{i}}&=&\cup_{i\in I}+_{i}T_{P_{i}}\\ F_{+_{i\in I}P_{i}}&=&\{(+_{i}x,+_{i}y)\ |\ (x,y)\in F_{P_{i}}\}\cup\{(+_{ \mathrm{j}}s,+_{i}t)\ |\ s\in\mathsf{m}_{P_{j}}\wedge\ ^{\bullet}t\in\mathsf{m}_{P_{i}}\wedge i\neq j\}\\ \mathsf{m}_{+_{i\in I}P_{i}}&=&\cup_{i\in I}+_{i}\mathsf{m}_{P_{i}}.\end{array}\]
In this case the _key_-places of \(+_{i\in I}P_{i}\) are just the union of all _key_-places after the suitable renaming, i.e., \(\cup_{i\in I}+_{\mathrm{i}}\mathcal{K}_{T_{P_{i}}}\).
We write \(T^{a}\) for the set of all transitions in \(T\) labelled by \(a\), i.e., \(\{t\in T\ |\ \ell(t)=a\}\). The encoding of the hiding operator simply removes all transitions whose labels correspond to actions performed over the restricted name.
**Definition 4.5**.: Let \(P\) be a CCS process and \(\mathcal{N}(P)=\langle S_{P},T_{P},F_{P},\mathsf{m}_{P}\rangle\) be the associated net. Then \(\mathcal{N}(P\setminus a)\) is the net \(\langle S_{P\setminus a},T_{P\setminus a},F_{P\setminus a},\mathsf{m}_{P \setminus a}\rangle\) where
\[\begin{array}{rcl}S_{P\setminus a}&=&\searrow_{\mathsf{a}}S_{P}\\ T_{P\setminus a}&=&\searrow_{\mathsf{a}}(T_{P}\setminus T_{P}^{a})\\ F_{P\setminus a}&=&\{(\searrow_{\mathsf{a}}s,\searrow_{\mathsf{a}}t)\ |\ (s,t)\in F_{P},t\not\in T_{P}^{a}\}\cup\{(\searrow_{\mathsf{a}}t,\searrow_{\mathsf{a}}s)\ |\ (t,s)\in F_{P},t\not\in T_{P}^{a}\}\\ \mathsf{m}_{P\setminus a}&=&\searrow_{\mathsf{a}}\mathsf{m}_{P}\end{array}\]
In this case, as the number of _firable_ transitions decreases, a corresponding decrease is observed in the number of _key_-places. Hence, \(\mathcal{K}_{\searrow_{\mathsf{a}}(T_{P}\setminus T_{P}^{a})}=\searrow_{\mathsf{a}}(\mathcal{K}_{T_{P}}\setminus\mathcal{K}_{T_{P}^{a}})\).
In Figure 10, a more complex example is depicted, illustrating the net corresponding to the process \(a.a\parallel\overline{a}+b\). In this case, the process on the right of the parallel composition can synchronise with the one on the left on two different occasions. This is why there are two different transitions representing the synchronisation. However, since the process on the right-hand side is a choice, there is a possibility that the right
Figure 10. A complex example: \(\mathcal{N}(a.a\parallel\overline{a}+b)\)
Figure 9. Example of nets corresponding to CCS parallel and choice operator. We omit the trailing \(\mathbf{0}\)
branch of that choice gets executed, thereby preventing the synchronization from occurring. As the right branch of the parallel constitutes a choice between two options, the encoding designates these branches as '\(\|_{1}\) +\({}_{0}\)' and '\(\|_{1}\) +\({}_{1}\)' respectively. These labels serve to identify the left and right branches of the choice, which is situated within the right branch of the parallel operator.
The following proposition is instrumental for the main correspondence result.
**Proposition 4.6**.: _The nets defined in Definitions 4.1 to 4.5 are complete unravel nets._
Proof.: By induction on the structure of a CCS process. Clearly the net \(\mathcal{N}(\mathbf{0})\) is an unravel net and it is trivially complete because it has no transition. Assume now that the net \(\mathcal{N}(P)=\langle S_{P},T_{P},F_{P},\mathfrak{m}_{P}\rangle\) associated with the CCS process \(P\) is a complete UN. Then also \(\mathcal{N}(\alpha.P)=\langle S_{\alpha.P},T_{\alpha.P},F_{\alpha.P},\mathfrak{m}_{\alpha.P}\rangle\) is an UN, as it is obtained by adding a new transition \(\alpha\) that precedes all transitions in \(T_{P}\); moreover, a new key-place \(\hat{\alpha}.\underline{\alpha}\) is added for such transition. Assume now that \(\mathcal{N}(P_{1})\) and \(\mathcal{N}(P_{2})\) are the two complete UNs associated with \(P_{1}\) and \(P_{2}\). The net \(\mathcal{N}(P_{1}\|P_{2})\) is an UN: a synchronising transition combines the local effects of the two components, beside the key-places. For each synchronising transition \(\{t,t^{\prime}\}\), a corresponding key-place \(s_{\{t,t^{\prime}\}}\) exists, rendering the net complete. Similarly, \(+_{i\in I}P_{i}\) is a complete unravel net, as each \(\mathcal{N}(P_{i})\) is a complete unravel net, and the additional flow arcs ensure that only transitions of a single component are executed. Lastly, \(\mathcal{N}(P\setminus a)\) is complete because the elimination of transitions does not add any new behaviour.
### Encoding of RCCS processes
We are now at the point where we can define the net that corresponds to an RCCS process. So far, our focus has been on encoding CCS processes into nets. Since RCCS is built upon CCS processes, our encoding of RCCS naturally builds upon the encoding of CCS. To do so, we first introduce the concept of ancestor, i.e., the initial process from which an RCCS process is derived. Notably, since we only deal with coherent RCCS processes (as defined in Definition 2.3), an RCCS process always possesses an ancestor.
The ancestor \(\rho(R)\) of an RCCS process \(R\) can be calculated through syntactical analysis of \(R\), as all information about its past is stored within memories. The sole instance in which a process must wait for its counterpart is during a memory fork, denoted as \(\langle 1\rangle\) or \(\langle 2\rangle\).
**Definition 4.7**.: Given a coherent RCCS process \(R\), its ancestor \(\rho(R)\) is derived by using the inference rules of Figure 11. The rules use the pre-congruence relation \(\preceq\) defined as \(\equiv\) (see fig. 6) with the exception that rule Split can be only applied from right to left.
**Example 4.8**.: Consider the RCCS term \(R\) below,
\[\begin{array}{llll}R=&\langle 1\rangle\cdot\langle m_{1},a,\mathbf{0}\rangle \cdot\langle 1\rangle\cdot\langle\rangle\triangleright b&\parallel&\langle 2 \rangle\cdot\langle m_{1},a,\mathbf{0}\rangle\cdot\langle 1\rangle\cdot \langle\rangle\triangleright c&\parallel\\ &\langle m_{2},\overline{a},\mathbf{0}\rangle\cdot\langle 1\rangle\cdot \langle 2\rangle\cdot\langle\rangle\triangleright\mathbf{0}&\parallel&\langle 2 \rangle\cdot\langle 2\rangle\cdot\langle\rangle\triangleright d\end{array}\]
By applying the inference rules in Figure 11, we compute the CCS term \(P=a.(b\parallel c)\parallel\overline{a}\parallel d\), which is the ancestor of \(R\), i.e. \(\rho(R)=P\).
**Lemma 4.9**.: _For any coherent RCCS process \(R\), its ancestor \(\rho(R)\) exists and is unique._
Proof.: Since \(R\) is a coherent process, there exists a CCS process \(P\) such that \(\langle\rangle\triangleright P\hookrightarrow^{*}R\). By Property 2.7 we have that \(\langle\rangle\triangleright P\rightarrow^{*}R\), and by applying Corollary 2.6 we obtain that \(R\rightsquigarrow^{*}\langle\rangle\triangleright P\). The proof is then by induction on the number \(n\) of reductions contained in \(\rightsquigarrow^{*}\), noticing that for each application of \(\rightsquigarrow\) there exists a corresponding rule of \(\rightarrow\).
There is a tight correspondence between RCCS memories and transition/place names. That is, a memory contains all the information needed to recover the path from the root to the process itself. To this end, we introduce the function \(\mathtt{path}(\cdot)\), which is inductively defined as follows
\[\mathtt{path}(m\cdot\langle m^{\prime},\alpha^{i},\mathbf{0}\rangle)= \mathtt{path}(m\cdot\langle*,\alpha^{i},\mathbf{0}\rangle)=\hat{\alpha}. \mathtt{path}(m)\] \[\mathtt{path}(m\cdot\langle m^{\prime},\alpha^{i},Q\rangle)= \mathtt{path}(m\cdot\langle*,\alpha^{i},Q\rangle)=+\hat{\alpha}.\mathtt{path}(m)\] \[\mathtt{path}(m\cdot\langle i\rangle)=\|_{(i-1)}\mathtt{path}(m)\] \[\mathtt{path}(\langle\rangle)=\epsilon\]
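The definition can be read as a traversal of the memory from its oldest entry to its most recent one, emitting one decoration per entry. The following Haskell sketch, which is not the paper's implementation, assumes a simplistic list representation of memories (most recent entry first, the empty memory \(\langle\rangle\) being the empty list) and purely illustrative constructor and decoration names.

```
data MemEntry
  = Act String         -- ⟨m',α,0⟩ or ⟨*,α,0⟩    : executed prefix α, no alternatives
  | Choice Int String  -- ⟨m',α^i,Q⟩ or ⟨*,α^i,Q⟩ : prefix α taken in branch i of a choice
  | Fork Int           -- ⟨i⟩                     : i-th branch of a parallel composition
type Memory = [MemEntry]

-- Decorations of older entries come first, as in the inductive definition.
path :: Memory -> String
path []       = ""
path (e : es) = path es ++ dec e
  where
    dec (Act a)      = a ++ "^."
    dec (Choice i a) = "+" ++ show i ++ a ++ "^."
    dec (Fork i)     = "|" ++ show (i - 1)
```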
**Example 4.10**.: Let us consider the RCCS processes \(R_{1}\) and \(R_{2}\) defined below
\[R_{1}= \langle*,a^{1},\mathbf{0}\rangle\cdot\langle 1\rangle\cdot\langle \rangle\triangleright b\parallel\langle 2\rangle\cdot\langle\rangle\triangleright\bar{a}.c\] \[R_{2}= \langle*,b^{1},\mathbf{0}\rangle\cdot\langle m_{2},a^{1},\mathbf{ 0}\rangle\cdot\langle 1\rangle\cdot\langle\rangle\triangleright\mathbf{0} \parallel\langle m_{1},\overline{a}^{1},\mathbf{0}\rangle\cdot\langle 2 \rangle\cdot\langle\rangle\triangleright c\]
with \(m_{i}=\langle i\rangle\cdot\langle\rangle\). Their corresponding nets are shown in fig. 12.
We have that the path of the left process is \(\mathtt{path}(\langle*,a^{1},\mathbf{0}\rangle\cdot\langle 1\rangle\cdot \langle\rangle)=\|_{\mathbf{0}}\hat{a}\), while the path of the right process is \(\mathtt{path}(\langle 2\rangle\cdot\langle\rangle)=\|_{\mathbf{1}}\).
The encoding of an RCCS process should yield a net equivalent to that of its ancestor, the only potential distinction being the marking, which indicates the specific locations where tokens are placed. Such positions are inferred from the information stored in memories. Following the intuitions in section 4, we will treat names of places and transitions as strings. When we write \(\phi X\), where \(X\) is a set of strings and \(\phi\in\{\|_{\mathrm{i}},+_{\mathrm{i}},\hat{\alpha},\backslash_{\mathrm{a}}\}\), we are indicating the set \(\{\phi x\mid x\in X\}\). Then the _marking_ function \(\mu(\cdot)\) is inductively defined as
Figure 11. Ancestor inference rules
follows:
\[\mu(R\parallel S) =\mu(R)\bowtie\mu(S)\] \[\mu(R\backslash a) =\backslash_{\mathrm{a}}\mu(R)\] \[\mu(m\cdot\langle m_{1},\alpha^{i},\mathbf{0}\rangle\cdot\langle\rangle\triangleright P) =\{\alpha,m_{1}\}\cup\hat{\alpha}.\mu(m\cdot\langle\rangle\triangleright P)\] \[\mu(m\cdot\langle m_{1},\alpha^{i},Q\rangle\cdot\langle\rangle\triangleright P) =\{+_{\mathrm{i}}\alpha,m_{1}\}\cup+_{\mathrm{i}}\hat{\alpha}.\mu(m\cdot\langle\rangle\triangleright P)\] \[\mu(m\cdot\langle*,\alpha^{i},\mathbf{0}\rangle\cdot\langle\rangle\triangleright P) =\{\hat{\alpha}.\underline{\alpha}\}\cup\hat{\alpha}.\mu(m\cdot\langle\rangle\triangleright P)\] \[\mu(m\cdot\langle*,\alpha^{i},Q\rangle\cdot\langle\rangle\triangleright P) =\{+_{\mathrm{i}}\hat{\alpha}.\underline{\alpha}\}\cup+_{\mathrm{i}}\hat{\alpha}.\mu(m\cdot\langle\rangle\triangleright P)\] \[\mu(m\cdot\langle i\rangle\cdot\langle\rangle\triangleright P) =\|_{\mathrm{i-1}}\mu(m\cdot\langle\rangle\triangleright P)\] \[\mu(\langle\rangle\triangleright P) =\{P\}\]
where \(\bowtie\) is defined as the usual set union on single elements, and as the merge on pairs of the form \(\{t_{1},m_{2}\}\) and \(\{t_{2},m_{1}\}\), where \(\{t_{1},m_{2}\}\bowtie\{t_{2},m_{1}\}=s_{\{t_{1},t_{2}\}}\) if \(\ell(t_{1})=\overline{\ell(t_{2})}\) and \(t_{i}=\mathtt{path}(m_{i})\alpha_{i}\) with \(\alpha_{i}=\ell(t_{i})\).
Figure 12. Example of nets corresponding to RCCS process \(R_{1}\) and \(R_{2}\)
**Example 4.11**.: Let us consider the RCCS processes \(R_{1}\) and \(R_{2}\) in fig. 12. The marking of the process \(R_{1}\) is
\[\mu(\langle*,a^{1},\mathbf{0}\rangle\cdot\langle 1\rangle\cdot \langle\rangle\triangleright b\parallel\langle 2\rangle\cdot\langle\rangle \triangleright\bar{a}.c)\] \[= (\mu(\langle*,a^{1},\mathbf{0}\rangle\cdot\langle 1\rangle \cdot\langle\rangle\triangleright b))\bowtie(\|_{1}\mu(\langle\rangle \triangleright\bar{a}.c))\] \[= (\|_{0}\mu(\langle*,a^{1},\mathbf{0}\rangle\cdot\langle\rangle \triangleright b))\bowtie(\{\|_{1}\bar{a}.c\})\] \[= (\{\|_{0}\hat{a}.\underline{a}\}\cup\|_{0}\hat{a}.\mu(\langle \rangle\triangleright b))\bowtie\{\|_{1}\bar{a}.c\}\] \[= \{\|_{0}\hat{a}.\underline{a},\|_{0}\hat{a}.b\}\bowtie\{\|_{1} \bar{a}.c\}\]
and the marking of the process \(R_{2}\) is
\[\mu(\langle*,b^{1},\mathbf{0}\rangle\cdot\langle m_{2},a^{1}, \mathbf{0}\rangle\cdot\langle 1\rangle\cdot\langle\rangle\triangleright\mathbf{0} \parallel\langle m_{1},\overline{a}^{1},\mathbf{0}\rangle\cdot\langle 2 \rangle\cdot\langle\rangle\triangleright c)\] \[= (\mu(\langle*,b^{1},\mathbf{0}\rangle\cdot\langle m_{2},a^{1}, \mathbf{0}\rangle\cdot\langle 1\rangle\cdot\langle\rangle\triangleright\mathbf{0} \rangle))\bowtie(\mu(\langle m_{1},\overline{a}^{1},\mathbf{0}\rangle\cdot \langle 2\rangle\cdot\langle\rangle\triangleright c))\] \[= (\|_{0}\mu(\langle*,b^{1},\mathbf{0}\rangle\cdot\langle m_{2},a^{ 1},\mathbf{0}\rangle\cdot\langle\rangle\triangleright\mathbf{0}))\bowtie(\|_{1} \mu(\langle m_{1},\overline{a}^{1},\mathbf{0}\rangle\cdot\langle\rangle \triangleright c))\] \[= \|_{0}(\{a,m_{2}\},\hat{a}.\mu(\langle*,b^{1},\mathbf{0}\rangle \cdot\langle\triangleright\mathbf{0}\rangle))\bowtie(\|_{1}\{\{\overline{a},m_{ 1}\},\hat{\overline{a}}.\mu(\langle\rangle\triangleright c)\})\] \[= \|_{0}(\{a,m_{2}\},\hat{a}.\{\underline{b},b\})\bowtie(\|_{1}\{ \{\overline{a},m_{1}\},\hat{\overline{a}}.c\})\] \[= \{\|_{0}a,m_{2}\},\|_{0}\hat{a}.\underline{b},\hat{a}.\hat{b}\} \bowtie\{\{\|_{1}\{\overline{a},m_{1}\},\|_{1}\hat{\overline{a}}.c\}\] \[= \{\|_{0}a,\|_{1}\overline{a}\},\|_{0}\hat{a}.\underline{b},\|_{0 }\hat{a}.\hat{b},\|_{1}\hat{\overline{a}}.c\}\]
We are now in a position to state a property that relates the definitions of \(\mu(\cdot)\) and \(\mathtt{path}(\cdot)\) with RCCS processes.
**Property 4.12**.: Let \(R=m\triangleright\sum_{i\in I}\alpha_{i}.P_{i}\) be a RCCS process. For any \(z\in I\) such that \(R\xrightarrow{m:\alpha_{z}}\langle*,\alpha_{z}^{z},\sum_{i\in I\setminus\{z \}}\alpha_{i}.P_{i}\rangle\cdot m\triangleright P_{z}\) we have that
\[\mu(\langle*,\alpha_{z}^{z},\sum_{i\in I\setminus\{z\}}\alpha_{ i}.P_{i}\rangle\cdot m\triangleright P_{z})= \mu(R)\setminus\{\mathtt{path}(m)+_{\mathsf{z}}\alpha_{z}.P_{z}\}\] \[\cup\ \{\mathtt{path}(m)+_{\mathsf{z}}\hat{\alpha}_{z}.\underline{ \alpha_{z}},\mathtt{path}(m)+_{\mathsf{z}}\alpha_{z}.P_{z}\}\]
Proof.: The proof is by induction on the size of \(m\). The base case with \(m=\langle\rangle\) trivially holds. In the inductive case we have \(m=m_{1}\cdot e\cdot\langle\rangle\) where \(e\) can be \(\langle i\rangle\), \(\langle*,\beta^{i},Q\rangle\) and \(\langle m_{2},\beta^{i},Q\rangle\). We will show the first two cases, with the third being similar to the second one. We have that
\[S_{0}=m_{1}\cdot\langle\rangle\triangleright\sum_{i\in I}\alpha_{i}.P_{i} \xrightarrow{m_{1}\cdot\langle\rangle:\alpha_{z}}\langle*,\alpha_{z}^{z}, \sum_{i\in I\setminus\{z\}}\alpha_{i}.P_{i}\rangle\cdot m_{1}\cdot\langle \rangle\triangleright P_{z}=R_{0}\]
and by applying inductive hypothesis (on a shorter memory) we have that
\[\mu(S_{0})=\mu(R_{0})\setminus\{\mathtt{path}(m_{1})+_{\mathsf{z}}\alpha_{z}.P_ {z}\}\cup\{\mathtt{path}(m_{1})+_{\mathsf{z}}\hat{\alpha}_{z}.\underline{ \alpha_{z}},\mathtt{path}(m_{1})+_{\mathsf{z}}\alpha_{z}.P_{z}\} \tag{4.1}\]
We proceed by case analysis.
\(e=\langle i\rangle\)**:**: let us note that \(\mathtt{path}(m)=\|_{\mathtt{i}}\mathtt{path}(m_{1})\), and that \(\mu(R)=\|_{\mathrm{i}}\mu(R_{0})\) and \(\mu(S)=\|_{\mathrm{i}}\mu(S_{0})\). Thanks to eq. (4.1) we know the form of \(\mu(S_{0})\), hence
\[\mu(S) = \|_{\mathrm{i}}\mu(S_{0})\] \[= \|_{\mathrm{i}}\mu(R_{0})\setminus\{\|_{\mathrm{i}}\mathtt{path}( m_{1})+_{\mathtt{2}}\alpha_{z}.P_{z}\}\cup\{\|_{\mathrm{i}}\mathtt{path}(m_{1})+_{ \mathtt{2}}\hat{\alpha}_{z}.\underline{\alpha_{z}},\|_{\mathrm{i}}\mathtt{ path}(m_{1})+_{\mathtt{2}}\alpha_{z}.P_{z}\}\] \[= \mu(R)\setminus\{\mathtt{path}(m)+_{\mathtt{2}}\alpha_{z}.P_{z} \}\cup\{\mathtt{path}(m)+_{\mathtt{2}}\hat{\alpha}_{z}.\underline{\alpha_{z }},\mathtt{path}(m)+_{\mathtt{2}}\alpha_{z}.P_{z}\}\]
as desired.
\(e=\langle*,\beta^{i},Q\rangle\)**:**: let us note that \(\mathtt{path}(m)=+_{\mathrm{i}}\hat{\beta}.\mathtt{path}(m_{1})\), and that \(\mu(R)=+_{\mathrm{i}}\hat{\beta}.\mu(R_{0})\cup\{+_{\mathrm{i}}\hat{\beta}. \underline{\beta}\}\), and \(\mu(S)=+_{\mathrm{i}}\hat{\beta}.\mu(S_{0})\cup\{+_{\mathrm{i}}\hat{\beta}. \underline{\beta}\}\). Thanks to eq. (4.1) we know the form of \(\mu(S_{0})\), hence
\[\mu(S) = +_{\mathrm{i}}\hat{\beta}.\mu(S_{0})\cup\{+_{\mathrm{i}}\hat{ \beta}.\underline{\beta}\}\] \[= +_{\mathrm{i}}\hat{\beta}.\{\mu(R_{0})\setminus\{\mathtt{path}(m_{ 1})+_{\mathtt{2}}\alpha_{z}.P_{z}\}\] \[\cup\ \{\mathtt{path}(m_{1})+_{\mathtt{2}}\hat{\alpha}_{z}. \underline{\alpha_{z}},\mathtt{path}(m_{1})+_{\mathtt{2}}\alpha_{z}.P_{z}\}\} \ \cup\ \{+_{\mathrm{i}}\hat{\beta}.\underline{\beta}\}\] \[= +_{\mathrm{i}}\hat{\beta}.\mu(R_{0})\setminus\{+_{\mathrm{i}}\hat{ \beta}.\mathtt{path}(m_{1})+_{\mathtt{2}}\alpha_{z}.P_{z}\}\] \[\cup\{+_{\mathrm{i}}\hat{\beta}.\mathtt{path}(m_{1})+_{\mathtt{2 }}\hat{\alpha}_{z}.\underline{\alpha_{z}},+_{\mathrm{i}}\hat{\beta}.\mathtt{ path}(m_{1})+_{\mathtt{2}}\alpha_{z}.P_{z}\}\}\cup\{+_{\mathrm{i}}\hat{\beta}. \underline{\beta}\}\] \[= +_{\mathrm{i}}\hat{\beta}.\mu(R_{0})\setminus\{\mathtt{path}(m)+_ {\mathtt{2}}\alpha_{z}.P_{z}\}\] \[\cup\{\mathtt{path}(m)+_{\mathtt{2}}\hat{\alpha}_{z}.\underline{ \alpha_{z}},\mathtt{path}(m)+_{\mathtt{2}}\alpha_{z}.P_{z}\}\}\cup\{+_{ \mathrm{i}}\hat{\beta}.\underline{\beta}\}\] \[= \mu(R)\setminus\{\mathtt{path}(m)+_{\mathtt{2}}\alpha_{z}.P_{z} \}\cup\{\mathtt{path}(m)+_{\mathtt{2}}\hat{\alpha}_{z}.\underline{\alpha_{z }},\mathtt{path}(m)+_{\mathtt{2}}\alpha_{z}.P_{z}\}\}\]
as desired.
As a consequence, we have the following corollary.
**Corollary 4.13**.: _Let \(R=m\triangleright\alpha.P\) be an RCCS process. If \(R\xrightarrow{m:\alpha}\langle*,\alpha,\mathbf{0}\rangle\cdot m\triangleright P\), we have that_
\[\mu(\langle*,\alpha,\mathbf{0}\rangle\cdot m\triangleright P)=\mu(R)\setminus \{\mathtt{path}(m)\alpha.P\}\cup\{\mathtt{path}(m)\hat{\alpha}.\underline{ \alpha},\mathtt{path}(m)\alpha.P\}\]
We are now ready to formalise the reversible net corresponding to an RCCS process.
**Definition 4.14**.: Let \(R\) be an RCCS term with \(\rho(R)=P\). Then \(\overleftarrow{\mathcal{N}(R)}\) is the net \(\langle S,T,F,\mu(R)\rangle\) where \(\mathcal{N}(P)=\langle S,T,F,\mathsf{m}\rangle\).
Note that the reversible net corresponding to a coherent RCCS process \(R\) has the same places, transitions, and flow relation as the ancestor of \(R\). The sole divergence lies in the marking, which is derived from the computational history stored within the memories of \(R\). The following is a consequence of the previous results.
**Proposition 4.15**.: _Let \(R\) be an RCCS term with \(\rho(R)=P\). Then \(\overleftarrow{\mathcal{N}(R)}\) is a reversible unravel net._
### Correctness result
We prove the correctness of our encoding in terms of a behavioural equivalence. To this aim we reformulate the definition of _forward and reverse bisimilarity_[20], initially stated for CCSK, to cope with RCCS terms and Petri nets.
**Definition 4.16** (Forward and reverse bisimulation).: Let \(R\) be a coherent RCCS process and \(N=\langle S,T,F,\mathsf{m}\rangle\) an \(\mathsf{rUN}\). The relation \(\mathcal{R}\) is a forward reverse bisimulation if whenever \((R,N)\in\mathcal{R}\):
1. if \(R\xrightarrow{m:\alpha}R^{\prime}\) then there exist \(t\in T\) and \(\mathsf{m}^{\prime}\) such that \(\mathsf{m}\left[t\right\rangle\mathsf{m}^{\prime}\), \(t=(\mathsf{path}(m)\alpha,\mathsf{f})\) and \((R^{\prime},\langle S,T,F,\mathsf{m}^{\prime}\rangle)\in\mathcal{R}\);
2. if \(R\overset{m:\alpha}{\rightsquigarrow}R^{\prime}\) then there exist \(t\in T\) and \(\mathsf{m}^{\prime}\) such that \(\mathsf{m}\left[t\right\rangle\mathsf{m}^{\prime}\), \(t=(\mathsf{path}(m)\alpha,\mathsf{r})\) and \((R^{\prime},\langle S,T,F,\mathsf{m}^{\prime}\rangle)\in\mathcal{R}\);
3. if \(R\xrightarrow{m_{1},m_{2}:\tau}R^{\prime}\) then there exist \((t_{1},\mathsf{f}),(t_{2},\mathsf{f})\in T\) and \(\mathsf{m}^{\prime}\) such that \(\mathsf{m}\left[(\{t_{1},t_{2}\},\mathsf{f})\right\rangle\mathsf{m}^{\prime}\), \(\overline{\ell(t_{1})}=\ell(t_{2})\), \(\mathsf{path}(m_{i})<t_{i}\) for \(i\in\{1,2\}\) and \((R^{\prime},\langle S,T,F,\mathsf{m}^{\prime}\rangle)\in\mathcal{R}\);
4. if \(R\overset{m_{1},m_{2}:\tau}{\rightsquigarrow}R^{\prime}\) then there exist \((t_{1},\mathsf{r}),(t_{2},\mathsf{r})\in T\) and \(\mathsf{m}^{\prime}\) such that \(\mathsf{m}\left[(\{t_{1},t_{2}\},\mathsf{r})\right\rangle\mathsf{m}^{\prime}\), \(\overline{\ell(t_{1})}=\ell(t_{2})\), \(\mathsf{path}(m_{i})<t_{i}\) for \(i\in\{1,2\}\) and \((R^{\prime},\langle S,T,F,\mathsf{m}^{\prime}\rangle)\in\mathcal{R}\);
5. if \(\mathsf{m}\left[t\right]\mathsf{m}^{\prime}\) with \(t=(\mathsf{path}(m)\alpha,\mathsf{f})\) then there exists \(R,R^{\prime}\) such that \(\mu(R)=\mathsf{m}\), \(\mu(R^{\prime})=\mathsf{m}^{\prime}\), \(R\xrightarrow{m:\alpha}R^{\prime}\) and \((R^{\prime},\langle S,T,F,\mathsf{m}^{\prime}\rangle)\in\mathcal{R}\);
6. if \(\mathsf{m}\left[t\right]\mathsf{m}^{\prime}\) with \(t=(\mathsf{path}(m)\alpha,\mathsf{r})\) then there exists \(R,R^{\prime}\) such that \(\mu(R)=\mathsf{m}\), \(\mu(R^{\prime})=\mathsf{m}^{\prime}\), \(R\overset{m:\alpha}{\rightsquigarrow}R^{\prime}\) and \((R^{\prime},\langle S,T,F,\mathsf{m}^{\prime}\rangle)\in\mathcal{R}\);
7. if \(\mathsf{m}\left[(\{t_{1},t_{2}\},\mathsf{f})\right)\mathsf{m}^{\prime}\) with \(\overline{\ell(t_{1})}=\ell(t_{2})\) and \(\mathsf{path}(m_{i})\alpha_{i}=t_{i}\) with \(\ell(t_{i})=\alpha_{i}\) for \(i\in\{1,2\}\) then there exists \(R,R^{\prime}\) such that \(\mu(R)=\mathsf{m}\), \(\mu(R^{\prime})=\mathsf{m}^{\prime}\), \(R\xrightarrow{m_{1},m_{2}:\tau}R^{\prime}\) and \((R^{\prime},\langle S,T,F,\mathsf{m}^{\prime}\rangle)\in\mathcal{R}\);
8. if \(\mathsf{m}\left[(\{t_{1},t_{2}\},\mathsf{r})\right)\mathsf{m}^{\prime}\) with \(\overline{\ell(t_{1})}=\ell(t_{2})\) and \(\mathsf{path}(m_{i})\alpha_{i}=t_{i}\) with \(\ell(t_{i})=\alpha_{i}\) for \(i\in\{1,2\}\) then there exists \(R,R^{\prime}\) such that \(\mu(R)=\mathsf{m}\), \(\mu(R^{\prime})=\mathsf{m}^{\prime}\), \(R\overset{m_{1},m_{2}:\tau}{\rightsquigarrow}R^{\prime}\) and \((R^{\prime},\langle S,T,F,\mathsf{m}^{\prime}\rangle)\in\mathcal{R}\).
The largest forward reverse bisimulation is called forward reverse bisimilarity, denoted with \(\sim_{FR}\).
We first prove that structurally congruent coherent RCCS processes are encoded into the _same_ \(\mathsf{rUN}\). Subsequently, we demonstrate the correspondence between a step taken in the process algebra and the firing of an appropriate transition in the corresponding net, and vice versa.
**Lemma 4.17** (Preservation).: _Let \(R_{1}\) and \(R_{2}\) be two coherent RCCS processes. If \(R_{1}\equiv R_{2}\) then \(\overleftarrow{\mathcal{N}(R_{1})}\) and \(\overleftarrow{\mathcal{N}(R_{2})}\) are isomorphic and have the same marking._
Proof.: Since \(\equiv\) is defined on monitored processes, the only axiom which changes the structure of the ancestor process is \(\alpha\)-renaming. Hence \(R_{1}\) and \(R_{2}\) have the same ancestor, say \(P\), up to \(\alpha\)-renaming. It is easy to see that the two generated nets have the same places, transitions and flow relation up to renaming, hence they are isomorphic. We just have to check whether the initial markings are the same. The proof follows by induction and case analysis on the last applied axiom of \(\equiv\):
* If the last applied rule is (Split), w.l.o.g. we can assume \(R_{1}=m\triangleright(P_{1}\parallel P_{2})\) and \(R_{2}=\langle 1\rangle\cdot m\triangleright P_{1}\parallel\langle 2\rangle\cdot m \triangleright P_{2}\). We need to show that \(\mu(R_{1})=\mu(R_{2})\). By looking at the definition of \(\mu(\cdot)\) we have that \(\mu(R_{1})=\tilde{\phi}\{P_{1}\parallel P_{2}\}\) and \(\mu(R_{2})=\tilde{\phi}\|_{\mathsf{0}}\{P_{1}\}\cup\tilde{\phi}\|_{\mathsf{1}} \{P_{2}\}\), where \(\tilde{\phi}\) is a sequence of prefixes \(\phi\in\{\|_{\mathsf{i}},\mathsf{+}_{\mathsf{i}},\hat{\alpha},\backslash_{ \mathsf{a}}\}\). Also, we have that \(\tilde{\phi}\{P_{1}\parallel P_{2}\}=\tilde{\phi}\|_{\mathsf{0}}\{P_{1}\}\cup \tilde{\phi}\|_{\mathsf{1}}\{P_{2}\}=\mu(R_{1})\), as desired.
* this case is similar to the previous one.
* Suppose \(\mu(R_{1})=\mathsf{m}\cup\mathsf{m}^{\prime}\) where \(\mathsf{m}^{\prime}\) is the marking containing the bound action which will be converted by the last application of \(\equiv\). By inductive hypothesis we also have that \(\mu(R_{2})=\mathsf{m}\cup\alpha(\mathsf{m}^{\prime})\) where the \(\alpha\)-conversion is applied only to those names which contain the bound action, that is \(\mathsf{m}^{\prime}\). We have that the two nets have the same marking up to some renaming, as desired.
**Lemma 4.18** (Soundness).: _Let \(R_{1}\) be an RCCS coherent process and \(\hat{\overleftarrow{\mathcal{N}}(R_{1})}=\langle S,T,F,\mu(R_{1})\rangle\) its corresponding \(\mathsf{rUN}\). If \(R_{1}\stackrel{{\hat{m}:\alpha}}{{\longleftrightarrow}}R_{2}\) then_
* \(\hat{\overleftarrow{\mathcal{N}}(R_{2})}=\langle S,T,F,\mu(R_{2})\rangle\)_; and_
* _there exists_ \(t\in T\) _such that_ \(\mu(R_{1})\left[t\right]\mu(R_{2})\)_; and_
* _for some_ \(d\in\{\mathtt{f},\mathtt{r}\}\) _either_
* \(\hat{m}=m\) _and_ \(t=(\mathsf{path}(m)\alpha,d)\)_; or_
* \(\hat{m}=m_{1},m_{2}\) _and_ \(\alpha=\tau\) _and there exist two transitions_ \((t_{1},d),(t_{2},d)\in T\) _with_ \(\overline{\ell(t_{1})}=\ell(t_{2})\) _and_ \(\mathsf{path}(m_{i})\alpha_{i}=t_{i}\) _with_ \(\ell(t_{i})=\alpha_{i}\) _for_ \(i\in\{1,2\}\)_, and_ \(t=(\{t_{1},t_{2}\},d)\)_._
Proof.: As \(R_{1}\) is a coherent process then it has an ancestor \(\rho(R_{1})\), say \(P\), which is the same ancestor of \(R_{2}\), as \(R_{2}\) is reached by \(R_{1}\) with one reduction step. Therefore \(\hat{\overleftarrow{\mathcal{N}}(R_{1})}\) and \(\hat{\overleftarrow{\mathcal{N}}(R_{2})}\) have the same places, transitions and flow relation, the only difference being the marking. We show that for each move in the process algebra a corresponding firing of a transition \(t\in T\) exists such that \((\mu(R_{1})\setminus\,^{\bullet}t)\cup t^{\bullet}=\mu(R_{2})\).
We have two cases: either the process synchronises with the context or it performs a \(\tau\) (or a reversing of any of them). Both cases are similar, so we will focus on the first one. We proceed by induction on the derivation \(R_{1}\stackrel{{ m:\alpha}}{{\longrightarrow}}R_{2}\) with a case analysis on the last applied rule. The base cases correspond to the application of either r-act or r-act\({}^{\bullet}\).
r-act: Consider the application of the rule r-act. We have
\[R_{1}=m\triangleright\sum_{i\in I}\alpha_{i}.Q_{i}\stackrel{{ m:\alpha_{z}}}{{\longrightarrow}}\langle\ast,\alpha_{z}^{z},\sum_{i \in\Gamma\{z\}}\alpha_{i}.Q_{i}\rangle\cdot m\triangleright P_{z}=R_{2}\]
We first consider the case where \(|I|=1\). Hence we have
\[R_{1}=m\triangleright\alpha.Q\stackrel{{ m:\alpha}}{{ \longrightarrow}}\langle\ast,\alpha,\mathbf{0}\rangle\cdot m \triangleright Q=R_{2}\]
The marking corresponding to \(R_{1}\) in the net \(\hat{\overleftarrow{\mathcal{N}}(P)}=\langle S,T,F,\mathsf{m}\rangle\) is \(\mu(R_{1})=\mu(m\triangleright\alpha.Q)\) and thanks to Corollary 4.13 the marking of \(R_{2}\) is
\[\mu(R_{2}) =\mu(\langle\ast,\alpha,\mathbf{0}\rangle\cdot m\triangleright Q)\] \[=\mu(m\triangleright\alpha.Q)\setminus\{\mathsf{path}(m)\alpha.Q\} \cup\{\mathsf{path}(m)\hat{\alpha}.\underline{\alpha}\}\cup\{\mathsf{path}(m) \hat{\alpha}.\underline{\alpha}_{Q}\}\]
By construction (see Definition 4.2), the net \(\hat{\overleftarrow{\mathcal{N}}(P)}\) contains a transition \(t\in T\) such that \(t=(\mathsf{path}(m)\alpha,\mathtt{f})\), with \(\,^{\bullet}t=\{\mathsf{path}(m)\alpha.Q\}\) and \(t^{\bullet}=\{\mathsf{path}(m)\hat{\alpha}.\underline{\alpha}\}\cup\{\mathsf{ path}(m)\hat{\alpha}.\underline{\alpha}_{Q}\}\). The thesis follows by observing that such transition is enabled at \(\mu(R_{1})\) because \(\{\mathsf{path}(m)\alpha.Q\}\in\mu(R_{1})\) by definition of \(\mu(\cdot)\), and \(\mu(R_{1})\left[t\right]\mu(R_{2})\).
Consider now the case with \(|I|>1\).
\[R_{1}=m\triangleright\sum_{i\in I}\alpha_{i}.Q_{i}\stackrel{{ m:\alpha_{z}}}{{ \longrightarrow}}\langle\ast,\alpha_{z}^{z},\sum_{i\in\Gamma\{z\}}\alpha_{i}. Q_{i}\rangle\cdot m\triangleright P_{z}=R_{2}\]
The marking corresponding to \(R_{1}\) in the net \(\overleftarrow{\mathcal{N}(P)}=\langle S,T,F,\mathsf{m}\rangle\) is
\[\mu(R_{1}) =\mu(m\triangleright\sum_{i\in I}\alpha_{i}.Q_{i})\] \[=\bigcup_{i\in I}\mu(m\triangleright\alpha_{i}.Q_{i})\] and it contains the marked places \(\{\mathtt{path}(m)+_{i}\alpha_{i}.Q_{i}\mid i\in I\}\). Again by construction, the net \(\overleftarrow{\mathcal{N}(P)}\) contains a transition \(t\in T\) such that \(t=(\mathtt{path}(m)+_{\mathtt{z}}\alpha_{z},\mathtt{f})\), with \(z\in I\), \({}^{\bullet}t=\{\mathtt{path}(m)+_{\mathtt{i}}\alpha_{i}.Q_{i}\mid i\in I\}\) and \(t^{\bullet}=\{\mathtt{path}(m)+_{\mathtt{z}}\hat{\alpha}_{z}.\underline{ \alpha_{z}}\}\cup\{\mathtt{path}(m)+_{\mathtt{z}}\hat{\alpha}_{z}.\underline {\alpha_{z}}\}\cup\{\mathtt{path}(m)+_{\mathtt{z}}\hat{\alpha}_{z}.\underline {\alpha_{z}}\}\) and again \(\mu(R_{1})\,[t]\,\mu(R_{2})\) where \(\mu(R_{2})\) is the marking
\[\mu(m\triangleright\sum_{i\in I}\alpha_{i}.Q_{i})\setminus\{\mathtt{path}(m)+_ {\mathtt{i}}\alpha_{i}.Q_{i}\mid i\in I\}\cup\{\mathtt{path}(m)+_{\mathtt{z}} \hat{\alpha}_{z}.\underline{\alpha_{z}}\}\cup\{\mathtt{path}(m)+_{\mathtt{z}} \hat{\alpha}_{z}.\underline{\alpha_{z}}\}\]
r-act\({}^{\bullet}\): The case in which (r-act\({}^{\bullet}\)) is used is similar. Assume
\[R_{1}=\langle*,\alpha_{z}^{z},\sum_{i\in I\setminus\{z\}}\alpha_{i}.Q_{i}\rangle\cdot m\triangleright Q_{z}\overset{m:\alpha_{z}}{\rightsquigarrow}m\triangleright\sum_{i\in I}\alpha_{i}.Q_{i}=R_{2}\]
and again take \(|I|=1\). Then \(\mu(R_{1})=\mu(\langle*,\alpha,\mathbf{0}\rangle\cdot m\triangleright Q)=\mu(m \triangleright\alpha.Q)\backslash\{\mathtt{path}(m)\alpha.Q\}\cup\{\mathtt{path }(m)\hat{\alpha}.\underline{\alpha}\}\cup\{\mathtt{path}(m)\hat{\alpha}. \underline{\alpha}\}\). The transition \(t=(\mathtt{path}(m)a,\mathtt{r})\) in \(\overleftarrow{\mathcal{N}(P)}\) is enabled at \(\mu(R_{1})\) as it is the reverse of \((\mathtt{path}(m)\alpha,\mathtt{f})\) and its execution leads to the marking \(\mu(R_{2})=\mu(m\triangleright\alpha.Q)\) as required.
The case with \(|I|>1\) follows the same argument of the forward one.
In the inductive case we have to do a case analysis on the last applied rule. We have (l-par), (r-sych), (r-res) and (r-equiv) and their reversible variants. The most representative cases are (r-sych) and (r-equiv).
r-equiv: Consider the application of the rule (r-equiv). It follows by induction and by applying Lemma 4.17.
r-equiv\({}^{\bullet}\): The application of the rule (r-equiv\({}^{\bullet}\)) follows the same argument of the previous case.
r-synch: For the (r-synch) case, let us suppose \(R_{0}=R_{0}^{1}\parallel R_{0}^{2}\). We have that \(R_{0}^{1}\parallel R_{0}^{2}\xrightarrow{m_{1},m_{2}:\tau}R_{1_{m_{1}@m_{2}}}^{1}\parallel R_{1_{m_{2}@m_{1}}}^{2}\) with \(R_{0}^{i}\xrightarrow{m_{i}:\alpha_{i}}R_{1}^{i}\) and \(\alpha_{1}=\overline{\alpha_{2}}\). By applying the inductive hypothesis on the derivations \(R_{0}^{i}\xrightarrow{m_{i}:\alpha_{i}}R_{1}^{i}\) we have that there exist two transitions \(t_{1}\) and \(t_{2}\) such that \(\mathsf{m}_{t_{0}}^{i}\,[t_{i}\rangle\,\mathsf{m}_{t_{1}}^{i}\), \((\mathtt{path}(m_{i})\alpha_{i},\mathtt{f})=t_{i}\), \(\mu(R_{0}^{i})=\mathsf{m}_{t_{0}}^{i}\) and \(\mu(R_{1}^{i})=\mathsf{m}_{t_{1}}^{i}\). We can deduce that \({}^{\bullet}t_{1}\cap{}^{\bullet}t_{2}=\emptyset\), since they are enabled on different markings. Also, by definition we have that \(\mathsf{m}_{t_{0}}=\mu(R_{0})=\mu(R_{0}^{1})\bowtie\mu(R_{0}^{2})\). Let us note that the operator \(\bowtie\) acts on places which correspond to past synchronisations, hence it does not affect \({}^{\bullet}t_{i}\), that is \({}^{\bullet}t_{i}\subseteq\mathsf{m}_{t_{0}}\). Since \(\alpha_{1}=\overline{\alpha_{2}}\), by Definition 4.3 in the net there exists a transition \(t_{\tau}=(\{\mathtt{path}(m_{1})\alpha_{1},\mathtt{path}(m_{2})\alpha_{2}\},\mathtt{f})\) whose preset and postset are respectively \({}^{\bullet}t_{\tau}={}^{\bullet}t_{1}\cup{}^{\bullet}t_{2}\) and \(t_{\tau}{}^{\bullet}=(t_{1}{}^{\bullet}\setminus\{\mathtt{path}(m_{1})\hat{\alpha}_{1}.\underline{\alpha_{1}}\})\cup(t_{2}{}^{\bullet}\setminus\{\mathtt{path}(m_{2})\hat{\alpha}_{2}.\underline{\alpha_{2}}\})\cup\{s_{\{\mathtt{path}(m_{1})\alpha_{1},\mathtt{path}(m_{2})\alpha_{2}\}}\}\). Hence we have that \(\mathsf{m}_{t_{0}}\,[t_{\tau}\rangle\,(\mathsf{m}_{t_{0}}\setminus{}^{\bullet}t_{\tau})\cup t_{\tau}{}^{\bullet}\). By definition we have that \(\{\mathtt{path}(m_{1})\alpha_{1},m_{2}\}\in\mu(R_{1}^{1})\) and \(\{\mathtt{path}(m_{2})\alpha_{2},m_{1}\}\in\mu(R_{1}^{2})\) and that \(\{\mathtt{path}(m_{1})\alpha_{1},\mathtt{path}(m_{2})\alpha_{2}\}\in\mu(R_{1}^{1})\bowtie\mu(R_{1}^{2})\). Also let us note that the \(m_{i}@m_{j}\) operation just replaces the \(*\) on top of the memory \(m_{i}\) with \(m_{j}\), which is similar to the \(\bowtie\) operator. Hence \(\mu(R_{1}^{1})\bowtie\mu(R_{1}^{2})=\mu(R_{1_{m_{1}@m_{2}}}^{1}\parallel R_{1_{m_{2}@m_{1}}}^{2})=(\mathsf{m}_{t_{0}}\setminus{}^{\bullet}t_{\tau})\cup t_{\tau}{}^{\bullet}\), as desired.
r-sych\({}^{\bullet}\): this case is analogous to (r-act\({}^{\bullet}\)).
**Lemma 4.19** (Completeness).: _Let \(R_{1}\) be an RCCS coherent process and let \(\overleftarrow{\mathcal{N}(R_{1})}=\langle S,T,F,\mu(R_{1})\rangle\) be the corresponding rUN. If \(\mu(R_{1})\left[t\right)\mathsf{m}^{\prime}\), then there exists \(R_{2}\) s.t. one of the following holds:_
* \(t=(\mathsf{path}(m)\alpha,d)\) _and_ \(R_{1}\stackrel{{ m:\alpha}}{{\longleftrightarrow}}R_{2}\) _and_ \(\overleftarrow{\mathcal{N}(R_{2})}=\langle S,T,F,\mathsf{m}^{\prime}\rangle\)_;_
* \(t=(\{t_{1},t_{2}\},d)\) _such that_ \(\overline{\ell(t_{1})}=\ell(t_{2})\)_, with_ \(t_{i}=(\mathsf{path}(m_{i})\alpha_{i},d)\)_,_ \(\alpha_{i}\in\{\ell(t_{1}),\ell(t_{2})\}\) _for_ \(i=1,2\) _and_ \(R_{1}\stackrel{{ m_{1},m_{2}:\tau}}{{\longleftrightarrow}}R_{2}\) _with_ \(\overleftarrow{\mathcal{N}(R_{2})}=\langle S,T,F,\mathsf{m}^{\prime}\rangle\)__
_with \(d\in\{\mathtt{f},\mathtt{r}\}\)._
Proof.: If \(\mu(R_{1})\left[t\right)\mathsf{m}^{\prime}\), then \(\mu(R_{1})=\mathsf{m}_{0}\cup{}^{\bullet}t\) and \(\mathsf{m}^{\prime}=(\mathsf{m}_{0}\setminus{}^{\bullet}t)\cup t^{\bullet}\). The encoding of \(\mathcal{N}(\cdot)\) is such that each transition or place name has a unique form, which corresponds to a path of a CCS term, and the transitions in \(\overleftarrow{\mathcal{N}(\cdot)}\) are of the form \((t,d)\), where \(t\) is the transition name of the CCS term and \(d\in\{\mathtt{f},\mathtt{r}\}\) is the _direction_, either forward or reverse. That is from the transition name \((t,d)\) we can isolate the RCCS term which can mimic the action.
If the transition \((t,d)\) is not a synchronisation, that is \((t,d)\) is not of the form \((\{t_{1},t_{2}\},d)\), then we can assume w.l.o.g. that \(t=\tilde{\phi}\alpha\) with \(\tilde{\phi}\) being a sequence of \(\phi\in\{\|_{\mathtt{i}},\mathsf{+}_{\mathtt{i}},\hat{\alpha},\setminus_{\mathtt{a}}\}\). Suppose \(d\) is \(\mathtt{f}\). If the last decoration in \(\tilde{\phi}\) has the form \(\mathsf{+}_{\mathtt{j}}\), that is \(\tilde{\phi}=\tilde{\phi}^{\prime}\mathsf{+}_{\mathtt{j}}\), this means that there exists in the net a set of transitions \(T^{\prime}=\{t_{i}=(\tilde{\phi}\beta_{i},\mathtt{f})\ \mid\ (t_{i},\mathtt{f})\in T\}\). Now, assuming that the ancestor of \(R_{1}\) is \(P\), we have that \(P=C[\sum_{i\in I}\beta_{i}.Q_{i}]\), where there exists an index \(j\in I\) such that \(\beta_{j}=\alpha\) and \(\alpha\) is the action mimicked by the transition \((t,\mathtt{f})\), and the position of the hole in the context is calculated using \(\tilde{\phi}\). Also, since the transition is enabled in the net, we have \(R_{1}=E[(m\triangleright\sum_{i\in I}\beta_{i}.Q_{i})\backslash A]\) where \(E[\cdot]\) is an active context. Hence, we have that
\[E[(m\triangleright\sum_{i\in I}\beta_{i}.Q_{i})\backslash A]\xrightarrow{m: \beta^{j}}E[(\langle*,\beta^{j},\sum_{i\in I\backslash\{j\}}\rangle\cdot m \triangleright Q_{j})\backslash A]=R_{2}\]
By definition 4.14 we have \(\mathsf{m}=\mu(R_{1})\), and by definition 4.4\({}^{\bullet}t=\{\tilde{\phi}\beta_{i}.Q_{i}\mid i\in I\}\) and \(t^{\bullet}=\{\tilde{\phi}\tilde{\beta}_{j}.\underline{\beta_{j}}\}\cup \tilde{\phi}.\{\beta_{j}.Q_{j}\}\). Also
\[\mu(R_{1}) =\mu(E[\mathbf{0}])\bowtie\mu((m\triangleright\sum_{i\in I}\beta_ {i}.Q_{i})\backslash A)=\mathsf{m}\cup\mathsf{m}_{1}\] \[\mu(R_{2}) =\mu(E[\mathbf{0}])\bowtie\mu((\langle*,\beta^{j},\sum_{i\in I \backslash\{j\}}\rangle\cdot m\triangleright Q_{j})\backslash A)=\mathsf{m}\cup \mathsf{m}_{2}\]
where \(\mathsf{m}_{1}\) and \(\mathsf{m}_{2}\) are the results of applying the possible synchronisations \(\bowtie\) respectively on \(\mu((m\triangleright\sum_{i\in I}\beta_{i}.Q_{i})\backslash A)\) and \(\mu((\langle*,\beta^{j},\sum_{i\in I\backslash\{j\}}\rangle\cdot m\triangleright Q_{j})\backslash A)\). Moreover, we can separate from \(\mathsf{m}_{1}\) and \(\mathsf{m}_{2}\) the key places, that is the places whose name terminates with \(\hat{\alpha}.\underline{\alpha}\) or with \(s_{\{t_{1},t_{2}\}}\). Let \(\mathsf{m}_{i}^{k}\) be such markings; then we have:
\[\mu(R_{1}) =\mathsf{m}\cup\mathsf{m}_{1}=\mathsf{m}\cup\mathsf{m}_{1}^{k} \cup\mathsf{m}_{1}^{\prime}\] \[\mu(R_{2}) =\mathsf{m}\cup\mathsf{m}_{2}=\mathsf{m}\cup\mathsf{m}_{2}^{k} \cup\mathsf{m}_{2}^{\prime}\]
By definition of \(\mu(\cdot)\) we have that
\[\mathsf{m}_{1}^{\prime} =\{\mathsf{path}(m).+_{\mathrm{i}}\beta_{i}.Q_{i}\ |\ i\in I\}\] \[\mathsf{m}_{2}^{\prime} =\{\mathsf{path}((\langle*,\beta^{j},\sum_{i\in I\setminus\{j\}} \rangle\cdot m).\hat{\beta}_{j}.\underline{\beta_{j}}\}\cup\{\mathsf{path}(( \langle*,\beta^{j},\sum_{i\in I\setminus\{j\}}\rangle\cdot m).\hat{\beta}_{j}. Q_{j}\}\]
It is easy to check that \(\tilde{\phi}=\mathsf{path}(m)+_{\mathrm{i}}\) and \(\tilde{\phi}=\mathsf{path}((\langle*,\beta^{j},\sum_{i\in I\setminus\{j\}} \rangle\cdot m)\). And we are done.
The cases of synchronization and backward transitions are similar.
We can now state our main result in terms of bisimulation:
**Theorem 4.20**.: _Let \(R\) be an RCCS process and let \(P=\rho(R)\) be its ancestor, then_
\[\langle\rangle\triangleright P\sim_{FR}\overleftarrow{\mathcal{N}(P)}\]
Proof.: It is sufficient to show that
\[\mathcal{R}=\{(R,\langle S,T,F,\mu(R)\rangle)\ |\ \rho(R)=P,\ \overleftarrow{ \mathcal{N}(P)}=\langle S,T,F,\mathsf{m}\rangle\}\]
is a forward reverse bisimulation. It is easy to check that all the conditions of Definition 4.16 are matched by Lemma 4.18 and Lemma 4.19.
## 5. Implementation
In this section we describe an effective implementation of the proposed encoding in \(\mathsf{Haskell}\)2.
Footnote 2: The code can be accessed at [https://github.com/hmelgra/reversible-ccs-as-nets](https://github.com/hmelgra/reversible-ccs-as-nets).
### Representation of infinite nets
When working with an infinite data structure, a pivotal aspect is devising an efficient strategy to traverse the pertinent section of the structure. In our specific scenario, we prioritise the capability to identify and execute enabled transitions within a (potentially infinite) net. Therefore, our main objective is to identify those transitions that are enabled at a given marking. For this purpose, we adopt a representation of infinite nets that facilitates obtaining a truncated version of the net that contains all the enabled transitions in a given marking. To maintain simplicity, we avoid explicitly representing the flow relation as a set of pairs. Instead, we associate each transition with its preset (input places) and postset (output places). Consequently, we rely on the following instrumental datatype to represent transitions.
```
-- Each transition consists of a name, a preset and a postset
data Transition t s = Transition
  { trName :: t
  , trPre  :: [s]
  , trPost :: [s]
  }
```
The parameters t and s represent the types of the names of transitions and places, respectively. In this representation, a transition is defined by its name and two lists of places, corresponding to its pre and postset.
Then, the datatype for nets is as follows:
```
data Net s t = Net
  { netPlaces      :: [s] -> [s]
  , netTransitions :: [s] -> [Transition t s]
  , netMarking     :: [s]
  }
```
The components of a net include a marking, denoted as netMarking, which is essentially a set of places. Additionally, there are two functions, netPlaces and netTransitions, which map every marking to a set of places and transitions, respectively, of a truncated, finite version of the net. This truncated net includes all the transitions from the potentially infinite net that are enabled in the given marking.
**Example 5.1**.: Consider the infinite net \(N\) depicted in fig. 13a. One potential Haskell definition for \(N\) could be nAt[0], utilising the function nAt given in fig. 13b. This function takes a marking of type [Int] and returns a net with place names represented as integers and transitions as strings, i.e., of type Net Int String. The net's definition relies on the functions p and t, which determine the truncation of the net corresponding to a given marking. It is important to note that, for a specific marking m, any enabled transition \(t\) in m should satisfy the conditions \({}^{\bullet}t=\{i-1\}\) and \(t^{\bullet}=\{i\}\), where \(0<i\leq m\), and \(m\) is the maximum integer in m. Therefore, the function p, which maps markings to sets of places, is defined as follows:
* For the empty marking, it returns an empty set of places since no transitions are enabled in the empty marking.
* For a non-empty marking m, it generates a list containing all integers in the range from 0 to the maximum value in m plus one.
Similarly, the function t creates a list of transitions, encompassing all those among the places in p m. These transitions are defined in such a way that [i - 1] represents its preset, and [i] represents its postset. The name of the \(i\)-th transition is denoted by \(i\) occurrences of 'a'.
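Since fig. 13b is not reproduced here, the following is a minimal sketch of how nAt could be written from the description above; the concrete layout of the original definition may differ, so treat it as an assumption rather than the actual listing.

```
-- A sketch of nAt reconstructed from the textual description (assumed, not
-- the original listing): p yields the places of the truncation, t its
-- transitions, and the i-th transition is named by i occurrences of 'a'.
nAt :: [Int] -> Net Int String
nAt m = Net p t m
  where
    p [] = []                      -- no places for the empty marking
    p m' = [0 .. maximum m' + 1]   -- all places up to the maximum plus one
    t m' = [ Transition (replicate i 'a') [i - 1] [i]
           | i <- drop 1 (p m') ]
```

With this sketch, places (nAt [0]) indeed returns [0,1] and places (nAt [1,3]) gives [0,1,2,3,4], matching the examples below.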
The auxiliary functions places and transitions, defined below, allow us to respectively retrieve the set of places and transitions from the truncation of net for its marking. For instance, places (nAt[0]) returns [0,1], and places (nAt[1,3]) gives [0,1,2,3,4].
Figure 13. The Haskell representation of a simple infinite net \(N\)
```
places :: Net s t -> [s]
places n = netPlaces n $ netMarking n

transitions :: Net s t -> [Transition t s]
transitions n = netTransitions n $ netMarking n
```
Analogously, we rely on isTransition :: Eq t \(\Rightarrow\) t \(\rightarrow\) Net s t \(\rightarrow\) Bool to check that a given transition appears in the truncation of the net n. Then, the following predicate isEnabled allows us to check whether a given transition is enabled on a net.
```
isEnabled :: (Eq s, Eq t) => t -> Net s t -> Bool
isEnabled t n
  | isTransition t n = all (`elem` netMarking n) (pre t n)
  | otherwise        = False
```
The guard isTransition t n simply checks that t appears in the truncation of the net n. In such case, a transition t is enabled if all elements in its preset appear in the marking of the net. Otherwise, the transition is not enabled.
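The definitions of isTransition and of the auxiliary functions pre and post (used by isEnabled and by fire below) are not shown in the paper; a plausible minimal sketch, under the assumption that they simply look transitions up by name in the truncation, is the following.

```
-- Assumed helper definitions (not part of the original listing): look a
-- transition up by name in the truncated net; pre/post return its preset
-- and postset, isTransition checks mere membership.
isTransition :: Eq t => t -> Net s t -> Bool
isTransition t n = any ((== t) . trName) (transitions n)

pre, post :: Eq t => t -> Net s t -> [s]
pre  t n = concat [ trPre  tr | tr <- transitions n, trName tr == t ]
post t n = concat [ trPost tr | tr <- transitions n, trName tr == t ]
```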
The firing of a transition straightforwardly changes the marking of the net as expected, i.e., by removing the preset of the transition and by adding the postset of the transition:
```
fire :: (Eq s, Eq t) => t -> Net s t -> Net s t
fire t n@(Net ps ts m)
  | isEnabled t n = Net ps ts ((m L.\\ pre t n) ++ post t n)
  | otherwise     = error "transition_not_enabled"
```
Note that the firing generates an error if the transition is not enabled.
### Representing CCS processes
The datatype for representing CCS actions is straightforwardly defined as follows:
```
data Action a
  = In a    -- Input
  | Out a   -- Output
  | Tau     -- Internal action
```
Note that the datatype is parametric with respect to the type a of action names. The binary predicate dual (shown below) tests whether two actions are dual, i.e., one is an input and the other is an output performed over the same channel.
```
dual :: Eq a => Action a -> Action a -> Bool
dual (Out x) (In y)  = x == y
dual (In x)  (Out y) = x == y
dual _       _       = False
```
The datatype for representing CCS processes is as follows.
```
data CCS a
  = (Action a) :: (CCS a)   -- Prefix
  | (CCS a) :| (CCS a)      -- Parallel
  | (CCS a) :+ (CCS a)      -- Choice
  | (CCS a) :\ a            -- Restriction
  | Nil                     -- Ended process
  | Var String              -- Process variable
  | Rec String (CCS a)      -- Recursive process
```
The constructors are straightforward. For instance, CCS Char stands for the type of CCS processes whose channel names are characters. Then, the process \(a.a\parallel\overline{a}+b\) in Figure 10 is defined as
```
ccs :: CCS Char
ccs = (In 'a' :: Out 'a' :: Nil) :| ((Out 'a' :: Nil) :+ (In 'b' :: Nil))
```
We highlight that the datatype CCS includes constructors for the finite definition of infinite processes, i.e., Var for a process variable and Rec for a recursive definition. This choice is down to the facts that (i) our encoding uses CCS processes as the names of the elements of the generated nets; and (ii) the operational semantics of nets is defined under the assumption that names can be effectively compared (see details below). In order to have an equality test for infinite terms, we opted for a finite representation. Hence, the infinite CCS process consisting of an infinite sequence of inputs over the channel \(a\) can be defined as follows
```
ccs' :: CCS Char
ccs' = Rec "X" (In 'a' :: Var "X")
```
When dealing with the finite representation of infinite processes, we need the usual unfolding operation, which is defined in terms of the substitution of a process variable by a process. Substitution is given by the following function
```
subs :: CCS a    -- process over which substitution is applied
     -> String   -- process variable to be substituted
     -> CCS a    -- replacement term
     -> CCS a
```
whose defining equations are standard and therefore omitted.
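For concreteness, a minimal sketch of what those equations could look like is given below; this is an assumption based on the standard definition of substitution, written with the paper's constructors (including its prefix operator ::), and not the actual code of the repository.

```
-- Assumed standard defining equations for subs (not taken from the repository):
-- substitute the process variable x by q, stopping at a Rec that rebinds x.
subs (a :: p)  x q = a :: subs p x q
subs (p :| r)  x q = subs p x q :| subs r x q
subs (p :+ r)  x q = subs p x q :+ subs r x q
subs (p :\ n)  x q = subs p x q :\ n
subs Nil       _ _ = Nil
subs (Var y)   x q = if y == x then q else Var y
subs (Rec y p) x q = if y == x then Rec y p else Rec y (subs p x q)
```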
The unfold function is as follows.
```
unfold :: CCS a -> CCS a
unfold (Rec x p) = subs p x (Rec x p)
unfold p         = p
```
The function unfold will be used in the definition of the encoding.
Although we rely on the finite representation of CCS processes, we remark that the implementation of the encoding associates **infinite** nets to recursive CCS processes.
### Implementation of the encoding
According to the encoding introduced in section 4, the names of the places and transitions of the obtained nets are (possibly) decorated CCS processes. We rely on the following datatypes introducing constructors for the names of places and transitions.
```
{-- Place's names --}
data PlaceNames a
  = Proc (CCS a)                          -- CCS process
  | PKey (Action a)                       -- key for an action
  | PPref (Action a) (PlaceNames a)       -- prefixed by an executed action
  | PParLeft (PlaceNames a)               -- on the left of a parallel operator
  | PParRight (PlaceNames a)              -- on the right of a parallel operator
  | PSync (TransNames a) (TransNames a)   -- key for a synchronisation
  | PPlusLeft (PlaceNames a)              -- on the left of a sum operator
  | PPlusRight (PlaceNames a)             -- on the right of a sum operator
  | PRest (PlaceNames a) a                -- under restriction
  deriving (Eq, Ord)

{-- Transition's names --}
data TransNames a
  = Act (Action a)                        -- CCS action
  | TPref (Action a) (TransNames a)       -- prefixed by an executed action
  | TParLeft (TransNames a)               -- on the left of a parallel operator
  | TParRight (TransNames a)              -- on the right of a parallel operator
  | TSync (TransNames a) (TransNames a)   -- a synchronisation
  | TPlusLeft (TransNames a)              -- on the left of a sum operator
  | TPlusRight (TransNames a)             -- on the right of a sum operator
  | TRest (TransNames a) a                -- under restriction
  deriving (Eq, Ord)
```
The above definitions are in one-to-one correspondence with the names introduced by the encoding of the previous Section, and self-explanatory.
We will use the predicate isKey on place names, which determines whether a place name is a key, i.e., either PKey or PSync (its omitted definition is straightforward).
```
isKey :: PlaceNames a -> Bool
```
The following function
```
label :: TransNames a -> Action a
```
allows us to recover the label associated with a transition.
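Both omitted definitions are straightforward; a possible sketch is given below. The individual clauses, in particular the label assigned to a synchronisation, are assumptions rather than the repository's code.

```
-- Assumed definitions (not shown in the paper).
isKey :: PlaceNames a -> Bool
isKey (PKey _)    = True
isKey (PSync _ _) = True
isKey _           = False

label :: TransNames a -> Action a
label (Act a)        = a
label (TPref _ t)    = label t
label (TParLeft t)   = label t
label (TParRight t)  = label t
label (TSync _ _)    = Tau     -- a synchronisation is an internal action
label (TPlusLeft t)  = label t
label (TPlusRight t) = label t
label (TRest t _)    = label t
```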
Then, the encoding function is given by
```
enc :: (Eq t) => CCS t -> Net (PlaceNames t) (TransNames t)
```
We now illustrate some of its representative defining equations. According to Definition 4.1, the encoding of the process 0 (here represented by Nil) produces a net consisting of just one marked place. We name that place Proc Nil, i.e., the CCS process 0.
```
enc Nil = Net (const [Proc Nil]) (const []) [Proc Nil]
```
The fact that the net is defined in terms of the constant functions const [Proc Nil] and const [] reflects that every finite truncation, independently from the given marking, consists of just one place Proc Nil and no transitions. The marking [Proc Nil] assigns one token to the unique place.
The encoding of a prefixed process follows Definition 4.2. Hence, the encoding of \(\alpha.P\) (written a ::p in the implementation) is built on top of the encoding of \(P\), i.e., the names of the places and the transitions appearing in the encoding of \(P\) are decorated with the prefix \(\hat{\alpha}\). We use PPref a for decorating a place name with the past of action a and similarly TPref a for a transition name. The following function (whose defining equations are omitted because they are uninteresting) is in charge of applying renamings to a net.
```
rename :: (s -> s') -> (t -> t') -> (s' -> Maybe s) -> Net s t -> Net s' t'
```
The first and second parameters correspond respectively to the renaming of places and transitions. The third one is instrumental for mapping a marking on the decorated names
to a marking of the encoding of \(P\), which is needed for computing a truncation. Then, the equation for the encoding of a ::p is as follows.
```
1  enc (a :: p) = Net s t [Proc (a :: p)]
2    where
3      Net aSp aTp amp = rename (PPref a) (TPref a) (unwrapPref a) $ enc p
4      s m = if null m then [] else [Proc (a :: p), PKey a] ++ aSp m
5      t m = if null m then [] else
6              Transition (Act a) [Proc (a :: p)] (PKey a : amp) : aTp m
```
Note that line 3 introduces the net corresponding to the encoding of p, with its elements suitably renamed. Then, the places and transitions of the (truncations of the) net are given by the defining equations of s and t. Besides the fact that they are empty for empty markings, their definitions mimic Definition 4.2. The encoding of p is extended with two places, one for the process (i.e., Proc (a :: p)) and one for the key (i.e., PKey a), and one transition of name Act a, whose preset is Proc (a :: p) and whose postset corresponds to the initial marking of the encoding of p, i.e., amp, and the new key PKey a.
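The function unwrapPref used in line 3 is not shown either; a plausible sketch, assumed here for illustration and not taken from the repository, simply strips the decoration introduced for the executed prefix:

```
-- Assumed definition of unwrapPref: map a decorated place name back to a
-- name of enc p, ignoring the freshly added places Proc (a :: p) and PKey a.
unwrapPref :: Action a -> PlaceNames a -> Maybe (PlaceNames a)
unwrapPref _ (PPref _ s) = Just s
unwrapPref _ _           = Nothing
```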
As for the illustrated cases, the remaining equations follow the corresponding definitions in Section 4.
### Reversing nets
Reversible nets are implemented as nets with tagged transitions: the tag Fwd stands for forward transitions and Bwd for reversing transitions. The corresponding data type is as follows.
```
data Directed a
  = Fwd a
  | Bwd a
  deriving (Eq, Ord)
```
Then, the following function rev takes a net and generates its reversible version.
```
rev :: Net s t -> Net s (Directed t)
rev (Net s t m) = Net s t' m
  where
    t' = foldr reverse [] . t

    reverse (Transition x y z) =
      (Transition (Fwd x) y z :) . (Transition (Bwd x) z y :)
```
Consider the net, denoted as Net s t m, which is translated into a new net with the same set of places and the same marking, represented as Net s t' m. The set of transitions t' in the new net is obtained by applying the following transformations to each transition Transition x y z from the original set t:
* Add a forward transition, denoted as Transition (Fwd x) y z, to tag each transition in t as forward.
* Add the corresponding reversing transition, denoted as Transition (Bwd x) z y, with preset and postset swapped, to obtain the bidirectional nature of the net.
### Simulation
The concepts introduced in the previous sections can now be effectively utilised to simulate the behavior of reversible CCS processes. To illustrate this, let us consider the definition of the infinite CCS process ccs below.
```
ccs1 :: CCS Int
ccs1 = Rec "X" (In 1 :: Var "X")

ccs2 :: CCS Int
ccs2 = Rec "X" ((Out 2 :: Var "X") :+ (Out 1 :: Nil))

ccs :: CCS Int
ccs = (ccs1 :| ccs2) :\ 1
```
This process is defined as the parallel composition of two infinite processes, where the shared name 1 is restricted.
To obtain the corresponding reversible net, we apply the encoding followed by the reversing function, represented as rev(enc ccs).
Using the functions that determine the enabled transitions of net and compute the firing of transitions, we can seamlessly implement a simulation function to replicate the behavior of the process.
```
simulate :: (Show s, Show t, Ord t, Eq s) => Net s t -> IO ()
```
Then, the evaluation of
```
simulate $ rev (enc ccs)
```
shows the set of enabled transitions of the obtained net, which are as follows.
```
Enabled transitions:
  1) →(|r:+l:2!)\1
  2) →(|l:1?*|r:+r:1!)\1
```
The name (|r:+l:2!)\1 of the first transition indicates that it corresponds to the output performed on channel 2 by the left branch (i.e., +l:) of the right hand of the parallel composition (i.e., |r:). Similarly, the symbol \(*\) in the name (|l:1?*|r:+r:1!)\1 indicates that the transition corresponds to a synchronisation between the input performed on channel 1 by the left hand side of the parallel composition (i.e., |l:) and the output on channel 1 performed by the right branch of the right hand side of the parallel composition (i.e., |r:+r:).
At this point, any of the two transitions can be fired. After firing the first one, the obtained set of enabled transitions is the following.
```
Enabled transitions:
  1) →(|r:+l:^2!.+l:2!)\1
  2) →(|l:1?*|r:+l:^2!.+r:1!)\1
  3) ←(|r:+l:2!)\1
```
The first two transitions mirror the ones originally enabled; however, their names indicate that actions on the right-hand side of the parallel composition causally depend on the preceding performed action (the prefix +l:^2!).
In addition to these two forward transitions, there is one reversing transition that undoes the previously executed action.
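For illustration, the following is a minimal sketch of how such an interactive simulate loop could be realised with the functions introduced above; it is an assumption made for presentation purposes, not the actual implementation of the repository.

```
-- A hypothetical simulate loop: list the enabled transitions, read the index
-- of the transition to fire, and recurse on the resulting net.
simulate :: (Show s, Show t, Ord t, Eq s) => Net s t -> IO ()
simulate n = do
  let enabled = filter (`isEnabled` n) (map trName (transitions n))
  if null enabled
    then putStrLn "No enabled transitions."
    else do
      putStrLn "Enabled transitions:"
      mapM_ print (zip [1 :: Int ..] enabled)
      i <- readLn
      simulate (fire (enabled !! (i - 1)) n)
```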
## 6. Conclusions and future works
In line with previous research, we have equipped a reversible process calculus with a non-sequential semantics by using one of the classical encodings of process calculi into nets. What emerges from the encoding is that the machinery needed to reverse a process was already present in it. Other approaches to address true concurrency in reversible calculi have been explored, for instance [1, 2], but we believe that our approach is somehow more natural.
The current results apply to RCCS, but we believe that the same encoding could be used to model CCSK processes. As a matter of fact, in CCSK the history information is stored directly in the process: executed prefixes are marked with communication keys, while in our encoding this is signalled by a token in _key_-places. For example, if we take the process \(P=a.Q\), in CCSK the process evolves into \(a[i].Q\), where the forward behaviour of the process is \(Q\) while the backward behaviour is represented by the marked prefix \(a[i]\). The same mechanism applies to synchronisations. If we take the process \(a.b.\mathbf{0}\parallel\overline{a}.\mathbf{0}\), the process can make a synchronisation followed by the \(b\) action and evolves to \(a[i].b[j].\mathbf{0}\parallel\overline{a}[i].\mathbf{0}\). In this way, the synchronisation on \(a\) cannot be undone unless the action \(b\) is undone first. By looking at how history information is kept in CCSK processes, it is clear that there is a tight correspondence between the marked prefixes, the key-places and the \(\hat{\cdot}\) decorations we have used in unravel nets. Also, in CCSK the process structure does not change, and the marking of the reversible net would correspond to the marked prefixes. The whole encoding and the machinery connected to it are left for future work.
Our result relies on unravel nets, which are able to represent _or_-causality. The consequence is that the same event may have different pasts. Unravel nets are naturally related to _bundle_ event structures [12, 13], where the dependencies are represented using _bundles_, namely finite subsets of conflicting events, and the bundle relation is usually written as \(X\mapsto e\). Starting from an unravel net \(\langle S,T,F,\mathfrak{m}\rangle\), and considering the transition \(t\in T\), the bundles representing the dependencies are \({}^{\bullet}s\mapsto t\) for each \(s\in{}^{\bullet}t\), and the conflict relation can be easily inferred from the semantic one definable on the unravel net. This result relies on the fact that in any unravel net, for each place \(s\), the transitions in \({}^{\bullet}s\) are pairwise conflicting. The _reversible_ bundle structures add to the bundle relation (defined also on the reversing events) a prevention relation, and the intuition behind this relation is the usual one: some events, possibly depending on the one to be reversed, are still present and they _prevent_ that event from being reversed. The problem here is that in an unravel net, differently from occurrence nets, it is not so easy to determine which transitions depend on the happening of a specific one, thus potentially preventing it from being reversed. An idea would be to consider all the transitions in \(s^{\bullet}\) for each \(s\in t^{\bullet}\), but it has to be carefully checked whether this is enough. Thus, which is the proper "reversible bundle event structure" corresponding to reversible unravel nets remains to be answered, though it is likely that the conditions to be posed on the prevention relation will be similar to the ones considered in [10, 11]. Once this step is also done, we will have the full correspondence between reversible process calculi and non-sequential models.
Another idea for future work would be to move from reversible CCS to reversible \(\pi\)-calculus [14, 15] by relying on the results of [1]. In [1] a truly concurrent semantics of \(\pi\)-calculus is given in the form of Petri nets with inhibitor arcs. We could exploit our previous results on reversibility and Petri nets with inhibitor arcs [15, 16] to obtain a truly concurrent semantics for reversible \(\pi\)-calculus in Petri nets with inhibitor
arcs. Alternatively, we could exploit the encoding of reversible \(\pi\)-calculus into rigid families (based on configuration structures), given in [10], and bring it to Petri nets.
|
2310.00107 | Linear classification methods for multivariate repeated measures data --
a simulation study | Researchers in the behavioral and social sciences often use linear
discriminant analysis (LDA) for predictions of group membership
(classification) and for identifying the variables most relevant to group
separation among a set of continuous correlated variables (description). In
this paper, we compare existing linear classification algorithms for
nonnormally distributed multivariate repeated measures data in a simulation
study based on Likert-type data. It is widely accepted that, as a multivariate
technique, LDA provides more accurate results by examining not only the
relationship between the independent and dependent variables but also the
relationships within the independent variables themselves. In educational and
psychological research and other disciplines, longitudinal data are often
collected which provide additional temporal information. However, linear
classification methods for repeated measures data are rarely discussed in the
literature despite these potential applications. These methods are more
sensitive to actual group differences by taking the complex correlations
between time points and variables into account, when compared to analyzing the
data at each time point separately. Moreover, data in the behavioral and social
sciences rarely fulfill the multivariate normality assumption, so we consider
techniques that additionally do not require multivariate normality. The results
show that methods which include multivariate outlier removal before parameter
estimation as well as robust parameter estimation using generalized estimating
equations (GEE) perform better than the standard repeated measures LDA which
assumes multivariate normality. The results of the longitudinal support vector
machine (SVM) were not competitive. | Ricarda Graf, Marina Zeldovich, Sarah Friedrich | 2023-09-29T19:38:34Z | http://arxiv.org/abs/2310.00107v1 | # Linear classification methods for multivariate repeated measures data - a simulation study
###### Abstract
Researchers in the behavioral and social sciences often use linear discriminant analysis (LDA) for predictions of group membership (classification) and for identifying the variables most relevant to group separation among a set of continuous correlated variables (description). In this paper, we compare existing linear classification algorithms for non-normally distributed multivariate repeated measures data in a simulation study based on Likert-type data.
It is widely accepted that, as a multivariate technique, LDA provides more accurate results by examining not only the relationship between the independent and dependent variables but also the relationships within the independent variables themselves. In educational and psychological research and other disciplines, longitudinal data are often collected which provide additional temporal information. However, linear classification methods for repeated measures data are rarely discussed in the literature despite these potential applications. These methods are more sensitive to actual group differences by taking the complex correlations between time points and variables into account, when compared to analyzing the data at each time point separately. Moreover, data in the behavioral and social sciences rarely fulfill the multivariate normality assumption, so we consider techniques that additionally do not require multivariate normality.
The results show that methods which include multivariate outlier removal before parameter estimation as well as robust parameter estimation using generalized estimating equations (GEE) perform better than the standard repeated measures LDA which assumes multivariate normality. The results of the longitudinal support vector machine (SVM) were not competitive.
**Keywords:** Linear classification, Multivariate repeated measures data, Nonnormality, Robustness
## 1 Introduction
In psychology and the social sciences, discriminant analysis (DA) is traditionally applied to classification tasks in data with continuous variables since its invention by Fisher (1936). Its importance for the behavioral sciences has often been emphasized in reviews, tutorials and textbooks (Boedeker and Kearns, 2019; Sherry, 2006; Field, 2017; Huberty and Olejnik, 2006;
Fletcher et al 1978; Betz 1987; Garrett 1943). It has been applied to a large number of problems in experimental and applied psychology for class prediction as well as description (Rogge and Bradbury 1999; Langlois et al 2000; O'Brien et al 2009; Kumpulainen et al 2021; Shinba et al 2021; Stoyanov et al 2022; Aggarwala et al 2022).
Longitudinal data are collected in various disciplines since they provide additional information about temporal changes. Longitudinal studies in psychology and the social sciences (Jensen et al 2021; Banks et al 2021; McLanahan et al 2019) provide potential applications for repeated measures DA or alternative linear classification techniques. At the same time, textbooks discussing DA do not mention respective repeated measures approaches (Lix and Sajobi 2010).
Traditional classification approaches for continuous multivariate repeated measures data typically assume multivariate normality (Roy and Khattree 2005a,b; Tomasko et al 2010; Gupta 1986), but this assumption is rarely fulfilled by psychological datasets and hard to verify for small sample sizes (Delacre et al 2017; Rausch and Kelley 2009; Beaumont et al 2006; Neto et al 2016). Psychological data, especially those obtained using patient-reported instruments, are often characterized by skewness.
There are only few alternative approaches which relax or overcome the multivariate normality assumption and take the complex correlation structure between time points and variables into account. We consider the modifications of repeated measures LDA by Brobbey et al. (2021, 2022) that are more robust to deviations from multivariate normality. In their work, they compare the performance of the standard repeated measures LDA (which is based on the unstructured pooled covariance matrix estimate) once to its performance with preceding multivariate outlier removal using two different trimming algorithms by Rousseeuw (1985), and once to its performance when the covariance is estimated by a parsimonious Kronecker product structure using the generalized estimating equations (GEE) model (Inan 2015), respectively. In both cases, comparisons are made for a number of different simulation scenarios but data are always simulated assuming a parsimonious Kronecker structure for group means and covariance matrices, respectively, and correlations between variables that remain constant over time. Furthermore, the two robust methods are not compared among each other. We furthermore consider the generalization of the support vector machine classifier by Chen and Bowman (2011) to longitudinal data which uses a weighted combination of multivariate measurements taken at several time points as input. This longitudinal SVM, when used with a linear kernel, can also be used as a descriptive method, since it provides a weight vector corresponding to the variables' relative importance for separating the classes similar to Fisher discriminant function coefficients in DA.
In this paper, we are trying to mimic realistic datasets. We base simulations on unstructured means and covariance matrices estimated from psychometric reference datasets which differ in sample sizes, sample size ratios, class overlap, temporal variation and number of measurement occasions.
In our simulations, we compare the performance of the standard repeated measures LDA with the performance of repeated measures LDA based on GEE estimates by Brobbey et al. (2022), the repeated measures LDA when estimating the parsimonious Kronecker product covariance and the longitudinal SVM, each time either without or with preceding application of one of the two trimming algorithms as proposed by Brobbey (2021). In this way, we compare all potential combinations of these classification procedures applicable to linear classification problems of multivariate repeated measures data and evaluate their performance in data which deviate from multivariate normality. Furthermore, we evaluate the algorithms' performance in the reference data using a nonparametric bootstrap approach which estimates confidence intervals for the point estimates (Wahl et al 2016).
The paper is organized as follows. In Section 2, we describe the methods, i.e. the bootstrap approach proposed by Wahl et al. (2016) as well as the robust or nonparametric linear classification procedures, describe the reference datasets and the simulation setup. In Section 3, we present and discuss the results and provide recommendations based on the findings. Conclusions are made in Section 4.
## 2 Data and Methods
In this section, we will describe the traditional repeated measures LDA, its robust versions and the nonparametric longitudinal SVM for classification of nonnormally distributed repeated
measures data. We consider a situation with a categorical outcome variable \(i\in\{0,1\}\) (corresponding to two distinct groups) and \(n=n_{0}+n_{1}\) samples, where measurements of \(p\) variables are taken at \(t\) consecutive time points. We consider complete data, i.e. for each individual \(j\in\{1,\ldots,n_{i}\}\), each measurement \(l=1,\ldots,p\) is taken at each time point \(k=1,\ldots,t\).
Table 1 gives an overview of the considered methods. Furthermore, we describe the nonparametric bootstrap approach for estimation of the methods' performance in the original data (Wahl et al 2016), the simulation setup and the reference datasets.
### Multivariate repeated measures LDA
For LDA, the unknown parameters \(\mathbf{\mu}_{i}\in\mathds{R}^{pt}\), the group-specific mean vectors, and \(\mathbf{\Sigma}\in\mathds{R}^{pt\times pt}\), the common covariance matrix, need to be estimated from the data. The covariance matrix \(\mathbf{\Sigma}\) is assumed to be positive definite. Assuming that \(\mathbf{\Sigma}\) is unstructured, all distinct correlations between each pair of the \(p\) variables and each combination of the \(t\) time points must be estimated. If the dataset is small, the estimate \(\widehat{\mathbf{\Sigma}}\) may become singular, i.e. if \(n\leq pt\). In order to reduce the complexity of \(\mathbf{\Sigma}\) or to estimate \(\mathbf{\Sigma}\) more efficiently, a reduced number of parameters can be considered by assuming, for example, a Kronecker product structure \(\mathbf{\Sigma}=\mathbf{\Sigma}_{t\times t}\otimes\mathbf{\Sigma}_{p\times p}\). Here, \(\mathbf{\Sigma}_{t\times t}\in\mathds{R}^{t\times t}\) comprises the correlations between the \(t\) time points and \(\mathbf{\Sigma}_{p\times p}\in\mathds{R}^{p\times p}\) comprises the correlations between the \(p\) variables. The number of unknown parameters reduces from \((pt(pt+1)/2)\) for an unstructured covariance matrix to \(p(p+1)/2+t(t+1)/2\) for a Kronecker product covariance matrix (Naik and Rao 2001). It can be estimated by the flip-flop algorithm, which gives maximum likelihood estimates of \(\mathbf{\Sigma}_{t\times t}\) and \(\mathbf{\Sigma}_{p\times p}\)(Lu and Zimmerman 2005). The flip-flop algorithm is suitable in case the entries in the vector of observations \(\mathbf{x}\in\mathds{R}^{pt}\) can be separated with respect to two factors, which are the time points and variables in case of multivariate longitudinal data.
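As a concrete illustration of this parameter reduction, using the dimensions of our reference data (\(p=4\) variables measured at \(t=2\) or \(t=4\) time points), we obtain
\[
\frac{pt(pt+1)}{2}=36\ (t=2)\quad\text{or}\quad 136\ (t=4)
\qquad\text{versus}\qquad
\frac{p(p+1)}{2}+\frac{t(t+1)}{2}=13\ (t=2)\quad\text{or}\quad 20\ (t=4)
\]
parameters for the unstructured and the Kronecker product covariance matrix, respectively.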
Brobbey et al. (2021, 2022) developed two approaches for robust LDA based on the Kronecker product estimate of the covariance matrix that will be described in the following.
The LDA classification rule states that a new observation \(\mathbf{x}\in\mathds{R}^{pt}\) is assigned to class 0 if
\[\left(\mathbf{x}-\frac{\mathbf{\mu}_{0}+\mathbf{\mu}_{1}}{2}\right)^{T}\mathbf{\Sigma}^{- 1}(\mathbf{\mu}_{0}-\mathbf{\mu}_{1})>\log\left(\frac{\pi_{1}}{\pi_{0}}\right)\]
where \(\pi_{i},i\in\{0,1\}\), is the prior probability of class \(i\), and \(\mathbf{\Sigma}^{-1}\) can be replaced by \(\mathbf{\Sigma}_{t\times t}^{-1}\otimes\mathbf{\Sigma}_{p\times p}^{-1}\)(Brobbey 2021).
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt}} \hline \hline
Linear classification method & Description \\ \hline
Repeated measures linear discriminant analysis (LDA) & Parametric method depending on estimates of the group means and common covariance matrix \\
1) standard/traditional & (unstructured) pooled covariance matrix \\
2) robust & a) (parsimonious) Kronecker product covariance estimated by flip-flop algorithm \\
 & b) (unstructured) covariance matrix estimated using the joint Generalized Estimating Equations model \\ \hline
Longitudinal Support Vector Machine (SVM) using a linear kernel & Nonparametric method \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Overview of the considered linear classification methods for nonnormally distributed multivariate repeated measures data. The performance of each classification method is estimated either without or in combination with preceding multivariate outlier removal (using the Minimum Volume Ellipsoid (MVE) or the Minimum Covariance Determinant (MCD) algorithm, respectively).
## 2.1.1 Robust trimmed likelihood LDA for multivariate repeated measures data
The rationale behind robust trimmed likelihood LDA for multivariate repeated measures data (Brobbey, 2021) is to use more robust estimators of the sample mean and covariance matrix in order to increase the accuracy of LDA predictions. Many estimators of these sample statistics are particularly prone to outliers, which are hard to detect in multivariate data with \(p>2\) variables. A popular measure of robustness, the finite sample breakdown point by Donoho (1982) and Donoho and Huber (1983), is the smallest number or fraction of extremely small or large values that must be added to the original sample that will result in an arbitrarily large value of the statistic. While many estimators of multivariate location and scatter break down when adding \(n/(p+1)\) outliers (Donoho, 1982), estimators based on the Minimum Volume Ellipsoid (MVE) and Minimum Covariance Determinant (MCD) algorithms (Rousseeuw, 1985) have a substantially higher break-down point of \((\lfloor n/2\rfloor-p+1)/n\)(Woodruff and Rocke, 1993; Rousseeuw and Driessen, 1999).
The high-breakdown linear discriminant analysis (Hawkins and McLachlan, 1997) for cross-sectional data is also based on the MCD algorithm and has already been implemented in the R package rrcov(Todorov, 2022).
Robust trimmed likelihood LDA for multivariate repeated measures data can also be used as a supporting analysis, showing that the results of the usual analysis are not severely affected by outliers.
The MCD is statistically more efficient than the MVE algorithm because it is asymptotically normal (Butler et al, 1993), its distances are more precise, i.e. it is more capable of detecting outliers (Rousseeuw and Driessen, 1999). The MCD algorithm takes subsets of size \((n+p+1)/2\leq h\leq n\) of the dataset (for \(h>p\)) and determines the particular subset of \(h\) observations out of the \(\binom{n}{h}\) possible subsets for which the determinant of the sample covariance \(\widehat{\mathbf{\Sigma}}\) becomes minimal. The MVE algorithm chooses the subset of \(h\) observations for which the ellipsoid containing all \(h\) data points becomes minimal.
Brobbey (2021) suggests to estimate the class means \(\mathbf{\mu}_{0}\) and \(\mathbf{\mu}_{1}\) as well as the common covariance matrix \(\mathbf{\Sigma}\) in the reduced dataset derived after applying the MCD or MVE algorithm, respectively. She furthermore suggests to estimate the Kronecker product structure of the covariance matrix since it is more parsimonious than the unstructured equivalent, which may not be estimable for small sample sizes. We apply both versions, where we once estimate the unstructured pooled covariance matrix
\[\widehat{\mathbf{\Sigma}}=\frac{(n_{0}-1)\widehat{\mathbf{\Sigma}}_{0}+(n_{1}-1) \widehat{\mathbf{\Sigma}}_{1}}{(n_{0}-1)+(n_{1}-1)}\]
and once the Kronecker product covariance \(\widehat{\mathbf{\Sigma}}=\widehat{\mathbf{\Sigma}}_{t\times t}\otimes\widehat{\mathbf{ \Sigma}}_{p\times p}\), where \(\widehat{\mathbf{\Sigma}}_{t\times t}\) and \(\widehat{\mathbf{\Sigma}}_{p\times p}\) are the pooled covariances between the \(t\) time points and \(p\) variables, respectively. The flip-flop algorithm (Lu and Zimmerman, 2005) is used to estimate \(\widehat{\mathbf{\Sigma}}_{t\times t}^{\text{\normalsize}}\) and \(\widehat{\mathbf{\Sigma}}_{p\times p}^{i},i\in\{0,1\}\) from the data.
We also apply the MVE and MCD algorithm, respectively, to the data when using the other linear classification methods described in the following sections, which has not been done before.
### Generalized estimating equations (GEE) discriminant analysis for repeated measures data
Joint generalized estimating equations (GEEs) are another possibility to derive more robust estimates of the sample means and covariance matrix from multivariate longitudinal data (Brobbey et al, 2022; Inan, 2015). GEEs provide population-level parameter estimates, which are consistent and asymptotically normally distributed even in case of misspecified working correlation structures of the outcome variables. The covariance matrix is estimated by a robust sandwich estimator (Hardin and Hilbe, 2013). Brobbey et al. (2022) proposed the use of GEEs for multivariate repeated measures data (Inan, 2015) in the context of repeated measures LDA. The population-level estimates of the GEE model are plugged into the repeated measures LDA classification rule. For parsimony, the joint GEE model by Inan (2015) uses a decomposition of the working correlation matrix into a \(t\times t\) within- and a \(p\times p\) between-multivariate response
correlation matrix through the Kronecker product.
We fitted the joint GEE model by Inan (2015) to the data of each group \(i\in\{0,1\}\) to obtain the class-specific means and covariance matrix estimates, which we subsequently pooled to obtain the common covariance matrix of the entire dataset. We drop the class index \(i\) here for better readability.
The joint GEE model estimates parameters \(\mathbf{\beta}_{l}\in\mathds{R}^{s_{l}+1}\), specific to each variable \(\mathbf{x}_{jl}\in\mathds{R}^{t},l\in\{1,\ldots,p\},j\in\{1,\ldots,n_{i}\}\). Although the measurements of each variable \(\mathbf{x}_{jl}\in\mathds{R}^{t}\) can have their own set of \(s_{l}\) covariates (Lipsitz et al, 2009), in our case, time is the only covariate for all \(p\) variables, i.e. \(s_{l}=1,l=1,\ldots,p\).
In the context of repeated measures LDA, the vector \(\mathbf{x}_{j}\in\mathds{R}^{pt}\) of repeated measurements represents the continuous outcome variables and the measurement occasion \(k\in\{1,\ldots,t\}\) represents the categorical independent variable. In this case, the independent variables are given as a \(pt\times 2p\) block diagonal matrix \(\mathbf{Z}_{j}=\mathrm{diag}(\mathbf{Z}_{jl}:1\leq l\leq p)\), where \(\mathbf{Z}_{jl}=(\mathbf{1}_{t}\ \mathbf{z}_{j1})=(\mathbf{1}_{t}\ (1,\ldots,t)^{T})\in \mathds{R}^{t\times 2}\) is the matrix of covariates of the \(l\)th outcome variable and identical for all \(p\) variables.
The GEE does not require the complete specification of the distribution of repeated measurements \(\mathbf{x}_{j}\) but only the correct specification of the first two moments and the link function connecting the covariates and marginal means (Lipsitz et al, 2009; Wang, 2014):
\[\mathrm{E}(\mathbf{x}_{j}) =\mathbf{\mu}_{j}=g^{-1}(\mathbf{Z}_{j}\mathbf{\beta})\] \[\mathrm{Var}(\mathbf{x}_{j}) =\mathbf{\Sigma}_{j}=(\sigma_{jlk})_{\begin{subarray}{c}l=1,\ldots, p\\ k=1,\ldots,t\end{subarray}}\]
where \(g\) is the link function, \(\mathbf{\beta}=(\mathbf{\beta}_{1},\ldots,\mathbf{\beta}_{p})\). We chose the identity link function for all \(p\) variables, i.e. \(\mathbf{\mu}_{j}=\mathbf{Z}_{j}\mathbf{\beta}\) and assumed an approximate Gaussian distribution as the marginal distribution of each \(\mathbf{x}_{jl},l=1,\ldots,p\).
For \(\mathbf{\beta}=(\mathbf{\beta}_{1},\ldots,\mathbf{\beta}_{p})\in\mathds{R}^{2p}\), and in case of no missing data, the GEE model is (Liang and Zeger, 1986):
\[U(\mathbf{\beta},\gamma,\rho)=\sum_{j=1}^{n_{i}}\mathbf{D}_{j}^{T}\mathbf{\Sigma}_{j} ^{-1}(\mathbf{x}_{j}-\mu_{j})=0\]
which can be solved with the Fisher scoring algorithm and where \(\mathbf{D}_{j}=\frac{\partial\mathbf{\mu}_{j}(\mathbf{\beta})}{\partial\mathbf{\beta}}\) is a block diagonal matrix of derivatives. The working covariance matrix \(\mathbf{\Sigma}_{j}\in\mathds{R}^{pt\times pt}\) in the joint GEE (Inan, 2015) is computed as
\[\mathbf{\Sigma}_{j} =\psi\cdot((\mathbf{A}^{1/2})^{T}(\mathbf{R}_{p\times p}(\gamma) \otimes\mathbf{R}_{t\times t}(\rho))\mathbf{A}^{1/2})\qquad\text{where}\] \[\mathbf{A} =\mathrm{diag}(\mathrm{Cov}(\mathbf{x}_{j}))\]
The correlation matrices \(\mathbf{R}_{p\times p}(\gamma)\) and \(\mathbf{R}_{t\times t}(\rho)\) may depend on additional parameters \(\gamma\) and \(\rho\), if they have a particular structure such as compound symmetry or autoregressive structure. Here, \(\mathbf{R}_{p\times p}(\gamma)\) is the \(p\times p\) correlation matrix of the \(p\) variables, \(\mathbf{R}_{t\times t}(\rho)\) is the \(t\times t\) correlation matrix of the \(t\) repeated measurement occasions, and \(\psi\) is a scale parameter. Liang and Zeger (1986) suggested replacing \(\mathbf{R}_{p\times p}(\gamma)\) and \(\mathbf{R}_{t\times t}(\rho)\) by the working correlation matrices and showed that the estimates \(\mathbf{\beta}\) are still consistent even for misspecified working correlations.
We assumed unstructured correlation matrices for \(\mathbf{R}_{p\times p}(\gamma)\) and \(\mathbf{R}_{t\times t}(\rho)\), respectively.
### Longitudinal Support Vector Machine
The original linear SVM for cross-sectional data and linearly separable classes (Vapnik, 1982) has been modified such that an overlap between the samples of both classes is to some extent allowed (Cortes and Vapnik, 1995) depending on the choice of the regularization parameter \(C\). Chen and Bowman (2011) generalized the SVM classifier for a single time point (cross-sectional data) such that it becomes applicable to longitudinal data. In their longitudinal SVM algorithm, temporal changes are modeled by considering a linear combination of the observations \(\mathbf{x}_{j}\) and a parameter vector \(\mathbf{\beta}=(1,\beta_{1},\ldots,\beta_{t-1})\), which represents the coefficients for each time point \(k\). Then, \(\widetilde{\mathbf{x}}_{j}=\mathbf{x}_{j1}+\beta_{1}\mathbf{x}_{j2}+\cdots+ \beta_{t-1}\mathbf{x}_{jt}\), are provided as input to the traditional SVM. Combining the \(p\) observations from all \(t\) time points in a single vector
assumes that the distances between time points are the same. The approach also assumes a fixed number of \(p\) observations per time point \(k\) (complete data) just as in case of LDA.
The Lagrange multipliers \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\in\mathds{R}^{n}\) and the temporal change parameters \(\boldsymbol{\beta}\in\mathds{R}^{t}\) are iteratively optimized using convex quadratic programming. Although this SVM classifier can also estimate nonlinear decision boundaries depending on the type of kernel matrix that is used, we apply a linear kernel in order to compare its performance to the other linear classifiers and since the absolute values of the weight vector \(\mathbf{w}\in\mathds{R}^{p}\) estimated by the SVM can be interpreted as variable importance in case of a linear kernel matrix. A summary of the longitudinal SVM algorithm using the linear soft-margin approach can be found in the supplement S1.
Although the SVM algorithm does not make any distributional assumptions, the regularization parameter \(C\) needs to be optimized. We use the SSVMP algorithm (Sentelle et al, 2016), a modification of the SVMpath algorithm (Hastie et al, 2004) to find the optimal value of \(C\). The SSVMP algorithm is applicable for unequal class sizes and semidefinite kernel matrices in contrast to the original version by Hastie et al. (2004). The path algorithm finds the optimal value \(\lambda=1/C\) with high accuracy, since it considers all possible values of \(C\). At the same time, it is computationally efficient compared to the generally recommended grid search. It has been shown that the choice of \(C\) can be critical for the generalizability of the SVM model (Hastie et al, 2004).
## 2.3 Nonparametric bootstrap approach
In order to obtain performance estimates in the reference data, we used a nonparametric bootstrap approach for point estimates by Wahl et al. (2016), which is an extension of the algorithm by Jiang et al. (2008) and based on the.632+ bootstrap method (Efron and Tibshirani, 1997). It allows quantifying the uncertainty of point estimates by constructing confidence intervals. The.632+ bootstrap estimate (\(\hat{\theta}^{.632+}\)) of the performance measure of interest is computed as a weighted average of the apparent performance \(\hat{\theta}^{orig,orig}\) (training and test data given by the original dataset) and the "out-of-bag" (OOB) performance \(\hat{\theta}^{bootstrap,OOB}\) (training data given by the bootstrap dataset, randomly sampled with replacement, and test data given by the samples not present in the bootstrap dataset). The formula is:
\[\hat{\theta}^{.632+}=(1-w)\cdot\hat{\theta}^{orig,orig}+w\cdot\hat{\theta}^{ bootstrap,OOB}\]
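In the original .632+ method of Efron and Tibshirani (1997), the weight \(w\) is obtained from the relative overfitting rate; a sketch of this standard definition (which we assume here, the exact form used by Wahl et al. (2016) may differ) is
\[
w=\frac{0.632}{1-0.368\,\hat{R}},\qquad \hat{R}=\frac{\hat{\theta}^{bootstrap,OOB}-\hat{\theta}^{orig,orig}}{\hat{\gamma}-\hat{\theta}^{orig,orig}},
\]
where \(\hat{\gamma}\) denotes the no-information performance and \(\hat{R}\) is truncated to the interval \([0,1]\).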
Then each bootstrap dataset is assigned a weight \(w_{b}=\hat{\theta}^{bootstrap,bootstrap}-\hat{\theta}^{orig,orig}\), where \(\hat{\theta}^{bootstrap,bootstrap}\) is the value of the performance measure, when the bootstrap dataset is used as training as well as test dataset. The \(\frac{\alpha}{2}\) and \(1-\frac{\alpha}{2}\) percentiles of the empirical distribution of these weights, \(\xi_{\frac{\alpha}{2}}\) and \(\xi_{1-\frac{\alpha}{2}}\), give the confidence intervals of \(\hat{\theta}^{.632+}\):
\[[\hat{\theta}^{.632+}-\xi_{1-\frac{\alpha}{2}},\hat{\theta}^{.632+}+\xi_{ \frac{\alpha}{2}}]\]
The nonparametric bootstrap assumes that the observations \(\mathbf{x}_{ij},i\in\{0,1\},j=1,\ldots,n_{i}\) are independent.
## 2.4 Reference datasets
Two datasets with different numbers of repeated measurement occasions are used as reference datasets. Each one comprises measurements of four continuous predictor variables which are measured at two time points (CORE-OM dataset) and four time points (CASP-19 dataset), respectively. The binary outcome variable represents the group (\(y\in\{0,1\}\)). Both datasets consist of Likert-type data from psychological questionnaires, measured on a 5-point and 4-point Likert scale, respectively.
We created reference datasets from these data in order to compare the methods' performance in different (almost) realistic settings, not in order to draw any substantive conclusions about the data themselves.
The first dataset (Zeldovich, 2018) is a self-report questionnaire of psychological distress abbreviated to CORE-OM (Clinical Outcomes in Routine Evaluation-Outcome Measure). It assesses the progress of psychological or psychotherapeutic treatment using four domains (subjective
well-being, problems/symptoms, life functioning, risk/harm) measured on a 5-point Likert scale (0: not at all, 4: most or all the time). We created a balanced and an unbalanced dataset by choosing two different variables available in the dataset to form groups. The balanced dataset results from splitting the observations at the median age to form groups of younger (\(n_{0}=93\)) and older participants (\(n_{1}=93\)), denoted as "dataset 1" in the following. The unbalanced dataset uses the binary variable hospitalisation as group variable and is denoted as "dataset 2" in the following. Non-hospitalised participants represent group 0 (\(n_{0}=42\)) and hospitalised ones group 1 (\(n_{1}=142\)).
The second dataset is a self-report questionnaire of quality of life for adults aged 60 and older abbreviated to CASP-19. The dataset on CASP-19 is derived from waves 2, 3, 4, and 5 of The English Longitudinal Study of Ageing (ELSA) (Banks et al, 2021). The CASP-19 questionnaire comprises the four subdomains control, autonomy, self-realization and pleasure measured on a 4-point Likert scale (0: often, 3: never). Loneliness as one of the factors affecting quality of life (Talarska et al, 2018) is chosen as the group variable. For this purpose, the sample was dichotomized at a score value of three determined from two questions related to loneliness ("Old age is a time of loneliness", "As I get older, I expect to become more lonely"), answered on a 5-point Likert scale (1: strongly agree, 5: strongly disagree) by the participants during wave 2. Persons who feel less lonely represent group 0 (\(n_{0}=254\)) and those who feel more lonely represent group 1 (\(n_{1}=1682\)). Since the group differences were nevertheless marginal (similar to dataset 1), we modified these data in order to increase them. Group 1 remained unchanged, but for group 0 only those observations, for which the variables "control" and "self-realization" took on values above their respective 0.51 percentile remained. The dataset is referred to as "dataset 3" in the following.
Answers to questions of each subdomain in the Likert-type questionnaires are summarized in a score, where higher scores indicate a higher level of distress (dataset 1 and 2), and a better quality of life (dataset 3), respectively. Analyses and data simulations are based on these scores. Boxplots showing the distribution of these scores computed from the reference data are presented in Figure 1. For dataset 1, boxes of both groups are much more comparable than for dataset 2, where the groups are more distinct. For dataset 3, the groups are also distinct for each variable but there is only little temporal variation despite four instead of two measurement occasions compared to dataset 1 and 2.
Table 2 shows that our reference datasets substantially differ from multivariate normality, i.e. \(p\)-values of the \(\chi^{2}\) test corresponding to the Mardia measure of skewness are all significant.
\begin{table}
\begin{tabular}{l c c c c} \hline & \(b_{1,p}\) & \(\chi^{2}\) test statistic & df & \(p\)-value \\ \hline _Dataset 1_ & 14.8 & 460.2 & 120 & 2.52E-41 \\ \hline _Dataset 2_ & 14.8 & 453.8 & 120 & 2.75E-40 \\ \hline _Dataset 3_ & 31.6 & 10199.8 & 816 & 0.0 \\ \hline \end{tabular}
\end{table}
Table 2: Mardia measure of multivariate skewness (\(b_{1,p}\)), value of the corresponding \(\chi^{2}\) test statistic with respective \(p\)-value for the reference data (\(\alpha\) = 0.05), i.e.
Dataset 1: CORE-OM dataset with group variable _age_ (\(n_{0}=93,n_{1}=93\)),
Dataset 2: CORE-OM dataset with group variable _hospitalisation_ (\(n_{0}=42,n_{1}=142\)),
Dataset 3: CASP-19 dataset with group variable _loneliness_ (\(n_{0}=254,n_{1}=1682\)).
Figure 1: Boxplots showing the variables’ distribution in the reference datasets:
(a) Dataset 1: CORE-OM dataset, group variable _age_ (\(n_{0}=n_{1}=93\))
(b) Dataset 2: CORE-OM dataset, group variable _hospitalisation_ (\(n_{0}=42,n_{1}=142\), non-hospitalised patients represent group 0 and hospitalised patients represent group 1)
(c) Dataset 3: CASP-19 dataset, group variable _loneliness_ (\(n_{0}=254,n_{1}=1682\), participants who feel less lonely represent group 0 and participants who feel more lonely represent group 1)
## 2.5 Simulation study approach and software
Our simulation study aims at mimicking the reference datasets. A brief overview of the steps in the simulation study is given in Figure 2. For each scenario, 2000 datasets are simulated. Data are simulated from the multivariate normal distribution (as a reference), from the multivariate truncated normal distribution which only takes on values within specified boundaries (similar to the scales in the reference data) and multivariate lognormally distributed data in order to include an extremely skewed distribution (overview in Table 3). Data are either not trimmed or trimmed using the MCD and the MVE algorithm, respectively, before applying the classification algorithms.
Sample sizes for the training data are chosen identical to the sample sizes of the original datasets. Sample sizes for the test data are always \(n_{0}=n_{1}=1000\) in order to decrease variation of the performance measure estimates.
Application of the linear SVM algorithm requires a data-preprocessing step and finding an optimal hyperparameter \(C\) which determines the maximum amount of overlap allowed between samples of both classes. Since the SVM algorithm relies on the Euclidean distance to determine the optimal decision boundary, data are preprocessed by standardization before applying the method. Machine-learning algorithms generally require the optimization of hyperparameters. We applied the simple SVM path (SSVMP) algorithm by Sentelle et al. (2016) as suggested by Chen and Bowman (2011) in order to determine the optimal regularization parameter \(C\). It is available as MATLAB code (Sentelle, 2015), which we rewrote in R.
The SSVMP algorithm ran into errors for the largest dataset 3. It did not give results for the vast majority of datasets in scenario 3 (either no convergence was reached after the maximum number of 100 iterations or the assumptions of the Cholesky decomposition originally incorporated in the algorithm were not fulfilled), thus results of the longitudinal SVM are not computed for dataset 3.
The longitudinal SVM algorithm requires to specify a maximum number of iterations used for finding the optimal separating hyperplane parameters. The iterative algorithm for optimization of the Lagrange multipliers \(\boldsymbol{\alpha}\) and temporal change parameters \(\boldsymbol{\beta}\) in the longitudinal SVM is repeated until the Euclidean distance between two consecutive estimates of \(\boldsymbol{\alpha}_{m}\) becomes less than 1E-08 or the maximum number of 100 iterative steps is reached. The number of times for which the longitudinal SVM algorithm converged in the different settings can be found in Supplementary Table S4.
The MVE and MCD algorithm cannot be applied if the variability in at least one variable is too low to determine unique quantiles. They both failed for the bootstrap approach using dataset 3 (Table 4) because there is hardly any variability for the variable "self-realization" in group 0. The flip-flop algorithm (Lu and Zimmerman, 2005) used by Brobbey (2021) for estimating the Kronecker product structure of the covariance matrix from the training data was iterated until the Frobenius norm of two consecutive Kronecker product covariance matrices became less than or equal to 1E-04, a proposed stopping criterion by Castaneda and Nossek (2014).
We used the following software for data simulations. We implemented the longitudinal SVM in R and used the R package Rcplex(Bravo et al, 2021), an R interface to linear and quadratic solvers of the IBM ILOG CPLEX Optimization Studio (IBM ILOG, 2021). We used the implementations of MVE and MCD algorithm from the R package MASS(Ripley et al, 2022), the joint GEE model as implemented in the R package JGEE(Inan, 2015), and implemented the version of the flip-flop algorithm in R as described in Lu and Zimmerman (2005). For simulation of multivariate normally, lognormally, truncated normally distributed data, we used the respective functions from the R packages MASS(Ripley et al, 2022), compositions(van den Boogaart et al, 2022), and tmvtnorm(Wilhelm and Manjunath, 2022), respectively. For the truncated normal distribution, the rejection method (default) was used.
We compared the methods' performance with respect to different measures of discrimination. These consider the similarity between true and predicted class labels. We chose predictive accuracy, the proportion of correctly predicted class labels, and the Youden index (Youden, 1950), which combines the sensitivity and specificity of the classification model in a single measure (Youden index = \(|\)Sensitivity + Specificity -1\(|\)). Recommendations based on these measures can differ a lot. Predictive accuracy of an algorithm may be high in data with highly unbalanced classes if the label of the larger class is predicted for all samples. In this case the Youden index will have the minimum value of zero. Therefore it is reasonable to consider both measures.
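As an illustration of this point: a classifier that assigns every sample to the larger class in a highly unbalanced dataset may reach a high accuracy, but has sensitivity 1 and specificity 0 (or vice versa), so that
\[
\text{Youden index}=|1+0-1|=0,
\]
whereas, for example, sensitivity 0.9 and specificity 0.6 yield \(|0.9+0.6-1|=0.5\).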
For visual assessment, summary ROC curves (Reitsma et al, 2005), which represent sensitivity and specificity estimates for all 2000 simulated datasets in combination, are computed using the R package mada(Doebler, 2020). They are based on a bivariate normal model of sensitivity and specificity, which is identical to the hierarchical summary ROC model by Rutter and Gatsonis (2001) when no covariates affecting either sensitivity or specificity are included (Harbord et al, 2007), and which is recommended for meta-analyses of test performances in the Cochrane Handbook (Cochrane Diagnostic Test Accuracy Working Group, 2011).
## 3 Results and discussion
### 3.1 Performance in the reference data
Figure 2 shows the ROC curves based on the \(\hat{\theta}^{.632+}\) bootstrap estimates of sensitivity and specificity for each reference dataset in order to provide a first visual impression of the algorithms' performance in the reference data. These estimates of sensitivity and specificity including their 95% confidence intervals can be found in Supplementary Table S2.
In the balanced scenario (dataset 1), the performance of all methods is very similar, and none of them can distinguish the classes very well. This could already be assumed from the boxplots (Figure 1), which largely overlap for dataset 1. In the unbalanced scenarios (dataset 2 and 3), the different extensions of LDA to repeated measures data clearly perform better than the longitudinal
Figure 2: ROC curves showing the algorithms’ discriminative performance, based on the \(\hat{\theta}^{.632+}\) bootstrap estimates of sensitivity and specificity.
(a) Dataset 1: CORE-OM dataset, group variable _age_ (\(n_{0}=n_{1}=93\))
(b) Dataset 2: CORE-OM dataset, group variable _hospitalisation_ (\(n_{0}=42,n_{1}=142\), non-hospitalised patients represent group 0 and hospitalised patients represent group 1)
(c) Dataset 3: CASP-19 dataset, group variable _loneliness_ (\(n_{0}=254,n_{1}=1682\), participants who feel less lonely represent group 0 and participants who feel more lonely represent group 1)
SVM, for which Chen and Bowman (2011) only demonstrated the performance for equal sample sizes. Considering the results, the method may not work as well for highly imbalanced data. Table 4 shows the \(\hat{\theta}^{\cdot 632+}\) bootstrap estimates for predictive accuracy and the Youden index with their respective 95% confidence intervals. In Dataset 1 (equal sample sizes), the methods' performance is very similar, and only the longitudinal SVM has a higher predictive accuracy compared to the standard LDA (LDA (\(\Sigma_{pooled}\))) with non-overlapping confidence intervals. For the LDA-based methods, performance slightly improves after multivariate outlier removal (application of the MVE or MCD algorithm, respectively). For Dataset 2 (highly imbalanced sample sizes), the most obvious result is the poor performance of the longitudinal SVM. In Dataset 3 (highly imbalanced but larger sample sizes than Dataset 2, less temporal variation), predictive accuracy and Youden Index are much lower for LDA based on GEE estimates compared to the other algorithms.
Results using the MVE and MCD algorithms could not be computed since these methods only work if unique quantiles can be determined, but the variable "self-realization" shows too little variability in group 0. The results of the longitudinal SVM are missing because the data often did not fulfil the assumptions of the Cholesky decomposition used in the algorithm or because the maximum number of 100 iterative steps was exceeded.
### 3.2 Performance in the simulated data
Summary ROC plots for all simulation scenarios are shown in Supplementary Figure S1. The summary ROC curves essentially show the better discriminative ability of the classification algorithms in Datasets 2 and 3, respectively, where (mean) measurements between the groups differ more than in Dataset 1.
Complete simulation results showing the means and standard errors of all performance measures can be found in the supplementary material (Supplementary Table S3). Since the focus is on comparing the methods' performance rather than on the exact numbers, we show a visual comparison of the methods' predictive accuracy (Figure 3). Since the results with respect to the Youden index are very similar, they are shown in Supplementary Figure S2.
Figure 3 shows the boxplots of predictive accuracy estimated in the 2000 test datasets. Notably, the standard repeated measures LDA algorithm (based on the usual pooled covariance estimate, \(\widehat{\mathbf{\Sigma}}_{pool}\)) does not perform best in any simulation scenario, although the difference is especially marginal for Dataset 1. For Dataset 1, the LDA based on the parsimonious Kronecker product covariance (\(\Sigma_{KP}\)) and LDA based on estimates of the joint GEE model LDA(GEE), respectively, perform slightly better. For Dataset 2 and 3, respectively, LDA(GEE) generally performs best with respect to predictive accuracy (and the Youden index, Supplementary Figure S2), although there are some outliers among the 2000 simulations with worse results. In the nonnormally distributed data (middle and right column), the advantage of multivariate outlier removal through the MVE and MCD algorithms becomes apparent, where the use of the MCD algorithm (in green) often results in higher predictive accuracy compared to the use of the MVE algorithm (in blue). The longitudinal SVM only works comparably well for Dataset 1 (small but balanced sample sizes).
The high computational times of the longitudinal SVM (as a nonparametric method) are another disadvantage (Table 5). The traditional LDA, which uses the pooled covariance estimate, is the least computationally intensive since it does not involve any iterative procedure.
Figure 3: Boxplots showing the distribution of the algorithms’ predictive accuracy estimated from 2000 simulated datasets for the multivariate normal (left), multivariate lognormal (center) and multivariate truncated normal distribution (right). Results with the highest median value are highlighted in darker colours.
### 3.3 Recommendations
In summary, the (nonparametric) longitudinal SVM does not seem recommendable due to its poor performance in data with unbalanced class sizes, its relatively high computational times, and the errors the algorithm may run into for some data. However, it may work well for equal class sizes (Chen and Bowman, 2011). Repeated measures LDA based on estimates of the joint GEE model (LDA(GEE)) usually performs best, whether or not the data are normally distributed. Also, results of the LDA(GEE) method are much less affected by multivariate outliers than those of the standard repeated measures LDA (LDA (\(\mathbf{\Sigma}_{pooled}\))) and the LDA using the more parsimonious Kronecker product covariance estimate (LDA (\(\mathbf{\Sigma}_{KP}\))). An additional advantage is that the method is already implemented in the R package JGEE(Inan, 2015) and can therefore easily be applied.
LDA (\(\mathbf{\Sigma}_{pooled}\)) or LDA (\(\mathbf{\Sigma}_{KP}\)) in combination with multivariate outlier removal may still perform better in specific cases. For these classification methods, the MCD algorithm in particular seems advantageous for prior outlier removal; it is also implemented in the R package MASS(Ripley et al, 2022).
## 4 Conclusions
Longitudinal studies are conducted in psychology and other disciplines. Data in psychology and the social sciences are often characterized by nonnormal distributions, especially skewness. LDA is widely applied as a standard technique in these fields, e.g. to questionnaire data where answers are measured on Likert scales, either for classification tasks or for the identification of variables most relevant to group separation. Repeated measures techniques are preferable for the analysis of data that are collected repeatedly over time compared to conducting several independent analyses per time point. We compared the performance of the robust repeated measures DA techniques proposed by Brobbey et al. (2021, 2022) and the longitudinal SVM by Chen and Bowman (2011) using multiple performance measures. We based these comparisons on real psychometric datasets which differ with respect to sample size, sample size ratio, class overlap, temporal variation and number of repeated measurement occasions. We thus considered additional scenarios to those in Brobbey et al. (2021, 2022), where Kronecker product structures of means and covariances and constant correlations of the variables over time were assumed. We also compared several robust methods with each other, in contrast to comparing one particular robust method to the standard method at a time. We included the longitudinal SVM because it is similar to repeated measures LDA in that both are linear classifiers for which variable weights can additionally be computed and temporal correlations are considered in the analysis. We did not consider extensions of other supervised machine learning algorithms for classification since they usually assume independence between time points (Ribeiro and Freitas, 2019) and do not offer an interpretation of variable weights as intuitive as that of the linear SVM. They may still be useful when data are to be grouped based on categorical variables, but traditionally scores based on Likert-type data are treated as continuous variables.
In order to raise awareness of the considered linear classification techniques and their potential application to psychometric data, we compared them in data simulations based on Likert-type data from psychological questionnaires. We computed point estimates of their performance with confidence intervals in the reference data using a nonparametric bootstrap approach and compared their performance in simulated data based on parameter estimates obtained from the reference datasets. We found that the repeated measures LDA based on parameter estimates from the joint GEE model developed by Brobbey et al. (2022) most often gives the best results. The MCD algorithm most often leads to better classification performance when the (original) data show some group mean difference (only partially or non-overlapping boxes in boxplots). Both methods have already been implemented. The results of one of the methods, or of both methods in combination, can be computed at least as an additional sensitivity analysis. Potential drawbacks of our study are the limited number of reference datasets considered for the method comparison and the single trimming parameter value (10%) used for outlier removal with the MVE and MCD algorithms. To date, no recommendations on the choice of the trimming parameter for multivariate data exist. Multiple values can be tried for the analysis of an actual dataset.
We followed the guidelines for neutral comparison studies by Weber et al. (2019) and the general design of simulation studies by Morris et al. (2019).
## Supplementary information
## S.1 Longitudinal Support Vector Machine
SVM is based on the structural risk minimization (SRM) principle, which is intended to minimize the expectation of the test error of a trained SVM model, called the expected risk \(R(a)\). Its value usually cannot be computed directly, but an upper bound is given by the sum of the empirical risk (mean training error) \(R_{emp}(a)\) and the so-called Vapnik-Chervonenkis confidence. Among a pre-defined set of functions, such as the set of linear functions, the one which yields the maximum margin between the samples of both classes is determined.
The upper bound of the expected risk \(R(a)\) holds with a probability of \(1-\eta\) and is defined as (Vapnik, 2010; Burges, 1998):
\[R(a) \leq R_{emp}(a)+\sqrt{\left(\frac{h(\log(2n/h)+1)-\log(\eta/4)}{n}\right)}\quad,\text{ where}\] \[R_{emp}(a)=\frac{1}{2n}\sum_{j=1}^{n}|y_{j}-f(\mathbf{x}_{j},a)|\]
and \(n\) denotes the number of training observations, \(a\) a specific support vector machine model, \(h\) the Vapnik-Chervonenkis dimension (which equals the number of input dimensions plus one for the linear SVM), and \(f(\mathbf{x}_{j},a)\) the class label predicted by model \(a\).
The SSVMP algorithm (Sentelle, 2015; Sentelle et al, 2016) optimizes the inverse of the regularization parameter, \(\lambda=1/C\). Starting with a high value of \(\lambda\) such that all samples lie within the margin of the SVM, it successively determines a strictly decreasing sequence of \(\lambda\) values for which the set of support vectors changes for each \(\lambda\) value, and it stops if no more observations are left inside of the margin (linearly separable case) or if the next \(\lambda\) value would be zero.
In the algorithm by Chen and Bowman (2011), the linear kernel matrix (or Gram matrix) \(\mathbf{G}_{m}\) is given by:
\[\mathbf{G}_{m}=\begin{pmatrix}\mathbf{G}_{m}^{11}&\ldots&\mathbf{G}_{m}^{1t}\\ \vdots&\ddots&\vdots\\ \mathbf{G}_{m}^{t1}&\ldots&\mathbf{G}_{m}^{tt}\end{pmatrix}=\begin{pmatrix} \mathbf{X}_{1}^{T}\mathbf{X}_{1}&\ldots&\mathbf{X}_{1}^{T}\mathbf{X}_{t}\\ \vdots&\ddots&\vdots\\ \mathbf{X}_{t}^{T}\mathbf{X}_{1}&\ldots&\mathbf{X}_{t}^{T}\mathbf{X}_{t}\end{pmatrix} \in\mathds{R}^{nt\times nt}\]
where
\[\mathbf{X}_{k}=\begin{pmatrix}y_{1}\mathbf{x}_{1k}\\ \vdots\\ y_{n}\mathbf{x}_{nk}\end{pmatrix}\in\mathds{R}^{n\times p},\quad k=1,\ldots,t.\]
The dual form of the convex quadratic program (QP) is given by:
\[\min_{\boldsymbol{\alpha}}\frac{1}{2}\boldsymbol{\alpha}_{m}^{T} \mathbf{G}_{m}\boldsymbol{\alpha}_{m}-\mathbf{1}^{T}\boldsymbol{\alpha}\] \[\text{s.t.}\quad C\geq\alpha_{j}\geq 0\quad\forall j=1, \ldots,n\qquad\qquad\sum_{k=1}^{t}\sum_{j=1}^{n}\boldsymbol{\alpha}_{m}(j+(k- 1)n)y_{j}=0\] \[\text{where}\] \[\boldsymbol{\alpha}_{m}=(\boldsymbol{\alpha},\beta_{1} \boldsymbol{\alpha},\ldots,\beta_{t-1}\boldsymbol{\alpha})^{T}\in\mathds{R}^{ tn\times 1}\]
and \(\boldsymbol{\alpha}_{m}^{T}\mathbf{G}_{m}\boldsymbol{\alpha}_{m}\) is a convex function in the Lagrange multipliers \(\boldsymbol{\alpha}\) and the temporal change parameters \(\boldsymbol{\beta}\). The parameters \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) can be optimized iteratively with respect to this
objective function:
1. Initialize \(\mathbf{\beta}=(1,\beta_{1},\ldots,\beta_{t-1})\).
2. Assuming the value of \(\mathbf{\beta}\) is known, \(\mathbf{\alpha}\) is optimized. The optimization problem becomes: \[\min_{\mathbf{\alpha}}\frac{1}{2}\mathbf{\alpha}^{T}\left(\sum_{k_{1}=1}^ {t}\sum_{k_{2}=1}^{t}\beta_{k_{1}}\beta_{k_{2}}\mathbf{G}_{m}^{k_{1}k_{2}} \right)\mathbf{\alpha}-\mathbf{1}^{T}\mathbf{\alpha}\] s.t. \[C\geq\alpha_{j}\geq 0\quad\forall j=1,\ldots,n \sum_{j=1}^{n}\alpha_{j}y_{j}=0\]
3. Assuming the value of \(\mathbf{\alpha}\) is known, \(\mathbf{\beta}\) is optimized. The optimization problem becomes: \[\min_{\mathbf{\beta}}\frac{1}{2}\mathbf{\beta}^{T}\begin{pmatrix}\mathbf{ \alpha}^{T}\mathbf{G}_{m}^{11}\mathbf{\alpha}&\ldots&\mathbf{\alpha}^{T}\mathbf{G}_{m }^{1t}\mathbf{\alpha}\\ \vdots&\ddots&\vdots\\ \mathbf{\alpha}^{T}\mathbf{G}_{m}^{t1}\mathbf{\alpha}&\ldots&\mathbf{\alpha}^{T}\mathbf{ G}_{m}^{tt}\mathbf{\alpha}\end{pmatrix}\mathbf{\beta}\] s.t. \[\beta_{0}=1 \left(\sum_{j=1}^{n}\alpha_{j}y_{j}\right)\sum_{k=1}^{t}\beta_{ k}=0\] Steps 2 and 3 are repeated until convergence. The dual form allows to directly identify the support vectors. Their corresponding entries of \(\mathbf{\alpha}_{m}\) are different from zero, i.e. \(\mathbf{x}_{j}\) is a support vector if \(\left\{\mathbf{\alpha}_{m}(j+(k-1)n)\right\}_{k=1,\ldots,t}>\mathbf{0}\quad j\in \left\{1,\ldots,n\right\}\). Having found the optimal solution \(\mathbf{\alpha}_{m}^{*}\) through the iterative QP approach, the weight vector \(\mathbf{w}\) and the intercept \(b\) in the decision function \(h\) can be determined: \[h(\mathbf{x}) =\frac{1}{n}\sum_{j=1}^{n}\mathbf{w}^{T}\mathbf{x}^{T}\mathbf{\beta}^{*}+b\] where \[\mathbf{w} =\sum_{j=1}^{n}y_{j}\alpha_{j}^{*}\widetilde{\mathbf{x}}_{j}= \sum_{j=1}^{n}y_{j}\alpha_{j}^{*}\{\mathbf{x}_{j1}+\beta_{1}^{*}\mathbf{x}_{j 2}+...+\beta_{t-1}^{*}\mathbf{x}_{jt}\}\quad\in\mathds{R}^{p}\] \[b =\frac{1}{n}\sum_{j=1}^{n}\mathbf{w}^{T}(\mathbf{x}_{j}^{T}\mathbf{ \beta}^{*})-y_{j}\quad\in\mathds{R}\]
For computation of the intercept \(b\), the data \(\mathbf{x}_{j}\) are used in \(t\times p\) matrix form. New data samples are assigned negative class labels (\(y=-1\)) if \(h(\mathbf{x})<1\), otherwise they are assigned to the positive class (\(y=1\)).
Table S 1: Parameters estimated from Dataset 2: CASP-19 dataset, group variable _loneliness_ (variable 1: control, variable 2: autonomy, variable 3: self-realization, variable 4: pleasure). (Tabulated parameters: sample sizes \(n_{0}\) and \(n_{1}\), test sample sizes, number of simulation runs, class means \(\mathbf{\mu}_{0}\) and \(\mathbf{\mu}_{1}\), and the correlation matrices over time points (\(\mathbf{\Sigma}_{t\times t}\)) and over variables (\(\mathbf{\Sigma}_{p\times p}\)).)
Table **S 2**: Performance in the reference data using the bootstrap approach by Wahl et al. (2016) and 2000 bootstrap datasets. Mean performance (95% confidence interval) is indicated for each reference dataset, i.e. Dataset 1: CORE-OM, group variable _age_ (\(n_{0}=n_{1}=93\)), Dataset 2: CORE-OM, group variable _hospitalisation_ (\(n_{0}=42,n_{1}=142\)), Dataset 3: CASP-19, group variable _loneliness_ (\(n_{0}=254,n_{1}=1682\)). The highest point estimates \(\hat{\theta}^{.632+}\) are shown in bold. (The table reports predictive accuracy, Youden index, sensitivity and specificity for LDA (\(\Sigma_{pooled}\)), LDA (\(\Sigma_{KP}\)), LDA (GEE) and the longitudinal SVM, each without trimming (original) and after trimming with the MVE or MCD algorithm.)
Table **S 3a**: Performance of the algorithms in 2000 simulated datasets for dataset 1 (CORE-OM dataset, group variable _age_, training data: \(n_{0}=n_{1}=93\), test data: \(n_{0}=n_{1}=1000\)). Data are simulated from the multivariate normally (\(\mathcal{N}\)), lognormally (\(\mathcal{LN}\)) and truncated normally (\(\mathcal{TN}\)) distribution. Parameter estimates are obtained from the training data without trimming (original) or after trimming by applying the MVE or MCD algorithm, respectively, keeping 90% of the training data. Highest mean values are shown in bold. (The table reports the mean (standard error) of predictive accuracy, Youden index, sensitivity and specificity for each algorithm.)
Table **S 3b**: Performance of the algorithms in 2000 simulated datasets for dataset 2 (CORE-OM dataset, group variable _hospitalisation_, training data: \(n_{0}=42,n_{1}=142\), test data: \(n_{0}=n_{1}=1000\)). Data are simulated from the multivariate normally (\(\mathcal{N}\)), lognormally (\(\mathcal{LN}\)) and truncated normally (\(\mathcal{TN}\)) distribution. Parameter estimates are obtained from the training data without trimming (original) or after trimming by applying the MVE or MCD algorithm, respectively, keeping 90% of the training data. Highest mean values are shown in bold. (The table reports the mean (standard error) of predictive accuracy, Youden index, sensitivity and specificity for each algorithm.)
Table **S 3c**: Performance of the algorithms in 2000 simulated datasets for dataset 3 (CASP-19 dataset, group variable _loneliness_, training data: \(n_{0}=254,n_{1}=1682\), test data: \(n_{0}=n_{1}=1000\)). Data are simulated from the multivariate normally (\(\mathcal{N}\)), lognormally (\(\mathcal{LN}\)) and truncated normally (\(\mathcal{TN}\)) distribution. Parameter estimates are obtained from the training data without trimming (original) or after trimming by applying the MVE or MCD algorithm, respectively, keeping 90% of the training data. Highest mean values are shown in bold. (The table reports the mean (standard error) of predictive accuracy, Youden index, sensitivity and specificity for the three LDA variants.)
MVE: Minimum volume ellipsoid, MCD: Minimum covariance determinant, LDA: Linear discriminant analysis, SVM: Support vector machine, \(\Sigma_{pooled}\): pooled covariance matrix, \(\Sigma_{KP}\): Kronecker product covariance matrix, GEE: covariance matrix of Generalized estimating equation.
**Fig. S 1a**
**Fig. S 1b**
**Fig. S 1c**
**Fig. S 1**: Summary ROC curves showing the algorithms discriminative performance in 2000 datasets simulated from the multivariate normally (\(\mathcal{N}\)), lognormally (\(\mathcal{LN}\)) and truncated normally (\(\mathcal{TN}\)) distribution, respectively. The black dots and circles represent the mean and confidence region.
(a) Dataset 1: CORE-OM dataset, group variable _age_ (\(n_{0}=n_{1}=93\))
(b) Dataset 2: CORE-OM dataset, group variable _hospitalisation_ (\(n_{0}=42,n_{1}=142\))
(c) Dataset 3: CASP-19 dataset, group variable _loneliness_ (\(n_{0}=254,n_{1}=1682\))
MVE: Minimum volume ellipsoid, MCD: Minimum covariance determinant, LDA: Linear discriminant analysis, SVM: Support vector machine, \(\Sigma_{pool}\): pooled covariance matrix, \(\Sigma_{KP}\): Kronecker product covariance matrix, GEE: covariance matrix of Generalized estimating equation.
**Fig. S 2a**
**Fig. S 2b**
**Fig. S 2c**
**Fig. S 2**: Boxplots showing the distribution of Youden index estimated in the 2000 simulated datasets for the multivariate normal (left), multivariate lognormal (center) and multivariate truncated normal distribution (right). Results with the highest median value are highlighted in darker colours.
(a) Dataset 1: CORE-OM dataset, group variable _age_ (\(n_{0}=n_{1}=93\))
(b) Dataset 2: CORE-OM dataset, group variable _hospitalisation_ (\(n_{0}=42,n_{1}=142\))
(c) Dataset 3: CASP-19 dataset, group variable _loneliness_ (\(n_{0}=254,n_{1}=1682\))
MVE: Minimum volume ellipsoid, MCD: Minimum covariance determinant, LDA: Linear discriminant analysis, SVM: Support vector machine, \(\Sigma_{pooled}\): pooled covariance matrix, \(\Sigma_{KP}\): Kronecker product covariance matrix, GEE: covariance matrix of Generalized estimating equation.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & \multicolumn{2}{c}{original} & \multicolumn{2}{c}{MVE} & \multicolumn{2}{c}{MCD} \\ \cline{2-7} & \multicolumn{1}{c}{_Dataset 1_} & \multicolumn{1}{c}{_Dataset 2_} & \multicolumn{1}{c}{_Dataset 1_} & \multicolumn{1}{c}{_Dataset 2_} & \multicolumn{1}{c}{_Dataset 1_} & \multicolumn{1}{c}{_Dataset 2_} \\ \hline \(\mathcal{N}\) & 1212 & 819 & 1191 & 776 & 1186 & 715 \\ \(\mathcal{LN}\) & 1296 & 1180 & 1235 & 1139 & 1253 & 1091 \\ \(\mathcal{TN}\) & 1244 & 823 & 1251 & 591 & 1250 & 520 \\ \hline \hline \end{tabular}
\end{table}
Table **S 4**: Number of times the iterative algorithm for determining the optimal solution \(\mathbf{\alpha}_{m}\) in the longitudinal SVM converged (maximum number of iterations: 100) when applied to the 2000 datasets simulated from the multivariate normally (\(\mathcal{N}\)), lognormally (\(\mathcal{LN}\)) and truncated normally (\(\mathcal{TN}\)) distribution based on parameter estimates obtained from Dataset 1 (CORE-OM dataset, group variable _age_ (\(n_{0}=n_{1}=93\))) and Dataset 2 (CORE-OM dataset, group variable _hospitalisation_ (\(n_{0}=42,n_{1}=142\))).
## Declarations
* Funding: No funding was received to assist with the preparation of this manuscript.
* Conflict of interest: The authors have no competing interests to declare that are relevant to the content of this article.
* Ethics approval: Not applicable.
* Availability of data and materials: Reference datasets 1 and 2 (Zeldovich 2018) are available upon request from the corresponding author. Reference dataset 3 (Banks et al 2021) is publicly available from the UK data service through the website https://www.elsa-project.ac.uk/accessing-elsa-data.
* Code availability: Supplementary files containing the R code used for data simulations can be found on Figshare ([https://figshare.com/s/104aeb2a870a810f80bd](https://figshare.com/s/104aeb2a870a810f80bd)).
|
2305.20061 | Towards Neural Path Tracing in SRAM | We present an experimental neural path tracer designed to exploit the large
on-chip memory of Graphcore intelligence-processing-units (IPUs). This open
source renderer demonstrates how to map path tracing to the novel software and
hardware architecture and is a useful tool for analysing in-cache
neural-rendering scenarios. Such scenarios will be increasingly important if
rasterisation is replaced by combinations of ray/path tracing, neural-radiance
caching, and AI denoising/up-scaling, for which small neural networks are
already routinely employed. A detailed description of the implementation also
serves as a self-contained resource for more general software design on IPU. | Mark Pupilli | 2023-05-31T17:38:06Z | http://arxiv.org/abs/2305.20061v1 | # Towards Neural Path Tracing in SRAM
###### Abstract
We present an experimental neural path tracer designed to exploit the large on-chip memory of Graphcore intelligence-processing-units (IPUs). This open source renderer demonstrates how to map path tracing to the novel software and hardware architecture and is a useful tool for analysing in-cache neural-rendering scenarios. Such scenarios will be increasingly important if rasterisation is replaced by combinations of ray/path tracing, neural-radiance caching, and AI denoising/up-scaling, for which small neural networks are already routinely employed. A detailed description of the implementation also serves as a self-contained resource for more general software design on IPU.
## 1 Introduction
Path tracing Kajiya (1986) has replaced rasterisation for high-quality offline rendering Fascione et al. (2018); Burley et al. (2018); Christensen et al. (2018) and real-time rendering seems to be at the start of a similar transition. For example, the ability of GPUs to train and run small neural-networks (NNs) at high throughput and low latency Muller (2021) allows the bulk of the path tracing workload to be approximated by a neural-radiance-cache Muller et al. (2021). Whilst expensive, NN compute is accelerated by dedicated hardware, amenable to reduced precision arithmetic, and deterministic: all of this is in contrast to the full path tracing algorithm. Neural networks are also utilised for real-time denoising and spatio/temporal super-resolution Yang et al. (2023) and on state of the art GPUs up to 88% of displayed pixels are AI generated as a result NVIDIA (2022). The increased visual fidelity from path tracing and the effectiveness of neural-radiance caches make it a distinct possibility that real-time rendering pipelines become predominantly neural. The key to realising real-time performance for networks in use today is keeping network weights in GPU on-chip memory (SRAM/L1-cache and registers) for as long as possible Muller (2021); Muller et al. (2022). This trend raises the question: do current GPU architectures have the right balance of on-chip memory capacity to off-chip memory bandwidth to adapt to a fundamental shift in the nature of rendering computations?
IPUs were nominally designed for artificial intelligence (AI) and are in no way intended for rendering. That said, each chip is massively parallel like a GPU, but in contrast contains 897MiB SRAM close to the processing cores. Some key architectural differences between IPU and GPU are summarised in Table 1. We are interested in using the large
Figure 1: Images path traced on a Graphcore Bow-Pod-16. The HDR environment light is compressed into 97 KiB of neural network weights. The neural-network weights, activations, and scene BVH reside entirely in on-chip SRAM.
on-chip memory to explore neural rendering configurations that are not currently possible on other hardware. To this end we present an IPU implementation of path tracing combined with a high dynamic range (HDR) neural environment lighting. The network we employ has a similar architecture and size to models used in other neural rendering tasks (see Section 2.2), and those algorithms often additionally involve some form of ray-casting/tracing. For this reason our simple application is a useful tool for reasoning about the performance and viability of in-SRAM neural rendering, regardless of how future hardware adapts.
## 2 Related Work
### 2.1 Neural Networks in Path Tracing
Monte Carlo (MC) path tracing Kajiya (1986) is a rendering algorithm which accumulates the contributions of all possible light paths in a scene using Monte Carlo integration. It exactly integrates the rendering equation _in expectation_ and therefore simulates many physical effects producing high fidelity images. Because MC convergence follows an inverse square law it is inevitable that sampling must stop at some point and give way to a denoising algorithm, often implemented using sophisticated AI based denoisers Intel(r)(2021); Zhang et al. (2021).
Neural networks have also become key to realising real-time path tracing. For example, in Muller et al. (2020) two neural networks reduce variance of the MC integration: a normalizing-flow network is employed to learn a proposal distribution to enable efficient importance sampling of BSDFs and a second network is used to correct errors introduced by the proposal. In Muller et al. (2021) a simpler neural-network is trained online using extended path traces in order that the majority of paths can be stopped after a few bounces, and then fed through a small MLP (en masse) that returns an estimate of the path's remaining radiance. For efficiency, weights are streamed into L1 cache or registers once and then reused repeatedly on the entire batch of rays Muller (2021). Depending on batch-size, NN inference throughput for 128 hidden neurons and 2 layers (H128, L2) is reported to be between _280M_ and _390M_ samples/sec on a 3090 RTX.
### 2.2 Other Neural Rendering Techniques
Neural radiance fields (NeRFs) Mildenhall et al. (2021) are neural networks trained to approximate continuous functions from \(\mathbb{R}^{5}\mapsto\mathbb{R}^{4}\):
\[(r,g,b,w)=f(x,y,z,\theta,\phi) \tag{1}\]
They map rays in 3D space (point and normalised direction vector in spherical coordinates) into an RGB colour and a weight. This allows a NeRF to encode volumetric data: output colour nominally represents the radiance transmitted along the ray (but in usual practice it is a low dynamic range quantity non-linearly related to luminance). The weight allows the network to represent free space and opacity, but also makes the rendering differentiable allowing it to be part of the training loop. These neural-volume representations have found utility in a number of domains including semantic perception Zhi et al. (2021); Blomqvist et al. (2023) and even form the basis for text-to-3D generative AI models Poole et al. (2022).
#### 2.2.1 Prevalence of the Multi-Layer-Perceptron
The function NeRFs encode is low dimensional, so small MLP/relu networks (\(10^{5}\) or \(10^{6}\) parameters) give acceptable approximations. In Mildenhall et al. (2021), for example, the architecture is an MLP with 9 layers and 256 hidden neurons. Deep neural networks exhibit a spectral bias Rahaman et al. (2018) which prevents them from learning high frequency functions (like natural images) without adding engineered features or inductive bias to overcome this. In NeRF the low dimensional input coordinates are embedded into a higher dimensional space using Fourier features Tancik et al. (2020). SIREN networks Sitzmann et al. (2020) on the other hand, encourage the network to learn higher frequencies by using sinusoidal activation functions and these networks have the added advantage that they can learn to approximate partial derivatives much more effectively than a similar sized network that uses Fourier features. In CoConet Bricman and Ionescu (2018) they train an MLP network to approximate images but use a hand engineered embedding of pixel coordinates into a higher dimensional input space. A detailed theoretical analysis on alternative input embeddings is given in Tancik et al. (2020).
Whilst models discussed so far use MLPs, Minnen et al. (2018) employs a convolutional network for image compression. They report the compression artifacts induced by this method are more palatable than JPEG to human observers (smoothed but not blocky) for equivalent bit-rates. Another non-MLP example is the neural texturing system of Thies et al. (2019) where a convolutional U-net is used. The network has an encoder/decoder structure hence the rendering system is entirely NN based (16M parameters).
State of the art neural material/texturing systems also utilise MLPs. In Kuznetsov et al. (2021) the problem is again tackled using an MLP that approximates the bi-directional-texture function from \(\mathbb{R}^{7}\mapsto\mathbb{R}^{3}\). Most recently Zeltner et al. (2023) implements a fully neural material system, combines it with path tracing, and demonstrates improved performance over the traditional shading pipeline: the transition away from rasterisation seems imminent.
Since many techniques in this field use the MLP as a core building block, metrics from our on-chip system should be useful as a baseline for reasoning about the benefits of large on-chip SRAM for a range of neurally based rendering algorithms.
## 3 Implementation
We will briefly discuss aspects of the IPU hardware and software architecture relevant to the implementation, more detail is given in Appendix A. The GC200 processor used in this work contains 1472 tiles (cores) and each tile has six, multiple-instruction-multiple-data (MIMD), barrel scheduled, hardware threads called _workers_. Each worker issues one instruction packet in round-robin fashion and can dual issue floating point and integer/memory operations. All threads can work in unison to drive each tile's accumulating matrix product (AMP) unit, achieving the peak FLOP rate (Table 1) for matrix multiplies and convolutions. Tiles can synchronise and communicate using a bulk synchronous parallel (BSP) execution scheme that alternates between internal or external exchange of data and local tile computations. The latest variant of the GC200 uses wafer-on-wafer technology and has the designation Bow. Bow IPUs have identical micro-architecture but a 40% higher clock speed.
### 3.1 IPU Path Tracer Design
The path tracer itself is very simple. Paths are traced through the scene with no light sampling: contributions are only accumulated if paths hit an emissive object by chance or if the path escapes the scene and receives a contribution from the environment light. The environment light field is encoded in a neural network as described in Section 3.2 so the sampling loop alternates between path tracing operations and NN inference. Paths are terminated early by roulette with probability proportional to their radiometric throughput. While the lack of light sampling makes the implementation sample inefficient, it simplifies reasoning about performance.
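To make the control flow concrete, below is a minimal, self-contained C++ sketch of a per-path sampling loop in this style: bounce until the ray escapes (collecting the environment contribution) or is killed by Russian roulette, with survival probability proportional to the path throughput (the standard unbiased formulation). The scene intersection and the neural environment lookup are replaced by toy stand-ins (a fixed hit probability and a constant radiance), so none of the names below belong to the actual renderer.

```cpp
// Toy sketch of the sampling loop: bounce, attenuate, roulette, accumulate on miss.
#include <cstdio>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
static Vec3 operator*(const Vec3& a, const Vec3& b) { return {a.x*b.x, a.y*b.y, a.z*b.z}; }
static Vec3 operator*(const Vec3& a, float s) { return {a.x*s, a.y*s, a.z*s}; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static float maxComponent(const Vec3& v) { return std::fmax(v.x, std::fmax(v.y, v.z)); }

// Stand-in for the neural environment light: in the real system this query is
// batched across all rays and evaluated by an MLP held in SRAM.
static Vec3 envRadiance(const Vec3& /*dir*/) { return {1.f, 1.f, 1.f}; }

int main() {
  std::mt19937 rng(42);
  std::uniform_real_distribution<float> uni(0.f, 1.f);
  const int numPaths = 100000;
  const float hitProb = 0.7f;              // toy stand-in for a BVH intersection test
  const Vec3 albedo = {0.8f, 0.6f, 0.4f};  // toy diffuse reflectance
  Vec3 accum = {0.f, 0.f, 0.f};

  for (int p = 0; p < numPaths; ++p) {
    Vec3 throughput = {1.f, 1.f, 1.f};
    Vec3 dir = {0.f, 0.f, 1.f};
    for (int bounce = 0; bounce < 64; ++bounce) {
      if (uni(rng) > hitProb) {                           // ray escaped the scene:
        accum = accum + throughput * envRadiance(dir);    // environment contribution
        break;
      }
      throughput = throughput * albedo;                   // attenuate by surface reflectance
      // Russian roulette: survival probability proportional to throughput.
      float pSurvive = std::fmin(1.f, maxComponent(throughput));
      if (uni(rng) >= pSurvive) break;                    // terminate path early
      throughput = throughput * (1.f / pSurvive);         // keep the estimator unbiased
    }
  }
  std::printf("mean radiance (r): %f\n", accum.x / numPaths);
  return 0;
}
```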
An entirely on-chip solution needs to store ray payload and results, the scene description, and neural network weights all in SRAM. While 897MiB might seem plenty for this task we need to consider some of the finer details of the architecture. First note the memory is split into 1472 tiles, 624 KiB per core. Cores can not directly access memory of other tiles nor can the cores issue direct accesses to off-chip memory and there is no unified address space. This means that the processor is most efficient when each core is repeatedly accessing data from its local SRAM in parallel and that DRAM access must be batched into a BSP compatible schedule (Appendix A.2.3).
#### 3.1.1 Scene Data
The first design decision we make is to replicate the scene description on every tile, but spread the neural network weights across the entire chip. The primary reason for this is that routing ray data between tiles containing, for example, different treelets, requires fine grained dynamic exchange patterns. While this is possible in principle using just-in-time (JIT) compiled exchange sequences, this is not exposed in the currently available SDK which only allows for exchanges that are fixed at compile time. Batching ray data for neural network inference has no ordering requirement, so inter-tile exchanges of inputs and results can easily be pre-compiled and in fact the graph compiler can make sophisticated data movement decisions for us (see Section 3.3.1).
\begin{table}
\begin{tabular}{r|r l}
**Property** & **IPU Bow GC200** & **GPU A100** \\ \hline Independent instruction streams (IIS) & 1472 (tiles) x 6 (threads) = 8832 & 108 (SMs) x 64 (warps) = 6912 \\ SRAM per core & 624 KiB per tile & 192 KiB (up to 164 KiB shared) per SM \\ _Contention free_ SRAM/registers per IIS & 624/6 = **104 KiB** & (192 + 256)/64 = **7 KiB** \\ SIMD vector width per IIS & 2 float & 32 float \\ & 4 half & 64 half \\ Integer compute & Non-vectorized but dual issue & Vectorized but no dual issue \\ Peak TFLOP/sec (half, non-zero) & 349 & 312 \\ \end{tabular}
\end{table}
Table 1: Comparison of IPU and GPU (NVIDIA (2020)) variants with similar silicon area from the same process node (7nm).
BVH node, triangle, index and vertex buffers are all serialised into a single binary chunk. The IPU programming model is built around variables of a single data type (i.e. tensors) so the idiomatic way of passing the scene data would be using structures of arrays. However, we found this makes the graph description overly verbose and increases the code size needed for inter-tile exchange, so we prefer to serialise all the data into one buffer and pay the cost of de-serialising the data on each tile. De-serialisation takes around 522 worker cycles (3132 total cycles as we only use a single worker). The additional benefit of this scheme is possible experimentation with exchange of treelets Aila and Karras (2010) between tiles and/or DRAM in future.
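As an illustration of this single-chunk approach (not the project's actual format), the sketch below packs several byte buffers behind a small offset header and recovers pointer views on the consuming side; the `Header` fields and helper names are hypothetical.

```cpp
// Pack typed buffers into one binary chunk and recover pointer views on the tile side.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

struct Header { std::uint32_t nodeBytes, triBytes, indexBytes, vertexBytes; };

std::vector<std::uint8_t> serialise(const std::vector<std::uint8_t>& nodes,
                                    const std::vector<std::uint8_t>& tris,
                                    const std::vector<std::uint8_t>& indices,
                                    const std::vector<std::uint8_t>& verts) {
  Header h{(std::uint32_t)nodes.size(), (std::uint32_t)tris.size(),
           (std::uint32_t)indices.size(), (std::uint32_t)verts.size()};
  std::vector<std::uint8_t> chunk(sizeof(Header));
  std::memcpy(chunk.data(), &h, sizeof(Header));
  for (const auto* b : {&nodes, &tris, &indices, &verts})
    chunk.insert(chunk.end(), b->begin(), b->end());
  return chunk;  // this single buffer is what gets replicated to every tile
}

// On-tile view: no copies, just pointers into the received chunk.
struct SceneView { const std::uint8_t *nodes, *tris, *indices, *verts; };

SceneView deserialise(const std::uint8_t* chunk) {
  Header h;
  std::memcpy(&h, chunk, sizeof(Header));
  const std::uint8_t* p = chunk + sizeof(Header);
  SceneView v;
  v.nodes = p;    p += h.nodeBytes;
  v.tris = p;     p += h.triBytes;
  v.indices = p;  p += h.indexBytes;
  v.verts = p;
  return v;
}

int main() {
  std::vector<std::uint8_t> nodes(24), tris(96), indices(12), verts(36);
  auto chunk = serialise(nodes, tris, indices, verts);
  SceneView view = deserialise(chunk.data());
  std::printf("chunk size: %zu bytes, verts at offset %ld\n",
              chunk.size(), static_cast<long>(view.verts - chunk.data()));
  return 0;
}
```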
#### 3.1.2 Ray Data and external DRAM
We could distribute rays and the frame-buffer across tiles but early experiments with this approach proved limiting in terms of both the resolution of images that can be kept in SRAM and also in terms of ability to store auxiliary payload data with the rays. From an API design point of view it is desirable to associate extra data with each ray, for example normals, hit-point, or other arbitrary output variables. For this reason we choose to stream ray data from external DRAM allowing use of standard ray data structures (84 bytes per ray) instead of a compact representation. DRAM bandwidth in current IPU systems is limited (25.6 GB/sec) but the API allows us to reserve some compute tiles to act as "smart I/O controllers": 32 tiles perform ray loads and stores in parallel with 1440 tiles rendering each ray batch.
The number of rays that each worker thread processes is specified as a graph-compile time constant. This allows us to balance the workload so that DRAM fetches are effectively hidden and large enough ray batches are generated to achieve efficient neural-network inference. (The batch size for each neural network query is _rays-per-worker_ x 6 x 1440.)
Using DRAM this way allows rendering extremely high resolutions, hiding of DRAM latency, and has another advantage: many standard resolutions are divisible by both 1440 and 6 which ensures all worker threads are utilised (e.g. the 8K format 7680 x 4320 = 3840 x 6 x 1440).
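The work-partitioning arithmetic can be checked with a few lines of C++; the constants below are the configuration quoted above (1440 compute tiles, six workers each), not an API.

```cpp
// Worked example: how many rays each worker handles when an 8K frame is split
// evenly across the rendering tiles and worker threads described above.
#include <cstdio>
int main() {
  const long computeTiles = 1440;    // tiles rendering (32 reserved for ray I/O)
  const long workersPerTile = 6;
  const long width = 7680, height = 4320;  // 8K frame
  const long pixels = width * height;
  const long perWorker = pixels / (computeTiles * workersPerTile);
  std::printf("rays per worker: %ld (remainder %ld)\n",
              perWorker, pixels % (computeTiles * workersPerTile));
  // Batch size of a neural-network query is rays-per-worker x 6 x 1440:
  std::printf("NN batch size: %ld\n", perWorker * workersPerTile * computeTiles);
  return 0;
}
```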
#### 3.1.3 IPU Ray Tracing Kernels
The ray tracing implementation is plain C++. The only IPU specific code is: 1) use of the vector _float2_ data type (2 x float32 elements) in the ray-slab intersection test to improve vectorisation; and 2) intrinsics to access the IPU's hardware random number generator inline in the path trace kernel (for example _builtin_ip_u_rand_f32()_). We do not yet employ ray-packetisation, which would assist the compiler further with auto-vectorisation. The simplicity is deliberate, partly because we want to share non-obfuscated reference code for IPU graphics programming, and partly because the MIMD nature of the worker threads means there are no execution stream coherence issues we need to manage. As an indication of optimisation potential and generated code quality, the triangle intersection routine (derived from PBRT-v3 Pharr et al. (2016)) compiles to code with a FLOP arithmetic intensity of 0.23 (67/289) FLOPs per issue with 20/67 of the FLOPs in vectorised instructions. The compiler emits many 32-bit load/stores with only 20/71 using 64-bit. This suggests there is potential to increase FLOP intensity with an IPU specific ray packetisation strategy and optimisations that ensure use of 64/128-bit load/stores. For comparison, the same routine compiled with g++9.4 targeting x86-64+AVX2 generates 0.30 (67/226) FLOPs/issue, 15 of which are fused multiplies, with no vectorised instructions emitted.
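For reference, a generic ray/axis-aligned-box slab test of the kind referred to above is sketched below. It is not the project's kernel: on IPU, pairs of components would be held in the 2-wide `float2` vector type so that the compares and multiplies vectorise, whereas here a plain scalar loop is used so the example compiles anywhere.

```cpp
// Generic ray/AABB slab intersection test (scalar sketch of the vectorised kernel).
#include <algorithm>
#include <cstdio>

struct Ray { float origin[3]; float invDir[3]; float tMax; };
struct Box { float lo[3]; float hi[3]; };

// Returns true if the ray overlaps the box within [0, tMax].
bool slabTest(const Ray& r, const Box& b) {
  float t0 = 0.f, t1 = r.tMax;
  for (int axis = 0; axis < 3; ++axis) {
    float tNear = (b.lo[axis] - r.origin[axis]) * r.invDir[axis];
    float tFar  = (b.hi[axis] - r.origin[axis]) * r.invDir[axis];
    if (tNear > tFar) std::swap(tNear, tFar);
    t0 = std::max(t0, tNear);   // entry point moves forwards
    t1 = std::min(t1, tFar);    // exit point moves backwards
    if (t0 > t1) return false;  // slabs no longer overlap: miss
  }
  return true;
}

int main() {
  // Ray starts at z = -5 and points roughly along +z; invDir holds 1/direction.
  Ray r{{0.f, 0.f, -5.f}, {10.f, 10.f, 1.f}, 100.f};
  Box b{{-1.f, -1.f, -1.f}, {1.f, 1.f, 1.f}};
  std::printf("hit: %s\n", slabTest(r, b) ? "yes" : "no");
  return 0;
}
```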
#### 3.1.4 Bounding Volume Hierarchy
We found memory optimisations important in building a usable system: next to neural network weights, the bounding volume hierarchy (BVH) is the most significant consumer of tile memory. Our BVH nodes do not differ much from common CPU or GPU implementations with an initial BVH-2 tree built using Embree Wald et al. (2014) on the host CPU. This is then compacted into an array of carefully packed, pointer-less nodes, similar to those used in Pharr et al. (2016), where the first child is implicitly stored next in the array.
We further reduce memory consumption by storing the extent of the bounding volume in float16 and casting to float32 on-demand at intersection time (the box origins are stored at float32). This reduces node size from 32 bytes to 24. At BVH construction time we use a special software-implemented rounding mode, _round-to-nearest-not-lower_, to cast from float32 to float16; this guarantees no intersection can be missed because the bounding box is never smaller than it would have been if stored at float32. Run-time overhead is a negligible cast back from float16 to float32. There are prior examples of compressed BVH nodes Benthin et al. (2018), with the smallest typically intended for hardware implementation (the feasibility of 1-bit hierarchical encoding is demonstrated in Keely (2014)). Experimenting with sub-16-bit precision seems appealing; however, on the IPU, casting between integer and floating-point register files can have significant overhead, integer bit manipulation instructions are not vectorised, and the smallest load/stores are 32-bit. Very low precision encoding also increases the total number of intersection tests due to looser bounding boxes. Hence, we prefer the compromise of half precision extents, keeping everything in floating-point. This gives a negligible performance loss due to additional intersections (\(<1\%\) ray throughput on test scenes) yet still gains a 25% BVH memory saving.
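The conservative cast can be sketched as follows (a minimal numpy illustration of the idea only; the function name and the bit-increment trick are ours, not the production kernel):

```python
import numpy as np

def round_to_nearest_not_lower(extent_f32: np.ndarray) -> np.ndarray:
    """Cast positive box extents float32 -> float16 such that the result is
    never smaller than the input, so a compressed bounding box can only grow."""
    half = extent_f32.astype(np.float16)               # default round-to-nearest
    too_small = half.astype(np.float32) < extent_f32   # nearest rounding fell short
    # For positive finite float16 values, the next representable value upwards is
    # obtained by incrementing the raw 16-bit pattern.
    bumped = (half.view(np.uint16) + 1).astype(np.uint16).view(np.float16)
    return np.where(too_small, bumped, half)

extents = np.array([0.1, 1.0005, 123.456], dtype=np.float32)
compressed = round_to_nearest_not_lower(extents)
assert np.all(compressed.astype(np.float32) >= extents)   # the box never shrinks
```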
### Neural Environment Lighting
Our environment lighting model is conceptually like any other neural image field (NIF) approximator. It approximates the low dimensional function: \((r,g,b)=f(u,v)\).
The function takes input coordinates \(u,v\), applies positional embeddings as in Tancik et al. (2020) and predicts a colour value using a sequence of dense layers with ReLU activation functions. A schematic for a specific size of the HDR-NIF model is shown in Figure 2 where hidden size H (128) and number of dense-relu layers L (4) can be varied. The initial Fourier encoded \(u,v\) co-ordinates are always concatenated with the last odd layer before the middle of the network, which has a reduced dense layer output size to compensate. The Fourier feature dimension can be varied but we keep it fixed at 40 in all experiments in this paper. The final trainable layer has no activation function allowing the network to regress high dynamic range functions. The network's final layer is a static colour-conversion matrix (defaulting to YUV to RGB). The dynamic range of the RGB training samples is logarithmically compressed as in LeGendre et al. (2019) (during inference we apply the inverse exponential tone-mapping to the final output of the network).
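The description above can be summarised in a small Keras sketch (the Fourier frequencies, the exact position of the skip concatenation and the way the fixed colour matrix is installed are illustrative assumptions rather than the authors' exact configuration):

```python
import numpy as np
from tensorflow import keras

def fourier_features(uv, num_features=40, max_log2_freq=8.0):
    """Pre-compute sin/cos positional features for (u, v) samples in [0, 1]^2."""
    freqs = 2.0 ** np.linspace(0.0, max_log2_freq, num_features // 4)
    ang = 2.0 * np.pi * uv[:, None, :] * freqs[None, :, None]          # (B, F, 2)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1).reshape(len(uv), -1)

def build_nif(hidden=128, num_layers=4, fourier_dim=40):
    enc = keras.Input(shape=(fourier_dim,))        # pre-computed Fourier features
    x = enc
    for i in range(num_layers):
        # The layer that receives the skip concatenation is narrowed to compensate.
        width = hidden - fourier_dim if i == num_layers // 2 - 1 else hidden
        x = keras.layers.Dense(width, activation="relu")(x)
        if i == num_layers // 2 - 1:               # re-inject the input encoding
            x = keras.layers.Concatenate()([x, enc])
    yuv = keras.layers.Dense(3, activation=None)(x)   # linear output, trained on
                                                      # log-compressed HDR targets
    rgb = keras.layers.Dense(3, use_bias=False, trainable=False,
                             name="colour_matrix")(yuv)   # fixed YUV -> RGB matrix
    return keras.Model(enc, rgb)

model = build_nif()
model.summary()   # H128, L4 configuration; the "colour_matrix" weights are set
                  # separately to the chosen conversion matrix (see Section 4.3).
```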
#### 3.2.1 Training
Training samples are drawn uniformly on normalised \(u,v\) coordinates and sub-pixel samples are taken from the image using a bi-linear filter. During training we intermittently evaluate the network by feeding \(u,v\) coordinates on a regular grid to reconstruct an image and then compute the PSNR versus the original input image. The model is implemented in Keras and the network is trained on a single GC200 IPU.
We train the network with master weights stored in float16 and enable stochastic rounding Gupta et al. (2015) which the IPU supports in hardware. This requires a loss scale which we fix at _16384_. We use an ADAM optimiser with _learning-rate := 0.001_ and internal variables (mean, uncentered-variance) stored at float32. The loss is a Huber loss with _delta := 0.001_.
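A generic (non-IPU) sketch of this optimiser and loss configuration follows; the float16 master weights, stochastic rounding and the fixed loss scale of 16384 are hardware-level settings that a plain Keras sketch does not reproduce, and the stand-in model is ours:

```python
from tensorflow import keras

# Stand-in two-layer model; in practice this would be the NIF sketched above.
model = keras.Sequential([keras.layers.Dense(128, activation="relu"),
                          keras.layers.Dense(3)])

# Adam with learning rate 0.001 and a Huber loss with delta = 0.001, as in the text.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss=keras.losses.Huber(delta=0.001))
```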
#### 3.2.2 Approximating HDR Images
While not the focus of this work, some investigation into the behaviour of HDR-NIF convergence was necessary in order to get accurately lit renders, and some anecdotal results are included in Section 4. We discovered that existing training regimes such as Minnen et al. (2018), Bricman and Ionescu (2018) do not behave well for high resolution HDR images, so we make two notable adjustments. First, we employ a Huber loss, which we found to outperform MSE in terms of accuracy: we hypothesise that encouraging the network to learn high luminance samples more slowly leads to more stable gradients early in training, but we did not investigate further. Second, we add a final, fixed, linear colour-space conversion transform to the end of the network: this forces the network to predict samples in a colour-space of our choosing, which we found affects accuracy (see Section 4.3).
#### 3.2.3 Precision
The IPU does not have hardware sin/cos instructions. Fourier feature training samples are pre-computed with sin/cos at float32 but during inference, where the Fourier features must be computed on the fly, we use optimised software float16 sin/cos implementations. This does not noticeably affect reconstruction quality but improves performance significantly. In path tracing, where we need float32 \(sincos()\), we use an optimised, but generic, software implementation Moshier (1992). This is important
because the C++ standard library equivalents use double precision which will be emulated on IPU. Not only is this slow, but the emulation code itself consumes around 10 KiB per tile.
Figure 2: NIF model schematic showing activation sizes with operations between them.
### Combining Neural-HDRIs and Path Tracing
Integrating HDR-NIF lighting into the path tracer is conceptually straightforward: to determine the lighting contribution from the environment to a given ray, we just convert its direction in the world coordinate frame to equirectangular \([u,v]\) coordinates and submit these to the NIF, which returns the \([r,g,b]\) light contribution to be multiplied with the accumulated ray throughput. For this to be efficient we need to run inference on large batches of ray queries in parallel. The only similar use of NIFs in the literature is Poole et al. (2022), where the 3D models are represented using NeRF but the background is separated, using a neural environment field representation similar to ours. In their case, however, the field approximation is low dynamic range.
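The direction-to-\([u,v]\) conversion is the usual equirectangular mapping; a minimal sketch (the axis convention and seam placement here are illustrative assumptions, and the renderer's own convention may differ):

```python
import numpy as np

def direction_to_equirect_uv(d):
    """Map a unit direction in world space to equirectangular (u, v) in [0, 1].
    Assumes a y-up frame with the seam on the -z axis (illustrative convention)."""
    x, y, z = d
    u = 0.5 + np.arctan2(x, -z) / (2.0 * np.pi)    # azimuth
    v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi   # polar angle
    return u, v

# An escaped ray queries the NIF at (u, v); the returned radiance is multiplied
# by the accumulated path throughput.
print(direction_to_equirect_uv(np.array([1.0, 0.0, 0.0])))   # +x axis -> (0.75, 0.5)
```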
#### 3.3.1 Compute Graph Compilation
The Poplar/Poplibs graph compilation framework (Appendix A.2) plans how matrix-multiplies are distributed across cores/tiles automatically. For many model configurations considered here, it is possible that a single layer's weights fit in one tile's SRAM. In this case Poplibs can decide to replicate weights of individual layers across every tile on-demand, maximising utilisation of the AMP units. It is able to do this because inter-tile bandwidth is high enough that there is a net performance improvement despite the overhead of broadcasting weights before executing each layer. It is interesting to note that Poplibs' matrix-multiply planner is sophisticated enough to make this decision completely automatically: on other platforms this kind of optimisation must be done by hand (as in Muller (2021)).
When the path trace kernel finishes, the queries it produces are not laid out efficiently for the first matrix multiply in the MLP network. Exchanging information between IPU cores is fast so this is not a problem. However, transferring data between IPUs over IPU link is slower than on-chip exchanges. For this reason when we use multiple IPUs the NIF model is replicated across all the chips which then run data parallel with no communication between them.
#### 3.3.2 Batch-Serialisation
The matrix-multiply planner allows us to trade performance versus memory use. If we want to increase performance we can allocate the planner a larger proportion of SRAM for weights and activations. However, when memory use is close to the limit, the optimiser often resorts to serialisation of the output channels of each layer to reduce memory needed for temporary activations. We prefer to manually manage activation memory by batch-serialising the entire network, which we found leads to better inference throughput and memory use in this case.
## 4 Results
All results use the same path tracing parameters: a maximum path length of 10 and roulette termination beginning at depth 3. If the reader wishes to extrapolate performance figures to real-time rendering scenarios they should bear in mind that state of the art real-time path tracing algorithms typically sample one path for every other pixel, for a few bounces, then execute one or more neural networks to complete the task. Here we fully trace paths for every pixel, then execute a neural network, and repeat, taking many samples.
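The termination policy amounts to the following sketch (the survival-probability rule shown is a common choice and an assumption on our part, not necessarily the exact rule used here):

```python
import numpy as np

rng = np.random.default_rng(0)
MAX_DEPTH = 10        # maximum path length
ROULETTE_START = 3    # Russian roulette begins at depth 3

def continue_path(depth: int, throughput: np.ndarray) -> bool:
    """Decide whether a path survives to the next bounce."""
    if depth >= MAX_DEPTH:
        return False
    if depth < ROULETTE_START:
        return True
    p_survive = min(float(throughput.max()), 0.95)   # illustrative survival rule
    if rng.random() >= p_survive:
        return False
    throughput /= p_survive                           # keep the estimator unbiased
    return True
```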
### Test Scenes
The test scenes evaluated are shown in Figure 1. These scenes are designed to give an indicative spread of results that we can expect for the range of scene sizes and NIF configurations that fit in IPU SRAM. Table 2 contains some baseline metrics for these scenes. These were measured on an IPU Bow system. In results where we do not specify Bow the performance was measured on classic GC200 chips.
### Precision of Ray Tracing Operations
In Section 3.1.3 we mention that some CPU code lowers to AVX fused multiply add/subtract (FMA/S) instructions. The IPU has no infinite intermediate precision FMA/S and we avoided any code that results in fallback to double emulation, so it is prudent to check the accuracy of basic ray tracing operations on IPU.
| **Scene** | Box | Box + NIF | Spheres + NIF | Small BVH + NIF | Large BVH |
| --- | --- | --- | --- | --- | --- |
| **Giga-paths/sec** | 0.826 | 0.418 | 1.333 | 0.350 | 0.487 |
| **BVH KiB (per tile)** | 189 | 189 | 0.258 | 265 | 397 |

Table 2: Metrics for test scenes from Figure 1 traced on a Bow-Pod-16. NIF model size is 97KiB (H128, L5).
In Table 3 we compare normals and primary hit-points against two CPU implementations: one that uses Embree for ray tracing operations, and one that runs the IPU kernels on CPU. This allows us to distinguish disparities due to algorithmic differences from those due to floating point arithmetic.
We note that differences between the same kernels on IPU/CPU are tiny and consistent, with no difference at all in the primary hits. Against Embree errors are still small but there are notable outliers. The first is the hit-point delta for the box scene, this is because the box scene uses the original Cornell box data with no scaling applied (so dimension range is of the order \(100\times\) larger than the other scenes). The second notable difference is in normals for the small/large BVH scenes compared to box/spheres. In this case the BVH scenes contain many triangles and triangle intersection has the largest algorithmic difference compared to Embree. In effect the larger error is due to outliers where rays hit adjacent (but still valid within machine precision) primitives. In those cases the normal will change dramatically (but the hit-point much less so). Overall, we don't believe these differences should cause any difficulties beyond those normally encountered when ray tracing with single precision.
### Colourspaces
We found that images with significant dynamic range require careful selection of the colour-space in which the NN learns its representation. This problem is exacerbated with smaller batch-sizes (\(O(10^{3})\)). In extreme examples, such as where midday sun is the only light source, training can be very unstable if the network learns in the RGB colour-space. In this case all the energy in the scene comes from \(<<1\%\) of the HDRI's pixels: in effect the most important training samples are outliers.
Results of forcing the network to learn in different colour-spaces with a sun-lit HDRI are shown in Figure 3. Table 4 shows the peak-signal-to-noise-ratio (PSNR) of the reconstructed NIF versus the original HDRI. Note the background, where the NIF is directly hit by primary rays: it contains few perceptible differences between the three images because the errors in the few sun samples are only noticeable when used for path tracing. Only YUV gives the correct result in this case, despite YCoCg having higher PSNR in the luminance channel and in RGB space. The YCoCg colour matrix completely de-correlates gradients flowing back from the loss layer whilst, in contrast, the RGB colour-space is known to have highly correlated components. We hypothesise that having some correlation between gradients is beneficial, but the network is not able to separate luminance and chrominance information from RGB samples alone. We believe these aspects warrant further investigation.
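For reference, the forward transforms involved look as follows (the YCoCg matrix is exact; the YUV coefficients are the common BT.601 values and may differ from the exact convention used here):

```python
import numpy as np

# RGB -> YCoCg (exact) and RGB -> YUV (BT.601 coefficients, assumed convention).
RGB_TO_YCOCG = np.array([[ 0.25, 0.50,  0.25],
                         [ 0.50, 0.00, -0.50],
                         [-0.25, 0.50, -0.25]], dtype=np.float32)

RGB_TO_YUV = np.array([[ 0.299,    0.587,    0.114  ],
                       [-0.14713, -0.28886,  0.436  ],
                       [ 0.615,   -0.51499, -0.10001]], dtype=np.float32)

def convert(rgb, matrix):
    """Convert (..., 3) RGB samples; the inverse matrix is what the network's
    fixed final layer applies so that its output is always RGB."""
    return rgb @ matrix.T

hdr_sun_sample = np.array([50.0, 45.0, 40.0], dtype=np.float32)   # an HDR outlier
print(convert(hdr_sun_sample, RGB_TO_YUV))
```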
### NIF Accuracy
In Figure 4 we can see a comparison of path tracing results for the _spheres_ scene with HDR-NIFs of increasing model size and a reference image rendered in Blender Community (2018). This simple scene is used so that we can vary the NIF model size through a large range without exhausting SRAM. In Figure 4(a) even though the background is a very poor approximation of the original HDRI the shadows and caustics look comparable to the higher quality NIFs and original reference in 4(d). This explains why tiny neural radiance caches can be so effective: because in that use case the neural-field approximation is only used to complete paths and never directly visualised.
We list the PSNR values for the _urban alley_ HDRI in Table 5 because it is an outlier in terms of other PSNR results. Figure 7 shows model-sizes versus PSNRs and path rate (paths/second) for a sweep over 7 different HDRI images: _urban alley_ can be seen as the left most vertical grouping of scatter points with significantly lower PSNR than the rest. This may be because it has high texture and colour variation as well as large dynamic range. In this case we note that the chromatic PSNR is a poor predictor of reconstruction quality which is at odds with Table 4.
| **Colour-space** | **PSNR RGB** | **PSNR Luminance** | **PSNR Chrominance** |
| --- | --- | --- | --- |
| RGB | 32.4 | 37.8 | 28.1 |
| YCoCg | **57.8** | **61.5** | 43.9 |
| YUV | 57.7 | 61.4 | **45.7** |

Table 4: PSNRs for Borghese Gardens HDR-NIF approximation using different colour spaces. Highest PSNRs in bold.
| **Scene** | **MSE vs Embree (Normal)** | **MSE vs Embree (Hit)** | **MSE vs CPU (Normal)** | **MSE vs CPU (Hit)** |
| --- | --- | --- | --- | --- |
| Box | \(1.1\times 10^{-13}\) | \(7.6\times 10^{-9}\) | \(2.6\times 10^{-16}\) | 0 |
| Spheres | \(2.1\times 10^{-14}\) | \(2.5\times 10^{-14}\) | \(1.6\times 10^{-16}\) | 0 |
| Small BVH | \(4.5\times 10^{-7}\) | \(2.3\times 10^{-14}\) | \(1.8\times 10^{-16}\) | 0 |
| Large BVH | \(1.2\times 10^{-7}\) | \(7.1\times 10^{-14}\) | \(2.2\times 10^{-16}\) | 0 |

Table 3: Precision test results showing the worst (across the \(x,y,z\) components) mean squared error (MSE) for normals and primary hit-points.
We are aware of perceptual HDR metrics, such as Mantiuk et al. (2011), which should be preferred over PSNR in some circumstances. However, here we are jointly interested in the perceptual reconstruction of the directly visible background, and indirect effects that affect validity of the lighting simulation. It is not immediately clear how perceptual metrics help us in this case. For the remainder of the paper we continue to report three variations of PSNR but recognise their limited utility.
### Performance
Figure 5 shows a roofline plot of sample count versus path rate when rendering 720 \(\times\) 720 images of the box scene on a single IPU GC200 chip from a Pod-4 (classic) system.
Figure 4. Comparison of neural path tracing results with different HDR-NIF model sizes for the _urban alley_ HDRI.
Figure 5. Effect of sample count on path rate (paths/sec), with and without an HDR-NIF.
Figure 3. Images path traced using HDR-NIFs trained in different colour-spaces. (Network that learns a YUV representation gives light with the correct hue).
| **Model Config** | **PSNR RGB** | **PSNR Luminance** | **PSNR Chrominance** |
| --- | --- | --- | --- |
| H64 L2 | 29.0 | 31.1 | **14.5** |
| H256 L4 | 29.3 | 35.6 | 14.1 |
| H1024 L8 | **29.4** | **37.3** | 14.0 |

Table 5: PSNRs for Urban Alley HDR-NIF approximations. Highest PSNRs in bold.
Even with relatively low bandwidth the transfers can be completely overlapped with compute when taking more than 256 samples per pixel. The DRAM ray transfer bandwidth is ultimately limited by transfer size, which in turn is limited by the amount of ray-data that fits on 32 I/O tiles (see Table 6). Note that ray batch data is double buffered on the I/O tiles to allow saving one batch while reading the next efficiently in a pipeline.
Bytes per ray batch in Table 6 does not exhaust I/O tile memory so in principle, we could increase the DRAM bandwidth slightly by increasing the batch size, which amounts to increasing _rays-per-worker_ (Section 3.1.2). However, in Figure 6(a) we can see that overall path trace rate peaks at 8 _rays-per-worker_ (and this seems consistent across many NIF configurations). So unless very small numbers of samples are required, there is no performance benefit from increasing rays-per-worker beyond 8, despite this limiting the DRAM bandwidth utilisation. Furthermore, increasing _rays-per-worker_ to 10 exhausts SRAM for larger NIF models (Figure 6(b)) so there is limited room for manoeuvre and we generally stick to 6 or 8 rays-per-worker.
#### 4.5.1 Scaling with Clock Speed
Bow IPU systems have identical micro-architecture to IPU-Classic systems but a 40% higher clock speed. If we take enough samples to maintain overlapped DRAM transfers then we would expect all on-chip operations to scale perfectly with clock speed. We measure a single test case (Figure 1(d)) on a Bow system to check this. Results are given in Table 7 where samples per clock are almost identical between the two systems, demonstrating the expected scaling.
### Model-size/PSNR versus Path Rate
Figure 7 shows how NIF model configuration affects path sampling rate (paths/sec) and PSNR (dB). Sample rates are again for a single GC200. As well as sweeping across NIF size we also sweep across 7 HDR-NIFs, trained on the HDRIs listed in Appendix B.
Unfortunately, chromatic PSNR (Figure 7(b)) appears insensitive to model size, despite it having a large effect on correctness of the lighting, but we have seen that lighting can be affected by outlying samples that will not be captured by PSNR metrics (Figure 3). Luminance (Figure 7(a)) is more sensitive: only models with either hidden size above 128, or layer counts above 4, (or both), reliably result in PSNRs above 60. These are interesting thresholds because they are precisely the points at which current GPUs can not operate in the regime that loads weights into SRAM/registers exactly once Muller (2021). Of course, things are not so clear cut, as we have seen in Figure 4(a), path tracing effects can have high fidelity even when the background reconstruction quality is unacceptable.
## 5 Conclusions and Future Work
We have presented results that demonstrate the potential and utility of Graphcore processors for exploring on-chip neural rendering techniques. The current system is effective when all data, other than ray streams, fits in SRAM. The ability to render even small scenes entirely from SRAM is unique to IPUs and we believe it is worth exploring further.
Functionally, the requirement for all BVH data to fit on one tile is currently the most significant limitation. Algorithms for distributed rendering between thousands of CPUs, such as Fouladi et al. (2022), may well be applicable at a smaller scale within a single IPU processor, and we would like to explore similar algorithms to exchange both treelets and rays between tiles, taking advantage of the high on-chip communication bandwidth to render larger scenes. Using the currently idle compute capability of the I/O tiles is also an interesting possibility, for example to decompress treelets, or to sort and schedule rays as they are loaded in parallel with the heavier compute.
In terms of performance there is scope for optimisation. For example, tracing ray packets of size two and storing more than one triangle primitive per BVH leaf-node would drastically increase the scope for vectorisation and reduce BVH memory consumption. Combined with more judicious use of larger load/store instructions it should be possible to increase FLOP intensity.
| **Image Resolution** | **Save+Load Time (milli-seconds)** | **Bytes per Ray-Batch** | **Bandwidth (GiB/sec)** |
| --- | --- | --- | --- |
| 360\(\times\)360 | 3.9 | 5443200 | 1.29 |
| 720\(\times\)720 | 4.0 | 5443200 | 1.27 |
| 1440\(\times\)1440 | 3.3 | 5806080 | 1.64 |
| 2880\(\times\)2880 | 3.3 | 5806080 | 1.64 |
| 5760\(\times\)5760 | 3.3 | 7257600 | 1.65 |

Table 6: Achieved bi-directional DRAM ray transfer bandwidth when using 8 _rays-per-worker_ thread.
| **IPU System** | **Sample-rate (paths/sec)** | **Clock (GHz)** | **Sample-rate per GHz** |
| --- | --- | --- | --- |
| Pod-4 Classic | 156.5 | 1.33 | 117.7 |
| Bow Pod-4 | 216.4 | 1.85 | 117.0 |

Table 7: Scaling of neural path trace rate with clock-speed.
Newer IPU hardware supports float8 and storing the HDR-NIF weights in this format would halve the memory required for weights and increase performance of the system's neural component. The BVH representation could also be compressed further using quarter-precision.
Finally, a more sophisticated material and shading system including textures and importance sampling would be a desirable addition. In current systems the lack of fine grained, random external DRAM reads for shade on hit texturing suggests a _shade before hit_ approach Fascione et al. (2018), or alternatively, fully neural material and geometry systems Park et al. (2019); Kuznetsov et al. (2021); Zeltner et al. (2023) may be more appropriate given that IPUs are predominantly AI processors.
Hopefully the work presented here encourages others to experiment with IPU hardware in this domain with confidence in the basic building blocks we have provided: all source code is made available here Graphcore (2023). It is probably not yet clear which types and sizes of neural networks rendering hardware should be optimised for, but we believe the IPU is a useful tool in evaluating the range of possibilities.
## 6 Acknowledgements
We would like to thank Deniz Beker, Yani Donchev, Andrew Fitzgibbon, and Carlo Luschi for their input and feedback.
|
2304.00098 | The Simons Observatory: Differentiating Thermal and Optical Effects in
Superconducting Transition-Edge Sensing Bolometers | The Simons Observatory aims to field 70,000 Transition-Edge Sensor (TES)
bolometers to measure the Cosmic Microwave Background. With so many detectors,
rapid but accurate validation of their properties prior to their integration
into telescopes is of particular importance. This paper describes an
exploration of a new method to improve the simultaneous characterization of TES
thermal parameters and bolometer optical efficiencies without significantly
increasing the data collection time. The paper uses a special-purpose data set
comprising current-voltage (IV) curves collected from thousands of TES
bolometers with a variety of different average bath temperatures and different
cold load temperatures. A subset of the bolometers were masked so they received
no optical power. The new method fits data from the bath temperature ramp and
cold load temperature ramps together as one set instead of fitting each
independently. This enables thermal parameter assessment of the unmasked
detectors without performing additional cooldowns of the cryostat, halving the
time necessary to obtain thermal characterization of all detectors. | Rita F. Sonka, Shannon M. Duff, Daniel Dutcher, Suzanne T. Staggs | 2023-03-31T19:46:05Z | http://arxiv.org/abs/2304.00098v1 | The Simons Observatory: Differentiating Thermal and Optical Effects in Superconducting Transition-Edge Sensing Bolometers
###### Abstract
The Simons Observatory aims to field 70,000 Transition-Edge Sensor (TES) bolometers to measure the Cosmic Microwave Background. With so many detectors, rapid but accurate validation of their properties prior to their integration into telescopes is of particular importance. This paper describes an exploration of a new method to improve the simultaneous characterization of TES thermal parameters and bolometer optical efficiencies without significantly increasing the data collection time. The paper uses a special-purpose data set comprising current-voltage (IV) curves collected from thousands of TES bolometers with a variety of different average bath temperatures and different cold load temperatures. A subset of the bolometers were masked so they received no optical power. The new method fits data from the bath temperature ramp and cold load temperature ramps together as one set instead of fitting each independently. This enables thermal parameter assessment of the unmasked detectors without performing additional cooldowns of the cryostat, halving the time necessary to obtain thermal characterization of all detectors.
Microwave detectors, Satellites and large arrays, Superconducting Detectors, Superconducting device testing, Temperature measurement, Thermal properties, Transition-edge sensors (TES) devices
## I Motivation
The Simons Observatory (SO) aims to observe, map and analyze the cosmic microwave background (CMB) in order to characterize the primordial perturbations (B-modes), measure the number of relativistic species and the mass of neutrinos, test for deviations from a cosmological constant, improve our understanding of galaxy evolution, and constrain the duration of reionization [1].
Due to the signal's tiny amplitude [2], SO must fabricate, characterize and deploy tens of thousands of optically-coupled, densely-packed Transition-Edge Sensors (TESes) in their telescopes to meet the science goals. SO will deploy \(\sim\)70,000 detectors packaged within 49 detector-readout focal plane modules distributed among four telescopes [3]. This paper describes a new method for simultaneously characterizing the detector's thermal parameters and optical efficiencies without requiring more than one time-consuming cooldown of the devices. The method could be used by other instruments that use TES bolometers.
## II Background: TES Operation and Parameters
Fig. 1 diagrams the coupled thermal and electrical circuit of a TES bolometer. In operation, \(I_{bias}\), the current along the bias line, is held constant. Since the shunt resistance, \(R_{shunt}\), is less than the (variable) TES resistance, \(R_{TES}\), within the TES superconducting transition, this effectively voltage-biases the TES. This combined with the thermal circuit creates negative electrothermal feedback that keeps the TES within the superconducting transition even as \(P_{opt}\), the optical power deposited on the bolometer, varies (until it is high enough to drive the TES normal) [4]. The current through the TES, \(I_{TES}\), is read out through a microwave multiplexed readout system (not pictured) [5][6]. The electrical power dissipated in the TES, \(P_{bias}\), can then be calculated from \(I_{bias}\), \(I_{TES}\), and \(R_{shunt}\). From there, \(P_{opt}\), the desired measurable quantity, can be calculated [4][7] from:
\[P_{th}=\frac{G}{nT_{c}^{n-1}}(T_{c}^{n}-T_{bath}^{n})=P_{b70}+\eta_{opt}P_{opt} \tag{1}\]
where \(P_{th}\) is the total thermal power vented into the bath; \(P_{b70}\) is \(P_{bias}\) at the point in the transition where \(R_{TES}=0.70R_{n}\), with \(R_{n}\) the normal resistance of the TES; \(G\) is the differential thermal conductance connecting the TES island and bath; \(T_{c}\) is the TES critical temperature, the temperature at which the TES transitions between normal resistance and
superconductivity; \(n\) is the power law index (1 + the thermal conductance exponent); and \(\eta_{opt}\) is the optical efficiency, the ratio of how much power the bolometer absorbs to how much power the cold load delivers to its coupled orthomode transducer (OMT) waveguide.
Figure 1: Diagram of the ideal TES coupled thermal (black) and electrical (green) circuit.
The three TES thermal parameters (\(G\), \(T_{c}\), \(n\)) and the TES optical efficiency (\(\eta_{opt}\)) must be obtained in the laboratory before the detectors are fielded to confirm the detectors will meet noise specifications and to solve Eq. 1 for \(P_{opt}\).
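In code form, Eq. 1 and its inversion for \(P_{opt}\) are simply (illustrative parameter values only, not measured ones; units just need to be used consistently, e.g. pW and K):

```python
def p_saturation(G, Tc, n, T_bath):
    """Thermal power flowing from the TES island to the bath at T = Tc (Eq. 1)."""
    return G / (n * Tc**(n - 1)) * (Tc**n - T_bath**n)

def optical_power(P_b70, G, Tc, n, T_bath, eta_opt):
    """Invert Eq. 1 for the optical power given the measured electrical bias power."""
    return (p_saturation(G, Tc, n, T_bath) - P_b70) / eta_opt

# Illustrative numbers only: G in pW/K, temperatures in K, powers in pW.
print(optical_power(P_b70=3.0, G=100.0, Tc=0.170, n=3.5, T_bath=0.080, eta_opt=0.7))
```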
## III Datasets for Obtaining TES Thermal Parameters and Optical Efficiency
As illustrated in Fig. 2, the detectors were installed in a dilution refrigerator (DR) above a cold load made of Eccosorb CR114 [8]. Two-thirds of the detectors were masked, preventing light from reaching them, while the rest were left unmasked to receive power from the cold load. After the detectors cooled below 100 mK, the following two data sets were taken (see Fig. 3 for example data from two detectors) over the course of about 8 hours each:
1. "Bath ramp" - the bath temperature \(T_{b}\) was repeatedly increased and allowed to reach equilibrium, while the cold load was kept at 9.5 K. At each bath temperature, current vs. voltage (IV) curves (see [11]) for all the detectors were taken.
2. "Cold load ramp" - the temperature of the cold load was repeatedly increased and allowed to reach equilibrium, while the bath temperature was feedback controlled to 80 mK, referenced off a thermometer on the copper module mount. At each cold load temperature, IV curves for all the detectors were taken. Also at each cold load temperature, the \(P_{opt}\) on every position on the module was estimated from the thermal design and geometry of the cryostat and cold load [10].
Figure 3: Independent fitting techniques examples. The top graph shows the bath ramp data and thermal fit results for one masked 90 GHz detector. The bottom graph shows that masked detector’s cold load ramp data (at the top) and the cold load ramp data for an unmasked 90 GHz detector positioned at a similar distance from the cold load center. Notice that the masked detector’s \(P_{70}\)s do decrease slightly with increasing cold load temperature; this is due to indirect thermal heating of the bath in that area from the cold load power.
Figure 2: Experimental setup illustrations. The top photograph (a) shows a sky-side view of three focal plane modules installed in the cryostat for cooldown. Their optical feedhorns point downwards, towards the cold load [8]. Two-thirds of the feedhorns are masked with copper shim stock, preventing them from receiving light. The bottom left photograph (b) shows the impedance-matched Eccosorb CR114 cold load outside of its optical filter and thermal containment casing [9]. The bottom right photograph (c) shows the cold load inside its thermal containment casing with its bandpass optical filter on top. The cold load is mounted beneath the modules, and is weakly linked to the 4K stage of the cryostat so its temperature can be ramped from 9.5 K to 18 K. The resulting optical power for each detector is estimated from the detector beams and the cold load geometry (see [10] for similar work).
## IV Fitting Techniques
### _The Independent Fits Technique_
A \(P_{b70}\) was extracted from each IV curve. Then Eq. 1 was fit to each masked (\(P_{opt}=0\)) detector's bath ramp data (only) to obtain its thermal parameters, as illustrated in the top plot of Fig. 3. Typically, \(\eta_{opt}\) for each unmasked detector would then be extracted from the cold load ramp data, using the masked detectors to correct for any wafer heating associated with changing the temperature of the cold load (see ex. [10]).
### _The Together-fits Technique_
The idea of the together-fits technique (see Fig. 4) is to fit the bath ramp and cold load ramp for a given detector together, with a new parameter, a dark heating coefficient, \(\xi_{therm}\):
\[P_{b70}=\frac{G}{nT_{c}^{n-1}}(T_{c}^{n}-T_{bath}^{n})-\eta_{opt}P_{opt}-\xi_{therm}P_{opt}. \tag{2}\]
Although masked detectors' OMTs do not absorb optical power directly (so their apparent \(\eta_{opt}=0\)), their \(P_{b70}\) values do change when the cold load temperature changes due to parasitic heating from the cold load altering their individual bath temperatures [10]. The bath ramp and cold load ramp data for dark detectors are simultaneously fit for \(\xi_{therm}\), \(G\), \(n\), and \(T_{c}\).
Then, for the optical detectors, the bath ramp and cold load ramp data are fitted for \(G\), \(n\), \(T_{c}\), and \(\eta_{opt}\) with \(\xi_{therm}\) fixed as the average of the \(\xi_{therm}\) values for the masked detectors at similar distance from the cold load center.
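A minimal sketch of this two-step procedure on synthetic data (all parameter values, ramp points and noise levels below are illustrative, not measured ones):

```python
import numpy as np
from scipy.optimize import curve_fit

def p_b70_model(X, G, Tc, n, eta_opt, xi_therm):
    """Eq. 2 evaluated on stacked (T_bath, P_opt) points from both ramps."""
    T_bath, P_opt = X
    return (G / (n * Tc**(n - 1)) * (Tc**n - T_bath**n)
            - (eta_opt + xi_therm) * P_opt)

rng = np.random.default_rng(1)
# Bath ramp (T_bath varies, cold load fixed) followed by cold load ramp
# (T_bath regulated at 80 mK, estimated P_opt varies); illustrative units: K, pW.
T_bath = np.concatenate([np.linspace(0.080, 0.150, 8), np.full(6, 0.080)])
P_opt = np.concatenate([np.full(8, 0.5), np.linspace(0.5, 2.0, 6)])
X = (T_bath, P_opt)

def synth(eta_opt, xi_therm):   # fake measurements generated from known parameters
    return p_b70_model(X, 100.0, 0.170, 3.5, eta_opt, xi_therm) + rng.normal(0, 0.01, 14)

bounds = ([0.0, 0.10, 1.0, -1.0], [1e4, 0.30, 10.0, 1.0])

# Step 1: masked detector (eta_opt = 0) -> fit G, Tc, n and xi_therm together.
masked, _ = curve_fit(lambda X, G, Tc, n, xi: p_b70_model(X, G, Tc, n, 0.0, xi),
                      X, synth(0.0, 0.02), p0=(80.0, 0.16, 3.0, 0.0), bounds=bounds)
xi_fixed = masked[3]

# Step 2: unmasked detector -> xi_therm fixed from step 1, fit G, Tc, n, eta_opt.
unmasked, _ = curve_fit(lambda X, G, Tc, n, eta: p_b70_model(X, G, Tc, n, eta, xi_fixed),
                        X, synth(0.7, 0.02), p0=(80.0, 0.16, 3.0, 0.5), bounds=bounds)
print(masked, unmasked)
```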
### _Advantages of the Together-fits Technique_
This technique enables calculation of thermal parameters of unmasked detectors, and provides a more accurate estimate of the masked detectors' thermal parameters. Estimates of both \(\eta_{opt}\) and \(\xi_{therm}\) for all the detectors in a module can be used to disambiguate signal fluctuations due to atmospheric fluctuations in the field from those caused by bath temperature fluctuations. Additionally, it is useful for predicting on-sky simultaneous biasability of detectors associated with each of the 12 module bias lines [3, 12] because it allows incorporation of the impact of \(\eta_{opt}\) variations among the detectors.
## V Exploratory Data set and Example Results
To explore the power and validity of this technique, a special data set was taken in addition to the normal bath ramp and cold load ramp. It consisted of four smaller bath ramps, each at a different elevated cold load temperature; example results for one detector are shown in Fig. 4. The overlap of the cold load ramp-derived \(P_{b70}\)s with the different bath ramp \(P_{b70}\)s confirms that the method of approaching a given bath temperature/cold load temperature phase space point does not significantly affect the measurement.
Fig. 5 tests and showcases the together-fits technique. The four histograms on the left side show the results of fitting the thermal parameters for all the detectors in one module with the standard SO method (normally only applied to the masked detectors), which relies on a single bath ramp data set at fixed cold load temperature. The histograms include both masked and unmasked detectors. As expected, the \(P_{b70}\) values are smaller for the unmasked detectors.
The four histograms on the right side derive from the together-fits method on the same detectors. Note that now the masked and unmasked detectors show much better agreement for \(P_{b70}\) and \(G\). The convergence of \(T_{c}\) most strongly signals the accuracy, as \(T_{c}\) is the thermal parameter fabricated most consistently among the detectors (barring a potential small radial variation). This is because it is intended to be the same for all of them, and is finely tuned simultaneously for the entire wafer by a heating process [13] (and thus cannot suffer midway through the module fabrication from degradation in mechanical precision or operator attentiveness, as the other parameters can.). The difference in average fitted \(T_{c}\) between masked and unmasked detectors is 21 in the original method; 1.0 (95% decrease) when the together-fits method is applied to just the normal dataset; and 0.66 (97% decrease) when the together-fits method is applied to the special dataset.
## VI Going Forward
The next step for capitalizing on this together-fits method is to define a new way of collecting test data for SO modules that improves the fitting power without taking significantly longer than the standard method (\(\sim 8\) hrs). We will analyze the special dataset (which took \(\sim 36\) hrs to acquire) with more flexible models, such as allowing \(\xi_{therm}\) to vary with bath temperature. With those results in mind, we will further analyze the special dataset to select the optimal set of (bath-temperature, cold load-temperature) points consistent with the
time constraint. Note that applying the together-fits technique to just the first bath ramp and cold load ramp (not including the extra data in the special dataset) does still extract much more accurate thermal parameter values for the unmasked detectors than the independent-fits method. Further work will also explore use of the 36 'dark' bolometers fabricated on the SO detector wafers without any optical coupling, and other ways of estimating the dark heating term, and its usefulness in isolating bath temperature fluctuations from other signal variations in the field.
Figure 4: Example of applying the together-fits technique to an unmasked 90 GHz detector in the special exploration data set. The standard data set would only include the normal bath ramp and normal cold load ramp. BR = bath ramp, CLR = cold load ramp.
|
2309.11169 | Scaling of self-stimulated spin echoes | Self-stimulated echoes have recently been reported in the high cooperativity
and inhomogeneous coupling regime of spin ensembles with superconducting
resonators. In this work, we study their relative amplitudes using
echo-silencing made possible by a fast frequency tunable resonator. The highly
anisotropic spin linewidth of Er$^{3+}$ electron spins in the CaWO$_4$ crystal
also allows to study the dependence on spin-resonator ensemble cooperativity.
It is demonstrated that self-stimulated echoes primarily result from a
combination of two large control pulses and the echo preceding it. | Sebastian de Graaf, Aditya Jayaraman, Sergey Kubatkin, Andrey Danilov, Vishal Ranjan | 2023-09-20T09:28:52Z | http://arxiv.org/abs/2309.11169v1 | # Scaling of self-stimulated spin echoes
###### Abstract
Self-stimulated echoes have recently been reported in the high cooperativity and inhomogeneous coupling regime of spin ensembles with superconducting resonators. In this work, we study their relative amplitudes using echo-silencing made possible by a fast frequency tunable resonator. The highly anisotropic spin linewidth of Er\({}^{3+}\) electron spins in the CaWO\({}_{4}\) crystal also allows to study the dependence on spin-resonator ensemble cooperativity. It is demonstrated that self-stimulated echoes primarily result from a combination of two large control pulses and the echo preceding it.
In conventional magnetic resonance spectroscopy, stimulated echoes (STE) are known to occur when more than two control pulses are applied to spins. Stimulated echoes refocus the polarization grating stored on the longitudinal axis [1], in contrast to Hahn echoes which refocus the coherence generated on the transverse axes. In specific cases, Hahn echo emissions can themselves induce further evolution of spins and stimulate echo emissions. Although first observed in 1954 [2], so called self-stimulated echoes (SSEs) have recently received renewed attention [3; 4]. This is because applications such as high sensitivity electron spin resonance spectroscopy [5; 6; 7; 8; 9] and microwave quantum memories [10; 11; 12; 13; 14; 15] make use of spin ensembles strongly coupled to superconducting resonators [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27], a regime where SSEs are prevalent.
The ensemble coupling of spins with a common resonator mode is quantified by the cooperativity \(C=4g_{\text{ens}}^{2}/\Gamma\kappa_{\text{tot}}\), where \(g_{\text{ens}}=g_{0}\sqrt{N}\), \(g_{0}\) the single spin-photon coupling strength, \(\Gamma\) the inhomogeneous spin linewidth, \(\kappa_{\text{tot}}\) the total loss rate of the resonator and \(N\) the number of spins. When \(C\ll 1\), emitted echo fields are dissipated from the resonator before they could interact with spin ensemble again. On the other hand, when \(C\gg 1\), a strong collective feedback effect of the emitted field on the spins, e.g. super-radiance [28] and radiation damping [29] can dominate the spin-dynamics. The intermediate regime of optimal impedance matching \(C=1\) is especially relevant for maximum efficiency quantum memories [30]. It is the purpose of this paper to experimentally study the scaling of self-stimulated echoes in these different regimes. Our study is, in particular, aided by the use a fast frequency tunable superconducting resonator [31] for controlled emission of radiation into the resonator [32].
Generation of SSEs can be understood using a simplified phase evolution in time, as proposed in Ref. [3] and schematically presented in Fig. 1(a). When strong inhomogeneities of Rabi angles of spins exist, a control pulse brings the spins to different points on the Bloch sphere, which for simplicity can be decomposed into a subset of ground state (\(g\)) and excited state (\(e\)) amplitudes. A second control pulse at a time \(\tau\) bifurcates the previous spin amplitudes into four subsets causing a refocusing at the time \(2\tau\) between two evolution trajectories, i.e. a conventional two-pulse Hahn echo. The emitted Hahn echo then itself acts like a pulse on spins such that new branches of spin evolution appear and additional refocusing events occur at a time \(3\tau\). Subsequent echoes create
more bifurcations and more refocusing events separated by \(\tau\).
Figure 1: Self-stimulated spin echoes (SSE). (a) A schematic of refocusing mechanism leading to self-stimulated echoes at \(3\tau,~{}4\tau,~{}5\tau...\) as originally described in Ref. [3]. (b) Measured magnitude of echo trains using two pulses of same amplitude and phase, duration 2 \(\mu\)s, and \(\tau=25\) \(\mu\)s. Two panels compare cases when the resonator is detuned to selectively suppress echo2 (top) or echo3 (bottom) emissions. Dashed curves in top (bottom) panels correspond to cases when the same resonator detuning pulse is applied between echo1 (echo2) and echo2 (echo3). Note that the dashed lines lie almost entirely on top of the solid lines, i.e. detuning in-between echoes has no effect. Measurements are done at \(C=3\).
We start our experimental studies by qualitatively verifying the sketch of Figure 1(a) which, in particular, illustrates that formation of SSEs requires phase evolution from all the pulses and echoes preceding it. Echo-trains measured using two control pulses of the same amplitude and phase are shown in Fig. 1(b). Note that the magnitude is plotted in the logarithmic scale. We observe that all subsequent echoes are suppressed when we detune the resonator frequency by an amount \(\Delta\omega\gg\kappa\) to suppress the emission of echo2 (top panel) [32]. Applying the same duration detuning pulses between the echoes (dashed curves) produces no change thus proving that the detuning pulses do not generate significant phase noise to cause a suppression of echoes. Same observation of subsequent echo suppression is made when echo3 (bottom panel) is silenced. These suggest that contribution of 2-pulse refocusing to SSE, e.g. from pulse1 and echo1 in echo3, is small. In the following, we expand on the preceding observations and semi-quantitatively study the relative amplitudes of SSEs using in-situ control of radiation fields in the resonator and spin-resonator cooperativity.
Our electron spins (with effective \(S=1/2\)) are provided by bulk doped Er\({}^{3+}\) substitutional ions in a CaWO\({}_{4}\) crystal with a nominal concentration of 50 ppm. The crystal is held with vacuum grease on a superconducting resonator of frequency \(\omega_{0}/2\pi=6.5\) GHz operating in the overcoupled regime with a loss rate of \(\kappa_{c}/2\pi=1.9\pm 0.1\) MHz. The bulk distribution of Er\({}^{3+}\) and narrow inductor width of 1 \(\mu\)m naturally result in extremely inhomogeneous Rabi angles and benefit the formation of SSE. Two additional properties are relevant to this study. Firstly the kinetic inductance of the superconducting resonators (film thickness = 50 nm, inductor width = 0.5 \(\mu\)m) made from NbN allows the resonance frequency to be rapidly tuned by passing a bias current through the inductor strip of the resonator [31]. Secondly, it is possible to access different cooperativity \(C\) in the same setup. This is because two isotopes of Er\({}^{3+}\), one without a nuclear spin \(I=0\) (77%) and the rest with \(I=7/2\)[33] couple with different number of spins at different transitions. Moreover, fine tuning of \(C\) is facilitated by the spin linewidth varying with the direction of the applied magnetic field as the highly anisotropic gyromagnetic tensor (\(\gamma_{ab}=117\) MHz/mT, \(\gamma_{c}=17\) MHz/mT) [34] responds to the charge-noise from crystal defects [35; 36]. The experiments are performed at the base temperature of a dilution refrigerator at 20 mK, with the magnetic field aligned with the \(c\)-axis (\(\phi\sim 0\)) unless mentioned explicitly. More details of the experimental setup can be found in Ref. [32].
The resonator tunability helps to control the back action of the echo field on the spins, that is to vary spin rotations during the echo emission, and study the amplitudes of subsequent SSE. As shown in the sketch of Fig. 2(a), two pulses of the same amplitude and phase are applied and the resonator detuned for 20 \(\mu\)s, a time longer than the echo duration, with varying \(\Delta\omega\) around echo1. Figure 2(b) shows the corresponding echo train traces (acquired at a large demodulation bandwidth of 100 MHz to account for the relatively large total loss rate \(\kappa_{\text{tot}}\sim 7\) MHz) near the \(I=0\) transition with \(C=3\) (see further below). The variation of echo1 magnitude versus normalized resonator detuning \(-\Delta\omega/\kappa_{\text{tot}}\) is plotted in Fig. 2(c) and the observed decay is well accounted for by the resonator filtering function \((\kappa_{\text{tot}}/2)/\sqrt{\Delta\omega^{2}+\kappa_{\text{tot}}^{2}/4}\)[32]. Similar to Fig. 1(b), subsequent echoes, echo2 and echo3 are progressively suppressed. To quantify their relative suppression, we plot
the amplitude of echo2 and echo3 as a function of echo1 and echo2, respectively, in Fig. 2(d). A linear dependence (proportionality constant 0.16 and 0.12, respectively) describes the echo2 and echo3 data well.
Figure 2: SSE response versus intra-cavity field. (a) An experimental sequence consisting of two control pulses of flip angles \(\beta_{i}\), and duration of 2 \(\mu\)s with a 20 \(\mu\)s long resonator detuning pulse across echo1 of varying \(\Delta\omega\). (b) SSE traces at different \(\Delta\omega\) and same flip angles \(\beta=\beta_{1}=\beta_{2}\). Larger noise floor is because of the larger measurement bandwidth \(BW\sim 100\) MHz compared to other plots acquired at \(BW\) of 2 MHz. (c) Measured (symbols) and theoretical (curve) echo amplitude against different resonator detunings. (d) Scaling of echo\(\{2,3\}\) amplitudes (measured: symbols, fits: lines) with corresponding changes in echo\(\{1,2\}\). (e) SSE traces for different flip angles \(\beta_{1}\) of the first control pulse and fixed \(\beta_{2}\). The sequence is shown in the inset. (f) SSE magnitude decay for different \(\beta_{1}\) versus echo number. Solid lines are calculated from Eq. 1. For all plots \(C=3\).
Full quantitative understanding of the scaling of SSE is challenging due to the lack of knowledge of exact spin frequency detuning and coupling strength distribution. Here, we use a minimalist model to explain the scaling of echo2 and echo3 using classical Bloch theory. Three pulses with arbitrary flip angles \(\beta_{i}\) produce a STE with an amplitude proportional to \(\sin(\beta_{1})\sin(\beta_{2})\sin(\beta_{3})\)[1], where we assume pulse delay \(\tau\ll T_{2},\ T_{1}\). Using control pulses of the same Rabi angle \(\beta\) and the fact that resulting echo1 fields are relatively much smaller, the resulting spin rotations from back action is \(\sin(\theta_{1})\approx\theta_{1}\). Then the STE contribution of echo2 is equal to \(\theta_{1}\sin^{2}(\beta)\), where \(\theta_{i}\) denotes the much smaller rotation angle from echo back action. Similarly, the 2-pulse Hahn echo contribution of echo2 (from pulse2 and echo1) is proportional to \(\theta_{1}^{2}\sin(\beta)\). The latter is smaller in magnitude than the STE contribution as long as \(\beta\gg\theta_{1}\). Thus linear scaling of echo2 with echo1 can be established. Similar arguments can be made for echo3 to show that the dominating contribution comes from a 3-pulse STE from pulse1, pulse2 and echo2, with a resulting echo3 proportional to \(\theta_{2}\sin^{2}(\beta)\). The proportionality constant extracted from slopes in two cases is found to be similar, 0.16 and 0.12, as expected from the model.
Overall our observations in Fig. 2(d) suggest that a SSE primarily consists of a 3-pulse STE from two large control pulses and the weak echo field preceding it. Barring common prefactors, we can thus quantify the magnitude of the \((i+1)^{\rm th}\) echo in the limit of \(\tau\ll T_{1}\) as
\[A_{\rm echo}^{i+1}\equiv\eta A_{\rm echo}^{i}{\rm sin}(\beta_{1}){\rm sin}( \beta_{2}), \tag{1}\]
where \(i>0\) is a positive integer, and a scaling factor \(\eta^{2}\) captures the fraction of power transferred to spins during the formation of an echo. To verify this equation further, we acquired SSE traces by varying the flip angle of the first control pulse \(\beta_{1}\) (Fig. 2(e)), while keeping \(\beta_{2}\) fixed. Their decay is plotted in Fig. 2(f). It has been previously shown that for strongly inhomogeneous Rabi angles in spin systems coupled to small mode volume resonators, spins for which pulse amplitudes amount to \(\pi/2\) and \(\pi\) contribute maximum to the Hahn echo [36; 37; 38]. This allows us to set \(\beta_{2}=90^{\circ}\), and proportionally vary \(\beta_{1}\) using the ratio of pulse amplitudes \(\beta_{2}/\beta_{1}\). The SSE decays calculated from Eq. 1 are plotted as solid lines in Fig. 2(f), and show an excellent agreement with measurements using the same scaling parameter \(\eta=0.21\) across the entire dataset. Moreover, \(\eta\sin^{2}(\beta)\approx 0.21\) is close to the measured slope in Fig. 2(d) acquired under the same experimental conditions.
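Numerically, Eq. 1 just says the echo train decays geometrically once \(\eta\) and the flip angles are fixed (a minimal illustration; the first-echo amplitude is arbitrarily normalised to one):

```python
import numpy as np

def sse_train(eta, beta1_deg, beta2_deg=90.0, n_echoes=6, A1=1.0):
    """Echo amplitudes A_1 ... A_n from the recursion A_{i+1} = eta*A_i*sin(b1)*sin(b2)."""
    ratio = eta * np.sin(np.radians(beta1_deg)) * np.sin(np.radians(beta2_deg))
    return A1 * ratio ** np.arange(n_echoes)

print(sse_train(eta=0.21, beta1_deg=90.0))   # each echo is 0.21x the previous one
print(sse_train(eta=0.21, beta1_deg=45.0))   # a weaker first pulse decays faster
```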
We now study the dependence of SSE amplitudes on spin ensemble-resonator cooperativity. To this end, we identify two transitions in the spectrum [Fig. 3(a)] belonging to nuclear spin isotopes \(I=0\) and \(I=7/2\) (\(m_{I}=7/2\) is the nuclear spin projection on the magnetic field axis). From fits performed to \(\kappa_{\rm tot}=\kappa+g_{\rm ens}^{2}\Gamma/(\Delta\omega_{s}^{2}+\Gamma^{2}/4)\)[18], we find coupling strengths \(g_{\rm ens}/2\pi=10\pm 1\) MHz and \(1.2\pm 0.1\) MHz, and spin linewidths \(\Gamma/2\pi=76\pm 5\) MHz and \(15\pm 1\) MHz, and corresponding cooperativity \(C=3\) and \(0.2\), respectively. Here \(\Delta\omega_{s}\) is the magnetic field dependent detuning of the spin transition frequency from the resonator. The difference in number of spins is consistent with the isotope and seven sub-level ground state populations in the \(I=7/2\) manifold. Echo response measured using the same control pulses (2 \(\mu\)s in duration, such that pulse bandwidth \(\ll\Gamma\)) at two transitions [Fig. 3(b)] shows strongly suppressed or absent SSE for the case of \(C\ll 1\), and supports similar observations made in Ref. [4].
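As a quick numerical check of these fitted numbers (a minimal sketch; all rates are quoted in MHz, i.e. already divided by \(2\pi\)):

```python
def kappa_tot(delta_s, kappa, g_ens, gamma):
    """kappa_tot = kappa + g_ens^2 * Gamma / (Delta_s^2 + Gamma^2/4), rates in MHz."""
    return kappa + g_ens**2 * gamma / (delta_s**2 + gamma**2 / 4)

# I = 0 transition on resonance: the 1.9 MHz bare linewidth is broadened to ~7 MHz,
# consistent with the total loss rate quoted in the Fig. 2 caption.
print(kappa_tot(0.0, kappa=1.9, g_ens=10.0, gamma=76.0))
```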
Figure 3: SSE response versus spin-resonator cooperativity. (a) Continuous wave spectroscopy near two Er\({}^{3+}\) transitions \(I=0\) and \(I=7/2\), \(m_{I}=7/2\) at zero angle (measured: symbols, fit:lines). (b) Spin energy relaxation (measured: symbols, fit:lines) using inversion recovery sequences for the two transitions at zero angle. (c) echo response using the control pulses with same power. (d) Spin resonance position (left axis, measured: symbols, theory: line) and spin linewidth (right axis) for the \(I=0\) transition extracted from continuous wave spectroscopy. The magnetic field angle \(\phi\) is relative to the \(c\)-axis of CaWO\({}_{4}\). (e) Decay of SSE magnitudes for different cooperativity, but similar \(\kappa_{\rm tot}\), obtained at different \(\phi\) for the \(I=0\) transition. Solid lines are calculated using Eq. 1. (f) scaling of the extracted scaling parameter \(\eta\) as a function of \(C\). Dashed line is a guide to the eye.
To investigate differences of spin dynamics between two Er isotopes, the spin-relaxation time is measured using an inversion recovery sequence (Fig. 3(c)). For \(I=7/2\), we observe an exponential recovery with a decay constant \(T_{1}=440\pm 11\) ms, a value consistent with a direct phonon process [32, 39]. In contrast, we observe a bi-exponential recovery for \(I=0\), with decay constants \(T_{1}^{\text{fast}}=4.7\pm 0.6\) ms and \(T_{1}^{\text{slow}}=97\pm 12\) ms. However, neither of the two values is compatible with a direct-phonon process (scaling as \(1/B^{5}\)), suggesting that a combination of strong collective radiative effects, i.e. superradiance and spatial spin diffusion across the low mode volume resonator [22] could be responsible. The role of incoherent radiation from enhanced spin relaxation towards formation of SSEs can, however, be ruled out as resonator detuning pulses of duration 20 \(\mu\)s applied in-between the echoes (dashed curves in Fig. 1(b)) do not alter the subsequent echoes. We also measure spin coherence times \(T_{2}\) at two transitions and find the contrasting SSE amplitudes to not be related to the relative \(T_{2}\) times. In fact, \(T_{2}=2.5\) ms for \(I=7/2\) is four times longer compared to that for \(I=0\) and possibly limited by instantaneous diffusion [40, 41].
Another control of cooperativity is achieved by different \(\Gamma\) of the spin ensemble obtained when rotating the applied magnetic field with respect to the \(c\)-axis of the crystal. Figure 3(d) shows measured magnetic field \(B_{\text{res}}\) at which the \(I=0\) transition is resonant with the resonator (left axis) and the extracted spin linewidth \(\Gamma\) (right axis). The \(B_{\text{res}}\) positions agree with the spin Hamiltonian of Er\({}^{3+}\) with a reasonable misalignment angle of \(2.5^{\circ}\) from the true \(c\)-axis. We observed a change in \(\Gamma\) from 30 MHz at \(\phi=2.5^{\circ}\) to 210 MHz at \(\phi=21^{\circ}\). Similar observations have been made previously [35, 36] and attributed to a combination of local electric fields from charge defects, charge compensation and lack of inversion symmetry at the substitutional Ca\({}^{2+}\) sites. On the other hand, the extracted ensemble coupling strength \(g_{\text{ens}}\) decreases by only 10% in this \(\phi\) range. The small variation in \(g_{\text{ens}}\) is consistent with \(g_{0}\) calculated from the anisotropic gyromagnetic tensor, and the fact that the same number of spins are coupled to the resonator due to the bulk distribution of Er\({}^{3+}\).
For SSE measurements, we choose slightly off-resonant \(B\) fields at different \(\phi\) to achieve a maximum echo amplitude [41] and somewhat similar \(\kappa_{\text{tot}}\) values (between 2.8 MHz and 4 MHz) for better comparison. The SSE magnitudes measured with the same control pulses and delay \(\tau=25\)\(\mu\)s are plotted as a function of echo number in Fig. 3(e) for different cooperativity \(C\). We note that the off-resonant \(C\) is extracted by comparing the intra-cavity field measured at a repetition rate \(\gamma_{\text{rep}}\ll T_{1}\) (spins saturated) with that taken at \(\gamma_{\text{rep}}\gg T_{1}\) (spins polarized) [12, 14]. For all values of \(C\), we observe an exponential decay of echo amplitudes, similar to Fig. 2(e) and Ref. [4]. For extracting \(\eta\) using Eq. 1, once again we set \(\beta_{1,2}=90^{\circ}\) to select the spins that maximally contribute to SSE amplitudes. The calculated SSE decays and corresponding \(\eta\) for different \(C\) are plotted in Fig. 3(e, f). Interestingly, the scaling parameter \(\eta\) increases with \(C\) in an apparent linear fashion. In contrast, \(\eta=0.21\) extracted in Fig. 2(f) at a larger \(C=3\) is smaller than \(\eta=0.3\) for \(C=1.5\) in Fig. 3(e), suggesting the role of larger \(\kappa_{\text{tot}}\) in the smaller spin rotation during echoes, crudeness of the model and a more complex dependence of \(\eta\) on \(C\).
In conclusion, we have used control of intra-cavity field, in particular through echo-silencing, and cooperatively tuning to study scaling of self-stimulated echoes in a strongly inhomogeneously coupled spin ensemble to a small mode volume superconducting resonator. Our results demonstrate that the amplitude of a self-stimulated echo primarily arises from a three pulse stimulated echo using two large control pulses and the preceding echo field. Further studies will target a larger range of \(C\), especially at a fixed \(\kappa_{\text{tot}}\), to map out the scaling and decay of SSE amplitudes against \(C\). STE and SSE in combination with phase imprinting [42, 32] could also be used to implement selective in situ magnetic resonance techniques such as diffusion spectroscopy and imaging [43].
We acknowledge the support from the UK Department for Science, Innovation and Technology through the UK national quantum technologies program. S.D.G. acknowledges support by the Engineering and Physical Sciences Research Council (EPSRC) (Grant Number EP/W027526/1). The Chalmers group acknowledges the support from the Swedish Research Council (VR) (Grant Agreements No. 2019-05480 and No. 2020-04393), EU H2020 European Microkelvin Platform (Grant Agreement No. 824109), and from Knut and Alice Wallenberg Foundation via the Wallenberg centre for Quantum Technology (WACQT). This work was performed in part at Myfab Chalmers.
|
2309.00064 | Ethical Framework for Harnessing the Power of AI in Healthcare and
Beyond | In the past decade, the deployment of deep learning (Artificial Intelligence
(AI)) methods has become pervasive across a spectrum of real-world
applications, often in safety-critical contexts. This comprehensive research
article rigorously investigates the ethical dimensions intricately linked to
the rapid evolution of AI technologies, with a particular focus on the
healthcare domain. Delving deeply, it explores a multitude of facets including
transparency, adept data management, human oversight, educational imperatives,
and international collaboration within the realm of AI advancement. Central to
this article is the proposition of a conscientious AI framework, meticulously
crafted to accentuate values of transparency, equity, answerability, and a
human-centric orientation. The second contribution of the article is the
in-depth and thorough discussion of the limitations inherent to AI systems. It
astutely identifies potential biases and the intricate challenges of navigating
multifaceted contexts. Lastly, the article unequivocally accentuates the
pressing need for globally standardized AI ethics principles and frameworks.
Simultaneously, it aptly illustrates the adaptability of the ethical framework
proposed herein, positioned skillfully to surmount emergent challenges. | Sidra Nasir, Rizwan Ahmed Khan, Samita Bai | 2023-08-31T18:12:12Z | http://arxiv.org/abs/2309.00064v1 | # Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond
###### Abstract
In the past decade, the deployment of deep learning (Artificial Intelligence (AI)) methods has become pervasive across a spectrum of real-world applications, often in safety-critical contexts. This comprehensive research article rigorously investigates the ethical dimensions intricately linked to the rapid evolution of AI technologies, with a particular focus on the healthcare domain. Delving deeply, it explores a multitude of facets including transparency, adept data management, human oversight, educational imperatives, and international collaboration within the realm of AI advancement. Central to this article is the proposition of a conscientious AI framework, meticulously crafted to accentuate values of transparency, equity, answerability, and a human-centric orientation. The second contribution of the article is the in-depth and thorough discussion of the limitations inherent to AI systems. It astutely identifies potential biases and the intricate challenges of navigating multifaceted contexts. Lastly, the article unequivocally accentuates the pressing need for globally standardized AI ethics principles and frameworks. Simultaneously, it aptly illustrates the adaptability of the ethical framework proposed herein, positioned skillfully to surmount emergent challenges.
## 1 Introduction
Artificial intelligence (AI) has shown immense promise in transforming healthcare through the application of advanced technologies like machine learning (ML) and deep learning (DL) [1, 2]. These techniques enable the processing and analysis of vast amounts of medical data, leading to improved patient care through pattern identification and the extraction of valuable insights. However, despite these advancements, certain limitations hinder the widespread adoption and integration of AI into clinical practice, particularly in terms of explainability and interpretability [3, 4].
Unlike traditional computer programs that follow a predefined set of instructions, ML and DL models learn from data to identify patterns and make predictions [5]. This self-programming ability, coupled with the immense computing power of modern computers, allows models to process vast amounts of healthcare data quickly. While simple tasks can be automated with conventional programs, ML is particularly suited to complex cognitive tasks in healthcare, such as natural language translation, predictive maintenance of medical equipment, and large-scale image analysis for object recognition [6]. In low-stakes applications, these models can autonomously make decisions. For example, they can assist healthcare professionals in diagnosing diseases or recommend personalized treatment plans. In high-stakes situations, these models can significantly enhance the efficiency and accuracy of human decision-making [7]. However, it is crucial to ensure that these systems support human users and operators who bear the final responsibility for decision-making. The models should also act as "super-assistants," providing an additional layer of insight and analysis to aid healthcare professionals in their decision-making process [8]. Yet alongside these benefits, the limitations of AI cannot be ignored [9].
The size and complexity of models, particularly deep neural networks (DNNs), have increased in pursuit of better predictive performance. However, there has been criticism of solely focusing on predictive accuracy. Relying on large, opaque models raises concerns about the lack of transparency in
decision-making processes. In the healthcare context, the lack of interpretability [10] and explainability [11] in ML and DL models can lead to ethical issues and a loss of trust [12, 13]. Understanding why a particular decision is made is just as crucial as knowing what the decision is. Insufficient interpretability hinders the widespread and responsible adoption of ML, especially in high-stakes domains where transparency and accountability are paramount. However, addressing the issue of interpretability can open doors to valuable future applications of ML in healthcare [14].
Therefore, there is a growing need for explanations that help healthcare professionals understand and trust AI systems. Explanations can provide insights into the reasoning behind ML predictions, ensuring that the learned patterns align with expectations and intentions. Transparent and interpretable ML models are essential for fostering trust, ethical decision-making, and responsible adoption of AI in healthcare. Additionally, the lack of transparency in traditional AI algorithms, particularly DL models, leads to them being seen as enigmatic systems with concealed internal operations. This lack of clarity presents notable obstacles when it comes to establishing trust and gaining approval from healthcare professionals, patients, and regulatory bodies. However, these challenges can be overcome by providing explanations and interpretations that shed light on the inner workings and decision-making processes employed by the model to achieve its results [4].
Interpretability refers to the ability to understand the inner workings or mechanics of a ML model, for example when predicting temperature over time in a normal regime. It allows us to gain insights into how the model generates its predictions without necessarily understanding the underlying reasons or causality behind those predictions [15, 16]. In other words, interpretability is a desirable component of explainability, but not all interpretable models are explainable [17]. Explainability additionally deals with counterfactual cases, enabling us to examine what would have changed if certain features or values had been different, and it aims to develop a comprehensive understanding of the ML system, including both observed and unobserved factors, towards creating a global theory. The black-box nature of AI models restricts the transparency of their outputs, making it difficult to comprehend the reasoning behind their decisions [18], as illustrated in Figure 1. Interpretability thus focuses on comprehending the operational aspects of the model, while explainability goes beyond it by addressing questions about the model's behavior on new data and the consequences of specific actions or changes on its predictions, as shown in Figure 2. In healthcare, where decisions can have life-and-death consequences, it is crucial to have a clear understanding of how AI algorithms arrive at their recommendations. Without explainability, healthcare professionals may find it challenging to trust and validate the outputs of AI systems, leading to resistance to adopting these technologies [19].
Moreover, the lack of interpretability hampers the ability to identify and mitigate biases in the data or algorithms used by AI models. Biased data or biased model predictions can have serious consequences in healthcare, resulting in inaccurate diagnoses, suboptimal treatments, and potential harm to the patients. Without the ability to understand and explain the reasoning behind AI-generated recommendations, it becomes difficult to identify and address these biases effectively. The absence of explainability in AI systems also raises ethical concerns. In healthcare, it is essential to provide justifications and explanations for decisions that impact patients' well-being. When AI systems cannot provide transparent and interpretable explanations for their recommendations, it becomes challenging
Figure 1: Machine Learning from data collection to its interpretability and explainability to humans
to understand and justify the basis of these decisions. This lack of transparency may result in ethical dilemmas and undermine the accountability of AI systems in healthcare [20]. For example, the authors of [21] discussed GPT-4 as an AI chatbot for medicine, noting benefits such as providing accessible medical information, round-the-clock availability, reducing healthcare professionals' workload, and aiding patient education and language translation. However, it has certain limitations, including the inability to understand the full context, a lack of clinical judgment, and reliance on training data. Risks involve potential misinformation, over-reliance on AI, ethical and legal concerns, and the absence of emotional support. It should therefore be used cautiously to complement human expertise rather than replace it entirely. Hence, the question arises: what is an intelligent system, and what does it entail? AI involves the pursuit of imbuing machines with intelligence, wherein intelligence refers to the capability of an entity to operate effectively and with foresight within its surroundings [22].
To overcome these limitations, the development and adoption of Explainable Artificial Intelligence (XAI) techniques in healthcare are imperative. XAI aims to enhance the transparency and interpretability of AI models, providing insights into the decision-making process and enabling clinicians to understand the reasoning behind AI-generated recommendations [23].
Various approaches and techniques have been proposed to achieve explainability in AI models. One approach is to use interpretable algorithms that provide explicit rules or representations of the decision-making process. These models offer transparency by explicitly showing how specific inputs lead to certain outputs [24]. Another approach is to develop post-hoc explainability methods that provide explanations after the AI model has made its predictions. These methods shed light on the internal workings of black-box models by generating explanations such as feature importances, saliency maps, or textual descriptions. Techniques such as LIME [25] and SHAP [26] are examples of post-hoc explainability methods commonly used in healthcare.
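As a minimal illustration of such post-hoc methods, the sketch below computes SHAP attributions for a simple tabular model; the synthetic patient features, the choice of a random-forest risk model, and the outcome definition are hypothetical placeholders rather than a validated clinical pipeline.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical tabular records standing in for patient data (not a real clinical dataset).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(20, 90, 500).astype(float),
    "bmi": rng.normal(27, 5, 500),
    "systolic_bp": rng.normal(130, 15, 500),
    "glucose": rng.normal(100, 20, 500),
})
# Synthetic "risk score" driven mainly by age and glucose.
y = 0.03 * X["age"] + 0.02 * X["glucose"] + rng.normal(0, 0.2, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc explanation: SHAP assigns each feature a contribution to each individual prediction.
explainer = shap.Explainer(model, X)
explanation = explainer(X.iloc[:100])

# Local view: contributions for one "patient"; global view: mean |contribution| per feature.
print(dict(zip(X.columns, np.round(explanation.values[0], 3))))
print(dict(zip(X.columns, np.round(np.abs(explanation.values).mean(axis=0), 3))))
```

LIME follows the same post-hoc idea but fits a local surrogate model around each individual prediction instead of using Shapley-value attributions.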
Furthermore, advancements in DL have led to the emergence of techniques specifically designed to enhance explainability in DNNs. Methods like attention mechanisms and gradient-based techniques such as GradCAM [27] provide insights into the areas of focus and decision-making process of the models [28]. The benefits of XAI in healthcare are numerous. By integrating transparency and interpretability into AI models, these systems can instill trust and confidence among healthcare professionals and patients. Clinicians can better understand and validate the outputs of AI algorithms, leading to increased adoption and collaboration between human experts and AI systems. Moreover, it can also facilitate regulatory compliance by enabling audits and assessments of AI models to ensure fairness, accountability, and compliance with ethical standards [29, 30]. In medical imaging, for example, radiologists can benefit from such systems that provide explanations for their findings, aiding in the detection and diagnosis of diseases. Clinical decision support systems (CDSS) can also provide justifications for treatment recommendations, empowering healthcare professionals to make informed decisions based on a better understanding of the underlying reasoning. To address these limitations, there is a growing need for the development and adoption of XAI techniques in healthcare [31]. By fostering explainability, AI systems can instill trust, improve regulatory compliance, and enhance collaboration between human experts and AI algorithms as they provide insights into the decision-making process and enable clinicians to understand the reasoning behind AI-generated recommendations.
The rapid evolution of AI technologies has outpaced the formulation of comprehensive ethical guidelines, raising concerns about the potential misuse and unintended consequences of these powerful tools [32, 33]. In the realm of healthcare, these concerns encompass a wide spectrum of issues, including data privacy and security, algorithmic bias, accountability, and the potential dehumanization of patient care. The design and implementation of AI systems should be underpinned by ethical considerations that prioritize transparency, fairness, and the responsible handling of patient data. The need for an ethical framework becomes even more pronounced as AI is poised to extend its influence beyond healthcare into various domains, such as education, transportation, and governance [34].
While the benefits of AI in healthcare are evident, a critical examination of its ethical implications is crucial to avoid unintended negative consequences. The potential for biases embedded within algorithms to exacerbate health disparities and perpetuate social injustices underscores the necessity of vigilance in algorithm development. Striking a balance between innovation and ethical reflection necessitates interdisciplinary collaboration involving AI developers, healthcare professionals, ethicists, policymakers, and patients [35]. By fostering open dialogues among these stakeholders, it becomes possible to create a dynamic ethical framework that adapts to the evolving landscape of AI applications and promotes responsible AI development [36]. The application of AI in healthcare brings forth several limitations that need to be addressed for its successful integration into clinical practice [37].
This paper comprehensively examines the ethical dimensions of AI development, with a focus on healthcare. It outlines a framework emphasizing transparency, fairness, and human-centricity while delving into the challenges of AI integration in healthcare. The paper acknowledges AI's limitations, highlighting the need for ongoing research, preserving human empathy, and fostering responsible AI practices. It underscores the significance of education, advocates for global ethical standards, and emphasizes the adaptive nature of the framework. Ultimately, this paper offers a profound exploration of AI ethics, guiding its responsible advancement amidst complex societal considerations.
## 2 Black Box and Lack of Transparency
In the field of AI, black box models refer to ML algorithms or DNNs that produce accurate predictions or decisions but lack explainability. These models can provide remarkable results in tasks such as image recognition, natural language processing, or autonomous driving. However, the inner workings of these models are often complex and challenging to explain in human terms [38, 39]. This lack of interpretability can limit our ability to understand why certain decisions are made, leading to potential biases, discrimination, or errors that go unnoticed [25].
ML has the ability to develop algorithms for diagnosis, prognosis, and outcome prediction by utilizing various features. However, the lack of transparency regarding the reasoning/understanding behind these algorithms creates a "black box" situation, where the inputs and outputs are not easily explainable. While this may be acceptable in certain fields such as business decisions or human behavioral studies, it becomes problematic in clinical management, where decisions often involve critical, life-or-death situations [40].
Clinicians facing such scenarios are understandably concerned about trusting the model's appropriateness for their patients, its accuracy in guiding clinical judgment, and its alignment with existing knowledge of human disease. There are three key aspects to address: trust, consistency, and explanation [41].
Conventional medical decisions are based on a thorough understanding of pathophysiological mechanisms, supported by cell-based experiments, genomic and metagenomic analysis, animal studies, histopathological observations, clinical trials, and cohort observations. Evidence-based medicine has long been the gold standard for treatment strategies. The questions which arise in [42, 43] are as follows:
1. Can clinicians make vital decisions without grasping the fundamental reasoning behind them?
2. Can patients undergo treatment, surgery, or cancer therapy without fully understanding the rationale behind the chosen course of action?
3. Do doctors have the capability to adequately elucidate to patients why they prefer a specific treatment over other alternatives?
4. How do we convince clinicians and scientists that ML models can excel in clinical decision-making beyond evidence-based diagnosis and treatment?
Figure 2: (a) Explainability vs (b) Interpretability.
5. The lack of trust in ML algorithms poses a considerable challenge in implementing AI medicine. In case of adverse outcomes, who bears responsibility for the resulting consequences?
Many AI algorithms, such as DNNs, operate as black-box models, making it challenging to understand the reasoning behind their decisions. While these models may produce accurate results, their lack of transparency raises concerns about trust, interpretability, and accountability. The inability to explain how decisions are made hinders the acceptance and adoption of AI in critical healthcare scenarios [4].
To illustrate the challenges posed by black-box models in healthcare, let's consider an example of using DNNs for diagnosing diseases from medical images, such as chest x-rays [44]. DL models, particularly convolutional neural networks (CNNs), have shown remarkable accuracy in detecting various diseases from images. However, understanding how these models arrive at their predictions is often elusive due to their black-box nature. In the case of chest x-ray diagnosis, a DL model is trained on a large dataset of labeled images to learn patterns and features indicative of different diseases, such as pneumonia [45] or lung cancer. The model goes through numerous iterations, adjusting its internal parameters until it can accurately classify the images based on the training data. Once trained, the black-box nature of the model becomes apparent. When a new chest x-ray image is presented to the model for diagnosis, it produces a prediction, such as "pneumonia present" or "no pneumonia." However, the model does not provide explicit information about which regions or features in the image led to that specific prediction. Such unexplained outputs are not acceptable to clinicians; hence, a surrogate model is applied to explain and verify the validity of the results, as exhibited in Figure 3.
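As a rough sketch of how such a saliency-style explanation can be produced, the code below implements a basic Grad-CAM pass over a ResNet-18 classifier in PyTorch. The randomly initialized model, the two-class head, and the random input tensor are placeholders; this is an illustrative recipe rather than the exact pipeline behind Figure 3.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Placeholder classifier with two outputs (e.g. "pneumonia" / "no pneumonia"); untrained weights.
model = resnet18(num_classes=2).eval()

activations, gradients = {}, {}
def save_activation(module, inp, out):
    activations["value"] = out
def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0]

# Hook the last convolutional block, whose feature maps Grad-CAM weights by their gradients.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed chest x-ray
logits = model(x)
class_idx = logits.argmax(dim=1).item()
logits[0, class_idx].backward()            # gradient of the predicted class score

# Grad-CAM: channel weights = spatially averaged gradients; heatmap = weighted sum of feature maps.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalized heatmap in [0, 1]
print(cam.shape)                           # torch.Size([1, 1, 224, 224])
```

The normalized heatmap is then typically overlaid on the input image so that clinicians can see which regions most influenced the prediction, as in Figure 3.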
Similarly, breast cancer is a prevalent disease, and early detection is crucial for successful treatment [46], and CNNs have also shown promising results in accurately identifying malignant tumors in mammogram images [47].
However, their black-box nature presents challenges in understanding their decision-making process. When training a DL model for breast cancer detection, a large dataset of labeled mammogram images is used to train the model to recognize patterns indicative of cancerous lesions. The model learns to analyze various features, such as the shape, texture, and density of potential tumors, in order to make predictions. Once trained, the black-box nature of the model becomes apparent during the inference phase. When a new mammogram image is provided to the model for evaluation, it produces a prediction indicating whether a malignant tumor is present or not. However, it does not explicitly provide information on which regions or characteristics of the mammogram contributed to that particular prediction [48].
Moreover, the same can be observed in AI technologies that have demonstrated their potential in addressing various aspects of autism. ML algorithms can analyze large datasets to identify patterns and markers associated with Autism spectrum disorder (ASD), aiding in early diagnosis and personalized treatment plans. AI-driven virtual reality environments and social communication tools offer innovative platforms to develop and practice social skills in a controlled and supportive setting [49]. ASD is a highly heterogeneous condition, and interventions need to be tailored to an individual's unique needs. Black box models might struggle to explain why a particular intervention is recommended for a specific
Figure 3: Example of explainability of deep learning model: Pneumonia detection using GradCAM
individual, making it challenging to adapt and personalize interventions effectively [50].
This lack of interpretability can be a significant barrier to the adoption of AI in healthcare. Clinicians need to understand the rationale behind the model's decision in order to validate its predictions and make informed treatment decisions. They may want to identify specific areas of concern within the images that influenced the model's classification. Patients may likewise desire transparency and explanations for the model's predictions to gain confidence in the diagnosis and treatment recommendations [51]. Additionally, patients may feel uneasy about relying on AI systems if they cannot comprehend how the diagnosis was made. They may have concerns about the transparency of the process, the reliability of the model, and the potential biases in the data used for training [52].
## 3 Bias and Fairness
The increasing adoption of DL and AI in healthcare holds significant promise for improving diagnostics, treatment, and healthcare delivery. However, the presence of bias in AI presents substantial challenges to achieving fairness in healthcare. Bias can be categorized as statistical or social, and it can lead to unequal outcomes and hinder equitable healthcare provision [53, 54]. For instance, the authors of [55] highlighted the significance of this issue through their study, which revealed racial bias in a commonly used algorithm in the U.S. healthcare system. The algorithm relied on healthcare spending to determine patients requiring extra medical attention. While spending might serve as a reasonable indicator of illness severity and the need for additional care, the authors showed that implementing the algorithm would result in a reduction of over 50% in the number of Black patients identified as needing extra care. Similar racial and gender bias can be observed in online AI image creation tools such as Playground AI [56], as exemplified in Figure 4.
Data limitations pose a critical challenge in healthcare AI, as biased datasets can result in biased algorithmic outputs. Statistical bias occurs when a dataset does not accurately represent the true distribution of a population, potentially leading to algorithmic outputs that deviate from actual estimates. In the biomedical field, where large datasets are crucial, advanced information extraction methods are required to handle the complexity of the data. However, if the dataset used to train AI algorithms is not diverse and representative, it can introduce statistical bias and compromise the accuracy and generalizability of the algorithms. A notable example can be seen in cardiology [57], where heart attacks are frequently misdiagnosed in women. Prediction models for cardiovascular disease that claim to predict heart attacks several years in advance are often trained on datasets predominantly comprising male subjects [58]. However, cardiovascular disease manifests differently in men and women, and an algorithm primarily trained on male data may not accurately diagnose female patients [59].
Social bias in healthcare AI refers to inequities resulting in sub-optimal outcomes for specific groups within the population. This type of bias is particularly concerning, as it unintentionally discriminates against vulnerable groups, perpetuating disparities [60, 61]. For example, in dermatology, CNNs used to classify skin lesions are often trained on datasets predominantly composed of samples from patients with fair skin tones. This limited representation of patients with darker skin tones and its variations
Figure 4: Machine Learning systems (Playground AI) exhibiting gender and racial bias
leads to lower diagnostic accuracy when these networks are tested with images of patients with darker skin tones. This disparity is especially problematic given the higher mortality rate for melanoma in patients with darker skin tones compared to Caucasian patients [62], and it can lead to misdiagnosis and sub-optimal care.
The impact of data bias is not only limited to specific medical domains but extends across various areas of healthcare. For example, automated sleep scoring algorithms trained on data from young and healthy individuals struggle to accurately diagnose sleep disorders in older patients. Despite progress in developing these algorithms, achieving high performance in clinical routines remains challenging. Cognitive biases and the lack of diversity in the training data contribute to inter- and intra-scoring disagreement, posing limitations [63].
Moreover, in drug development and clinical trials, bias in AI algorithms can have significant consequences. Clinical trials often have predominantly male participants from limited age groups and similar ethnic backgrounds, leading to gender and ethnic biases. Such biases at the pre-clinical stages can obscure how women respond to newly developed drugs, and the results obtained from these early studies can then propagate into the datasets used to train AI algorithms [64, 65].
While data bias is a critical concern, it is important to note that bias in clinical datasets is not the sole source of bias in healthcare AI. Researchers and clinicians can inadvertently introduce unconscious judgments and biases into their research, which may manifest in biased AI algorithms. These biases can further perpetuate health inequities and hinder the potential benefits of AI in healthcare. Fairness is fundamental; its principles include fair treatment for all individuals, group fairness (avoiding group-based disparities), statistical parity (equalizing outcomes across groups), equal opportunity (balanced true-positive rates), and fairness through unawareness (ignoring certain attributes). Achieving fairness requires addressing biased training data, navigating complex interactions, and making trade-offs between fairness and accuracy [66].
Therefore, data can be susceptible to various types of bias, which can significantly impact healthcare outcomes. Fairness in AI, in turn, focuses on ensuring that the outcomes and decisions produced by AI systems do not discriminate against specific groups or individuals based on inherent characteristics such as race, gender, age, or socioeconomic status; the goal is to mitigate bias and achieve equitable treatment across diverse demographics.
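To make two of the group-fairness notions above concrete, the short sketch below computes the statistical parity difference and the equal opportunity difference from binary predictions; the arrays are toy values rather than real patient data, and values near zero indicate parity on the corresponding criterion.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between group 1 and group 0."""
    tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return tpr[1] - tpr[0]

# Toy example: binary outcomes, predictions, and a sensitive attribute for ten patients.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print("statistical parity difference:", statistical_parity_difference(y_pred, group))
print("equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))
```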
### Data-Driven Bias
Although ML algorithms analyze data autonomously, the potential for bias remains. Data bias can stem from how the dataset used for analysis was assembled or from the data itself, for example datasets that lack diversity. For instance, in the study by [55], racial bias was identified in the initial data, leading to the erroneous conclusion that Black patients are more medically intricate and costly to the healthcare system than White patients; this oversight occurred because the model failed to consider issues of access. A related problem is documented in [67], with statistics exhibited in Table 1. Similarly, Amazon's attempt to create an AI-driven recruitment tool faced problems because the algorithm exhibited a negative bias against women due to the predominantly male-oriented data used for training [70].
_Measurement bias:_ occurs when proxy variables are used to measure certain features, resulting in inaccurate assessments. For instance, in healthcare, relying solely on body mass index (BMI) as a measure of overall health can introduce measurement bias, as BMI may not accurately capture individual variations in body composition and health risks [71].
_Omitted variable bias:_ occurs when important variables are excluded from the model, leading to incomplete predictions. For example, if an AI algorithm predicts readmission rates after surgery but overlooks socioeconomic status as a crucial factor, it may underestimate the influence of social
| Dataset | I | II | III | IV | V | VI | Total |
|---|---|---|---|---|---|---|---|
| PPB [67] | 6.22% (79) | 34.02% (432) | 12.83% (163) | 4.25% (54) | 18.27% (232) | 24.41% (310) | 1270 |
| IJB-A [68] | 2.60% (13) | 33.00% (165) | 44.00% (220) | 12.60% (63) | 2.00% (10) | 5.80% (29) | 500 |
| Adience [69] | 3.14% (69) | 60.94% (1337) | 22.15% (486) | 8.07% (177) | 4.38% (96) | 1.32% (29) | 2194 |

Table 1: Skin-tone-driven bias exhibited in different datasets [67]; each entry gives the percentage and (count) of images in skin-tone categories I–VI.
determinants of health on readmissions, resulting in incomplete predictions [72].
_Representation bias:_ arises from non-representative sampling during data collection, leading to gaps and anomalies. In healthcare, if clinical trials predominantly recruit participants from urban areas and under-represent rural communities, the resulting algorithm may fail to address the specific health needs and disparities faced by rural populations [71].
_Aggregation bias:_ occurs when assumptions about individuals are made based on population-level observations. In healthcare, an algorithm that analyzes average treatment response without considering individual variations may overlook subgroups of patients who respond differently to specific treatments, leading to biased assumptions about treatment effectiveness [71].
_Longitudinal data fallacy:_ occurs when cross-sectional analysis is used instead of longitudinal analysis, leading to different conclusions. In healthcare, if an AI algorithm analyzes a single snapshot of patient data without considering their longitudinal health history, it may miss important patterns and provide inaccurate predictions or diagnoses [73].
_Linking bias:_ emerges when network attributes misrepresent user behavior, often due to biased network sampling or overlooking certain user groups. In healthcare, this bias can manifest in social network analyses of patient interactions, potentially skewing assessments of information dissemination or disease spread if certain patient groups are underrepresented or their interactions are not adequately captured [73].
Explainability can play a vital role in addressing biases in healthcare data by providing transparency, interpretability, and insights into the decision-making process of AI models. Its techniques can help researchers understand the factors that contribute to biases and enable them to make necessary adjustments such as feature importance analysis. It can help to identify variables with limitations or biases in measurement and longitudinal data, allowing researchers to refine their measurement strategies. Methods like subgroup analysis and counterfactual explanations help uncover biases in data representation, enabling researchers to identify disparities and take corrective actions. Furthermore, aggregation bias can be addressed by providing individual-level insights (Individual Conditional Expectations (ICE) Plots) and estimating treatment effects on specific subgroups, thus reducing bias arising from aggregating data [74]. Additionally, techniques like model-agnostic explanations and fairness-aware methods can also aid in mitigating biases in sampling. These approaches help researchers identify biases introduced during data collection or sampling processes and ensure more representative and fair predictions. Visualization can also facilitate uncovering various biases as well [75].
### Systematic bias
It refers to biases that are inherent in the system or process itself and consistently impact outcomes in a particular direction. In the context of algorithms, systematic biases are the result of design choices, user behaviors, or other factors that consistently introduce biases into algorithmic decision-making or recommendations.
In healthcare, systematic biases in algorithms can have far-reaching effects on patient care, health outcomes, and equitable access to healthcare services. They can perpetuate disparities, reinforce existing biases, and contribute to unequal treatment or representation of certain demographic groups. Recognizing and addressing systematic biases is crucial for promoting fairness, accuracy, and transparency in algorithmic systems within the healthcare domain.
_Algorithmic Bias:_ refers to biases that are introduced by the algorithm itself and are not present in the input data. In healthcare, algorithms used for medical diagnosis or treatment recommendations can exhibit algorithmic bias, leading to biased outcomes. For example, if an algorithm is trained on biased or incomplete medical data, it may disproportionately impact certain demographic groups, resulting in disparities in diagnosis or treatment recommendations [76].
_User Interaction Bias:_ occurs when users, including patients and healthcare providers, exhibit their own biased behaviors and preferences while interacting with algorithmic recommendations. In healthcare, this bias can manifest in various ways. For instance, a patient may have preconceived notions or personal biases that influence their acceptance or rejection of algorithmic medical advice, leading to sub-optimal healthcare decisions [77].
_Presentation Bias:_ refers to biases that arise from how information is presented. In healthcare platforms, certain medical information or treatment options may be prioritized or overlooked based on
how they are presented to users. This bias can influence user behavior by shaping their understanding, perception, and decision-making regarding healthcare options [77].
_Ranking Bias:_ occurs when search results or recommendations in healthcare applications are influenced by biases in the ranking algorithm. This bias can lead to the overemphasis of certain medical treatments or information based on their perceived relevance or popularity, potentially influencing user behavior towards specific healthcare choices [77].
_Popularity Bias:_ arises when more popular medical interventions or treatments are favored and promoted in healthcare systems, while potentially effective but less popular alternatives are neglected. This bias can impact user behavior by directing attention towards widely known or commonly used interventions, potentially overlooking personalized or innovative healthcare options [78].
_Emergent Bias:_ arises as a result of changes in population, cultural values, or societal knowledge over time, which can affect the outcomes and relevance of algorithmic recommendations in healthcare. For example, as societal values evolve, the preferences or priorities of patients and healthcare providers may change, leading to biases in algorithmic decision-making or recommendations [79].
_Evaluation Bias:_ occurs during the assessment of healthcare algorithms. Biased evaluation benchmarks or inappropriate metrics may lead to unfair assessments and inadequate representation of diverse populations. This bias can affect the development and deployment of algorithms, potentially influencing user behavior if biased algorithms are considered trustworthy or reliable. For instance, it may overlook considerations such as sensitivity to detecting rare conditions or false-positive rates, which are critical for patient safety and overall healthcare quality [71].
### Generalization Bias
One of the critical challenges in deploying AI systems in healthcare is the limited generalization of models and ensuring their safety. Limited generalization refers to the inability of AI models to perform accurately and reliably on data that differs significantly from the training data; for instance, the data collected in [80] does not contain images from different ethnicities, which hinders model generalization. This limitation can have serious consequences in healthcare, where diverse patient populations, evolving medical practices, and complex physiological conditions exist. In addition to limited generalization, ensuring the safety of AI systems is paramount to prevent harm to patients and maintain trust in AI technologies. AI models trained on specific datasets may struggle to generalize their knowledge to new, unseen data in healthcare settings [81]. There are several reasons for limited generalization:
_Data Distribution Shift:_ if the distribution of the data used for training and the real-world data encountered during deployment differ significantly, models may struggle to perform well. For example, a model trained on data from one hospital may not generalize effectively to data from a different hospital due to variations in patient demographics, equipment, and clinical practices [82]. A simple per-feature check for such a shift is sketched after this list.
_Sampling bias:_ arises when subgroups are non-randomly selected, affecting the generalizability of findings. For instance, if a study on the effectiveness of a new medication primarily includes younger adults and excludes older adults, the findings may not accurately reflect the medication's efficacy and safety in the elderly population [83, 84]. Moreover, AI models may have limited exposure to rare medical conditions or unique patient cases during training. Consequently, their ability to accurately diagnose or treat such cases may be compromised. In healthcare, it is essential to consider the tail-end distribution of data to ensure models can handle rare and critical scenarios [85].
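As referenced above, a simple and deliberately crude way to flag a distribution shift between the training data and newly encountered deployment data is a per-feature two-sample test, sketched below; the features, significance threshold, and toy populations are illustrative, and dedicated drift-detection methods would be preferable in practice.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_shift(train, deploy, feature_names, alpha=0.01):
    """Flag features whose distributions differ between training and deployment data."""
    flagged = []
    for j, name in enumerate(feature_names):
        stat, p_value = ks_2samp(train[:, j], deploy[:, j])
        if p_value < alpha:
            flagged.append((name, round(stat, 3), p_value))
    return flagged

# Toy example: the deployment population is older than the training population.
rng = np.random.default_rng(0)
train = np.column_stack([rng.normal(55, 10, 1000), rng.normal(120, 15, 1000)])
deploy = np.column_stack([rng.normal(70, 10, 500), rng.normal(121, 15, 500)])

print(detect_shift(train, deploy, ["age", "systolic_bp"]))
```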
### Human Bias
It refers to the presence of a subjective and unfair or discriminatory outlook within the information collected, processed, and employed to train algorithms, models, or systems. There are several types of human bias that can manifest in different contexts:
_Historical Bias:_ refers to biases that exist in the world and are reflected in the data generation process. In healthcare, historical bias can be seen in the underrepresentation of certain demographic groups in clinical trials or medical research. For example, if a particular medication is primarily tested on a specific population (e.g., predominantly White males), there may be limited evidence on how it affects other populations, leading to biased healthcare decisions for those groups [71].
_Population Bias:_ occurs when the user population of a platform differs significantly from the target population. In healthcare, population bias can be observed in studies or datasets that primarily include data from specific regions or communities, leading to limited generalizability of findings. For instance, if a study on diabetes management predominantly includes data from urban areas, the findings may not accurately represent the experiences and needs of rural populations [86].
_Self-Selection Bias:_ occurs when individuals voluntarily choose to participate in a study or engage with a particular platform. In healthcare, this bias can arise in self-reported surveys or patient-generated data. For example, if an online health forum primarily attracts individuals with a particular condition who are actively seeking information and support, the data collected from that platform may not reflect the experiences and perspectives of individuals who do not actively engage in online discussions [87].
_Social Bias:_ refers to the influence of others' actions on our own judgment. In healthcare, social bias can manifest in the form of ratings, reviews, or recommendations influencing healthcare decisions. For instance, if a healthcare provider relies solely on patient ratings and reviews to select a specialist, they may unknowingly favor providers who have received more positive feedback, potentially overlooking highly competent specialists who have fewer online ratings [88, 89].
_Behavioral Bias:_ arises from differences in user behavior across platforms or datasets. In healthcare, behavioral bias can be seen in digital health applications or wearable devices that collect user-generated health data. For example, if a mobile health app is primarily used by individuals who are already health-conscious and actively managing their well-being, the data collected may not accurately represent the broader population's health behaviors and needs [86, 90].
_Temporal Bias_: arises from differences in populations and behaviors over time. In healthcare, this bias can be observed in longitudinal studies tracking health outcomes or disease trends. For instance, if a study investigates the effectiveness of a particular treatment based on data collected from a specific time period, the findings may not reflect the current healthcare landscape or advancements in medical practices [91].
_Content Production Bias:_ stems from structural, lexical, semantic, and syntactic differences in user-generated content. In healthcare, content production bias can be seen in patient health records or online health forums. For example, if electronic health records (EHR) predominantly use medical terminology and abbreviations, the language used in those records may not accurately capture patients' subjective experiences or provide a comprehensive understanding of their health conditions [92].
## 4 Human-Centric AI
Human-centric AI refers to the design, development, and deployment of AI systems that prioritize and enhance the well-being, needs, and values of humans. The concept stems from the recognition that AI technologies are becoming increasingly integrated into various aspects of our lives and society. Therefore, there is a need to ensure that these technologies serve human interests, rather than solely pursuing technological advancement for its own sake; the idea is illustrated in Figure 5.
### Contextual Intelligence
Contextual intelligence, also known as contextual understanding or contextual reasoning, refers to an AI system's ability to comprehend and interpret information within its context, considering relevant factors, background knowledge, and the broader situational understanding [93]. This contextual awareness enables the AI to make more accurate and relevant decisions, predictions, or recommendations based on the specific circumstances it encounters [94].
The limitation of contextual intelligence in AI arises from the fact that creating an AI system that fully comprehends and interprets context in the same way humans do is a complex challenge. While AI has shown remarkable advancements in various domains, achieving human-level contextual understanding remains elusive [95].
Here are some key reasons why contextual intelligence can be a limitation in AI:
_Complexity of Context:_ Context in real-world scenarios can be complex and multi-faceted. Human cognition has evolved to understand subtle nuances and abstract concepts. However, AI systems
typically rely on patterns and correlations within the data they are trained on. It is still challenging to encode the intricacies of context in AI algorithms [96].
_Lack of Common Sense:_ Humans often use common sense to fill in gaps or infer information not explicitly stated. AI models, particularly those based on statistical learning, might struggle with common-sense reasoning, making them less effective in handling unfamiliar or ambiguous situations [97].
_Adaptation to Dynamic Environments:_ Contexts can change rapidly, especially in real-time scenarios. AI models may not be able to adapt quickly enough to these dynamic environments, leading to sub-optimal performance and potential errors [98].
Several incidents illustrate that contextual intelligence is still a limitation in AI. In 2018, an AI-powered facial recognition system used by the Metropolitan Police in London to identify suspects flagged 104 previously unknown people as suspected of committing crimes, of which only 2 were accurate matches [99]; the system was found to be biased against people of color because it was trained on a dataset that was not representative of the population of London. In 2020, an AI-powered self-driving car was involved in a fatal accident: the car was unable to understand the context of the situation and collided with a pedestrian [100]. AI systems need to be able to understand the deeper meaning of a situation in order to avoid making such mistakes. As AI systems become more sophisticated, we can expect to see improvements in contextual intelligence. However, it is likely that contextual intelligence will always be a limitation in AI, as AI systems will never be able to fully understand the world in the same way as humans do.
### Equity and Access
Equity and access play vital roles in the advancement and implementation of AI technologies [101], ensuring a fair distribution of benefits and opportunities for everyone. Nevertheless, AI encounters various challenges in upholding these principles. Biases ingrained in historical data can lead to prejudice in algorithms, perpetuating the existing inequalities in areas like hiring and loan approvals [102]. Moreover, the lack of diversity within AI development teams may result in models that fail to adequately address the diverse needs of the users. The digital divide further exacerbates the issue, creating discrepancies in accessing AI services and information, particularly for disadvantaged populations [103]. Additionally, cost and affordability pose barriers, hindering access to AI advancements for those with limited financial resources. Cultural and language barriers can also restrict the inclusivity and effectiveness of AI systems. In the domain of healthcare, disparities in accessing advanced AI diagnostics can impact patient outcomes [104, 105, 106]. For instance, in 2019, Amazon's Alexa was criticized for not being accessible to people with disabilities. The voice assistant's responses were not always clear or understandable, and the assistant's features were not always compatible with screen readers [107].
The user-centered design can enhance the usability of diverse user groups. Bridging the digital divide is crucial to making AI technologies accessible to underserved communities [108]. Implementing
Figure 5: Human Centered Artificial Intelligent systems
transparent and explainable (XAI) models fosters trust and accountability. Collaborative partnerships involving AI developers, policymakers, community representatives, and advocacy groups can identify and address potential biases and inequities. By proactively addressing these challenges and promoting equitable access to AI, we can harness its potential to benefit all members of society and avoid exacerbating existing disparities.
#### 4.2.1 Human-AI interaction
Human-AI interaction refers to how humans interact with AI systems. Although progress has been made in this area, there are still limitations and challenges that affect effectiveness and user experience. These challenges include AI's limited ability to understand complex language, grasp contextual nuances, and provide transparent explanations for its decisions [109, 110]. AI's limited learning from user interactions and difficulty in handling uncertainty and ambiguity also pose obstacles. Users may develop over-reliance on AI, leading to frustration and reduced trust when AI fails to meet their expectations. It is also important for AI systems to ensure cultural sensitivity and to prioritize users' welfare [110].
One of the real incidents that exemplifies the challenges in AI's understanding of complex language occurred in 2016 when Microsoft launched the AI-powered chatbot, "Tay," on Twitter. Within hours of interacting with users, Tay started posting offensive and inappropriate tweets, learning from harmful interactions. The incident exposed the difficulties in teaching AI to grasp contextual nuances and provided a stark reminder of the importance of robust moderation mechanisms to avoid unintended behavior [111, 112].
Another real-life challenge stems from biases in AI systems. In 2018, Amazon's facial recognition system, Rekognition, came under scrutiny for exhibiting racial and gender biases. It significantly had higher error rates for identifying darker-skinned individuals and females. This incident underscored the need for addressing bias in AI models, ensuring fairness and transparency, and using diverse and representative training data [113, 114].
Human-AI interaction, particularly in the medical domain, rests on a close coupling between human expertise and AI algorithms. An exemplary instance of this synergy is medical imaging, for example with multi-spectral image datasets [115]. Consider radiology, where AI assists radiologists in analyzing vast volumes of medical images: algorithms can quickly identify anomalies in x-rays or MRIs, flagging potential issues that require closer inspection [116]. Nevertheless, the radiologists' years of training and experience are essential in validating the findings, understanding clinical context, and making informed diagnostic decisions. This collaborative process not only enhances accuracy but also exemplifies how AI serves as a powerful tool for skilled practitioners [117, 118].
In the development of AI-driven medical treatments, human-AI interaction is equally pivotal [119, 120]. For example, in personalized medicine, AI algorithms can analyze a patient's genetic and molecular data to tailor treatments with higher precision. Yet it is the medical professionals who interpret these insights, weighing them against individual patient history and overall health. In this context, AI accelerates the discovery of potential treatments, but the final decision-making rests with the medical experts who understand the holistic picture of the patient's well-being [121, 122].
Ethical considerations underscore the importance of human-AI collaboration in medical contexts [123]. In organ transplantation, AI could assist in matching donors with recipients efficiently. However, ethical nuances such as patient preferences, medical histories, and urgency require human judgment [124, 125]. AI's role is to facilitate and augment the decision-making process, but it cannot replace the ethical reflection and empathy that human professionals bring to the table.
### Erosion of Human Connection and Empathy
The increasing integration of AI in healthcare has raised concerns about the potential erosion of human connection and empathy in patient care. While AI offers valuable advancements and efficiencies, it also presents limitations that can impact the human aspect of healthcare [126].
Emotional intelligence (EI) refers to the ability to recognize, understand, manage, and use emotions effectively in oneself and others. It plays a crucial role in human interactions, decision-making, and overall health [127, 128]. As AI continues to advance, the question arises: "Can AI develop emotional intelligence, and what are the implications of such development?"
AI-driven healthcare platforms can utilize facial recognition and voice analysis to detect emotions in patients during interactions [129]. For instance, an AI-powered virtual nurse could assess a patient's emotional state during a telehealth appointment, recognizing signs of distress or anxiety. This information can be used to tailor the conversation, providing compassionate and empathetic responses. If the AI detects anxiety, it might say, "I understand this can be stressful."
Another example is the use of AI-powered companion robots in elder care facilities. These robots can sense emotions in residents through facial expressions and gestures [130]. If a resident appears sad, the robot might engage in activities like playing soothing music or sharing cheerful anecdotes to uplift their mood. While the robot's responses are programmed, the integration of EI principles will enhance the user experience [131].
One primary concern is the growing use of AI-powered virtual assistants and chatbots [132] in patient interactions. While these tools can streamline communication and provide quick responses, they lack the emotional depth and understanding of human healthcare providers. Patients may feel a sense of detachment and isolation when their interactions become solely AI-mediated, potentially reducing the personalized and empathetic care they receive. Overreliance on AI for emotional support could have negative consequences, especially for individuals with severe mental health conditions.
One incident that gained attention was related to a mental health chatbot called "Woebot." Woebot is an AI-powered virtual assistant designed to provide emotional support and therapeutic interventions to users experiencing symptoms of depression and anxiety. While many users found it helpful to have a tool they could access anytime [133], others reported feeling disconnected and frustrated by the lack of genuine human interaction. Some users expressed that the chatbot's responses, although based on evidence-based techniques, felt robotic and impersonal, leading to a sense of isolation [134]. Hence, AI lacks true empathy, relying on algorithms and data patterns to assess patient conditions and recommend treatments. Although AI can assist in diagnosing medical conditions, it may not fully grasp the emotional and psychological aspects of a patient's well-being.
Thus, the lack of emotional intelligence in AI systems is another obstacle to achieving effective human-AI interaction. Instances of AI failing to recognize and respond appropriately to human emotions have been witnessed, hampering the ability to form empathetic connections with users. Google's Duplex, an AI system capable of making phone calls and scheduling appointments on behalf of users, faced criticism for not always disclosing its AI identity during interactions [135, 136], highlighting the importance of transparency and ethical considerations in AI development. Furthermore, the idea of AI having emotional capabilities raises philosophical questions about consciousness and what it truly means to experience emotions. Can a machine truly feel emotions, or is it simply simulating responses based on patterns and algorithms? This touches on the broader debate about the nature of human consciousness and the limitations of AI. Moreover, even if AI can mimic certain aspects of emotional intelligence [137, 138], the ethical and societal implications are noteworthy. One concern is the potential for AI to manipulate human emotions. If AI systems can detect emotions, can they be used to tailor content or advertisements to provoke desired emotional responses? This raises questions about consent, privacy, and the potential for emotional exploitation. This lack of genuine emotional understanding can also hinder AI's ability to respond effectively to patients' emotional needs and concerns, potentially leaving them feeling unheard or emotionally unsupported.
### Trust in AI
Reliability and accountability are crucial aspects when implementing AI in healthcare due to the potential impact on patient outcomes and safety. While AI has shown promising results in various medical applications, ensuring its reliability and holding it accountable for its decisions are paramount to maintaining patient trust and upholding ethical standards [139, 140].
Reliability in AI healthcare systems refers to the consistent and accurate performance of the technology across different scenarios and datasets. One example of reliability is in diagnostic imaging, where AI algorithms can assist radiologists in detecting abnormalities from medical images like X-rays, MRIs, and CT scans. For instance, AIdoc helps radiologists identify critical findings like intracranial hemorrhages, fractures, and pulmonary embolisms. This technology demonstrates reliability by consistently highlighting relevant findings and aiding radiologists in providing timely and accurate diagnoses. However, ensuring that such AI systems are validated across diverse patient populations, equipment variations, and clinical settings is essential to maintain their reliability.
Accountability involves holding AI systems responsible for their decisions and outcomes. In the context of healthcare, this means understanding how an AI system arrived at a particular diagnosis or treatment recommendation. An illustrative example is IBM's Watson for Oncology, which assists oncologists in suggesting personalized treatment plans for cancer patients. While it aims to provide evidence-based recommendations, reports have highlighted instances where Watson for Oncology suggested treatments that contradicted medical guidelines [141]. This highlights the importance of transparency and accountability, ensuring that AI-driven decisions are explainable, justifiable, and aligned with established medical practices.
To enhance reliability and accountability, continuous monitoring and feedback mechanisms are crucial. Take the case of predictive analytics in predicting patient deterioration. Hospitals like Mount Sinai in New York have implemented AI systems that analyze patient data to predict potential deterioration hours before it occurs, allowing medical teams to intervene promptly [142]. Incorporating AI into healthcare also requires legal and regulatory frameworks that ensure both patient safety and accountability. The Food and Drug Administration (FDA) regulatory approach to AI in healthcare, for example, seeks to ensure that AI applications meet the same safety and effectiveness standards as traditional medical devices. This approach ensures that AI systems, such as those used for diagnosing diseases or making treatment recommendations, are held accountable for their performance and outcomes [143].
## 5 Ethical Concerns and Value Alignment
AI is revolutionizing healthcare by bringing about unprecedented advancements in medicine and addressing major global healthcare challenges. For instance, AlphaFold, an AI-powered algorithm, successfully solved the long-standing problem of protein folding, which had hindered progress in biology and medicine for decades. The potential applications of AI in healthcare are vast, including rapid diagnosis, personalized care, and reduction of unnecessary outpatient visits, resulting in significant cost savings [144]. The ethical principles of beneficence, non-maleficence, autonomy, and justice require that healthcare decisions be made with proper justification and accountability. Without XAI, it becomes challenging to ensure ethical decision-making, informed consent, and accountability for AI-driven recommendations. Healthcare datasets primarily consist of patient information and are therefore subject to strict regulation of medical privacy, encompassing the security of medical records and the confidentiality of conversations between healthcare professionals [145, 146].
Modern concerns include managing the disclosure of information to insurance companies, employers, and third parties. With the advent of patient care management systems (PCMS) and EHR, new privacy challenges have emerged, which must be balanced with efforts to reduce duplicated services. Several countries have enacted privacy protection laws, including Australia, Canada, Turkey, the United Kingdom (UK), the United States (US), New Zealand, and the Netherlands. However, the effectiveness of these laws in practice varies. The Health Insurance Portability and Accountability Act (HIPAA) [147] was passed by the US government in 1996 to strengthen healthcare data protection.
In 2018, the General Data Protection Regulation (GDPR) [148] replaced the Data Protection Directive (DPD) in the European Union (EU), establishing comprehensive data protection and privacy regulations. The GDPR grants EU residents the right to request that search engines remove personal information associated with their names from search results. GDPR applies not only within the EU and European Economic Area (EEA) but also to the transfer of personal data outside these regions. Similar concerns are also discussed in [149], encompassing patient privacy and confidentiality, challenges in obtaining informed consent, limited data ownership, potential inaccuracies, biases, commercialization risks, and security vulnerabilities. Addressing these concerns through transparency, privacy protection, informed consent, equitable practices, and robust security measures is essential for the responsible and ethical utilization of EHR data beyond direct patient care [150]. Overviews of the GDPR and HIPAA provisions relevant to data regulation are given in Figures 6 and 7, respectively.
### Patients Consent for Data
From the viewpoint of healthcare, securing valid consent is crucial when engaging in AI-powered data analytics and profiling, as outlined by GDPR Article 6. The traditional data protection approach is built upon the "notice and consent" model, where obtaining consent from the data subject is central. This model is designed to protect the right to "informational self-determination," enabling individuals to either approve or decline the processing of their data after being adequately informed. Nevertheless, this notification process might lose its significance when the data subject lacks awareness or control over current or future data processing [151]. The Article 29 Working Party's guidelines regarding consent under Regulation 2016/679 stipulate three essential requirements that must be fulfilled:
1. Specific Consent Criterion: Consent must allow further processing only if it aligns with a legal basis and remains consistent with the original purpose for data collection.
2. Granularity of Consent Criterion: Consent for profiling must be distinct and separate from granting access to a service.
3. Freedom of Consent Criterion: Consent cannot serve as a valid legal basis for personal data processing when a significant imbalance exists between the data subject and the controlling entity. When considering data analytics and profiling conducted through AI processing, it is imperative to adhere to all of these prerequisites [152].
Figure 6: Overview of GDPR articles relevant to data protection
Similarly, HIPAA is a pivotal US law that safeguards individuals' health data. One of the key ethical considerations within HIPAA is the emphasis on patients' consent for the usage of their protected health information (PHI). This emphasis is outlined in various sections of HIPAA, most notably in Article 164.508, which pertains to the conditions for disclosure of PHI.
Under HIPAA's provisions, healthcare providers and entities, known as "covered entities," are required to obtain explicit and informed consent from patients before disclosing or using their PHI for purposes beyond direct treatment, payment, and healthcare operations. The informed consent process serves as a critical ethical safeguard, ensuring that patients are fully informed about the potential uses of their health data and the entities that will access it. This aligns with the principles of transparency, autonomy, and respect for individuals' rights to control their PHI.
§164.508 not only emphasizes the necessity of obtaining patients' consent but also lays out the specifics of what this consent should entail. It requires that consent forms be written in plain language, clearly explaining the purpose of data usage and disclosing the individuals or entities that may access the data. This promotes transparency and allows patients to make informed decisions about the use of their health data in AI applications and other healthcare contexts [153].
Within the landscape of AI-powered healthcare, where AI algorithms analyze patients' data for predictive analytics and treatment recommendations, adhering to HIPAA's emphasis on patients' consent is crucial. Furthermore, HIPAA grants patients the right to access and amend their health information under §164.524. This aligns with the ethical principle of individual control and participation in data governance, empowering patients to play an active role in managing their health data, even in the context of AI applications [154].
Figure 7: Overview of HIPAA clauses relevant to data protection
### Data Privacy and Security
Automated decision-making, profiling, and the utilization of ML techniques are reshaping data processing, yet they bring forth concerns of bias and privacy invasion. In response, the GDPR emerges as a crucial framework to ensure equitable data handling, particularly in the context of AI systems. Under GDPR's Article 5 [155], 5-1a, the focus is on transparent and lawful data processing. When it comes to AI, transparency becomes a complex endeavor, necessitating individuals to be accurately informed about how their data is being manipulated while also ensuring equitable automated decisions. GDPR Article 5-1b draws a vital link between the purpose of data processing and its legality. However, as data finds new uses in AI applications, the challenge of re-purposing data without explicit consent emerges. This becomes a prominent concern, given that individuals may not have provided consent for their data to be used in the context of future automated processing. The principle of data minimization, as outlined in GDPR Article 5-1c, comes into play when AI is applied to extensive data analytics. This principle underscores the importance of utilizing the necessary and relevant data for the intended purposes. In the domain of AI, where data sets are expansive, understanding the collective impact on groups of data subjects becomes pivotal, as anonymization and pseudonymization techniques might not fully address the shared interests of individuals connected through correlated attributes. Data accuracy, an essential facet emphasized in GDPR Article 5-1d, has become paramount in the AI landscape. Ensuring the precision and timeliness of data is crucial to prevent detrimental profiling and erroneous decision-making based on inaccurate information. GDPR Article 5-1e stresses the necessity of limiting the retention of personal data to what is essential for processing. However, in the context of AI and big data, data storage for archival, research or statistical purposes may extend beyond the original processing scope. The challenge lies in striking a balance between maintaining data for legitimate purposes and adhering to the principle of storage limitation. Furthermore, GDPR Article 5-1f introduces the security principle, mandating the safeguarding of data integrity and confidentiality during processing. This principle aligns with the essence of AI, where data security is integral to maintaining public trust in algorithmic decision-making systems. Crucially, GDPR's accountability principle, detailed in Article 5-2, extends its reach to AI applications. This principle necessitates controllers to not only adhere to data protection regulations but also demonstrate compliance. Instances like the well-known Cambridge Analytica case serve as a stark reminder that stakeholders, policymakers, technology giants, and governments need to address citizens' concerns about the reasoning and consequences behind algorithmic decisions [156]. In this evolving landscape, GDPR's principles cast a spotlight on the need for responsible and transparent AI practices, striking a balance between innovation and the protection of individual rights and interests.
HIPAA's Privacy Rule §164.514 establishes guidelines for the handling of PHI in the US healthcare system. Covered entities are generally prohibited from using or disclosing PHI without individual authorization, except in specific cases outlined by the rule. These exceptions include treatment, payment, healthcare operations, public health activities, and law enforcement purposes. The rule emphasizes the principle of using only the minimum necessary information for a given purpose, and it addresses incidental uses and disclosures. Covered entities must have contracts with their business associates, who handle PHI on their behalf. Research use of PHI is allowed under certain conditions, with safeguards such as Institutional Review Board (IRB) approval. Individuals have the right to access their PHI, request amendments, and receive an accounting of disclosures. While HIPAA provides the framework for PHI protection, details can vary based on the roles and circumstances, and it's important to refer to up-to-date regulations and guidance [157].
The HIPAA Privacy Rule aims to protect patients by ensuring that their health information is not used or disclosed without their consent or legal justification. This rule defines Protected Health Information (PHI) as any data, in any format, that can identify an individual based on their health, care, or payment history §§164.501 & 160.103. It also covers both private and public entities. It generally takes precedence over state laws regarding health information privacy. However, if a state law offers stricter privacy protections, then both the state law and HIPAA must be followed §160.203.
One key principle of the law is the "Minimum Necessary Standard." This principle dictates that any communication about a patient should only include the least amount of information needed for its purpose §164.502[b-1]. However, there are exceptions; for instance, the rule does not apply when the information is used for treatment, payment, or healthcare operations, among other scenarios. The suggested practice is to use the minimum necessary information. If all identifying details are stripped from the data, it is no longer considered PHI, and the restrictions do not apply §164.514[a]. The law also entails that patients must be given a Notice of Privacy Practices (NPP), which outlines how their information can be used, especially for treatment, payment, and health care operations §164.520[b-1][ii-A]. Healthcare providers can use or disclose PHI for these purposes, and while they can obtain consent for each disclosure related to treatment, it is not mandatory §164.506[b-1] [158].
Moreover, it also requires covered entities to ensure the privacy, accuracy, and accessibility of all electronic PHI (ePHI) they create, transmit, or store, to defend against foreseeable misuses or exposures of ePHI, and to ensure that their staff complies with these standards §164.306[b] [159].
In the context of AI-powered healthcare systems, this provision assumes greater significance. AI algorithms often process vast amounts of health data to generate insights and recommendations. By applying the minimum necessary standard, healthcare providers, and AI developers can ethically manage patient data, ensuring that AI systems can only access the data required for specific tasks while preserving patients' privacy rights.
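To make the minimum necessary standard more concrete from a developer's point of view, the following is a minimal, illustrative sketch in Python of filtering a patient record down to a task-specific whitelist of fields and pseudonymising the identifier before the data reaches an AI component. The field names, the whitelist, and the salted-hash scheme are assumptions chosen purely for illustration; they are not prescribed by HIPAA or by any particular system.

```python
# Illustrative sketch only: "minimum necessary" filtering plus pseudonymisation
# before patient data is handed to an AI component. Field names, the whitelist,
# and the hashing scheme are assumptions, not HIPAA requirements.
import hashlib

ALLOWED_FIELDS = {"age", "blood_pressure", "heart_rate", "lab_results"}  # assumed task-specific whitelist

def minimum_necessary(record: dict, allowed=ALLOWED_FIELDS) -> dict:
    """Keep only the fields needed for the AI task; drop direct identifiers."""
    return {k: v for k, v in record.items() if k in allowed}

def pseudonymise_id(patient_id: str, salt: str = "site-secret") -> str:
    """Replace the identifier with a salted hash so records stay linkable but not directly identifying."""
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:16]

record = {"patient_id": "12345", "name": "Jane Doe", "age": 54,
          "blood_pressure": "130/85", "heart_rate": 72, "lab_results": {"HbA1c": 6.1}}
safe_record = minimum_necessary(record)
safe_record["pseudo_id"] = pseudonymise_id(record["patient_id"])
print(safe_record)  # identifiers removed, clinical fields retained
```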
### Patients' Rights to the Data
GDPR ensures the protection of data subjects by outlining a set of rights. However, when applied to AI-based processes, interpreting the implications of these rights becomes intricate. Article 15 grants individuals the right to access information about their data processing, including details about automated decision-making logic and its consequences. The exact extent of disclosing "logic involved" remains unclear, whether it pertains to general methods or specific applications. Article 17's right to erasure, allowing data removal, can impact an algorithmic model's credibility [160].
Similarly, Article 20's right to data portability, permitting the transfer of personal data, can affect algorithmic accuracy and confidentiality, potentially influencing the larger dataset. Article 21's right to object enables individuals to halt data processing, ensuring data minimization and protecting against data misuse. Article 22, crucial in the context of AI, prohibits automated decisions if they produce significant effects. However, this might be misinterpreted as AI decisions are primarily but not exclusively automated. The notion of "legal effects" or significant impact, including work performance, economic situation, health, etc., might not be clear to data subjects. Article 22-2 provides exceptions, allowing automated decisions if they are contractually necessary, legally authorized with safeguards, or based on explicit consent. Article 22-4 prohibits using sensitive data for automated decisions, with an exception. Yet, AI can infer sensitive data from non-sensitive data, potentially leading to unauthorized processing. The impact of the GDPR on AI reveals that "sensitive" data can be deduced from "non-sensitive" data, posing challenges. For instance, sexual orientation can be inferred from seemingly unrelated data like activity or likes. Similarly, non-sensitive data can act as proxies for sensitive data, leading to possible unlawful discrimination [161].
Subsequently, under HIPAA's framework, patients are granted substantial rights over their health data: HIPAA's ethical framework places a strong emphasis on patients' rights to access and control their health information, as detailed in §164.524 [162, 163]. This provision explicitly outlines the rights granted to individuals concerning their PHI and underscores the ethical principles of transparency, individual autonomy, and active participation in data governance.
Under HIPAA §164.524[a], patients are granted the authority to access their own health records held by covered entities. This right ensures that individuals have the ability to review the information that is collected and maintained about their health. By allowing patients to access their health records, HIPAA promotes transparency in healthcare practices and upholds the ethical principle that individuals should have knowledge and control over their own personal health data.
HIPAA goes further to affirm patients' rights to control their health information through §164.524[b], which grants individuals the right to request amendments to their health records. If a patient believes that their health information is incorrect or incomplete, they can formally request that the covered entity make necessary corrections. This provision aligns with the ethical principle of accuracy and respect for individual autonomy, ensuring that patients have the authority to maintain accurate and up-to-date health records.
HIPAA also acknowledges patients' interest in understanding who has accessed their health information and for what purpose. §164.524[c] outlines the right to receive an accounting of disclosures, which provides patients with information about how their health records have been shared. This transparency empowers patients to monitor the usage of their health information, contributing to ethical accountability and reinforcing the principle of individual control [164].
Moreover, although HIPAA and GDPR do not directly mention AI, several of their clauses are relevant to AI applications and can be challenged by the new ways in which AI processes personal data [165, 166].
## 6 Way forward
In the rapidly evolving landscape of AI, the need for a robust ethical framework has arisen due to the ethical challenges posed by AI. This framework is designed to regulate AI's impact on individuals' lives and interactions, ensuring societal benefit, protection of human rights, and respect for individuals' privacy and autonomy. By integrating various components such as governance, ethics, human oversight, education, and global standards, this framework provides a comprehensive roadmap for the responsible development and deployment of AI technologies.
### Safe AI
As AI continues to permeate various aspects of our lives, the ethical considerations surrounding its design and deployment have gained paramount importance. We present a meticulous examination of a novel ethical framework comprising six pillars - Sensitivity, Evaluation, User-Centricity, Responsibility, Beneficence, and Security - that collectively lay the foundation for a holistic and safe AI ecosystem, illustrated in Figure 8. Each pillar is dissected, analyzed, and contextualized within the broader landscape of AI ethics, providing valuable insights for researchers, practitioners, and policymakers navigating the complex ethical challenges associated with AI technologies. Below, we delve into each pillar in turn.
At the core lies sensitivity, the bedrock of ethical AI. It promotes recognizing and respecting diverse user groups. Given AI's potential to perpetuate biases and stereotypes, this pillar supports continuous monitoring and mitigation of biases through data curation, algorithm design, and stakeholder diversity. By fostering sensitivity, AI creators prevent inadvertent reinforcement of societal prejudices, cultivating a digital realm that embraces inclusivity and fairness.
Figure 8: Safe AI Ethical framework
The evaluation pillar champions meticulous testing and assessment methodologies to ascertain the ethical soundness of AI systems. In a bid to ensure ethical AI, fairness and unbiased outcomes are paramount. This pillar proposes a comprehensive evaluation approach, encompassing technical prowess, societal implications, and risk assessment. By incorporating risk assessment, it aids in identifying and mitigating potential threats and vulnerabilities in AI systems. Thus, AI practitioners, equipped with a robust evaluation, can identify and rectify biases, ensuring that the technology is steered onto an ethical trajectory.
The user-centricity pillar accentuates AI designs attuned to user needs, experiences, and emotions. Through the integration of contextual intelligence and emotional intelligence, AI technologies can better understand user intentions, preferences, and emotions, leading to more personalized and empathetic interactions. In this user-focused paradigm, AI enriches human capabilities while safeguarding autonomy and privacy.
The responsibility pillar encapsulates the principles of transparency and accountability in AI development and deployment. This pillar recommends transparent system design, explainability, and traceability to instill public trust and address concerns about algorithmic opacity. Furthermore, developers should embrace their accountability for the outcomes of AI systems, fostering a culture of responsible innovation that considers both short-term benefits and long-term consequences.
Beneficence embodies the moral obligation to ensure that AI technologies promote the prosperity of individuals and society at large. This pillar supports the integration of sustainability, resilience, and reliability into the system. By adopting a long-term perspective, AI developers can anticipate potential harms, design for system robustness, and establish mechanisms for adaptive responses to changing circumstances, thus upholding the moral precept of beneficence.
Security focuses on safeguarding AI systems against adversarial attacks, unauthorized access, and unintended consequences. Through rigorous audits and adversarial tests, developers are required to identify vulnerabilities and weaknesses that malicious actors could exploit. A commitment to security mitigates the potential risks associated with AI technologies, shielding user data and the integrity of the systems themselves.
The integration of ethical considerations is essential to harness the full potential of AI technologies while minimizing their potential harms. By embracing these principles, stakeholders can collectively foster a more equitable, transparent, and responsible AI ecosystem that aligns with societal values and aspirations.
### Governance and Collaboration
With the advancement of AI and its integration into various aspects of society, the need for effective governance and collaboration mechanisms is paramount. The development of AI systems, particularly those designed for governance and collaboration, presents unique challenges that necessitate a comprehensive framework presented in Figure 9. The intricate relationships between Human Oversight and Intervention (HOI), Multi-Stakeholder Engagement (MSE), Privacy by Design (PbD), Safe AI, Ethical Governance Council (EGC), Awareness Programs, Continuous Improvement and Adaptability (CIA), and Inter-Generational Considerations (IGC) form the backbone of a comprehensive AI governance and collaboration framework. These components are interdependent, each influencing and enriching the others, thereby creating a resilient and ethically sound AI ecosystem that aligns with societal values and needs across generations.
HOI serves as a pivotal element in the development and deployment of AI systems. It involves the active participation of human experts in monitoring and controlling AI actions, especially in critical decision-making scenarios (feedback loop). This human involvement mitigates the risks of biased or erroneous outcomes that AI systems might produce. In parallel, HOI also relates closely to PbD. Privacy concerns emerge due to the sensitive nature of data processed by AI systems. A symbiotic relationship between HOI and PbD ensures that privacy considerations are woven into the very fabric of AI systems, with human oversight acting as a check on potential privacy breaches.
MSE recognizes the diverse interests and perspectives of various stakeholders in the AI domain. It promotes inclusivity, bringing together experts, policymakers, industry leaders, and civil society representatives to collectively steer the development of AI governance (legislation) mechanisms as well as interdisciplinary engagement. In addition, engaging the public, end-users, and those affected by AI systems in ethical discussions and decision-making processes is a crucial step toward democratizing AI. Their voices and concerns can provide invaluable perspectives that might be missed by technologists or policymakers. This engagement also affects safe AI, as stakeholders' input enhances the identification of potential risks and safeguards that need to be integrated into AI systems. In turn, safe AI contributes to the reliability of AI systems, which, when shared with stakeholders through MSE, builds mutual trust and transparency.
PbD is a proactive approach to privacy that integrates privacy considerations into the foundation of technology, systems, and processes. In the context of AI, this approach is crucial due to the processing of personal and sensitive data. Guiding principles include data minimization, obtaining user consent and control, anonymization and pseudonymization of data, implementing strong security measures, user-focused design, lifecycle considerations, transparency, accountability, and cross-disciplinary collaboration. It is both a legal requirement and an ethical imperative in AI development, aiming to build systems that respect user rights, enhance trust, and mitigate privacy risks. It aligns with broader ethical AI principles and promotes responsible and ethical AI practices.
Safe AI entails developing and deploying AI systems with preventive measures to minimize risks and unintended outcomes. This involves strategies such as risk assessment, robust design, transparency, human oversight, thorough testing, and continuous monitoring to ensure ethical and reliable AI operation. By incorporating these principles, AI systems are designed to handle various inputs, explain decisions transparently, involve human judgment, undergo rigorous validation, and adapt over time to changing conditions. This approach ensures safer, more accountable, and resilient AI systems.
EGC acts as a bridge between the technical intricacies of AI systems and the broader societal implications. The EGC, comprising experts from various disciplines, is responsible for defining and upholding ethical principles that guide AI development and deployment. This council also influences awareness programs, as its guidance forms the foundation of educational efforts to raise awareness about AI's ethical implications. Furthermore, the EGC is closely intertwined with both CIA and IGC. The EGC ensures that AI systems evolve ethically over time and considers the long-term impact of AI on future generations.
Figure 9: Governance and Collaboration ecosystem
Awareness programs encompass training and education initiatives that equip stakeholders with the knowledge needed to understand AI's complexities. This component intersects with both the EGC and CIA. By disseminating the EGC's ethical guidelines, awareness programs foster a culture of responsible AI development (global perspective). Simultaneously, these programs facilitate the iterative process of the CIA, as the knowledge gained from awareness initiatives informs the refinement of the AI systems.
CIA acknowledges that AI governance is a dynamic process that must evolve in response to technological advancements and changing societal needs. This component directly engages with MSE and IGC. Adapting AI systems requires input from diverse stakeholders and an inter-generational perspective ensures that AI governance strategies remain relevant for future generations.
IGC highlights the impact of AI systems on future societies. This aspect is intertwined with both the EGC and CIA. The ethical decisions made today affect generations to come, and the adaptability of AI systems ensures that their implications are continually assessed and mitigated for the well-being of the future. Though it is not necessarily an ethical consideration, the economic incentives that drive AI development can profoundly influence its direction. Recognizing and potentially addressing these considerations can be crucial, especially when economic goals conflict with ethical ones.
The interconnected relationships between these components create a harmonious symphony of governance and collaboration. This framework promotes ethical, transparent, and adaptable AI systems that account for diverse perspectives, ensure privacy, and align with evolving societal norms. As AI technology continues to grow, this comprehensive approach provides a roadmap to navigate the challenges and opportunities that lie ahead, fostering an AI landscape that benefits humanity in the present and for generations to come.
## 7 Conclusion
The integration of AI into diverse domains, particularly healthcare, has unleashed unprecedented potential for advancements that can redefine human well-being and progress. However, this progress comes with ethical challenges that must be addressed to ensure that AI technologies serve humanity's best interests.
The ethical dimensions encompass a range of issues, from transparency in decision-making to safeguarding individual privacy. The surge in AI-driven decision-making processes brings forth concerns about accountability and explainability. Transparency in how AI systems arrive at conclusions is not only a matter of technological necessity but also an ethical imperative.
AI's reliance on data is undeniable; still, the ethical implications of data usage loom large. Ensuring that AI systems do not perpetuate biases present in data requires meticulous attention to data collection, processing, and algorithm design. The framework must emphasize the ethical use of data, mitigating potential discrimination and ensuring fairness. By adhering to principles of diversity and representation, we can reduce the risk of AI systems amplifying societal inequities.
A significant challenge arises in achieving contextual intelligence in AI systems, akin to human understanding. While AI has made remarkable strides, it continues to grapple with comprehending the nuances and intricacies of human context. The framework should promote ongoing research and development that strives for nuanced contextual awareness. Additionally, as AI and humans interact, understanding complex language and emotional nuances remains challenging. While AI can assist in various tasks, the irreplaceable nature of human empathy and connection highlights the necessity of maintaining a human-centric approach in AI applications.
Striking a balance between AI's analytical capabilities and human oversight is crucial to maintaining ethical control and accountability. Educated decisions can only be made when individuals understand the implications and consequences of AI applications. Equipping people with the necessary knowledge empowers them to engage in informed discussions and contribute to the responsible development of AI technologies.
Thus, standards and guidelines are required to unify the framework for global AI practices. Harmonizing these standards ensures that AI technologies are developed and deployed within consistent ethical boundaries. The framework's agility will enable it to respond effectively to new technological developments, ensuring that ethical considerations remain at the forefront.
|
2309.10511 | Self2Seg: Single-Image Self-Supervised Joint Segmentation and Denoising | We develop Self2Seg, a self-supervised method for the joint segmentation and
denoising of a single image. To this end, we combine the advantages of
variational segmentation with self-supervised deep learning. One major benefit
of our method lies in the fact, that in contrast to data-driven methods, where
huge amounts of labeled samples are necessary, Self2Seg segments an image into
meaningful regions without any training database. Moreover, we demonstrate that
self-supervised denoising itself is significantly improved through the
region-specific learning of Self2Seg. Therefore, we introduce a novel
self-supervised energy functional in which denoising and segmentation are
coupled in a way that both tasks benefit from each other. We propose a unified
optimisation strategy and numerically show that for noisy microscopy images our
proposed joint approach outperforms its sequential counterpart as well as
alternative methods focused purely on denoising or segmentation. | Nadja Gruber, Johannes Schwab, Noémie Debroux, Nicolas Papadakis, Markus Haltmeier | 2023-09-19T10:47:32Z | http://arxiv.org/abs/2309.10511v2 | # Single-Image based unsupervised joint segmentation and denoising
###### Abstract
In this work, we develop an unsupervised method for the joint segmentation and denoising of a single image. To this end, we combine the advantages of a variational segmentation method with the power of a self-supervised, single-image based deep learning approach. One major strength of our method lies in the fact, that in contrast to data-driven methods, where huge amounts of labeled samples are necessary, our model can segment an image into multiple meaningful regions without any training database. Further, we introduce a novel energy functional in which denoising and segmentation are coupled in a way that both tasks benefit from each other. The limitations of existing single-image based variational segmentation methods, which are not capable of dealing with high noise or generic texture, are tackled by this specific combination with self-supervised image denoising. We propose a unified optimisation strategy and show that, especially for very noisy images available in microscopy, our proposed joint approach outperforms its sequential counterpart as well as alternative methods focused purely on denoising or segmentation. Another comparison is conducted with a supervised deep learning approach designed for the same application, highlighting the good performance of our approach.
## 1 Introduction
Image denoising and segmentation are fundamental problems in image processing [37, 5, 23]. In many biomedical applications, such as fluorescence microscopy or transmission electron cryomicroscopy, one is interested in the segmentation of objects. However, training data for this task is typically scarce and hard to obtain due to the intrinsic complexity and high
noise of such images as well as the long time required by experts to label them. Therefore, there is a need for unsupervised methods for tackling the two imaging tasks in a unified way. In this work, we propose such a framework, and apply it to a subset of a popular, public available dataset of microscopy images.
The objective of segmentation is to divide a given image into different, meaningful regions, while denoising describes the task of removing noise from a corrupted image. The main difficulty in noise removal is to flatten the unwanted, high-frequency corruption while preserving essential features such as edges. At first glance, denoising and segmentation are two different applications. Nevertheless, both tasks are closely related, as very similar models can be used to solve both problems [8]. As we demonstrate in this work, denoising and segmentation can benefit greatly from each other. By identifying edges, segmentation guides the denoising process to preserve sharp structures while smoothing the unwanted high-frequency residuals. Also, by removing unnecessary and misleading information from images, denoising improves the segmentation accuracy.
There exist at least two main kinds of approaches to tackle the two tasks individually. The first class of methods involves the minimisation of an energy functional within graph or variational frameworks. The second type of approaches that recently became popular considers deep learning techniques, especially convolutional neural networks [29]. In the following, we give a short overview of the most important and related variational and deep learning based methods.
### Variational Methods
Standard imaging methods for segmentation and denoising are based on an energy functional that captures the desired characteristics of the output image. The energy functional typically consists of a data fitting term, and a regularisation term that encourages properties of the output image, such as smoothness or sparsity. The energy functional is then minimised using optimisation techniques such as gradient descent or proximal splitting algorithms.
**Denoising.** One of the best known variational models for image denoising is the Rudin-Osher-Fatemi (ROF) model [36]. This model improves the region-based Mumford-Shah [31] functional that realises a piecewise smooth approximation of an input image. The ROF model and its extensions reduce noise by penalizing the total variation of the image. Such methods thus promote piecewise constant images with undesirable staircase effects in homogeneous regions, and they are unable to recover image details and patterns with higher variation. In case of severe input noise, they provide poor denoising results, as image contours are confused with noise [16].
On the other hand, since the resulting image is piecewise constant, it can be used for segmentation by selecting regions of the same value or by thresholding the image. More details about the link between ROF-based denoising models and segmentation can, for example, be found in [8].
**Segmentation.** In their seminal paper [11], Chan and Vese proposed to solve the Mumford-Shah problem with a level-set reformulation. Let us denote by \(\Omega\) a bounded subset of \(\mathbb{R}^{2}\) with Lipschitz boundary, where the given image \(f:\Omega\rightarrow[0,1]\) is defined, and by \(u\colon\Omega\rightarrow\{0,1\}\) the desired binary mask, separating \(f\) into two different areas corresponding to the two mean intensity values \(c_{1}\) and \(c_{2}\), belonging to the foreground and background region, respectively. In 2D, this framework involves a surface \(\phi\) whose zero level represents the contour of interest, and the mask is obtained as \(u(x)=H\left(\phi(x)\right)\), where \(H(\cdot)\) is the Heaviside function. The proposed energy for binary segmentation is given by
\[\mathcal{E}(\phi,c_{1},c_{2})=\int_{\Omega}\lvert\nabla H(\phi(x))\rvert\,dx+\lambda\int_{\Omega}\lvert f(x)-c_{1}\rvert^{2}H(\phi(x))\,dx+\lambda\int_{\Omega}\lvert f(x)-c_{2}\rvert^{2}\left(1-H(\phi(x))\right)dx, \tag{1}\]
where \(\lambda>0\) is a regularization parameter to tune. Slight modifications of this method have already been used in microscopy [34, 40], as it is well adapted to cell segmentation, where labeled data are scarce. However, the Chan-Vese model computes the intensity averages using constant information across each region, and is thus a global region-based model. Therefore, it does not deal well with intensity inhomogeneities and the presence of high noise. To mitigate this problem, many local region-based extensions of the piecewise constant active contour model have been proposed [27, 45], but these methods remain sensitive to the considered hand-crafted features and the initial contour. In another line of works, pre-filtering tools are considered to better prepare the image for segmentation [9, 28, 43] in a sequential pipeline: denoise before segment. In [7], a three-stage approach is proposed, consisting of smoothing, lifting, and segmentation using thresholding.
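For readers who prefer a computational view, the following is a minimal discrete sketch of evaluating the energy (1) on a pixel grid, assuming the binary mask \(u=H(\phi)\) is given directly, unit grid spacing, and an anisotropic finite-difference approximation of the contour length; these discretisation choices are illustrative assumptions rather than part of the original model.

```python
# Discrete sketch of the piecewise constant Chan-Vese energy (1).
# Assumes f and u are 2D numpy arrays of equal shape, with u in {0, 1}.
import numpy as np

def chan_vese_energy(f: np.ndarray, u: np.ndarray, lam: float = 1.0) -> float:
    fg, bg = u > 0.5, u <= 0.5
    c1 = f[fg].mean() if fg.any() else 0.0   # mean intensity of the foreground
    c2 = f[bg].mean() if bg.any() else 0.0   # mean intensity of the background
    # anisotropic total variation of the mask ~ length of the contour
    perim = np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()
    data = ((f - c1) ** 2 * u).sum() + ((f - c2) ** 2 * (1.0 - u)).sum()
    return float(perim + lam * data)
```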
In our work, we tackle the aforementioned issues by introducing a generalized Chan-Vese segmentation functional including a robust data-fidelity term that is jointly learned with self-supervised deep denoising techniques.
### Deep Learning Methods
We now review some of the most relevant works using neural networks for the denoising and segmentation of images.
**Denoising.** While variational image denoising techniques focus on explicitly modeling data noise, modern deep learning approaches directly learn how to map the noisy image to its clean counterpart. In the literature, several types of deep learning based denoising methods can be found. In particular, supervised approaches require pairs of noisy images and corresponding clean ground truth data (see [21, 44, 41]). However, such pairs of noisy and clean data are rare in reality and often artificial, which makes these methods impractical for many applications.
To overcome the requirement of clean images, internal statistical methods (i.e. methods, where image patches from the same image are used for the noise reduction), have been
introduced [46]. In [39], Ulyanov et al. exploit the fact that the internal structure of CNNs inherently resonates with the distribution of natural images, and utilize this observation for image restoration without the need for additional training data. For each single image to restore, this method thus proposes to train a CNN to reconstruct the considered image. The idea is that early stopping of the training allows one to recover a regularised, denoised image. A different strategy is proposed in the Noise2Noise [25] method, where noisy image pairs are mapped to one another. The drawback of this type of method is that it still relies on the availability of such pairs. In practice, even the acquisition of two noisy realisations of the same image content is often difficult [5]. To this end, self-supervised training methods operating on one single noisy image, such as Noise2Void [24], Noise2Self [3], and more recently Noise2Fast [26], are promising alternatives. This self-supervision is accomplished by excluding/masking the center (blind spot) of the receptive field of the network. In this type of training, it is assumed that the noise is pixelwise independent and that the true intensity of a pixel can be predicted from the local image context, with the exception of the blind spots mentioned previously [24]. In this work, we utilise Noise2Fast [26] due to its favourable combination of computational speed and performance. The method itself will be explained in more detail in Sections 2 and 4.
**Segmentation.** Among existing deep learning based approaches addressing image segmentation, the U-Net [35], first introduced for microscopy cell segmentation, is one of the most successful network architectures. Next to it, we mention Mask-RCNN [20], a two-stage object detection and segmentation framework extending the popular Faster R-CNN architecture [17]. DeepLab [13] is a family of methods that use atrous convolution (also known as dilated convolution) to capture multi-scale context information. It has been shown to achieve state-of-the-art results on several segmentation benchmarks. Still, even the best existing methods offer plenty of scope for improvement, motivating further research in this field [21, 23, 7]. Their high performance comes at a price: a common trait of the mentioned approaches is their requirement for tremendous amounts of labeled ground truth training data, the creation of which is time-consuming and prone to subjective errors.
As already mentioned, for many applications such as microscopy, the available image data is very noisy and the available ground truth training data are scarce. It is thus of great interest to tackle both, the segmentation and the denoising, in a unified manner. In the following, we will review variational methods, as well as deep learning based approaches tackling both, the segmentation and the denoising in a joint manner.
### Joint denoising and segmentation methods
In [6], Cai et al. design a model tackling the segmentation of images with a high level of noise or blurriness. To this end, they propose a variational approach, coupling an extension of the piecewise constant Mumford Shah model with an image restoration model, making it more robust in the processing of the given corrupted image \(f\). In [14], the authors propose a variational approach for the joint reconstruction and segmentation. Therefore, they derive a model consisting of a total variation regularised reconstruction from undersampled data, and
a Chan-Vese based segmentation. The authors show the improvement of joint reconstruction and segmentation performance compared to the sequential approach. In another work [33], Ramlau _et al._ illustrate that the Mumford-Shah level-set method can enhance the quality of reconstructed images and improve the accuracy of segmentation results.
In the context of microscopy data, purely deep learning based approaches dealing with both segmentation and denoising are [32] and [5]. In [32], Prakash et al. demonstrated on various microscopy datasets that the use of self-supervised denoising priors improves the segmentation results, especially when only a few ground truth segmentation masks are available for training. In a similar work [5], the authors propose DenoiSeg, consisting of a U-Net for image segmentation, and the self-supervised denoising scheme Noise2Void [24], which are combined and trained with a common loss. The authors demonstrate that the global optimisation outperforms the sequential counterpart, where the image is denoised first and then segmented. This method requires labeled data. To reach high denoising performance, a huge amount of noisy data is also required. Moreover, the loss function is just the sum of the segmentation and denoising losses. There exists no coupling between the two tasks in the objective to optimise. Segmentation can therefore benefit from the noise reduction, but the reverse is not possible.
To overcome these limitations, we propose a single-image method with a new joint loss that allows full interaction between segmentation and noise reduction.
### Contributions
In this work, we propose a new model for joint image denoising and segmentation that combines advantages from variational models and deep learning. In contrast to the aforementioned state-of-the-art deep learning based methods which require a large cohort of labeled and clean training images, we obtain comparable results using only one single image. While supervised deep learning based methods are trained with hand-made annotations which are prone to subjective labelling errors, the proposed combination of a variational segmentation model with a self-supervised denoising CNN does not require any labeled data or a representative dataset leading to the elimination of pre-training.
We combine the denoising and segmentation tasks in such a way that both of them benefit from each other. This is a main difference from existing deep joint approaches such as [5], where the denoising task solely aims at improving segmentation. More specifically, we design two dedicated denoising networks for the foreground and background regions to improve the overall image denoising performance, and use the difference of the denoising performances in the two regions to find the segmentation mask.
Our method can be seen as a flexible generalization of existing Chan-Vese models. Standard Chan-Vese models as well as joint variational methods for segmentation and denoising [6, 14] rely on the piecewise constant assumption of hand-crafted features. Thus, these methods are struggling with intensity inhomogeneities, textures or high noise levels. Further, methods of that kind strongly depend on the initialisation due to the non-convexity of the functional. In this work we propose to learn the structural patterns of different regions in the image without any prior information, by designing a novel energy functional, where
feature information is captured in a natural way by a denoising network. The paper is organised as follows. We start with toy examples that illustrate the region specific denoisers as the motivation of our method in 1.5. We then formulate the two problems that we aim to solve and revise the necessary ingredients that make up our algorithm in Section 2. Our proposed new algorithm is described and analysed in Section 3, and the numerical implementation is presented in 4. Section 5 shows the application of the method to microscopy data. Further, we apply our proposed model to natural images, and demonstrate, that with manual guidance by the user roughly indicating the two regions, our method can successfully be applied to more complex problems. The paper ends with a conclusion and outlook to possible future work.
### Motivation
In the following, we give an intuition of how the two denoising and segmentation tasks we aim to solve are coupled in a way that both of them have a positive influence on each other. We present some toy examples showing how segmentation can successfully guide the denoising process. In a first toy example, we generate an image of size 256\(\times\)256 that consists of stripe patterns aligned differently in different areas; see Figure 1. This image is further corrupted by manually adding Gaussian noise with a noise level of 50. Here, we use two linear neural networks (two "experts", respectively dedicated to the foreground and the background), each consisting of a single convolutional layer with one filter of size \(15\times 15\), which is trained using a slight modification of the Noise2Fast training strategy [26] described in Section 4. More precisely, we restrict the training of each network to one of the two regions of the image by masking the loss function, and restrict the training to boxes of size \(30\times 30\), which are depicted in Figure 1. We find that the learned filters are adapted to the structure of the signal in the corresponding region. As a result, the error patterns have higher values in the region where the denoiser has not been trained. This provides the basis for exploiting region-specific denoising for segmentation. The experimental details for this toy example are provided in Section 5.
The positive effect of segmentation on the denoising process is even more evident in the natural image shown in Figure 2. We used the network architecture proposed by the authors in [26], resulting in a non-linear neural network. First, the denoising network was trained and subsequently applied to the whole image. The second image shows the result obtained with two separately trained neural networks. This strategy yields a better visual result, which is further confirmed by PSNR values of 19.69 and 19.12, respectively. The noisy image has been generated by scaling the given clean RGB input image to \([0,1]\) and adding randomly distributed noise scaled with the maximum pixel value, ensuring that the noise is proportional to the image intensity. Here, we used a manually generated mask of the zebra and background regions and, during training, computed the MSE restricted to the two different regions, respectively.
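To make the training of the two experts concrete, the following is a rough PyTorch sketch of the toy experiment: each expert is a single \(15\times 15\) convolution trained self-supervised on the noisy image, with the loss evaluated only at randomly hidden pixels inside a small box in "its" region. The box positions, the random blind-spot masking (used here in place of the exact Noise2Fast masks), the learning rate, and the number of steps are illustrative assumptions and not the settings used in the experiments.

```python
# Sketch of two region-specific single-filter "experts" trained on one noisy image.
import torch

f = torch.rand(1, 1, 256, 256)  # placeholder for the noisy stripe image

region_fg = torch.zeros_like(f)
region_fg[..., 40:70, 40:70] = 1.0      # assumed location of the foreground training box
region_bg = torch.zeros_like(f)
region_bg[..., 180:210, 180:210] = 1.0  # assumed location of the background training box

def train_expert(region_mask, steps=500, lr=1e-3, p_hide=0.2):
    expert = torch.nn.Conv2d(1, 1, kernel_size=15, padding=7, bias=False)
    opt = torch.optim.Adam(expert.parameters(), lr=lr)
    for _ in range(steps):
        hide = (torch.rand_like(f) < p_hide).float()   # pixels withheld from the input
        out = expert(f * (1.0 - hide))                 # predict them from the surrounding context
        w = hide * region_mask                         # evaluate only at hidden pixels inside the box
        loss = ((out - f) ** 2 * w).sum() / w.sum().clamp(min=1.0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return expert

expert_fg = train_expert(region_fg)
expert_bg = train_expert(region_bg)
err_fg = (f - expert_fg(f)).abs()  # expected to be small on the foreground texture
err_bg = (f - expert_bg(f)).abs()  # expected to be small on the background texture
```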
In the next section, we will fix the notation, formalize the problem, and describe the main ingredients that are used for our proposed method.
## 2 Problem Description
We now present in detail the background that is relevant for our algorithm. First we set our notations, and describe our proposed energy functional for our unified denoising and segmentation framework.
In the following, we denote by \(\Omega\subset\mathbb{R}^{2}\) a bounded set with Lipschitz boundary, and by \(\mathbb{F}\) a space of functions \(f\colon\Omega\to\mathbb{R}^{d}\), with \(d=1\) for grayscale images, and \(d=3\) in the RGB case. We consider a given (noisy) image \(f\in\mathbb{F}\), which we want to jointly denoise, and split up into \(C\) different regions.
**Problem 1** (Image Denoising).: _The goal of image denoising is to recover a clean image \(g\) from a noisy observation \(f\) which follows an image degradation model \(f=g+n\), where \(n\) is the signal degrading noise which we want to remove._
Note that although other degradation types are possible, we assume an additive model here, and specifically we will consider noise with an expected value of zero.
Figure 1: Visualisation of the idea behind the proposed joint denoising and segmentation model. Here, we trained two networks consisting of one single filter using the Noise2Fast [26] strategy and restricted the training to the two boxes marked in the noisy image \(f\). From the two right binary images in the bottom row, we observe that the two denoising experts perform much better in the region they have been trained on. The difference images (noisy image minus image denoised by Expert 1 (resp. 2)) can then be used in the segmentation process, by exploiting the fact that regions with a small denoising error for the first (resp. second) expert can be assigned as foreground (resp. background).
**Problem 2** (Image Segmentation).: _Image segmentation refers to the process of automatically dividing an image into meaningful regions. Based on specific pre-defined characteristics of a given image \(f\in\mathbb{F},\) one is interested in splitting the image domain into two (in the case of binary segmentation) regions \(\Sigma,\) and \(\Omega\setminus\Sigma\). In the case of multiclass segmentation, the objective is to build a partition \(\Omega=\bigcup_{i=1}^{C}\Sigma_{i}\) of the image domain into \(C\) disjoint regions (classes), where each of the regions \(\Sigma_{1},\ldots,\Sigma_{C-1}\) represents a specific structure of objects in \(f\) and \(\Omega\setminus(\Sigma_{1}\uplus\Sigma_{2}\uplus\cdots\uplus\Sigma_{C-1})\) represents the background._
In this work, we address these two problems simultaneously by designing an energy functional in a way that both tasks benefit from each other. Next, we discuss the two main components from the literature that form the basis of our approach.
### Convex Chan-Vese Formulation
In [12], Chan et al. propose to relax the binary Chan-Vese segmentation problem (1) and let the desired solution \(u(x)\) take values in \([0,1]\). The resulting convex energy is
\[\min_{0\leq u\leq 1}\int_{\Omega}\lvert\nabla u\rvert+\lambda\int_{\Omega} \left((c_{1}-f(x))^{2}-(c_{2}-f(x))^{2}\right)u(x)dx. \tag{2}\]
The authors showed that, for any fixed constants \(c_{1},c_{2}\in\mathbb{R}\), a global minimiser for the non-convex problem can be found by carrying out the minimisation in (2), and setting \(\Sigma=\{x:u(x)>\tau\}\) for a.e. \(\tau\in[0,1]\).
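As an illustration of how (2) can be used in practice, the following is a small sketch of projected gradient descent on the relaxed energy for fixed constants \(c_{1},c_{2}\), using a smoothed total variation and a final thresholding at \(\tau=0.5\). The step size, smoothing parameter, iteration count, and the use of plain gradient descent (rather than a dedicated convex solver) are illustrative simplifications, not the scheme of the original work.

```python
# Sketch: projected gradient descent on the relaxed Chan-Vese energy (2).
import torch

def segment_convex_cv(f, c1, c2, lam=1.0, steps=300, lr=0.1, eps=1e-3, tau=0.5):
    # f: 2D float tensor; returns a binary mask from the relaxed solution u
    u = torch.full_like(f, 0.5, requires_grad=True)
    r = lam * ((c1 - f) ** 2 - (c2 - f) ** 2)   # pointwise data term of (2)
    opt = torch.optim.SGD([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        dux = u[:, 1:] - u[:, :-1]
        duy = u[1:, :] - u[:-1, :]
        tv = torch.sqrt(dux[:-1, :] ** 2 + duy[:, :-1] ** 2 + eps).sum()  # smoothed total variation
        (tv + (r * u).sum()).backward()
        opt.step()
        with torch.no_grad():
            u.clamp_(0.0, 1.0)        # projection onto the constraint 0 <= u <= 1
    return (u.detach() > tau).float() # Sigma = {x : u(x) > tau}

# toy usage: a noisy square of intensity ~0.8 on a ~0.2 background
f = torch.full((64, 64), 0.2)
f[20:44, 20:44] = 0.8
f = f + 0.1 * torch.randn_like(f)
mask = segment_convex_cv(f, c1=0.8, c2=0.2, lam=5.0)
```

In practice, primal-dual or split-Bregman schemes are commonly preferred for the non-smooth total variation term; the sketch only conveys the structure of the problem.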
Though the model is convex, it still suffers from difficulties in segmenting images where the piecewise constant assumption is not a relevant prior for the different regions in the image, or if the image is corrupted by severe noise. These issues are the main problems to solve in the current paper.
Figure 2: Given noisy RGB input image (corrupted with Gaussian noise, noise level = 0.75), denoised image using Noise2Fast on the whole image, region-specific experts, and ground truth image. We clearly observe sharper edges, and better recovered color information in the “two-experts”-example.
### Self-supervised single-image based denoising
For a given noisy image \(f=g+n\), self-supervised denoising methods are based on some variant of the self supervised loss
\[\mathcal{L}_{f}(\theta)=\int_{\Omega}(\Phi_{\theta}(f)(x)-f(x))^{2}. \tag{3}\]
Clearly, such a strategy cannot work without restricting the class of functions \(\Phi_{\theta}\), since \(\Phi_{\theta}=Id\) would be a minimiser and would not yield a denoised image. One strategy to overcome this problem is the method introduced in [39], where a generative model \(\Phi_{\theta}\) is trained by minimising (3). In this framework, the convolutional structure and early stopping prevent \(\Phi_{\theta}\) from learning the fine image features (noise), so that a denoised image is obtained. Another strategy is linear filtering with a restriction on the filter [3, 24]. For example, a filter which is zero in its central position, and therefore does not take into account the information of this pixel but only the surrounding areas, can be used to denoise an image by minimising (3). Another type of method minimises a slightly different functional. Motivated by the previous example, the authors of [26] introduce \(N\) random binary masks \(\mathcal{H}_{k}\) that delete information in the image. Training is then done using the loss function
\[\mathcal{L}_{f}(\theta)=\frac{1}{N}\sum_{k=1}^{N}\int(\Phi_{\theta}(\mathcal{ H}_{k}\cdot f)(x)-f(x))^{2}\cdot(1-\mathcal{H}_{k}). \tag{4}\]
This training strategy also prevents the network from learning the identity operator. Although not directly minimising (3), we use a variant of this method named Noise2Fast [26]. This variant uses regular masks \(\mathcal{H}_{1},\mathcal{H}_{2},\mathcal{H}_{3},\mathcal{H}_{4}\), which consist of horizontal and vertical stripes on the even and odd image indices.
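To illustrate how a masked loss of the form (4) is typically implemented, the following is a minimal PyTorch sketch (not the authors' code); the network `net`, the number of masks, and the masking probability are illustrative assumptions.

```
import torch

def masked_self_supervised_loss(net, f, num_masks=4, p_hide=0.5):
    """Monte-Carlo version of the masked loss (4): hide pixels with random
    binary masks H_k, predict from the masked input, and penalise the error
    only on the hidden pixels (1 - H_k)."""
    loss = 0.0
    for _ in range(num_masks):
        # H = 1 keeps a pixel (with probability 1 - p_hide), H = 0 hides it
        H = (torch.rand_like(f) > p_hide).float()
        pred = net(H * f)
        loss = loss + (((pred - f) ** 2) * (1.0 - H)).mean()
    return loss / num_masks
```

Here `f` would be a `(1, d, H, W)` tensor and `net` any image-to-image network; the Noise2Fast variant described next replaces the random masks by the four regular stripe masks.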
## 3 Proposed Joint Denoising and Segmentation
We now introduce our joint model, inspired by the observations described in Section 1.5. To perform binary segmentation, we propose to train two denoising neural networks, each focusing on performing well in one of the regions to be segmented (cf. Figures 1 and 2). We denote these "experts" by \(\Phi_{\theta^{F}}\) for the foreground, and \(\Phi_{\theta^{B}}\) for the background. These experts are neural networks with parameters \(\theta^{F}\) and \(\theta^{B}\), trained with a modified denoising strategy. Let us mention that the model is presented in the case of two regions, but the extension to the multi-class case is straightforward, following for instance the frameworks in [19, 1, 30].
In Section 3.1, we present the proposed joint energy function designed for the combined denoising and segmentation process. This energy generalizes the convex Chan-Vese functional (2) with a data-fidelity term defined from the self-supervised denoising method. The
optimisation scheme is performed in an alternating way, as presented in Section 3.2. We finally provide theoretical convergence results for our algorithm in Section 3.3.
### Joint energy functional
In the following, we denote by \(BV(\Omega)\) the space of all integrable functions \(u:\Omega\to\mathbb{R}\) with bounded total variation \(|u|_{\mathrm{TV}}\), and consider the admissible set
\[\mathbb{A}\coloneqq\{u\in BV(\Omega)\mid 0\leq u\leq 1\}.\]
Further, let \(i_{\mathbb{A}}:BV(\Omega)\to[0,\infty]\) denote the associated indicator function, which is \(0\) inside \(\mathbb{A}\), and \(\infty\) elsewhere. The parameters of the two denoising experts, \(\Phi_{\theta^{F}}\) and \(\Phi_{\theta^{B}}\) are denoted by \(\mathbf{\theta}=(\theta^{F},\theta^{B})\in\mathbb{R}^{L\times L}\), and are respectively dedicated to the foreground and the background. These two experts are neural networks trained using the strategy proposed in [26]. We consider the joint model
\[\begin{split}\mathcal{E}_{f,\lambda}(u,\mathbf{\theta})=i_{\mathbb{A }}(u)+\lambda|u|_{\mathrm{TV}}&+\int_{\Omega}\left(f(x)-\Phi_{ \theta^{F}}(f)(x)\right)^{2}u(x)dx\\ &+\int_{\Omega}\left(f(x)-\Phi_{\theta^{B}}(f)(x)\right)^{2}(1- u(x))dx\,.\end{split} \tag{5}\]
Note that for fixed network parameters \(\mathbf{\theta}\), the proposed energy is convex in \(u\). Moreover, we can threshold the result and still obtain a global optimum (see Theorem 3.2). Further, we point out that in the case where the Noise2Fast training strategy is used, the energy functional for the denoising step is not exactly the functional (5).
Figure 3 illustrates the idea behind the proposed segmentation model. For grayscale images, one can initialise the algorithm by thresholding image values. In more complex cases, a user can be asked to provide representative boxes for the background and foreground regions. Then, alternately, the denoising experts are trained on subsets of the two different segmented regions and the segmentations are updated. In practice, the data-fidelity term in (5) is updated given the denoising performance of the two experts \(\Phi_{\theta^{F}}\) and \(\Phi_{\theta^{B}}\). For fixed network parameters \(\mathbf{\theta}\), the energy (5) is minimised. Repeating this procedure until a convergence criterion is met, we obtain the segmentation mask \(u\), as well as the denoised image \(g\approx u\odot\Phi_{\theta^{F}}+(1-u)\odot\Phi_{\theta^{B}}\).
**Example 3**.: _Here, we give examples for neural networks that act as denoisers and relate to existing approaches._
* _Constant Background:_ _In the case where the background is assumed constant, one could simply assume that_ \(\Phi_{\theta^{B}}(f)=\theta^{B}\mathds{1}\)_, which corresponds to estimating a scalar value_ \(\theta^{B}\)_, the mean value of the given image inside the corresponding region, as in the original Chan and Vese model._
* _Linear filter:_ _In this case, the network is linear with respect to the network parameters_ \(\theta^{B}\)_, more precisely,_ \(\Phi_{\theta^{B}}(f)=\omega_{\theta^{B}}\ast f\)_, leading to a bi-convex energy functional (_5_). In our toy example in Figure_ 1_, we have applied such a linear network consisting of one single filter of kernel size_ \(15\times 15\)_._
* _Filtering of the data fidelity term:_ _When one of the regions is assumed to be constant and high noise levels are present, mean filtering improves the results. The data fidelity terms of energy (_5_) can then be replaced by_ \(\int_{\Omega}\left[K_{\sigma}*\left(f-\Phi_{\theta^{F}}(f)\right)\right]^{2}u\) _and_ \(\int_{\Omega}\left[K_{\sigma}*\left(f-\Phi_{\theta^{B}}(f)\right)\right]^{2}(1-u)\)_, respectively, where_ \(K_{\sigma}\) _is a mean filter with kernel size_ \(\sigma\)_. A similar approach was taken in_ _[_27_]__, where a more robust version of the Chan-Vese model_ _[_11_]_ _was proposed by introducing a Gaussian convolution in the data fitting terms, in order to make the method robust to inhomogeneous regions._
* _Generic CNN:_ _Any typical state of the art denoising neural network (Deep image prior_ _[_39_]__, Noise2Void_ _[_24_]__) can be used in our framework. Note, that in this case the bi-convexity of energy (_5_) is not ensured anymore._
In the next paragraph, we discuss in more detail the joint alternating optimisation procedure we propose to minimise energy (5).
### Joint optimisation
We propose to iteratively optimise problem (5) with an alternating procedure [15]. In case the denoising step does not exactly minimise energy (5), we actually alternate between minimising two slightly different functionals. For the sake of readability, this is not indicated in the notation. We start with the initialisation of the segmentation mask \(u\). This is either achieved by thresholding for grayscale images, or, as shown in Figure 3, by manually choosing boxes representing the different regions to segment in the image. Then, based on the initial guess, the denoising experts \(\Phi_{\theta^{F}}\) and \(\Phi_{\theta^{B}}\) are trained on the given initial masks. To this end, we use the ADAM optimiser [22] until convergence. As a next step, for fixed network parameters \(\boldsymbol{\theta}\), we update the segmentation mask \(u\). For fixed \(\boldsymbol{\theta}\), the energy functional (5) is convex, and all the necessary assumptions for the application of the primal dual algorithm [10] are fulfilled. A more detailed description of the considered discrete schemes is provided in Section 4 (see Algorithm 2). These alternating steps are repeated as long as the decrease of energy (5) is greater than \(p=15\) percent of the previous decrease, which we empirically found to give a good compromise between computation speed and quality of the results.

Figure 3: The first image shows the given grayscale input image \(f\), and user defined boxes representing rough foreground and background regions. The third image highlights pixels where the foreground expert denoiser performs better than the background one, while the last image is the segmentation result obtained by minimising the proposed energy (5).
The overall joint optimisation scheme is presented in Algorithm 1. A sketch of the alternating procedure is provided in Figure 4.
```
Initialise \(u^{0}\leftarrow\mathbf{1}_{\{f>\epsilon\}}\) and \(\boldsymbol{\theta}^{0}=\boldsymbol{\theta}_{0}\)
while \(\mathcal{E}_{f,\lambda}(u^{k-1},\boldsymbol{\theta}^{k-1})-\mathcal{E}_{f,\lambda}(u^{k},\boldsymbol{\theta}^{k})\geq p\cdot\left(\mathcal{E}_{f,\lambda}(u^{k-2},\boldsymbol{\theta}^{k-2})-\mathcal{E}_{f,\lambda}(u^{k-1},\boldsymbol{\theta}^{k-1})\right)\) do
  \(\boldsymbol{\theta}^{k+1}\leftarrow\operatorname*{argmin}_{\boldsymbol{\theta}}\mathcal{E}_{f,\lambda}(u^{k},\boldsymbol{\theta})\)  {with a few ADAM iterations for \(\theta^{F}\), and a Chan and Vese update for the background if \(\Phi_{\theta^{B}}(f)=\theta^{B}\mathbbm{1}\)}
  \(u^{k+1}\leftarrow\operatorname*{argmin}_{u}\mathcal{E}_{f,\lambda}(u,\boldsymbol{\theta}^{k+1})\)  {with Algorithm 2}
end while
```
**Algorithm 1** Alternating optimisation scheme.
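To make the loop structure explicit, Algorithm 1 can also be rendered as the following Python sketch; `energy`, `train_denoisers`, and `segment_primal_dual` are hypothetical callables standing in for the evaluation of the discrete energy, the ADAM training of the experts, and Algorithm 2, and the stopping rule implements the relative-decrease criterion described above.

```
def alternating_minimisation(f, u0, theta0, energy, train_denoisers,
                             segment_primal_dual, p=0.15, max_iter=10):
    """Schematic version of Algorithm 1.

    energy(u, theta)              -> value of the discrete joint energy
    train_denoisers(f, u, theta)  -> updated network parameters (ADAM steps)
    segment_primal_dual(f, theta, u) -> updated mask u (Algorithm 2)
    The loop stops once the energy decrease falls below a fraction p of the
    previous decrease.
    """
    u, theta = u0, theta0
    decreases = []
    for _ in range(max_iter):
        e_old = energy(u, theta)
        theta = train_denoisers(f, u, theta)      # denoising step
        u = segment_primal_dual(f, theta, u)      # segmentation step
        e_new = energy(u, theta)
        decreases.append(e_old - e_new)
        if len(decreases) >= 2 and decreases[-1] < p * decreases[-2]:
            break
    return u, theta
```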
In the following paragraph, we will discuss the convergence property of Algorithm 1.
### Theoretical Results
In this section, we discuss some theoretical results of the proposed energy functional and the presented alternating algorithm. Note that these results hold if the denoiser is trained by minimising (3).
**Remark 4** (Monotonicity of alternating minimisation).: _The proposed energy functional (5) is continuous and bounded from below. Therefore, for each \(k\geq 0\), the following relations hold_

\[\mathcal{E}_{f,\lambda}(u^{(k)},\mathbf{\theta}^{(k+1)}) \leq\mathcal{E}_{f,\lambda}(u^{(k-1)},\mathbf{\theta}^{(k)})\] \[\mathcal{E}_{f,\lambda}(u^{(k+1)},\mathbf{\theta}^{(k)}) \leq\mathcal{E}_{f,\lambda}(u^{(k)},\mathbf{\theta}^{(k-1)}).\]

_Hence, the generated sequence \(\{\mathcal{E}_{f,\lambda}(u^{(k)},\mathbf{\theta}^{(k)})\}_{k\in\mathbb{N}}\) converges monotonically._

Figure 4: Alternating optimisation scheme. As a first step, regions are provided for the training of the two denoising experts using the Noise2Fast strategy. These regions can be obtained by thresholding image values or by manually choosing boxes. The differences between the given noisy image \(f\) and the network outputs \(\Phi_{\theta^{F}}(f)\) and \(\Phi_{\theta^{B}}(f)\) are used in the subsequent segmentation step, minimising \(\mathcal{E}_{\lambda,f}(\cdot,\boldsymbol{\theta})\) with Algorithm 2.
**Theorem 3.1** (Convergence of Algorithm 1).: _Assume that the level set \(S^{0}=\{(u,\mathbf{\theta}):\mathcal{E}_{f,\lambda}(u,\mathbf{\theta})\leq\mathcal{E}_ {f,\lambda}(u^{0},\mathbf{\theta}^{0})\}\) of \(\mathcal{E}_{f,\lambda}\) defined in (5) is compact and that \(\mathcal{E}_{f,\lambda}\) is continuous on \(S^{0}\). Then, the sequence \(\{(u^{k},\mathbf{\theta}^{k})\}\) generated by Algorithm 1 is well defined and bounded. Moreover, every cluster point of \(\{(u^{k},\mathbf{\theta}^{k})\}\) is a stationary point of \(\mathcal{E}_{f,\lambda}\)._
Proof.: This is a direct application of Theorem 4.1 in [38], using that (i) we only alternate between two variables \(u\) and \(\mathbf{\theta}\), (ii) the coupling between \(u\) and \(\mathbf{\theta}\) in \(\mathcal{E}_{f,\lambda}\) is smooth.
**Remark 5**.: _The energy (5), which is convex for fixed network parameters \(\mathbf{\theta}=(\theta^{F},\theta^{B})\) is a relaxation of the fully non-convex problem_
\[\mathcal{E}(\Sigma,\mathbf{\theta})=\text{Per}(\Sigma,\Omega)+\int_{\Sigma}(f- \Phi_{\theta^{F}}(f))^{2}dx+\int_{\Omega\setminus\Sigma}(f-\Phi_{\theta^{B}} (f))^{2}dx, \tag{6}\]
_where \(\Sigma\subset\mathbb{R}^{2}\) and \(\Omega\setminus\Sigma\) are the two regions of the given image \(f(x)\), and \(\text{Per}(\Sigma,\Omega)\) is the perimeter of the interface separating these two regions._
**Theorem 3.2** (Thresholding).: _For any fixed \(\mathbf{\theta}\), a global minimiser for the non-convex problem \(\min_{\Sigma,\mathbf{\theta}}\mathcal{E}(\cdot,\mathbf{\theta})\) in (6) can be found by carrying out the minimisation of \(\min_{u}\mathcal{E}_{f,\lambda}(\cdot,\mathbf{\theta})\), and then setting \(\Sigma=\{x:u(x)\geq\tau\}\) for a.e. \(\tau\in[0,1]\)._
Proof.: The proof is similar to the one in [12] (Theorem 2). The only difference is in the data fidelity term, where instead of the fixed constants \(c_{1}\) and \(c_{2}\), we consider the fixed network outputs \(\Phi_{\theta^{F}}(f)\) and \(\Phi_{\theta^{B}}(f)\). As the problem is one-homogeneous in \(u\), thanks to the co-area formula, we obtain \(\mathcal{E}_{f,\lambda}(u,\mathbf{\theta})=\int_{0}^{1}\mathcal{E}_{f,\lambda}(\mathbbm{1}_{u>\tau},\mathbf{\theta})\,d\tau\), and we can thus conclude that if \(u\) is a minimiser of the energy (5) for fixed \(\mathbf{\theta}\), then for a.e. \(\tau\in[0,1]\) the set \(\Sigma(\tau)\) has to be a minimiser of (6).
## 4 Numerical Implementation
In the following, we describe the numerical implementation of the proposed method.
### Segmentation Step
We can rewrite our segmentation sub-problem in the form
\[\min_{u\in\mathbb{X}}\mathcal{F}(K(u))+\mathcal{G}(u), \tag{7}\]
where \(K(u)\coloneqq\nabla u\), \(\mathcal{F}(v)\coloneqq\lambda\|v\|_{2,1}\) and \(\mathcal{G}(u)\coloneqq i_{\mathbb{A}}(u)+\int_{\Omega}(f-\Phi_{\theta^{F}}(f))^{2}u+\int_{\Omega}(f-\Phi_{\theta^{B}}(f))^{2}(1-u)\). It holds that \(K:\mathbb{X}\to\mathbb{Y}\) is a linear mapping between Hilbert spaces \(\mathbb{X},\mathbb{Y}\) and \(\mathcal{F}:\mathbb{Y}\to[0,\infty]\) and \(\mathcal{G}:\mathbb{X}\to[0,\infty]\) are convex and lower semi-continuous functionals, i.e. all the necessary assumptions for the application of the primal dual algorithm framework proposed in [10] are fulfilled.
#### 4.1.1 Discretisation
In the following, we fix the notation which we use throughout this Section. We work with discrete images in \(\mathbb{H}\coloneqq\mathbb{R}^{N_{1}\times N_{2}}\), denoting a finite dimensional Hilbert space equipped with an inner product \(\langle u,v\rangle=\sum_{i}u[i]v[i]\) for \(u,v\in\mathbb{H}\) with \(i=(i_{1},i_{2})\in\{1,\ldots,N_{1}\}\times\{1,\ldots,N_{2}\}.\) The discrete gradient \(\nabla=(\nabla_{1},\nabla_{2}):\mathbb{H}\to\mathbb{H}\times\mathbb{H}\) is defined by forward differences with Neumann boundary conditions,
\[(\nabla_{1}u)[i] \coloneqq\begin{cases}(u[i_{1}+1,i_{2}]-u[i_{1},i_{2}])/h&\text{ if }i_{1}<N_{1}\\ 0&\text{ if }i_{1}=N_{1}\end{cases}\] \[(\nabla_{2}u)[i] \coloneqq\begin{cases}(u[i_{1},i_{2}+1]-u[i_{1},i_{2}])/h&\text{ if }i_{2}<N_{2}\\ 0&\text{ if }i_{2}=N_{2}\,.\end{cases}\]
Its adjoint is given by \(\nabla^{*}(v_{1},v_{2})=\nabla_{1}^{*}v_{1}+\nabla_{2}^{*}v_{2}=:-\operatorname {div}(v_{1},v_{2})\) where \(\operatorname{div}\colon\mathbb{H}\times\mathbb{H}\to\mathbb{H}\) is the discrete divergence operator and for \((v_{1},v_{2})\in\mathbb{H}\times\mathbb{H}\) we have
\[(\nabla_{1}^{*}v_{1})[i] =\begin{cases}-(v_{1}[i_{1},i_{2}]-v_{1}[i_{1}-1,i_{2}])/h&\text{ if }1<i_{1}<N_{1}\\ -v_{1}[1,i_{2}]&\text{ if }i_{1}=1\\ v_{1}[N_{1}-1,i_{2}]&\text{ if }i_{1}=N_{1}\end{cases}\] \[(\nabla_{2}^{*}v_{2})[i] =\begin{cases}-(v_{2}[i_{1},i_{2}]-v_{2}[i_{1},i_{2}-1])/h&\text{ if }1<i_{2}<N_{2}\\ -v_{2}[i_{1},1]&\text{ if }i_{2}=1\\ v_{2}[i_{1},N_{2}-1]&\text{ if }i_{2}=N_{2}\,.\end{cases}\]
The discrete, isotropic TV semi-norm of an image \(u\in\mathbb{H}\) is defined as
\[\|\nabla u\|_{2,1}\coloneqq\sum_{i}\sqrt{(\nabla_{1}u[i])^{2}+(\nabla_{2}u[i ])^{2}}\,.\]
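These finite-difference operators translate directly into code. The following NumPy sketch (an illustration, not the reference implementation) implements the discrete gradient with Neumann boundary conditions, its negative adjoint (the discrete divergence), and the isotropic TV semi-norm for a single-channel image; the grid spacing `h = 1` is an assumption.

```
import numpy as np

def grad(u, h=1.0):
    """Forward differences with Neumann boundary conditions (last row/column set to zero)."""
    g1 = np.zeros_like(u)
    g2 = np.zeros_like(u)
    g1[:-1, :] = (u[1:, :] - u[:-1, :]) / h
    g2[:, :-1] = (u[:, 1:] - u[:, :-1]) / h
    return g1, g2

def div(v1, v2, h=1.0):
    """Discrete divergence, the negative adjoint of grad: <grad u, v> = -<u, div v>."""
    d1 = np.zeros_like(v1)
    d2 = np.zeros_like(v2)
    d1[0, :] = v1[0, :]
    d1[1:-1, :] = v1[1:-1, :] - v1[:-2, :]
    d1[-1, :] = -v1[-2, :]
    d2[:, 0] = v2[:, 0]
    d2[:, 1:-1] = v2[:, 1:-1] - v2[:, :-2]
    d2[:, -1] = -v2[:, -2]
    return (d1 + d2) / h

def tv(u):
    """Discrete isotropic TV semi-norm ||grad u||_{2,1}."""
    g1, g2 = grad(u)
    return np.sqrt(g1 ** 2 + g2 ** 2).sum()
```

A quick sanity check of the adjoint relation is that `np.sum(g1 * v1 + g2 * v2)` agrees with `-np.sum(u * div(v1, v2))` for random `u`, `v1`, `v2`.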
The discrete versions of the admissible set and the corresponding indicator function, are \(\mathbb{A}=\{u\in\mathbb{H}|0\leq u\leq 1\}\), and \(i_{\mathbb{A}}.\) The discretisation of the data fidelity term of energy (5) is written as \(\sum_{i}D(u[i],\boldsymbol{\theta})\), where
\[D(u,\boldsymbol{\theta}) \coloneqq d(u,\theta^{F})+d(1-u,\theta^{B}) \tag{8}\] \[d(u,\theta^{F}) \coloneqq u\cdot(\Phi_{\theta^{F}}(f)-f)^{2}\] \[d(1-u,\theta^{B}) \coloneqq(1-u)\cdot(\Phi_{\theta^{B}}(f)-f)^{2}\,.\]
Using these notations, the discrete version of energy (5) reads
\[\mathcal{E}_{f,\lambda}(u,\boldsymbol{\theta})=i_{\mathbb{A}}(u)+\lambda\|\nabla u\|_{2,1}+\sum_{i}D(u[i],\boldsymbol{\theta})\,. \tag{9}\]
The optimisation problem (9) is in general a non-convex and challenging problem to be solved. We will use alternating minimisation, where we employ for the update step of the segmentation mask \(u\) the Chambolle-Pock algorithm [10], while for updating the network parameters \(\boldsymbol{\theta}=(\theta^{F},\theta^{B})\), we apply ADAM optimisation [22].
#### 4.1.2 Segmentation algorithm
We here detail the minimisation of the functional (5) with respect to \(u\) for fixed \(\boldsymbol{\theta}\), which corresponds to solving problem (7) with
\[\mathbb{X} =\mathbb{H}\] \[\mathbb{Y} =\mathbb{H}^{2}\] \[\mathcal{F} =\lambda\|v\|_{2,1}\] \[K =\nabla\] \[\mathcal{G} =i_{\mathbb{A}}+\sum_{i}D(u[i],\boldsymbol{\theta}).\]
As the operator \(K\) is linear, and the functionals \(\mathcal{F}\) and \(\mathcal{G}\) are convex and lower semi-continuous, all requirements for the application of the primal dual algorithm proposed in [10] are fulfilled.
To implement this algorithm, it is required to compute the Fenchel conjugate \(\mathcal{F}^{*}\) of \(\mathcal{F}\), as well as the proximal mappings of \(\mathcal{F}^{*}\) and \(\mathcal{G}\). We start with the derivation of the Fenchel conjugate of \(\mathcal{F}\). The conjugate of \(\|\cdot\|_{2,1}\) is the indicator function of the unit ball of the dual norm, that is, \(\|\cdot\|_{2,1}^{*}=i_{2,\infty}\). Hence we have \(\mathcal{F}^{*}(v)=i_{2,\infty}(v/\lambda)\), the indicator function of the set \(\{v:\|v\|_{2,\infty}\leq\lambda\}\subset\mathbb{H}^{2}\). As a next step, we compute the proximal operators of \(\mathcal{F}^{*}\) and \(\mathcal{G}\). Recall that the proximal operator of the indicator function \(i_{C}\) of some set \(C\) is given by the orthogonal projection onto \(C\). The projection \(P_{2,\infty}\colon\mathbb{H}^{2}\to\mathbb{H}^{2}\) onto the unit ball in the \((2,\infty)\)-norm is obtained by
\[(P_{2,\infty}(\boldsymbol{v}))_{k}[i]=\frac{v_{k}[i]}{\max\{1,(v_{1}[i]^{2}+v_{2}[i]^{2})^{1/2}\}},\qquad k=1,2.\]
Thus, the proximal operator of \(\mathcal{F}^{*}\) results in
\[\operatorname{prox}_{\mathcal{F}^{*}}(\boldsymbol{v})=P_{2,\infty,\lambda}( \boldsymbol{v})\coloneqq P_{2,\infty}(\boldsymbol{v}/\lambda).\]
Further, by introducing \(\tilde{f}=\left(f-\Phi_{\theta^{F}}(f)\right)^{2}-\left(f-\Phi_{\theta^{B}}(f )\right)^{2}\), one can finally show that
\[\operatorname{prox}_{\tau\mathcal{G}}(u_{0}[i])=P_{\mathbb{A}}\left(u_{0}[i]- \tau\tilde{f}[i]\right).\]
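For concreteness, these projections and proximal mappings can be written in NumPy as in the sketch below; here the dual prox is realised as the pixelwise projection onto the \((2,\infty)\)-ball of radius \(\lambda\), i.e. the standard proximal operator of \(\mathcal{F}^{*}\) for \(\mathcal{F}=\lambda\|\cdot\|_{2,1}\).

```
import numpy as np

def proj_A(u):
    """Orthogonal projection onto the admissible set A = {0 <= u <= 1}."""
    return np.clip(u, 0.0, 1.0)

def proj_2inf(v1, v2, lam):
    """Pixelwise projection onto the ball of radius lam in the (2, inf)-norm,
    used as the proximal operator of F*."""
    scale = np.maximum(1.0, np.sqrt(v1 ** 2 + v2 ** 2) / lam)
    return v1 / scale, v2 / scale

def prox_G(u, tau, f_tilde):
    """prox_{tau G}(u) = P_A(u - tau * f_tilde), where
    f_tilde = (f - Phi_F(f))**2 - (f - Phi_B(f))**2 pixelwise."""
    return proj_A(u - tau * f_tilde)
```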
The overall primal dual Algorithm 2 is summarised below.
```
Input: noisy input image \(f\in\mathbb{H}\)
Initialisation: \(v^{0}\in\mathbb{H}^{2}\), \(u^{0},\bar{u}^{0}\in\mathbb{H}\)
while \(\|u^{n+1}-u^{n}\|>\epsilon\) do
  \(v^{n+1}\gets P_{2,\infty,\lambda}(v^{n}+\sigma\mathbf{\nabla}\bar{u}^{n})\)
  \(u^{n+1}\gets P_{\mathbb{A}}(u^{n}-\tau\mathbf{\nabla}^{\intercal}v^{n+1}-\tau\tilde{f})\)
  \(\bar{u}^{n+1}\gets u^{n+1}+\eta(u^{n+1}-u^{n})\)
end while
return \(u^{n+1}\)
```
**Algorithm 2** Segmentation algorithm based on the minimisation of the energy functional (5) with respect to \(u\) for a fixed \(\theta\).
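Putting the pieces together, a compact NumPy version of Algorithm 2 reads as follows; it reuses `grad`, `div`, `proj_A`, and `proj_2inf` from the sketches above, and the step sizes \(\sigma,\tau\) and relaxation parameter \(\eta\) are illustrative choices satisfying the usual condition \(\sigma\tau\|\nabla\|^{2}\leq 1\).

```
import numpy as np

def chambolle_pock_segmentation(f_tilde, lam, sigma=0.25, tau=0.25,
                                eta=1.0, tol=1e-4, max_iter=500):
    """Algorithm 2: minimise i_A(u) + lam*||grad u||_{2,1} + <f_tilde, u>
    (equal to the discrete energy (9) up to a constant) for fixed networks,
    where f_tilde = (f - Phi_F(f))**2 - (f - Phi_B(f))**2 pixelwise."""
    u = 0.5 * np.ones_like(f_tilde)
    u_bar = u.copy()
    v1 = np.zeros_like(u)
    v2 = np.zeros_like(u)
    for _ in range(max_iter):
        g1, g2 = grad(u_bar)
        v1, v2 = proj_2inf(v1 + sigma * g1, v2 + sigma * g2, lam)
        # -nabla^T v = div(v), so the primal step adds tau * div(v)
        u_new = proj_A(u + tau * div(v1, v2) - tau * f_tilde)
        u_bar = u_new + eta * (u_new - u)
        if np.linalg.norm(u_new - u) <= tol:
            return u_new
        u = u_new
    return u
```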
### Acceleration with a mask prior
In our experiments, we observed that one of the denoisers (the one that is trained on the more complex region) tends to improve on the other region as well. Once this has started, it is quite difficult to stop the segmentation mask from expanding and converging to an undesired minimum, namely a constant segmentation result. Inspired by the work in [2, 42], we now propose to overcome the problem of finding an appropriate stopping criterion by adding a fidelity term, ensuring that the updated segmentation mask \(u^{k}\) does not deviate too far from its initial guess. Assume that we have a reference mask \(u_{R}^{0}\); then, in the continuous setting, we consider the successive problems:
\[\begin{split}\mathcal{E}_{f,\lambda}^{k}(u,\theta)\coloneqq i_{ \mathbb{A}}(u)&+\lambda|u|_{\text{TV}}+\int_{\Omega}\left(f-\Phi_{ \theta^{F}}(f)\right)^{2}u\\ &+\int_{\Omega}\left(f-\Phi_{\theta^{B}}(f)\right)^{2}(1-u)+ \frac{\mu}{2}||u-u_{R}^{k}||^{2}\,.\end{split} \tag{10}\]
We can therefore iteratively optimise problem (10) with the alternating procedure presented in Algorithm 3. Note that in this case, as the global energy is changed at each iteration, we no longer have a convergence guarantee for the alternating procedure.
```
Initialise \(u^{0}\gets f\), \(\boldsymbol{\theta}^{0}=\boldsymbol{\theta}_{0}\) and \(u_{R}^{0}\)
for \(k=1,\dots,N\) do
  \(\boldsymbol{\theta}^{k+1}\leftarrow\operatorname*{argmin}_{\boldsymbol{\theta}}\mathcal{E}_{f,\lambda}^{k}(u^{k},\boldsymbol{\theta})\)  {with a few ADAM iterations for \(\theta^{F}\), and a Chan and Vese update for the background if \(\Phi_{\theta^{B}}(f)=\theta^{B}\mathbbm{1}\)}
  \(u^{k+1}\leftarrow\operatorname*{argmin}_{u}\mathcal{E}_{f,\lambda}^{k}(u,\boldsymbol{\theta}^{k+1})\)  {with Algorithm 4}
  \(u_{R}^{k+1}\gets u^{k+1}\)  (update reference mask)
end for
```
**Algorithm 3** Alternating optimisation scheme with acceleration.
To solve the segmentation problem, we reformulate the optimisation of problem (10) for
fixed \(\mathbf{\theta}\) as \(\min_{u}\mathcal{F}(K(u))+\mathcal{G}^{k}(u)\), with
\[\mathcal{G}^{k}(u)=i_{\mathbb{A}}(u)+\frac{\mu}{2}||u-u_{R}^{k}||^{2}+\int_{ \Omega}\left(f(x)-\Phi_{\theta^{F}}(f)(x)\right)^{2}u(x)+\int_{\Omega}\left(f (x)-\Phi_{\theta^{B}}(f)(x)\right)^{2}(1-u(x)).\]
Recalling that \(\tilde{f}=(f(x)-\Phi_{\theta^{F}}(f)(x))^{2}-(f-\Phi_{\theta^{B}}(f)(x))^{2}\), we can show that:
\[\mathrm{prox}_{\tau\mathcal{G}^{k}}(u^{0}[i])=P_{\mathbb{A}}\left(\frac{u^{0}[ i]+\tau\mu u_{R}^{k}[i]-\tau\tilde{f}[i]}{1+\tau\mu}\right).\]
Observing that \(\mathcal{G}^{k}\) is \(\mu\)-strongly convex in \(u\), we consider the accelerated primal dual algorithm of [10] to solve problem (10).
```
Input: noisy input image \(f\in\mathbb{H}\), reference mask \(u_{R}^{k}\)
Parameters: \(\lambda,\sigma,\tau,\mu\)
Initialisation: \(v^{0}\in\mathbb{H}^{2}\), \(u^{0},\bar{u}^{0}\in\mathbb{H}\)
while \(\|u^{n+1}-u^{n}\|>\epsilon\) do
  \(v^{n+1}\gets P_{2,\infty,\lambda}(v^{n}+\sigma\mathbf{\nabla}\bar{u}^{n})\)
  \(u^{n+1}\gets P_{\mathbb{A}}\left((u^{n}-\tau\mathbf{\nabla}^{\intercal}v^{n+1}+\tau\mu u_{R}^{k}-\tau\tilde{f})/(1+\tau\mu)\right)\)
  \(\eta=\frac{1}{1+2\mu\tau},\quad\tau=\tau\eta,\quad\sigma=\frac{\sigma}{\eta}\)
  \(\bar{u}^{n+1}\gets u^{n+1}+\eta(u^{n+1}-u^{n})\)
end while
return \(u^{n+1}\)
```
**Algorithm 4** Segmentation algorithm based on the minimisation of the energy functional (10) with respect to \(u\) for a fixed \(\theta\).
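Relative to the plain Algorithm 2, only the primal prox and the step-size update change. The sketch below shows a single iteration of the accelerated loop, again reusing the helper functions `grad`, `div`, `proj_A`, and `proj_2inf` from above, with \(\mu\) and the reference mask \(u_R\) as in (10).

```
def accelerated_inner_step(u, u_bar, v1, v2, f_tilde, u_ref,
                           lam, mu, sigma, tau):
    """One iteration of Algorithm 4 for the strongly convex energy (10)."""
    g1, g2 = grad(u_bar)
    v1, v2 = proj_2inf(v1 + sigma * g1, v2 + sigma * g2, lam)
    u_new = proj_A((u + tau * div(v1, v2) + tau * mu * u_ref - tau * f_tilde)
                   / (1.0 + tau * mu))
    eta = 1.0 / (1.0 + 2.0 * mu * tau)   # acceleration, as in the pseudocode
    tau, sigma = tau * eta, sigma / eta
    u_bar = u_new + eta * (u_new - u)
    return u_new, u_bar, v1, v2, sigma, tau
```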
Having discussed the numerical implementation of the segmentation step, we now present the discrete setting and implementation of the denoising step.
### Denoising step using Noise2Fast strategy
We here detail the denoising of a discretised 2D image \(f\in\mathbb{R}^{m\times n}\) composed of a clean signal \(g\in\mathbb{R}^{m\times n}\) and noise \(n\in\mathbb{R}^{m\times n}\), i.e.
\[f=g+n.\]
For completeness, we introduce \(u_{B}^{k}\), which for \(k=0\) corresponds to the initialisation of the background region. These masks can either be obtained by thresholding the image, or can be given in form of user-provided boxes. For the next update steps, i.e. \(k=1,\ldots,N\), it holds that \(u_{B}^{k}=1-u^{k}\). Using these notations, for fixed \(\mathbf{u}^{k}=(u^{k},u_{B}^{k})\), in the \(k\)-th denoising step of our alternating procedure, the energy functional (5) reduces to
\[\min_{\mathbf{\theta}}\sum_{i}D(\mathbf{u}^{k}[i],\mathbf{\theta})=\min_{\mathbf{\theta}}\sum_{i}\left(\Phi_{\theta^{F}}(f)[i]-f[i]\right)^{2}\cdot u^{k}[i]+\left(\Phi_{\theta^{B}}(f)[i]-f[i]\right)^{2}\cdot u_{B}^{k}[i], \tag{11}\]
where \(\Phi_{\theta^{F}}\) and \(\Phi_{\theta^{B}}\) are (deep) experts respectively dedicated to the denoising of the foreground and background.
We build our denoisers on top of the Noise2Fast method introduced by Lequyer et al. in [26]. In this paper, the authors propose a fast single-image blind denoiser, using a special downsampling strategy. More precisely, their method consists of splitting a given image into smaller parts by using a checkerboard downsampling strategy. From a single image, four images are thus generated, by removing one half of all pixels, and shifting the remaining pixels to fill in the gaps left behind. Then, a network is trained to learn the mappings between the resulting downsampled image pairs. Due to the internal redundancy in form of recurrent patches present in images, and the high degree of self-similarity, the neural network will also be able to denoise the whole image instead of the downsampled ones [4, 46, 18]. For a more detailed description of the Noise2Fast training strategy, such as the network architecture, we refer the reader to [26].
In our approach, we use a different loss function from the one described in the work of Lequyer et al. [26]. Instead of considering the whole image domain for training, we restrict the optimisation process for the foreground \(\Phi_{\theta^{F}}\) (resp. background \(\Phi_{\theta^{B}}\)) expert to the current segmentation masks \(u^{k}\) (resp. \(1-u^{k}\)) obtained by Algorithm 2.
In a first step, as in [26] the downsampled training images are generated in the following way
\[f_{\text{even}}(i,j)=f\left(i,2j+(i\operatorname{mod}2)\right) \in\mathbb{R}^{m\times\frac{n}{2}}\] \[f_{\text{odd}}(i,j)=f\left(i,2j+(i\operatorname{mod}2)+1\right) \in\mathbb{R}^{m\times\frac{n}{2}}\] \[f_{\text{even}}^{\prime}(i,j)=f\left(2i+(i\operatorname{mod}2),j\right)\in\mathbb{R}^{\frac{m}{2}\times n}\] \[f_{\text{odd}}^{\prime}(i,j)=f\left(2i+(i\operatorname{mod}2)+1,j\right)\in\mathbb{R}^{\frac{m}{2}\times n},\]
and we repeat this downsampling procedure for the segmentation masks \(u^{k}\) and \(u^{k}_{B}\), for \(k=0,\ldots,N\) as well. We denote as
\[\mathcal{J}^{k}=\{ (f_{\text{even}},f_{\text{odd}},u^{k}_{\text{odd}},u^{k}_{B, \text{odd}}),(f_{\text{odd}},f_{\text{even}},u^{k}_{\text{even}},u^{k}_{B, \text{even}}),\] \[(f_{\text{even}}^{\prime},f_{\text{odd}}^{\prime},u^{k^{\prime}}_ {\text{odd}},u^{k^{\prime}}_{B,\text{odd}}),(f_{\text{odd}}^{\prime},f_{ \text{even}}^{\prime},u^{k^{\prime}}_{\text{even}},u^{k^{\prime}}_{B,\text{ even}})\}\]
the set of training data for \(k=0,\ldots,N\), with \(N\) being the number of iterations of the alternating minimisation.
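A compact way to generate the four downsampled images is via complementary checkerboard masks, as in the NumPy sketch below (assuming even image dimensions; the even/odd pairs correspond to the two complementary checkerboard phases of the index formulas above). The same function can be applied to the masks \(u^{k}\) and \(u^{k}_{B}\) to build \(\mathcal{J}^{k}\).

```
import numpy as np

def noise2fast_pairs(x):
    """Checkerboard downsampling of an (m, n) array into the four half-size
    images: f_even, f_odd (squeezed horizontally) and f'_even, f'_odd
    (squeezed vertically)."""
    m, n = x.shape
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    checker = (i + j) % 2 == 0                    # checkerboard phase
    f_even = x[checker].reshape(m, n // 2)        # keep pixels with (i + j) even
    f_odd = x[~checker].reshape(m, n // 2)        # keep pixels with (i + j) odd
    fp_even = x.T[checker.T].reshape(n, m // 2).T
    fp_odd = x.T[~checker.T].reshape(n, m // 2).T
    return f_even, f_odd, fp_even, fp_odd
```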
We then train the two denoising networks, \(\Phi_{\theta^{F}}\) and \(\Phi_{\theta^{B}}\), restricted to the given regions, \(u^{k}\), and \(u^{k}_{B}\), i.e. for \((\tilde{f},\tilde{g},\tilde{u},\tilde{u}_{B})\in\mathcal{J}^{k}\) we minimise
\[\mathcal{L}^{k}_{\mathbf{u}}(\mathbf{\theta})=\sum_{i}\left(\Phi_{\theta^{F}}(\tilde{f})[i]-\tilde{g}[i]\right)^{2}\cdot\tilde{u}[i]+\left(\Phi_{\theta^{B}}(\tilde{f})[i]-\tilde{g}[i]\right)^{2}\cdot\tilde{u}_{B}[i]. \tag{12}\]
Thus the self-supervised denoisers learn to reconstruct the even (resp. odd) lines and columns \(\tilde{f}\) of the image from the odd (resp. even) ones \(\tilde{g}\). As mentioned above, owing to the self-similarity and internal redundancy of natural images, by minimising \(\mathcal{L}^{k}_{\mathbf{u}}\) in (12) we also solve problem (11).
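As an illustration of how a single training step on the loss (12) might look, consider the PyTorch sketch below; the two networks `net_F`, `net_B` and the shared ADAM optimiser `opt` are assumptions, not the authors' exact training code.

```
import torch

def expert_training_step(net_F, net_B, opt, f_in, g_target, u, u_B):
    """One gradient step on the region-restricted loss (12): each expert is
    penalised only inside its current region (u for the foreground mask,
    u_B for the background mask)."""
    opt.zero_grad()
    res_F = (net_F(f_in) - g_target) ** 2
    res_B = (net_B(f_in) - g_target) ** 2
    loss = (res_F * u).sum() + (res_B * u_B).sum()
    loss.backward()
    opt.step()
    return loss.item()
```

Here `(f_in, g_target, u, u_B)` is one element of \(\mathcal{J}^{k}\) (as tensors of matching shape), and `opt` could be a single optimiser over both parameter sets, e.g. `torch.optim.Adam(list(net_F.parameters()) + list(net_B.parameters()))`.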
In the next section, we demonstrate possible applications of three different variants of the proposed joint denoising and segmentation method.
## 5 Experiments and Results
The code which was used to obtain the results presented in this work is provided on GitHub ([https://github.com/Nadja1611/Single-Image-based-unsupervised-joint-segmentation-and-denoising.git](https://github.com/Nadja1611/Single-Image-based-unsupervised-joint-segmentation-and-denoising.git)). As a first application, we test our method on the microscopy cell nuclei dataset from the DSB2018 dataset1 stemming from the Kaggle 2018 Data Science Bowl challenge. The data consists of a diverse collection of cell nuclei imaged by various fluorescence microscopes. The patches are of size \(128\times 128\), and come with manually generated segmentation ground truths. More precisely, we use the noise-free data and manually add Gaussian noise with three different noise levels, namely 10, 30, and 50. In our experiments, we considered the same subset of images as the one used in [5], where the authors demonstrated that the segmentation of noisy data can be improved by addressing denoising and segmentation in a cooperative (but not fully joint) manner.
Footnote 1: [https://www.kaggle.com/c/data-science-bowl-2018](https://www.kaggle.com/c/data-science-bowl-2018)
In the following experiments, for the evaluation of the segmentation performance we use the Dice metric, and for capturing the denoising performance in the experiments, we choose peak signal to noise ratio (PSNR) and structural similarity metric (SSIM).
We stop our alternating Algorithm 1 as soon as the decrease of energy (5) is less than 15 percent of the previous decrease rate. We tried out a few different strategies, and this one turned out to be the most promising. We indeed observed that a criterion based on the change in the energy decay is robust to different scales of the regularisation parameter \(\lambda\), and it also adapts to different types of images.
We compare the segmentation performance of our joint denoising and segmentation approach with the convex Chan-Vese model from [12] applied either on the noisy images directly, or on the previously denoised data within a sequential approach. For both the proposed joint approach and the sequential one, we use the same denoised image as starting point for fair comparisons. Further, we test our method against the partially joint denoising and segmentation framework in [5].
### Segmentation with the constant background assumption
We start with the evaluation of our method on a subset of the DSB2018 cell nuclei data which were manually corrupted (noise levels 10, 30 and 50). To this end, we train a foreground denoiser, \(\Phi_{\theta^{F}}\), and we assume the background to be constant, i.e. \(\Phi_{\theta^{B}}=\theta^{B}\mathds{1}.\) For this particular type of images, this assumption is useful, while for images with more structural patterns, this may not be a reasonable choice, and two denoising experts might be necessary.
To apply our joint approach, we first denoise the given image using the Noise2Fast strategy in the way described in Section 4, and use the thresholded denoised image (with the threshold \(\epsilon\) set to 0.5) as initialisation. For noise level 10, we applied the segmentation Algorithm 1 with the constant background assumption, while a Noise2Fast expert was considered for higher noise levels. We recall that the overall process for solving the joint segmentation and denoising problem is presented in Algorithm 1. Depending on the type of image, between two and six iterations of the alternating process are required to meet the convergence criterion.
For each method, we conducted the experiments with ten different values of the regularisation parameter \(\lambda\) evenly distributed in the interval \([0,1]\), and then selected for each image the result with the highest Dice value.
As a further comparison, we applied the convex Chan-Vese model from [12] directly on the noisy images. The obtained results are depicted in Figures 5 to 7, while the segmentation evaluation metrics are summarised in Table 1. We observe that for all three noise levels, the sequential approach and the Chan-Vese method from [12] struggle with intensity inhomogeneities of the cells. These examples highlight the strength of the proposed unified approach, which is capable of segmenting cells with intensities close to the mean value of the background. Notice that the proposed approach does not perform well on the last example due to the presence of intensity inhomogeneities, ascribed to a spatially varying field, the bias field, in the upper left corner of the image. Please note that in this case, evaluating the denoising performance might not be appropriate, as we are assuming a constant background and not applying denoising to the background.
In Table 1, the results obtained by the supervised joint denoising and segmentation method DenoiSeg [5] are summarised. Here, we ran the provided code using 10 annotated training images. More precisely, we used the DSB2018 dataset with noise level zero and added Gaussian noise in the same way as before to all of the 4320 images, among which 3750 were used for training, 670 for validation and the same 50 as we used for our experiments for testing. It has to be mentioned that for validation all 570 annotated validation images are used in [5], resulting in a total number of 580 annotated images during the training process. As displayed in Table 1, this method performs best. To have a fairer comparison in terms of training data, we decided to adapt their method by using 10 images in total (DenoiSeg (10 images) in Table 1), 7 for training and 3 for validation. In this setting, all available data are still used for the training of the denoiser, whereas for the segmentation network, the ground truth masks for all but the ten training images are zeroed out. With this smaller level of supervision, our approach outperforms the method of [5].
For higher noise levels, it is necessary to filter the background fidelity term. This avoids having to consider higher values of the regularisation parameter \(\lambda\), which may lead to an oversegmentation of the background and an overall decrease of the segmentation performance. For noise levels 30 and 50, as mentioned in Section 3.1, we therefore minimise
\[\mathcal{E}_{f,\lambda}(u,\boldsymbol{\theta})=i_{\mathbb{A}}(u)+\lambda|u|_{\mathrm{TV}}+\int_{\Omega}\left[K_{\sigma}*(f-\Phi_{\theta^{F}}(f))\right]^{2}u(x)\,dx+\int_{\Omega}\left[K_{\sigma}*(f-\Phi_{\theta^{B}}(f))\right]^{2}(1-u(x))\,dx\]
with \(K_{\sigma}\) being a mean filter with \(\sigma=3\).
The next paragraph shows experimental results which were obtained by applying our idea of training denoising experts for both regions.
### Segmentation using two denoisers
In the toy example in Figure 1 from Section 1.5, we trained two denoising experts (in this case we used a linear network consisting of one filter of size \(15\times 15\)) initialised by the yellow and purple boxes of size \(30\times 30\). We iterated between the denoising and segmentation steps three times, until the energy decrease was less than 10 percent. For the segmentation, we set the regularisation parameter \(\lambda\) to 0.02. After the first segmentation step, the loss functions of the denoisers were restricted to \(u\) and \(1-u\), respectively.
Figure 8 is a typical example showing the strength of the proposed algorithm compared to intensity-based approaches. In this experiment, we preprocessed the given image of size 256\(\times\)256 in such a way that both regions have the same mean value, and added Gaussian noise as described before, with a noise level of 10. As a consequence, the classical Chan-Vese algorithm totally fails on this example. This model can nevertheless perform well with an adapted hand-crafted transformation of the image to segment. As illustrated in the last two images of Figure 8, when fed with the map of the normalized image gradient instead of the original image intensities, the Chan-Vese model is able to segment the two parts of the image.

Figure 5: **Visual comparison of the segmentation results of data with noise level 10.** From left to right, this figure shows: the noisy input, the results obtained with the proposed joint approach, the sequential approach, the Chan-Vese baseline and the ground truth segmentation masks. For all compared methods, the \(\lambda\) maximising the Dice score has been selected.
On the other hand, our approach is able to automatically learn a relevant transformation of the image data and provides an excellent segmentation without any such hand-crafted preprocessing. The reason for this is again that the weights learnt by the two denoising experts strongly depend on the true underlying signal, which, in contrast to the mean intensity, is different in the two regions. Here, both denoising experts were initialised by boxes of size \(50\times 50\) centered in the regions. We used a regularisation parameter \(\lambda\) of 0.06, and set the learning rate to 0.001. Using the same stopping criterion as in the cell example, these results were obtained after 3 iterations of the alternating procedure involving denoising and segmentation steps.

Figure 6: **Visual comparison of the segmentation results of data with noise level 30.**
In Figure 9, we display the clean image considered in the experiment of Figure 8, as well as different denoised images with their corresponding quantitative metrics. More precisely, the second image in the figure is obtained by applying the Noise2Fast strategy to the whole image, while the third image is the result of the proposed joint optimisation procedure, where the image is composed using the segmentation mask \(u\) and the denoised images from the two denoising experts. Especially in the left region, we can observe a better denoising performance of the proposed method, which is more evident by examining the PSNR (20.36 vs 19.815) and SSIM (0.753 vs 0.696) values.
### Segmentation with a reference mask using Algorithm 3
In Figure 10, we show another example of image segmentation for three different noise levels using Algorithm 4. The main difficulty of this image lies in the intensities which are shared by the object to be segmented and the background. Therefore, we chose a representative box for initialising the squirrel which includes both dark and bright areas, in order to enable the foreground denoising expert to better generalise on the foreground region consisting of dark and bright areas. Naturally, as the squirrel and background do not differ much in terms of their structural properties, the foreground denoiser \(\Phi_{\theta^{F}}\) also performs well on the background, causing the segmentation mask \(u\) to grow. In order to control this behaviour, we applied our second strategy that includes a recursive reference mask as described in Algorithm 3, thus preventing the segmentation mask obtained at iteration \(k+1\) from deviating too much from the previous one at iteration \(k\). More precisely, the parameters we used for noise level 10 were \(\mu=0.0001\), \(\lambda=0.005\); for noise level 30 we set \(\mu=0.005\), \(\lambda=0.005\), while for noise level 50 we used \(\mu=0.00015\), \(\lambda=0.005\).

Figure 7: **Visual comparison of the segmentation results of data with noise level 50.**
In the following we discuss some possible extensions, and current limitations of the proposed joint denoising and segmentation approach.
\begin{table}
\begin{tabular}{c|c|c|c} \hline noise level & \(n10\) & \(n30\) & \(n50\) \\ \hline baseline & 0.820 & 0.773 & 0.582 \\ sequential & 0.799 & 0.777 & 0.735 \\ proposed & 0.851 & 0.825 & 0.786 \\ \hline DenoiSeg[5] & **0.864** & **0.848** & **0.818** \\ DenoiSeg (10 images) & 0.843 & 0.820 & 0.750 \\ \hline \end{tabular}
\end{table}
Table 1: Dice values obtained on 50 images of the DSB2018 dataset for the compared methods, and three different noise levels. Here, baseline is the convex Chan-Vese [12] method directly applied to the noisy data, while for the sequential method, we first denoise the image using Noise2Fast [26]. Our unsupervised method almost reaches the performance of the fully supervised approach [5].
Figure 8: **Segmentation of a noisy Brodatz image consisting of two different textures.** The first three images show the noisy input \(f\), the minimiser of energy (5), and the result obtained by directly applying the active contour algorithm [11]. The fourth image shows the normalized gradient of \(f\), and the last one is the result obtained when applying the classical Chan-Vese algorithm on the normalized gradient map.
## 6 Extensions and limitations
First, our proposed unified framework can be extended to the (multichannel) multiclass segmentation case, as we discuss in the following paragraph.
### Vector-valued multi-class model
In order to segment a noise-corrupted vector-valued image represented as \(\mathbf{f}=(f_{1},\dots,f_{L})\) into \(C\) different regions, we can consider \(C\) dedicated neural networks acting as denoising experts for each region. In this case, the objective is to estimate \(C\) segmentation masks \(\{u_{i}\}_{i=1}^{C}\) satisfying the simplex constraint, i.e. \(\sum_{i=1}^{C}u_{i}=1\), as well as the set of network parameters \(\mathbf{\theta}^{\text{MC}}=(\theta_{1}^{\text{MC}},\dots,\theta_{C}^{\text{MC}})\). With these notations, the energy (5) can be extended to segment noise-corrupted, vector-valued images \(\mathbf{f}\) as
\[\mathcal{E}_{f,\lambda}(\mathbf{u},\mathbf{\theta})\coloneqq i_{\mathbb{A}}(\mathbf{u})+ \lambda|\mathbf{u}|_{\text{TV}}+\sum_{i=1}^{C}\sum_{j=1}^{L}\int_{\Omega}\left(f_ {j}-\Phi_{\theta_{i}^{\text{MC}}}(f_{j})\right)^{2}u_{i}\,. \tag{13}\]
As before, it may not be necessary to train \(C\) different denoising networks, as some regions may be assumed to be constant and in this case the "expert" for region \(i\) can be replaced by the mean value of the image inside region \(i\).
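For illustration, the data-fidelity part of (13) can be accumulated as in the short sketch below; the lists `masks` and `experts` (one entry per class, where an expert may simply return a constant image) are assumptions, and the simplex constraint on the masks is not enforced here.

```
def multiclass_data_term(f_channels, masks, experts):
    """Data term of (13): sum over classes i and channels j of the squared
    residual (f_j - Phi_i(f_j))**2 weighted by the class mask u_i."""
    term = 0.0
    for u_i, phi_i in zip(masks, experts):
        for f_j in f_channels:
            term = term + (((f_j - phi_i(f_j)) ** 2) * u_i).sum()
    return term
```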
### Limitations
A limitation of the current work lies in the training strategy in the case where two denoisers are applied. In our experiments, we observed that once the denoising experts have been trained on the initial boxes and the subsequent segmentation step has been carried out, it may occur that one of the two segmented regions contains an important part of the other class.
Figure 9: **Comparison of denoising performance with different Noise2Fast strategies**. On the middle image, Noise2Fast is applied to the whole image. On the right image, we present the final denoised image obtained from the two separate denoisers learned with the proposed framework.
As a result, during the next denoising step, one of the networks is trained on parts of both regions present in the image. With the influence of the total variation regularisation, \(u\) may converge to an undesired constant mask. With the recursive integration of a reference mask, we already proposed in Section 4.2 a strategy to overcome this drawback. One interesting alternative would be to include an additional constraint enforcing the denoisers to perform better in their initial regions than in the other initial ones.
Next, in some of our experiments we have observed that the Noise2Fast denoiser is not well suited for the segmentation of certain images, such as the zebra image in Figure 3. The reason for this is that the filters of the learned experts operate locally, and are not good at capturing global information about the regions. As a consequence, in the case of the zebra, in regions where the stripes are thicker and the intensity values are closer to the ones in the background, the background expert outperforms the foreground one, resulting in an undesired result similar to the one obtained by the piecewise constant Chan-Vese model. To overcome this limitation, we modified the checkerboard strategy of the Noise2Fast method, and instead of squeezing the image to half of its width/height, we divided its size by a factor of four. In addition to including different denoisers, such as for instance the deep image prior [39], an interesting perspective would be to define new data fitting terms focusing on structural similarities within the different classes.

Figure 10: **Segmentation results obtained on the noisy images showing a squirrel corrupted with three different noise levels.** The first column shows the clean input image, and the initialisation for the foreground and background regions, while in the second column the noisy versions of the given image are depicted. The remaining ones present the segmentation results obtained using the proposed strategy with the segmentation Algorithm 4, the segmentation masks using the Chan-Vese algorithm provided by skimage with checkerboard initialisation and box initialisation, respectively. The last row shows the denoised images which are obtained by composing the obtained segmentation mask and expert denoiser outputs.
## 7 Conclusion
In this work, we have proposed a novel energy functional for the joint denoising and segmentation of images. Our framework combines the advantages of well-established variational models with modern self-supervised deep learning strategies. A major strength of the method lies in the fact that it can handle single images without the need for ground truth segmentation masks or noisy-clean training pairs. Further, the energy functional is designed in such a way that both tasks benefit from each other, which has also been confirmed by our experiments.
2309.16160 | Respondent-Driven Sampling: An Overview in the Context of Human
Trafficking | Respondent-driven sampling (RDS) is both a sampling strategy and an
estimation method. It is commonly used to study individuals that are difficult
to access with standard sampling techniques. As with any sampling strategy, RDS
has advantages and challenges. This article examines recent work using RDS in
the context of human trafficking. We begin with an overview of the RDS process
and methodology, then discuss RDS in the particular context of trafficking. We
end with a description of recent work and potential future directions. | Jessica P. Kunke, Adam Visokay, Tyler H. McCormick | 2023-09-28T04:23:14Z | http://arxiv.org/abs/2309.16160v1 | # Respondent-driven sampling: An overview in the context of human trafficking
###### Abstract
Respondent-driven sampling (RDS) is both a sampling strategy and an estimation method. It is commonly used to study individuals that are difficult to access with standard sampling techniques. As with any sampling strategy, RDS has advantages and challenges. This article examines recent work using RDS in the context of human trafficking. We begin with an overview of the RDS process and methodology, then discuss RDS in the particular context of trafficking. We end with a description of recent work and potential future directions.
## 1 Introduction
Human trafficking is a global public health concern with widespread and long-lasting negative consequences. Understanding trafficking and estimating the number of people being trafficked is complicated by the stigma, sensationalization, and secrecy of trafficking. Recent estimates of the number of people being trafficked worldwide range from 12.3 million to 45.8 million people (Barrick and Pfeffer, 2021). While prevalence estimation is just one of
many research priorities in this field, better constraining the prevalence estimates is important for guiding policy decisions. In this paper, we describe a common sampling technique for reaching this population, known as respondent-driven sampling (RDS). We provide an overview of the methodology as well as a perspective on RDS in the context of trafficking.
Human trafficking was legally defined in 2000, internationally by the Palermo Protocol adopted by the UN (United Nations General Assembly, 2000), and within the US also by the Trafficking Victims Protection Act (Victims of Trafficking and Violence Protection Act, 2000). Generally, **human trafficking** is the use of force, fraud or coercion to exploit one or more people through commercial sex or forced labor, though inducing a minor (someone under the age of 18 years old) into commercial sex is considered human trafficking regardless of the presence of force, fraud or coercion. There are many popular misconceptions about trafficking. For instance, human trafficking is often confused with people smuggling, in which people are moved consensually but illegally; by contrast, human trafficking can but does not necessarily involve movement, and it requires the use of force, fraud, or coercion (Rothman et al., 2017; Schroeder et al., 2022). While human trafficking has often been defined and approached through the lens of criminal justice, it is increasingly recognized as a complex public health issue.
A major challenge in studying human trafficking stems from differences in definitions or in the operationalization of the same definition from one study to another (Zhang, 2022). The Palermo Protocol explicitly mentions slavery and organ removal as forms of exploitation in its definition of trafficking, and some definitions include forced marriage, forced begging, or child soldiers. Even under a single definition, what gets counted as trafficking on a case-by-case basis depends on popular conceptions of trafficking, which are shaped by racism, sexism, colonialism, and other systemic injustices. Much of the anti-trafficking movement in the United States, and early legislation such as the Mann Act which continues to be used in prosecutions today, has roots in unfounded early-twentieth-century panic about "white slave traffic" (Allain, 2017). Black youth who trade sex for money or material needs are more likely than their white counterparts to be viewed as deviants or complicit agents rather than as victims (Showden and Majic, 2018). Gendered and racialized ideas of innocence and purity misinform the popular narrative about who is trafficked, why, and what they need. This focus also encourages sensationalism, distracting from progress.
Researchers, policymakers, law enforcement officials, service providers, and others who are central to identifying and combating human trafficking are subject to these same biases and misconceptions.
In addition to examining the complex question of what counts as trafficking, researchers have been working on developing effective, standardized statistical and sampling methods to understand the scope and nature of human trafficking. Many studies to-date have used administrative data, but for various reasons trafficking-related charges and prosecutions are thought to represent only a small, biased sample of existing trafficking cases. These case data reflect the aforementioned biases in determining which cases count as trafficking and who counts as a victim. Additionally, traffickers are often charged instead with other easier offenses to prove (Barrick and Pfeffer, 2021).
For this reason, many other studies have collected fresh data, which raises other challenges. Traditional survey methods assume both that researchers have a sampling frame or a list of people in the general population that includes the target population of interest (in our case, people who are being trafficked) and that respondents will willingly identify whether they are being trafficked. In practice, accessing people in this population requires learning how to find them and building their trust. Individuals who are trafficked may not have autonomy of their movements, may distrust officials, and may not feel comfortable identifying themselves either because of stigma or fear of retribution. In this setting, traditional sampling techniques often does not reach trafficked individuals.
One strategy to reach individuals excluded from standard surveys involves leveraging the social networks of individuals in the group of interest. With these techniques, the researcher does not need to access a representative sample from the general population but, rather, interacts with a sample of people who are _connected_ with members of the group of interest. These network-based methods broadly fall into two categories. The first category does not involve interacting with members of the group directly. Strategies such as the network scale-up method (NSUM) ask individuals how many people they know in the target population, i.e. people who are experiencing trafficking or who have experienced it within some time period. Responses from the general population are then "scaled-up" under assumptions about how well the average fraction of respondents' networks made up of trafficked individuals extrapolates to the population as a whole (Bernard et al., 1991, 2010; Killworth
et al., 1998; Laga et al., 2021; McCormick, 2021).
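To make the scale-up idea concrete, the following minimal sketch (not taken from the cited work) computes the basic NSUM estimate: each respondent reports how many people they know in the hidden group (`y`) and their total personal network size (`d`), and the aggregate ratio is scaled up to the known total population size.

```
def basic_scale_up_estimate(y, d, total_population):
    """Basic network scale-up estimator of the hidden-population size:
    N_hidden ~= N_total * (sum of reported hidden-group ties) /
                          (sum of reported personal network sizes)."""
    return total_population * sum(y) / sum(d)

# illustrative call with made-up numbers:
# basic_scale_up_estimate(y=[0, 1, 0, 2], d=[150, 300, 90, 420],
#                         total_population=1_000_000)
```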
These indirect methods have the advantage that they do not require respondents to identify themselves or specific other people as members of the group of interest. A disadvantage is that these methods are often limited to prevalence estimation rather than gaining additional insight into risk factors, experiences, or possible paths out of trafficking. NSUM-based trafficking prevalence estimates have not yet been published to our knowledge, but several studies as part of the Prevalence Reduction Innovation Forum (PRIF) initiative are currently underway to directly compare NSUM and other estimation methods on the same target populations (University of Georgia Center on Human Trafficking Research and Outreach, 2023; Schroeder et al., 2022; Zhang, 2022).
A second class of methods, which we focus on in this paper, involve directly interacting with individuals in the group of interest. These methods fall under a general class of methods known as "link-tracing" or "chain-referral" designs because recruitment proceeds along links in the social network connecting individuals who are victims of trafficking. Several related iterations of link-tracing designs have been proposed, so we begin with some terminology. **Chain-referral sampling** is a general term for a method that "traces" respondents' networks as a means of recruitment. **Snowball sampling** is a chain-referral method originally proposed as a way to learn about network features that starts with a probability sample of respondents and traces their networks. **Respondent-driven sampling (RDS)** is sometimes called "non-probability snowball sampling" because it starts with a _convenience sample_ and respondents choose network members to recruit. Researchers use RDS to estimate prevalence, understand characteristics of particular population (e.g. the fraction of sex workers in an area who have been trafficked), or to access members of a hard-to-reach group for an intervention.
This paper focuses on RDS since it is an increasingly popular sampling and estimation strategy for human trafficking. In their scoping review of measurement strategies to learn about the prevalence and experience of human trafficking, Barrick and Pfeffer (2021) report that around 16% of the studies included in their review utilized RDS. Franchino-Olsen et al. (2022) conducted a scoping review on prevalence estimates for domestic minor sex trafficking and commercial sexual exploitation of children, and of the six studies included in the review, one used RDS. These examples demonstrate both that RDS is being used in
human trafficking research and that the methodology is not yet standardized.
In the remainder of this paper, we provide an overview of the implementation and assumptions required for RDS (Section 2), followed by a discussion of RDS methodology (Section 3). We then turn to discussion of RDS and its potential advantages and challenges in the context of human trafficking (Section 4), as well as recent advances and potential opportunities (Section 5). Finally, we conclude in Section 6.
## 2 RDS implementation and assumptions
In this section, we describe the typical RDS process and assumptions required for statistical inference, which we discuss in the next section. RDS can be used for three distinct purposes: (i) estimating the characteristics of the group of individuals (e.g. the fraction of trafficking victims who are also minors); (ii) estimating a population size or prevalence; and (iii) accessing a representative sample of individuals for further study or intervention (e.g. to evaluate the effectiveness a particular type of outreach). RDS relies on members of the group of interest to recruit other members of the group into the study, thus leveraging the social connections and relationships of group members to increase participation.
The RDS process begins with a convenience sample of members of the group of interest, people who have been trafficked within some period of time. These individuals are often known to researchers from previous studies or interventions, or they have previously interacted with public health infrastructure. These initial recruits are known as **seeds**. The seed individuals are asked to recruit a particular number of additional individuals, also from the group of interest. Each of these recruits typically receives an incentive to participate in the study, known as the **primary incentive**. Recruits are then asked to bring more individuals from the group of interest into the study. When a new recruit participates in the study, the person who recruited them receives a **secondary incentive**. Each new recruitment cycle defines a **wave** of a recruitment **chain**, the complete set of individuals and their referral connections including the original seeds.

Figure 1: Each figure shows an entire example network. Nodes recruited in waves 1, 2 and 3 are shown in red, yellow, and light blue, respectively. Grey nodes are never recruited, and bolded paths indicate directed recruitment links. Paths that are not bolded remain unobserved to the researcher.
There are a number of practical considerations when performing RDS. The order of recruitment can be consequential when analyzing RDS, as we discuss below, and recording it is necessary if there is a secondary incentive. Thus, it is critical for researchers to keep track of the recruitment sequencing. Often, this task is accomplished by passing out coupons with unique identifying numbers. Each participant gets a certain number of coupons containing their unique recruiter number, so that when a new participant brings a coupon they were given, the study team knows who recruited them. An additional consideration is the number of coupons available to each person. Allowing each person to recruit more people reduces the chances that a chain terminates in the early waves. For a given sample size, however, it also means that the study can incorporate fewer seeds and, thus, have chains originating from fewer different parts of the network.
Figure 1 demonstrates the RDS process on a small example network, starting with a single seed (RDS typically begins with multiple seeds). We begin with a single seed, selected as a convenience sample, denoted in red. The seed recruits, in this example, three additional participants in the first wave. These participants then, in turn, recruit additional participants in subsequent waves. The unbolded network edges in the figure are not observed. This example also illustrates two challenges with the RDS procedure. First, the sampling is happening on top of an existing social network which is unknown to the researcher and difficult to recover from the RDS chain. Second, the sampling process on that network is controlled by the respondents, not by the researcher, meaning that the choice of who is included in the study is up to the respondents and may or may not be representative or meet other desirable sample criteria.
With this procedure in mind, we begin our discussion of the statistical properties and
assumptions of RDS. Since the initial seeds for RDS are a convenience sample, they are not representative of the population. In an ideal world, however, subsequent recruitment waves would "move away" from the initial seeds in the social network, making it less and less consequential which initial seeds are used. Also under ideal circumstances, as the chains traverse the network they will include respondents with heterogeneous characteristics and the frequency of those characteristics in the sample will be roughly that of the population. If this happens, RDS behaves like a mathematical process known as a Markov Chain Monte Carlo (Goel and Salganik, 2009).
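To make the analogy concrete, the following minimal Python sketch (using networkx, with made-up network and trait parameters) simulates an idealized referral process on a synthetic network and compares the raw sample proportion of a binary trait to the population proportion. It is only an illustration of the intuition, not of the formal results in Goel and Salganik (2009).

```python
import random
import networkx as nx

random.seed(1)

# Synthetic "hidden" population: a small-world network with a binary trait.
G = nx.watts_strogatz_graph(n=1000, k=8, p=0.1)
trait = {v: random.random() < 0.25 for v in G}        # roughly 25% have the trait
true_prev = sum(trait.values()) / G.number_of_nodes()

# Idealized referral process: each recruit refers up to 2 unsampled contacts.
seed = 0                                              # arbitrary convenience seed
seen = {seed}
frontier = [seed]
while frontier and len(seen) < 300:
    next_wave = []
    for v in frontier:
        contacts = [u for u in G.neighbors(v) if u not in seen]
        for u in random.sample(contacts, min(2, len(contacts))):
            seen.add(u)
            next_wave.append(u)
    frontier = next_wave

naive_mean = sum(trait[v] for v in seen) / len(seen)
print(f"true prevalence {true_prev:.3f}  naive sample mean {naive_mean:.3f}")
```

Because recruitment in this toy example is random and runs for several waves on a network with similar degrees, the raw sample proportion should land near the truth; the assumption violations discussed below (preferential recruitment, bottlenecks, too few waves) are exactly what breaks this behavior in practice.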
This ideal behavior of RDS requires several assumptions. First, RDS assumes that members of the population can be reached through their network, i.e. that they know one another reciprocally, interact frequently, are willing to recruit others, and have mobility. As we discuss further later, this assumption may not be met in the context of human trafficking. Restrictions on mobility, for example, may make it impossible for individuals to receive a coupon or to bring a coupon they may receive to a research center and participate in the study. Second, RDS assumes that respondents' network sizes are either known or accurately estimated. This assumption is necessary because the likelihood of being sampled depends on the respondent's network; a person with more contacts has more chances to be included. Third, the sampling process needs to continue through enough waves to mitigate the dependence on seeds. If there aren't enough waves, then the structure of the sample will be too closely related to the initial seeds. Gile and Handcock (2010) found that the seed-induced bias depends on the extent of homophily and the number of sampling waves. If there are substantial bottlenecks in the network, then the recruitment process can get "stuck" in one pocket and not explore the full extent of the graph (see Rohe (2019), for example). Particularly with small groups, the sample size can become close to the total population size. Fourth, RDS assumes sampling is done with replacement, meaning that the study may recruit the same person more than once. Fifth, RDS assumes that network connections are reciprocal, that person A is equally likely to refer person B as person B would be to refer person A (Volz and Heckathorn, 2008).
Finally, RDS assumes that respondents recruit randomly from their contacts. Under this assumption, the only factor that impacts how likely a person is to be recruited is their number of contacts. In practice, though, recruitment may be highly preferential, or even if
the recruitment is random, there may be selection bias in which people who receive coupons are more likely to actually participate in the study. Preferential referral can introduce bias (Gile and Handcock, 2010). The recruitment process is likely based on several factors that are not visible to the researchers and therefore cannot be controlled for in a straightforward way. However, in Section 5, we discuss some extensions of RDS that are designed to give the researcher more control over recruitment. Goel and Salganik (2010) point out that violations of these assumptions can lead to substantial issues with statistical inference.
## 3 RDS estimation
As mentioned previously, researchers use RDS for a variety of estimation goals. In this section, we briefly discuss the intuition behind using RDS to estimate prevalence and to estimate a population fraction. We focus on estimating the population fraction. Readers interested in prevalence estimation can refer to Handcock et al. (2014) or Crawford et al. (2018).
We take as a working example conducting RDS to estimate the fraction of sex workers who have been trafficked. The researcher performs RDS on the population of sex workers (which is likely difficult to access with other sampling methods due to stigma, fear of prosecution, or other factors) and for each person recruited performs an interview where the person indicates whether they have been trafficked.
The first estimator that we might consider would be to simply take the average. That is, we take the number of sex workers recruited who report being trafficked divided by the total number recruited. This estimator would be biased because some people are more likely to be recruited than others. Specifically, as described in the previous sections, people with more contacts have more chances to be included in the sample. To compensate for this, a class of estimators called _Horvitz-Thompson estimators_ (or sometimes generalized Horvitz-Thompson estimators) re-weight the average by the inverse of the likelihood that a person is included in the sample (Heckathorn, 1997; Salganik and Heckathorn, 2004; Volz and Heckathorn, 2008). Respondents with fewer connections are less likely to be included in a referral chain, and thus have a lower inclusion probability, so the estimator gives their responses extra weight, proportional to how likely (or not) they are to be referred. In this case, the RDS estimator uses the inverse of each respondent's estimated degree--how many
reciprocal ties they have to other members of the population of interest--as a correction factor for the estimate. Gile and Handcock (2010) provide a much more thorough discussion of these estimators and their properties. The question of how to measure the uncertainty in these estimators is also an area of current work (see for example Green et al. (2020); Rohe (2019); Baraff et al. (2016); Goel and Salganik (2010)).
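As a rough numerical illustration of this reweighting (with hypothetical interview data; real analyses would use dedicated RDS software and the variance methods cited above), a Volz–Heckathorn-style estimate divides an inverse-degree-weighted sum of responses by the sum of the weights:

```python
# Hypothetical RDS interview data: reported degree and trafficking indicator.
respondents = [
    {"degree": 12, "trafficked": 0},
    {"degree": 3,  "trafficked": 1},
    {"degree": 25, "trafficked": 0},
    {"degree": 4,  "trafficked": 1},
    {"degree": 8,  "trafficked": 0},
]

naive = sum(r["trafficked"] for r in respondents) / len(respondents)

# Volz-Heckathorn-style estimator: weight each response by 1 / degree,
# so low-degree (harder-to-reach) respondents count for more.
weights = [1.0 / r["degree"] for r in respondents]
vh = sum(w * r["trafficked"] for w, r in zip(weights, respondents)) / sum(weights)

print(f"naive proportion: {naive:.3f}")
print(f"inverse-degree weighted proportion: {vh:.3f}")
```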
## 4 Considerations in the context of trafficking
There are several considerations for successfully implementing RDS, particularly in the context of human trafficking. Beginning with observations from Simic et al. (2006), we illustrate some of the difficulties that have arisen in the context of specific trafficking studies from the literature. In their studies of sex workers in three different countries, Simic et al. (2006) were unable to recruit enough study participants through RDS, and they attributed this to several interrelated potential factors:
**Lack of trust.** General mistrust of official agencies combined with the sensitivity around sex worker status to reduce participation. Even though the study team worked in advance to build trust and create community ties, tight control by brothels and police crackdowns increased potential participants' reluctance to identify themselves or others. Some participants did not want to reveal their own status as a sex worker to others by recruiting them, since the recruits would find out the eligibility criteria of the study when they were interviewed. Participants sometimes avoided recruiting particular people to avoid identifying them. Simic et al. (2006) suggest it may help to run the studies for a longer time to gain more trust, especially if the seed sampling and interviews occur at a location with ongoing services that can be used as an interview site. However, in places where sex workers have little contact with local services, this may not be feasible.
**Social network structure.** RDS assumes dense, connected networks. By contrast, many of the sex workers in these communities were isolated, either due to restricted movement or because they worked independently and did not tend to reveal their status to others. In Serbia, Simic et al. (2006) found that most sex workers worked independently and did not connect much with each other. They also found that street sex workers, organized sex workers, and independent sex workers tended not to connect with each other. Street sex workers tended to be socially separated by ethnicity, sexuality, and other aspects of identity.
**Restricted movement.** In Montenegro, brothels were tightly controlled and sex workers were not allowed to leave. Sex workers with restricted movement were unlikely to be able to receive a coupon or to go to the study locations to participate even if they received a coupon. This seems to have combined with intense policing practices in retaliation for a recent police HIV infection to hamper study recruiting.
**Inadequate study incentives.** The financial incentives provided were not high enough relative to the earnings sex workers could make and the opportunity cost of missing work to participate in the study. Sex workers were more interested in the HIV testing than the financial incentives. If the incentive is too high and generally appealing beyond the target population, this can encourage people who are not trafficked to attempt to participate. This illustrates the importance of better identifying effective incentives in advance of the study.
We note a few additional concerns illustrated by other studies:
**Unrepresentative seeds.**Zhang et al. (2014) implemented RDS to estimate trafficking prevalence among unauthorized migrant laborers in San Diego. They worked with a community partner to build trust with the migrant community and recruit initial respondents, and they were able to exceed their estimated minimum sample size. However, the study was also limited by the combination of ethnic homophily and an ethnically homogeneous set of seeds. All their seeds were Mexican workers, and unauthorized Mexican migrants tended to connect among themselves rather than with other Spanish-speaking unauthorized migrants; as a result, the study had limited recruitment among non-Mexicans and was not representative of the undocumented migrant laborer community as a whole.
**Non-reciprocal links.** In a study of sex trafficking risk factors using data from a 2011 RDS survey of urban street-based Ohio sex workers, Chohaney (2016) found that the network violated the reciprocity assumption. Roughly one quarter of respondents described the person who referred them as a stranger, one quarter described them as neighbors or someone they "kind of know," and the remaining half described them as friends or family members.
**Lack of visibility to other sex workers.** While Carrillo et al. (2020) did not study trafficking but rather women who exchange sexual services for money or other goods, their study findings are relevant to implementing RDS in trafficking studies as well. One-third of the study participants recruited by previous waves of participants turned out to be
ineligible or had not exchanged sex in the past year. One potential reason they identified is that women may have limited knowledge of whether specific other women they know are exchanging sex.
**Misalignment between recruitment and eligibility criteria.** Additionally, in the study by Carrillo et al. (2020), recruiters were asked to recruit women who exchanged sex, not women who exchanged sex specifically within the last 12 months, which was the specific question that interviewers asked to assess group membership. This illustrates the importance of aligning the recruitment instructions with the study criteria, and yet asking participants to confirm more specific criteria when they recruit further participants may not be feasible if it requires them to ask others more specific and sensitive questions.
## 5 Recent advances and future directions
In this section we discuss three extensions of RDS that were developed to address the challenges that arise when some or many of the critical assumptions mentioned in Section 2 are not met in practice: network sampling with memory (NSM), randomized respondent-driven sampling (RRDS), and link-tracing sampling methods that combine various approaches to improve estimation.
### Network Sampling with Memory (NSM)
NSM, proposed by Mouw and Verdery (2012), is an extension of RDS that builds upon advancements in the mathematics and computer science literature on random walks on graphs. At a high level, the researcher supervises and strategically directs the recruitment process as it unfolds. This gives the researcher more control, ultimately yielding a more efficient sampling framework than can be attained with traditional RDS.
NSM begins with initial seeds from a convenience sample, all of whom provide a roster of their contacts known to be members of the target population in addition to answering the substantive interview questions chosen by the researcher. NSM is then implemented as a two-step approach--the **Search** mode followed by the **List** mode. Search mode prioritizes **bridge** nodes, which are individuals who connect two or more clusters together, to sufficiently explore the network, while List mode ensures that nodes sampled early in the process are not over-represented in the sample.
Search mode takes the network information of respondents and uses the local topography to identify bridge nodes that connect unexplored portions of the network. These nodes are then given priority in the recruitment process. The researcher pre-specifies a threshold that triggers when the network has been sufficiently explored by Search mode. After Search mode concludes, NSM proceeds to the List mode which entails two steps: (1) keeping a list of all individuals on the revealed network and (2) sampling from that list with the same cumulative probability for each individual such that new additions to the list are given priority.
One of the key advantages to NSM compared to RDS is the improved efficiency in searching the network. Given high quality network data collected from each respondent, the computation and processing costs associated with this method are small. However, collecting high quality network data from human populations can be prohibitively expensive or logistically infeasible. NSM may also be impractical in the context of human trafficking, where estimates of each respondent's network degree may be highly variable. There are also additional time and effort costs associated with the real-time supervision and direction of the recruitment process that make NSM more challenging to implement in practice.
### Randomized Respondent-Driven Sampling (RRDS)
Respondents recruiting randomly from their contacts is an important assumption of RDS that is often violated in practice.
Boudreau et al. (2023) propose a cell-phone based variant of RDS that addresses this challenge. The setup is similar to RDS in that researchers begin with a convenience sample for the initial seeds. From each seed, the researcher collects a list of phone numbers for their contacts believed to also be in the target population. The researcher then chooses a random subset of the respondents' contacts, administers the survey, and collects a list of phone numbers of their contacts for the subsequent wave. This process is then repeated until the desired sample size is reached. This process is illustrated in Figure 2.
RRDS has several advantages relative to traditional RDS.
**More reliable randomization.** By introducing randomness at each wave of the recruitment process, this method is much closer to the ideal RDS assumption of random selection among contacts.
Figure 2: Each figure shows an entire example network. Nodes recruited in waves 1, 2 and 3 are shown in red, yellow, and light blue, respectively. Grey nodes are never recruited, and bolded paths indicate directed recruitment links. (A) Demonstrates the process of a traditional RDS recruitment tree. (B) Shows how randomizing recruitment of respondents’ contacts for each wave results in better coverage.
**Phone based rather than venue based.** RRDS is carried out using phone surveys and does not require in-person interview sites. This makes RRDS an attractive option for applications like human trafficking, where restricted mobility makes it difficult to recruit respondents in person.
**More control over recruitment.** While RRDS is still respondent driven, introducing randomization gives the researcher more control over the recruitment process compared to traditional RDS.
**Simple to supervise.** The time and effort costs associated with administering the randomization are relatively low. It is as simple as drawing a simple random sample from each respondent's list of phone numbers before proceeding with the next wave of recruitment, as sketched below.
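A minimal sketch of that researcher-side randomization step, with hypothetical contact lists and an assumed two invitations per respondent:

```python
import random

random.seed(7)

# Hypothetical contact lists (phone numbers) reported by current-wave respondents.
reported_contacts = {
    "resp_01": ["+100", "+101", "+102", "+103", "+104"],
    "resp_02": ["+200", "+201"],
    "resp_03": ["+300", "+301", "+302"],
}

PER_RESPONDENT = 2   # how many contacts to invite per respondent (design choice)

next_wave = []
for respondent, contacts in reported_contacts.items():
    k = min(PER_RESPONDENT, len(contacts))
    invited = random.sample(contacts, k)          # researcher-side randomization
    next_wave.extend(invited)

print("invited into next wave:", next_wave)
```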
RRDS will, of course, not be well-suited to every context. The most obvious requirement is the need to work with a population where individuals have access to cellphones (and the autonomy to use those phones as they wish). Individuals in the group of interest without access to a phone will be excluded. RRDS also relies on individuals knowing phone numbers (or saving contacts) of other members of the group of interest. Boudreau et al. (2023) implemented RRDS with workers in a large-scale industrial manufacturing setting during the peak of the COVID-19 pandemic. Telephone surveys were the only means to obtain critical information on the health and well-being of workers, as in-person activities were restricted. More research is needed to understand how effective this approach will be in the context of trafficking.
### Combining link-tracing methods and other strategies
Specifically in the context of trafficking studies, there have also been efforts to combine RDS with traditional representative sampling methods, venue-based sampling, and multiple systems estimation to improve estimation and inference (Vincent and Thompson, 2017; Vincent, 2018). Zhang et al. (2019) leverage overlaps among RDS participants' social networks to improve population size estimation using mark-recapture methods. Vincent et al. (2021) conduct both RDS and venue-based sampling, then use mark-recapture methodology to improve inference by combining the two samples. Vincent et al. (2021) increase the number of initial seeds and reduce the necessary number of RDS waves.
## 6 Conclusion
Human trafficking is a complex, stigmatized, secretive and constantly evolving phenomenon. Respondent-driven sampling offers several advantages over traditional survey methods for studying this population, but its success requires building trust with the relevant communities, providing well-informed and motivating incentives, and understanding and accounting for several aspects of the social network structure of the people being trafficked and their surrounding community. We have highlighted some potential suggestions from the literature for addressing these concerns as well as promising directions for future research.
|
2301.05058 | Sparse Coding in a Dual Memory System for Lifelong Learning | Efficient continual learning in humans is enabled by a rich set of
neurophysiological mechanisms and interactions between multiple memory systems.
The brain efficiently encodes information in non-overlapping sparse codes,
which facilitates the learning of new associations faster with controlled
interference with previous associations. To mimic sparse coding in DNNs, we
enforce activation sparsity along with a dropout mechanism which encourages the
model to activate similar units for semantically similar inputs and have less
overlap with activation patterns of semantically dissimilar inputs. This
provides us with an efficient mechanism for balancing the reusability and
interference of features, depending on the similarity of classes across tasks.
Furthermore, we employ sparse coding in a multiple-memory replay mechanism. Our
method maintains an additional long-term semantic memory that aggregates and
consolidates information encoded in the synaptic weights of the working model.
Our extensive evaluation and characteristics analysis show that equipped with
these biologically inspired mechanisms, the model can further mitigate
forgetting. | Fahad Sarfraz, Elahe Arani, Bahram Zonooz | 2022-12-28T12:56:15Z | http://arxiv.org/abs/2301.05058v1 | # Sparse Coding in a Dual Memory System for Lifelong Learning
###### Abstract
Efficient continual learning in humans is enabled by a rich set of neurophysiological mechanisms and interactions between multiple memory systems. The brain efficiently encodes information in non-overlapping sparse codes, which facilitates the learning of new associations faster with controlled interference with previous associations. To mimic sparse coding in DNNs, we enforce activation sparsity along with a dropout mechanism which encourages the model to activate similar units for semantically similar inputs and have less overlap with activation patterns of semantically dissimilar inputs. This provides us with an efficient mechanism for balancing the reusability and interference of features, depending on the similarity of classes across tasks. Furthermore, we employ sparse coding in a multiple-memory replay mechanism. Our method maintains an additional long-term semantic memory that aggregates and consolidates information encoded in the synaptic weights of the working model. Our extensive evaluation and characteristics analysis show that equipped with these biologically inspired mechanisms, the model can further mitigate forgetting1.
Footnote 1: Code available at [https://github.com/NeurAI-Lab/SCoMMER](https://github.com/NeurAI-Lab/SCoMMER)
## 1 Introduction
The ability to continually acquire, consolidate, and retain knowledge is a hallmark of intelligence. Particularly, as we look to deploy deep neural networks (DNNs) in the real world, it is essential that learning agents continuously interact and adapt to the ever-changing environment. However, standard DNNs are not designed for lifelong learning and exhibit catastrophic forgetting of previously learned knowledge when required to learn tasks sequentially from a stream of data (McCloskey and Cohen, 1989).
The core challenge in continual learning (CL) in DNNs is to maintain an optimal balance between plasticity and the stability of the model. Ideally, the model should be stable enough to retain previous knowledge while also plastic enough to acquire and consolidate new knowledge. Catastrophic forgetting in DNNs can be attributed to the lack of stability, and multiple approaches have been proposed to address it. Among them, _Rehearsal-based_ methods, Riemer et al. (2018); Aljundi et al. (2019) which aim to reduce forgetting by continual rehearsal of previously seen tasks, have proven to be an effective approach in challenging CL tasks Farquhar and Gal (2018). They attempt to approximate the joint distribution of all the observed tasks by saving samples from previous tasks in a memory buffer and intertwine the training of the new task with samples from memory. However, due to the limited buffer size, it is difficult to approximate the joint distribution with the samples alone. There is an inherent imbalance between the samples of previous tasks and the current task. This results in the network update being biased towards the current task, leading to forgetting and recency bias in predictions. Therefore, more information from the previous state of the model is needed to better approximate the joint distribution and constrain the update of the model to preserve the learned knowledge. However, it is still an open question what the optimal information is for replay and how to extract and preserve it.
The human brain provides an existence proof for successful CL in complex dynamic environments without intransigence or forgetting. Therefore, it can provide insight into the design principles and mechanisms that can enable CL in DNNs. The human brain maintains a delicate balance between stability and plasticity through a complex set of neurophysiological mechanisms Parisi et al. (2019); Zenke et al. (2017) and the effective use of multiple memory systems Hassabis et al. (2017). In particular, evidence suggests that the brain employs _Sparse Coding_, that the neural code is characterized by strong activations of a relatively small set of neurons. The efficient utilization of sparsity for information representation enables new associations to be learned faster with controlled interference with previous associations while maintaining sufficient representation capacity. Furthermore, complementary learning systems (CLS) theory posits that effective learning requires two complementary learning systems. The hippocampus rapidly encodes episodic information into non-overlapping representations, which are then gradually consolidated into the structural knowledge representation in the neocortex through the replay of neural activities.
Inspired by these mechanisms in the brain, we hypothesize that employing a mechanism to encourage sparse coding in DNNs and mimic the interplay of multiple memory systems can be effective in maintaining a balance between
stability and plasticity. To this end, we propose a multi-memory experience replay mechanism that employs sparse coding, SCoMMER. We enforce activation sparsity along with a complementary dropout mechanism, which encourages the model to activate similar units for semantically similar inputs while reducing the overlap with activation patterns of semantically dissimilar inputs. The proposed semantic dropout provides us with an efficient mechanism to balance the reusability and interference of features depending on the similarity of classes across tasks. Furthermore, we maintain additional long-term semantic memory that aggregates the information encoded in the synaptic weights of the working memory. Long-term memory interacts with episodic memory to retrieve structural knowledge from previous tasks and facilitates information consolidation by enforcing consistency in functional space.
Our empirical evaluation on challenging CL settings and characteristic analysis show that equipping the model with these biologically inspired mechanisms can further mitigate forgetting and effectively consolidate information across the tasks. Furthermore, sparse activations in conjunction with semantic dropout in SCoMMER leads to the emergence of subnetworks, enables efficient utilization of semantic memory, and reduces the bias towards recent tasks.
## 2 Related Work
The different approaches to address the problem of catastrophic forgetting in CL can be broadly divided into three categories: _Regularization-based_ methods regularize the update of the model in the parameter space [1, 1, 2, 3] or the functional space [10, 11], _Dynamic architecture_ expands the network to dedicate a distinct set of parameters to each task, and _Rehearsal-based_ methods [1, 12] mitigate forgetting by maintaining an episodic memory buffer and continual rehearsal of samples from previous tasks. Among these, our method focuses on rehearsal-based methods, as it has proven to be an effective approach in challenging continual learning scenarios [1]. The base method, Experience Replay (ER) [1] interleaves the training of the current task with the memory sample to train the model on the approximate joint distribution of tasks. Several studies focus on the different aspects of rehearsal: memory sample selection [1, 13], sample retrieval from memory [12] and what information to extract and replay from the previous model [11, 12, 13].
Dark Experience Replay (DER++) samples the output logits along with the samples in the memory buffer throughout the training trajectory and applies a consistency loss on the update of the model. Recently, CLS theory has inspired a number of approaches that utilize multiple memory systems [14, 15] and show the benefits of multiple systems in CL. CLS-ER [1] mimics the interplay between fast and slow learning systems by maintaining two additional semantic memories that aggregate the weights of the working model at different timescales using an exponential moving average. Our method enforces sparse coding for efficient representation and utilization of multiple memories.
## 3 Methodology
We first provide an overview of motivation from biological systems before formally introducing the different components of the proposed approach.
Figure 1: SCoMMER employs sparse coding in a multi-memory experience replay mechanism. In addition to the instance-based episodic memory, we maintain a long-term memory that consolidates the learned knowledge in the working memory throughout training. The long-term memory interacts with the episodic memory to enforce consistency in the functional space of working memory through the knowledge retrieval loss. To mimic sparse coding in the brain, we enforce activation sparsity along with semantic dropout, whereby the model tracks the class-wise activations during training and utilizes them to enforce sparse code, which encourages the model to activate similar units for semantically similar inputs. Schematic shows how the activations from layer \(l\) are propagated to the next layer. Darker shades indicate higher values. Given a sample from class 4, semantic dropout retains the units with higher activation counts for the class, and top-k remaining (here 2) units with higher activations are propagated to the next layer. This enables the network to form semantically conditioned subnetworks and mitigate forgetting.
### Continual Learning in the Biological System
Effective CL in the brain is facilitated by a complex set of mechanisms and multiple memory systems. Information in the brain is represented by neural activation patterns, which form a neural code [12]. Specifically, evidence suggests that the brain employs _Sparse Coding,_ in which sensory events are represented by strong activations of a relatively small set of neurons. A different subset of neurons is used for each stimulus [12, 13]. There is a correlation between these sparse codes [10] that could capture the similarity between different stimuli. Sparse codes provide several advantages: they enable faster learning of new associations with controlled interference with previous associations and allow efficient maintenance of associative memory while retaining sufficient representational capacity.
Another salient feature of the brain is the strong differentiation and specialization of the nervous systems [12]. There is evidence for modularity in biological systems, which supports functional specialization of brain regions [10] and reduces interference between different tasks. Furthermore, the brain is believed to utilize multiple memory systems [11, 12]. Complementary learning systems (CLS) theory states that efficient learning requires at least two complementary systems. The instance-based hippocampal system rapidly encodes new episodic events into non-overlapping representations, which are then gradually consolidated into the structured knowledge representation in the parametric neocortical system. Consolidation of information is accompanied by replay of the neural activities that accompanied the learning event.
The encoding of information into efficient sparse codes, the modular and dynamic processing of information, and the interplay of multiple memory systems might play a crucial role in enabling effective CL in the brain. Therefore, our method aims to incorporate these components in ANNs.
### Sparse coding in DNNs
The sparse neural codes in the brain are in stark contrast to the highly dense connections and overlapping representations in standard DNNs which are prone to interference. In particular, for CL, sparse representations can reduce the interference between different tasks and therefore result in less forgetting, as there will be fewer task-sensitive parameters or fewer effective changes to the parameters [1, 13]. Activation sparsity can also lead to the natural emergence of modules without explicitly imposing architectural constraints [12]. Therefore, to mimic sparse coding in DNNs, we enforce activation sparsity along with a complementary semantic dropout mechanism which encourages the model to activate similar units for semantically similar samples.
Sparse Activations:To enforce the sparsity in activations, we employ the k-winner-take-all (k-WTA) activation function [12]. k-WTA only retains the top-k largest values of an \(N\times 1\) input vector and sets all the others to zero before propagating the vector to the next layer of the network. Importantly, we deviate from the common implementation of k-WTA in convolutional neural networks (CNNs) whereby the activation map of a layer (\(C\times H\times W\) tensor where \(C\) is the number of channels and \(H\) and \(W\) are the spatial dimensions) is flattened into a long \(CHW\times 1\) vector input and the k-WTA activation is applied similar to the fully connected network [12, 13]. We believe that this implementation does not take into account the functional integrity of an individual convolution filter as an independent feature extractor and does not lend itself to the formation of task-specific subnetworks with specialized feature extractors. Instead, we assign an activation score to each filter in the layer by taking the absolute sum of the corresponding activation map and select the top-k filters to propagate to the next layer.
Given the activation map, we flatten the last two dimensions and assign a score to each filter by taking the absolute sum of the activations. Based on the sparsity ratio for each layer, the activation maps of the filters with higher scores are propagated to the next layers, and the others are set to zero. This enforces global sparsity, whereby each stimulus is processed by only a selected set of convolution filters in each layer, which can be considered as a subnetwork. We also consider each layer's role when setting the sparsity ratio. The earlier layers have a lower sparsity ratio, as they learn general features that enable higher reusability and forward transfer to subsequent tasks, while later layers use a higher sparsity ratio to reduce the interference between task-specific features.
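A minimal PyTorch sketch of this filter-wise k-WTA, with illustrative shapes and keep ratios (the layer-wise schedule and exact implementation details are not reproduced here):

```python
import torch

def filter_kwta(x: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Keep only the top-k most active conv filters of a (B, C, H, W) activation."""
    num_filters = x.shape[1]
    k = max(1, int(keep_ratio * num_filters))
    scores = x.abs().sum(dim=(2, 3))              # (B, C): one score per filter
    topk = scores.topk(k, dim=1).indices          # indices of the winning filters
    mask = torch.zeros_like(scores)
    mask.scatter_(1, topk, 1.0)                   # 1 for winners, 0 otherwise
    return x * mask[:, :, None, None]             # zero out the losing filters

# Example: later layers would use a smaller keep_ratio (stronger sparsity).
act = torch.randn(4, 64, 8, 8)
sparse_act = filter_kwta(act, keep_ratio=0.3)
```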
Semantic Dropout:While the k-WTA activation function enforces the sparsity of activation for each stimulus, it does not encourage semantically similar inputs to have similar activation patterns and reduce overlap with semantically dissimilar inputs. To this end, we employ a complementary _Semantic Dropout_ mechanism, which controls the degree of overlap between neural activations between samples belonging to different tasks while also encouraging the samples belonging to the same class to utilize a similar set of units. We utilize two sets of activation trackers: _global activity counter_, \(\mathcal{A}_{g}\in\mathbb{R}^{N}\), counts the number of times each unit has been activated throughout training, whereas _class-wise activity counter_, \(\mathcal{A}_{s}\in\mathbb{R}^{C\times N}\), tracks the number of times each unit has been active for samples belonging to a particular class. \(N\) and \(C\) denote the total number of units and classes, respectively. For each subsequent task, we first employ Heterogeneous Dropout [1] to encourage the model to learn the new classes by using neurons that have been less active for previously seen classes by setting the probability of a neuron being dropped to be inversely proportional to its activation counts. Concretely, let \([\mathcal{A}_{g}^{l}]_{j}\) denote the number of times that the unit \(j\) in layer \(l\) has been activated after learning \(t\) sequential tasks. For learning the new classes in task \(t\)+1, the probability of retaining this unit is given by:
\[[P_{h}^{l}]_{j}=exp(\frac{-[\mathcal{A}_{g}^{l}]_{j}}{\max_{i} \left[\mathcal{A}_{g}^{l}\right]_{i}}\pi_{h}) \tag{1}\]
where \(\pi_{h}\) controls the strength of dropout with larger values leading to less overlap between representations. We then allow the network to learn the new task with heterogeneous dropout in place for a fixed number of epochs, \(\mathcal{E}_{h}\). During this period, we let the class-wise activations emerge and then employ _Semantic Dropout_. It encourages the model to utilize the same set of units by setting the probability of retention of a unit for each class \(c\) as proportional to the number of times it has been activated for that class so far:
\[[P_{s}^{l}]_{c,j}=1-exp(\frac{-[A_{s}^{l}]_{c,j}}{\max_{i}\left[A_{s}^{l} \right]_{c,i}}\pi_{s}) \tag{2}\]
where \(\pi_{s}\) controls the strength of dropout. The probabilities for semantic dropout are updated at the end of each epoch to enforce the emerging pattern. This provides us with an efficient mechanism for controlling the degree of overlap in representations as well as enabling context-specific processing of information which facilitates the formation of semantically conditioned subnetworks. Activation sparsity, together with semantic dropout, also provides us with an efficient mechanism for balancing the reusability and interference of features depending on the similarity of classes across the tasks.
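The sketch below illustrates Eqs. 1 and 2 with made-up activation counters and placeholder strengths \(\pi_{h}\) and \(\pi_{s}\); the resulting retention probabilities would be used to sample a binary mask over the layer's units.

```python
import torch

N, C = 128, 10                                  # units in a layer, number of classes
A_g = torch.randint(0, 500, (N,)).float()       # global activation counts (made up)
A_s = torch.randint(0, 200, (C, N)).float()     # class-wise activation counts

pi_h, pi_s = 2.0, 1.0                           # dropout strengths (placeholders)

# Eq. 1: heterogeneous dropout -- retain units that were LESS active so far.
p_het = torch.exp(-A_g / A_g.max().clamp(min=1.0) * pi_h)

# Eq. 2: semantic dropout -- for class c, retain units MORE active for that class.
c = 4
p_sem = 1.0 - torch.exp(-A_s[c] / A_s[c].max().clamp(min=1.0) * pi_s)

# Sample a binary retention mask to apply to the layer's activations.
mask = torch.bernoulli(p_sem)
```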
### Multiple Memory Systems
Inspired by the interaction of multiple memory systems in the brain, in addition to a fixed-size instance-based episodic memory, our method builds a long-term memory that aggregates the learned information in the working memory.
Episodic Memory:Information consolidation in the brain is facilitated by replaying the neural activation patterns that accompanied the learning event. To mimic this mechanism, we employ a fixed-size episodic memory buffer, which can be thought of as a very primitive hippocampus. The memory buffer is maintained with _Reservoir Sampling_, [14] which aims to match the distribution of the data stream by assigning an equal probability to each incoming sample.
Long-Term Memory:We aim to build a long-term semantic memory that can consolidate and accumulate the structural knowledge learned in the working memory throughout the training trajectory. The knowledge acquired in DNNs resides in the learned synaptic weights [10]. Hence, progressively aggregating the weights of the working memory (\(\theta_{w}\)) as it sequentially learns tasks allows us to consolidate the information efficiently. To this end, we build long-term memory (\(\theta_{s}\)) by taking the exponential moving average of the working memory weights in a stochastic manner (which is more biologically plausible [1]), similar to [1]:
\[\theta_{s}\leftarrow\alpha\theta_{s}+(1-\alpha)\;\theta_{w},\;\;\;\;\;if\;\;r> a\sim U(0,1) \tag{3}\]
where \(\alpha\) is the decay parameter and \(r\) is the update rate.
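A minimal PyTorch sketch of this stochastic EMA update (the decay \(\alpha\) and update rate \(r\) shown are placeholders):

```python
import random
import torch

@torch.no_grad()
def update_long_term_memory(working, long_term, alpha=0.999, update_rate=0.1):
    """Stochastic EMA of working-memory weights into long-term memory (Eq. 3).

    working, long_term: torch.nn.Module instances with identical architectures.
    """
    if random.random() < update_rate:            # update only with probability r
        for p_w, p_s in zip(working.parameters(), long_term.parameters()):
            p_s.mul_(alpha).add_(p_w, alpha=1.0 - alpha)
```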
Long-term memory builds structural representations for generalization and mimics the slow acquisition of structured knowledge in the neocortex, which can generalize well across tasks. The long-term memory then interacts with the instance-level episodic memory to retrieve structural relational knowledge [14] for the previous tasks encoded in the output logits. Consolidated logits are then utilized to enforce consistency in the functional space of the working model. This facilitates the consolidation of information by encouraging the acquisition of new knowledge while maintaining the functional relation of previous knowledge and aligning the decision boundary of working memory with long-term memory.
### Overall Formulation
Given a continuous data stream \(\mathcal{D}\) containing a sequence of tasks (\(\mathcal{D}_{1},\mathcal{D}_{2},..,\mathcal{D}_{T}\)), the CL task is to learn the joint distribution of all the observed tasks without the availability of task labels at test time. Our proposed method, SCoMMER, involves training a working memory \(\theta_{w}\), and maintains an additional long-term memory \(\theta_{s}\) and an episodic memory \(\mathcal{M}\). The long-term memory is initialized with the same parameters as the working memory and has the same sparsity constraints. Therefore, long-term memory aggregates the weights of working memory. We initialize heterogeneous dropout probabilities \(\pi_{h}\) randomly to set the probability of retention of a fraction of units to 1 and others to 0 so that the first task is learned using a few, but sufficient units and the remaining can be utilized to learn subsequent tasks.
During each training step, we interleave the batch of samples from the current task \(x_{t}\sim\mathcal{D}_{t}\), with a random batch of exemplars from episodic memory \(x_{m}\sim\mathcal{M}\). Working memory is trained with a combination of cross-entropy loss on the interleaved batch \(x\leftarrow(x_{t},x_{b})\), and knowledge retrieval loss on the exemplars. Thus, the overall loss is given by:
\[\mathcal{L}=\mathcal{L}_{ce}(f(x;\theta_{w}),y)+\gamma\mathcal{L}_{kr}(f(x_{m} ;\theta_{w}),f(x_{m};\theta_{s})) \tag{4}\]
where \(\gamma\) controls the strength of the enforcement of consistency, and mean-squared error loss is used for \(\mathcal{L}_{kr}\). The training step is followed by stochastically updating the long-term memory (Eq. 3). The semantic dropout and heterogeneous dropout probabilities are updated at the end of each epoch and task, respectively (using Eqs. 2 and 1, respectively). We use long-term memory for inference, as it aggregates knowledge and generalizes well across tasks (cf. Figure 2). Algorithm 1 provides further training details.
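The following sketch illustrates the loss of Eq. 4 for a single training step (PyTorch; the models, batches, and \(\gamma\) are placeholders, and the subsequent optimizer step and the Eq. 3 update are omitted):

```python
import torch
import torch.nn.functional as F

def scommer_loss(working, long_term, x_task, y_task, x_mem, y_mem, gamma=0.15):
    """Cross-entropy on the interleaved batch plus knowledge-retrieval MSE (Eq. 4).

    working, long_term: torch.nn.Module classifiers returning logits.
    """
    x = torch.cat([x_task, x_mem])               # interleave current and buffer data
    y = torch.cat([y_task, y_mem])
    ce = F.cross_entropy(working(x), y)

    with torch.no_grad():                        # long-term memory provides targets
        target_logits = long_term(x_mem)
    kr = F.mse_loss(working(x_mem), target_logits)

    return ce + gamma * kr
# After loss.backward() and the optimizer step, the long-term memory would be
# updated stochastically via the EMA rule of Eq. 3.
```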
## 4 Evaluation Protocol
To gauge the effectiveness of SCoMMER in tackling the different challenges faced by a lifelong learning agent, we consider multiple CL settings that test different aspects of the model.
**Class-IL** presents a challenging CL scenario where each task presents a new set of disjoint classes, and the model must learn to distinguish between all the classes seen so far without the availability of task labels at the test time. It requires the model to effectively consolidate information across tasks and learn generalizable features that can be reused to acquire new knowledge. **Generalized Class-IL (GCIL)**[11] extends the Class-IL setting to more realistic scenarios where the agent has to learn an object over multiple recurrences spread across tasks and tackle the challenges of class imbalance and a varying number of classes in each task. GCIL utilizes probabilistic modeling to sample the number of classes, the appearing classes, and their sample sizes. Details of the datasets used in each setting are provided in the Appendix. Though our method does not utilize separate classification heads or subnets, for completion, we also evaluate the performance under the Task-IL setting, where the model has access to the task labels at inference. In this setting, we use the task label to select the subset of output logits to select from.
## 5 Empirical Evaluation
We compare SCoMMER with state-of-the-art rehearsal-based methods across different CL settings under uniform experimental settings (details provided in Appendix). _SGD_ provides the lower bound with standard training on sequential tasks, and _JOINT_ gives the upper bound on performance when the model is trained on the joint distribution.
Table 1 shows that SCoMMER provides performance gains in the majority of the cases and demonstrates the effectiveness of our approach under varying challenging CL settings. In particular, it provides considerable improvement under low buffer size settings, which suggests that our method is able to mitigate forgetting with fewer samples from previous tasks. The performance gains over CLS-ER, which employs two semantic memories, show that sparse coding in our method enables the effective utilization of a single semantic memory. In particular, the gains in the GCIL setting, where the agent has to face the challenges of class imbalance and learn over multiple occurrences of objects, alludes to several advantages of our method. Our proposed semantic dropout in conjunction with sparse activations enables the model to reuse the sparse code associated with the
\begin{table}
\begin{tabular}{l l l c c c c c} \hline \hline \multirow{2}{*}{Buffer} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{**S-CIFAR10**} & \multicolumn{2}{c}{**S-CIFAR100**} & \multicolumn{2}{c}{**GCIL**} \\ \cline{3-8} & & Class-IL & Task-IL & Class-IL & Task-IL & Unif & Longtail \\ \hline \multirow{2}{*}{–} & JOINT & 92.20\({}_{\text{n.0}}\) & 98.31\({}_{\text{n.0}}\) & 70.62\({}_{\text{n.0}}\) & 86.19\({}_{\text{n.0}}\) & 58.36\({}_{\text{n.0}}\) & 56.94\({}_{\text{n.1}}\) \\ & SGD & 19.62\({}_{\text{n.0}}\) & 61.02\({}_{\text{n.3}}\) & 17.58\({}_{\text{n.0}}\) & 40.46\({}_{\text{n.0}}\) & 12.67\({}_{\text{n.0}}\) & 22.88\({}_{\text{n.0}}\) \\ \hline \multirow{4}{*}{200} & ER & 44.79\({}_{\text{n.1}}\) & 91.19\({}_{\text{n.0}}\) & 21.40\({}_{\text{n.0}}\) & 26.13\({}_{\text{n.0}}\) & 16.40\({}_{\text{n.0}}\) & 19.27\({}_{\text{n.0}}\) \\ & DER++ & 64.88\({}_{\text{n.1}}\) & 91.92\({}_{\text{n.0}}\) & 29.60\({}_{\text{n.1}}\) & 16.24\({}_{\text{n.0}}\) & 18.84\({}_{\text{n.0}}\) & 26.94\({}_{\text{n.1}}\) \\ & CLS-ER & 66.19\({}_{\text{n.0}}\) & **93.90\({}_{\text{n.0}}\)** & 35.23\({}_{\text{n.0}}\) & 86.73\({}_{\text{n.0}}\) & 25.06\({}_{\text{n.0}}\) & 28.54\({}_{\text{n.0}}\) \\ & SCoMMER & **69.19\({}_{\text{n.0}}\)** & 69.20\({}_{\text{n.1}}\) & **40.25\({}_{\text{n.0}}\)** & **69.39\({}_{\text{n.0}}\)** & **30.84\({}_{\text{n.0}}\)** & **29.08\({}_{\text{n.0}}\)** \\ \hline \multirow{4}{*}{500} & ER & 57.74\({}_{\text{n.0}}\) & 93.61\({}_{\text{n.0}}\) & 28.02\({}_{\text{n.0}}\) & 68.23\({}_{\text{n.0}}\) & 28.21\({}_{\text{n.0}}\) & 20.30\({}_{\text{n.0}}\) \\ & DER++ & 72.70\({}_{\text{n.1}}\) & 36.38\({}_{\text{n.0}}\) & 41.40\({}_{\text{n.0}}\) & 70.61\({}_{\text{n.0}}\) & 32.92\({}_{\text{n.0}}\) & 25.82\({}_{\text{n.0}}\) \\ & CLS-ER & **75.22\({}_{\text{n.0}}\)** & **94.94\({}_{\text{n.0}}\)** & 47.63\({}_{\text{n.0}}\) & 73.78\({}_{\text{n.0}}\) & 36.34\({}_{\text{n.0}}\) & 28.63\({}_{\text{n.0}}\) \\ & SCoMMER & 74.97\({}_{\text{n.1}}\) & 94.36\({}_{\text{n.0}}\) & **49.63\({}_{\text{n.1}}\)** & **75.49\({}_{\text{n.0}}\)** & **36.87\({}_{\text{n.0}}\)** & **35.20\({}_{\text{n.0}}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison on different CL settings. The baseline results for S-CIFAR100 and GCIL are from [1].
Figure 2: Task-wise performance of working memory and the long-term memory. The long-term memory effectively aggregates knowledge encoded in the working memory and generalizes well across the tasks.
recurring object and learn better representations with the additional samples by adapting the corresponding subset of filters. Furthermore, compared to the dense activations in CLS-ER, the sparse coding in SCoMMER leads to the emergence of subnetworks that provide modularity and protection to other parts of the network since the entire network is not updated for each input image. This increases the robustness of the model to the class imbalance.
Overall, our method provides an effective approach to employ sparse coding in DNN and enables better utilization of long-term memory, which can effectively consolidate information across tasks and further mitigate forgetting.
## 6 Ablation Study
To gain further insight into the contribution of each component of our method, we systematically remove them and evaluate the performance of the model in Table 2. The results show that all components of SCoMMER contribute to the performance gains. The drop in performance from removing semantic dropout suggests that it is effective in enforcing sparse coding on the representations of the model, which reduces the interference between tasks and allows semantically similar classes to share information. We also observe the benefits of multiple memory systems in CL. Additional long-term memory provides considerable performance improvement and suggests that the EMA of the learned synaptic weights can effectively consolidate knowledge across tasks. Furthermore, we observe that sparsity is a critical component for enabling CL in DNNs. Sparse activations alone significantly improve ER performance and also enable efficient utilization of semantic memory. We highlight that these individual components complement each other and that the combined effect leads to the observed performance improvement in our method.
## 7 Characteristics Analysis
We look at different characteristics of the model to understand what enables the performance gains in our method. We analyze the models trained on S-CIFAR100 with a buffer size of 200.
### Stability-Plasticity Dilemma
To better understand how well different methods maintain a balance between stability and plasticity, we look at how task-wise performance evolves as the model learns tasks sequentially. The diagonal of the heatmap shows the plasticity of the model as it learns the new task, whereas the difference between the accuracy of the task when it was first learned and at the end of the training indicates the stability of the model. Figure 3 shows that SCoMMER is able to maintain a better balance and provides a more uniform performance on tasks compared to baselines. While CLS-ER provides better stability than DER++, it comes at the cost of the model's performance on the last task, which could be due to the lower update rate of the stable model. SCoMMER, on the other hand, retains performance on the earlier tasks (T1 and T2) and provides good performance on the recent task. We also compare the long-term semantic and working memory performance in Figure 2. Long-term memory effectively aggregates the learned knowledge into the synaptic weights of working memory and generalizes well across tasks.
### Emergence of Subnetworks
To evaluate the effectiveness of activation sparsity and semantic dropout in enforcing sparse coding in the model, we look at the average activity of the units in the penultimate layer. The emerging sparse code for each class is tracked during training using the class-wise activity counter and enforced using semantic dropout probabilities (Equation 2).
\begin{table}
\begin{tabular}{c c c|c} \hline \hline Sparse & Long-Term & Semantic & \\ Activations & Memory & Dropout & Accuracy \\ \hline ✓ & ✓ & ✓ & **69.19\({}_{\pm 0.61}\)** \\ ✓ & ✓ & ✗ & 67.38\({}_{\pm 1.51}\) \\ ✗ & ✓ & ✗ & 61.88\({}_{\pm 2.43}\) \\ ✓ & ✗ & ✗ & 49.44\({}_{\pm 5.43}\) \\ ✗ & ✗ & ✗ & 44.79\({}_{\pm 1.86}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation Study:** Effect of systematically removing different components of SCoMMER on the performance of the models on S-CIFAR10. All components contribute to the performance gain.
Figure 3: Task-wise performance of different methods. The heatmaps provide the test set of each task (x-axis) evaluated at the end of each sequential learning task (y-axis). SCoMMER retains the performance of earlier tasks better without compromising on the current task.
Given a test sample from class c, ideally, we would want the model to use the subset of neurons that had higher activity for the training samples from class c without providing any task information. Concretely, we track the class-wise activity on the test set and plot the normalized activation counts for a set of neurons next to their class-wise probabilities at the end of training. Figure 4 shows a high correlation between the test set activation counts and the semantic dropout probabilities at the end of training, particularly for recent classes. The activation counts also hint at the natural emergence of semantically conditioned subnets, as the model utilizes a different set of units for different classes. Furthermore, we observe that semantically similar classes have a higher degree of correlation between their activation patterns. For instance, cat and dog share the most active neurons, a similar pattern is observed between horse and deer, and car and truck. The cosine similarities between the activation counts of the different classes further supports the observation. This is even more remarkable given that these classes are observed in different tasks, particularly for cars and trucks, which are observed in the first and last tasks.
### Task Recency Bias
A major challenge in CL is the recency bias, in which the update of the model on new task samples biases its predictions toward the current task [20]. This leads to considerable forgetting of earlier tasks. To compare the degree to which SCoMMER tackles this issue, we evaluate the probabilities of predicting each task by aggregating the softmax output of samples from the test set of all seen tasks and averaging the probabilities of classes in each task. Figure 5 shows that SCoMMER provides more uniform probabilities to predict each task. CLS-ER is able to mitigate the bias towards the last task, which can be attributed to the aggregation of knowledge in the semantic memories; however, CLS-ER reduces the probability of predicting the last task, which explains the low performance. SCoMMER effectively mitigates recency bias and provides uniform prediction probabilities across tasks without any explicit regularization.
## 8 Conclusion
Motivated by the mechanisms for information representation and utilization of multiple memory systems in the brain, we proposed a novel approach to employ sparse coding in multiple memory systems. SCoMMER enforces activation sparsity along with a complementary semantic dropout mechanism, which encourages the model to activate similar units for semantically similar inputs and reduce the overlap with dissimilar inputs. Additionally, it maintains long-term memory, which consolidates the learned knowledge in working memory. Our empirical evaluation shows the effectiveness of the approach in mitigating forgetting in challenging CL scenarios. Furthermore, sparse coding enables efficient consolidation of knowledge in the long-term memory, reduces the bias towards recent tasks, and leads to the emergence of semantically conditioned subnetworks. We hope that our study inspires further research in this promising direction.
Figure 4: Class-wise activation counts of the filters in the penultimate layer of the model trained on S-CIFAR10 with 200 buffer size. Comparison of the activation counts on the test set with the learned class-wise probabilities, \(P_{s}\), during training shows the effectiveness of semantic dropout in enforcing sparse coding. Right plot shows the cosine similarities between the activation counts of different classes. Semantically similar classes have higher correlation in activations. Darker color shows higher values.
Figure 5: Average probabilities of predicting classes from each task at the end of training. SCoMMER provides more uniform probabilities across the tasks.
2309.06373 | Chebyshev Particles | Markov chain Monte Carlo (MCMC) provides a feasible method for inferring
Hidden Markov models, however, it is often computationally prohibitive,
especially constrained by the curse of dimensionality, as the Monte Carlo
sampler traverses randomly taking small steps within uncertain regions in the
parameter space. We are the first to consider the posterior distribution of the
objective as a mapping of samples in an infinite-dimensional Euclidean space
where deterministic submanifolds are embedded and propose a new criterion by
maximizing the weighted Riesz polarization quantity, to discretize rectifiable
submanifolds via pairwise interaction. We study the characteristics of
Chebyshev particles and embed them into sequential MCMC, a novel sampler with a
high acceptance ratio that proposes only a few evaluations. We have achieved
high performance from the experiments for parameter inference in a linear
Gaussian state-space model with synthetic data and a non-linear stochastic
volatility model with real-world data. | Xiongming Dai, Gerald Baumgartner | 2023-09-10T16:40:30Z | http://arxiv.org/abs/2309.06373v1 | # Chebyshev Particles
###### Abstract
Markov chain Monte Carlo (MCMC) provides a feasible method for inferring Hidden Markov models; however, it is often computationally prohibitive, especially as it is constrained by the curse of dimensionality, since the Monte Carlo sampler traverses randomly, taking small steps within uncertain regions of the parameter space. We are the first to consider the posterior distribution of the objective as a mapping of samples in an infinite-dimensional Euclidean space in which deterministic submanifolds are embedded, and we propose a new criterion, maximizing the weighted Riesz polarization quantity, to discretize rectifiable submanifolds via pairwise interaction. We study the characteristics of Chebyshev particles and embed them into sequential MCMC, a novel sampler with a high acceptance ratio that requires only a few evaluations. We have achieved high performance in experiments on parameter inference in a linear Gaussian state-space model with synthetic data and a non-linear stochastic volatility model with real-world data.
Keywords: Markov chain Monte Carlo; Hidden Markov models; Riesz
## 1 Introduction
Markov chain Monte Carlo methods [1, 2] allow researchers to replace the unobserved latent variables with simulated variables for Bayesian analysis [3, 4]. This relieves the burden of evaluating the likelihood function unconditionally on the unobserved latent variables, allowing a focus on the conditional likelihood function [5, 6, 7]. However, although the sample is drawn after a "burn-in" period, it is still uncertain whether the sample is the output at convergence [8]. Using the output of an MCMC algorithm that has not converged may lead to incorrect inferences on the target distribution at hand. In addition, the idea behind a Monte Carlo sampler is to randomly "walk around" in the parameter space, which may lead to a low acceptance rate of samples and generate duplicates, especially in a high-dimensional space [8]. Thus, an adequate number of samples is obtained only at the expense of a large computational effort.
Points on a design space that minimize certain energy functions often have desirable properties, such as good separation and adequate covering, and sparsely reflect special representations of the space [9, 10, 11, 12, 13]. Here, we consider the posterior distribution of the objective function as a mapping of samples in an infinite-dimensional Euclidean space in which deterministic submanifolds are embedded, and we propose a new criterion, maximizing the weighted Riesz polarization quantity, to discretize rectifiable submanifolds via particle interaction. This gives rise to equilibrium points that are useful for a variety of applications, especially for high-dimensional sampling. We study the characteristics of these deterministic points, termed Chebyshev particles, and embed them into MCMC. We propose a new sampler with few evaluations and a high acceptance ratio. In our experiments, we have achieved high performance for parameter inference in a linear Gaussian state-space model with synthetic data as well as a non-linear stochastic volatility model with real-world data.
In this paper, we concentrate on the analysis of our new criterion, and on how to improve the acceptance ratio of MCMC with fewer evaluations of constraints. We present an efficient algorithm to deterministically sample the target distribution by maximizing the weighted Riesz polarization quantity, from which the particles sparsely represent rectifiable geometrical manifolds with few samplings for approximating the objective posterior distribution.
In Section 2, we present a brief introduction to the discrete energy on rectifiable sets. Here, we develop our new criterion by maximizing the weighted Riesz polarization quantity and focus on the bounds and asymptotic behavior, and covering radius. For Section 3, we propose a novel sampler, where Chebyshev particles are embedded and the discretized deterministic submanifolds inherit the special representations of the sampling space. Then, we present the pipeline for sequentially sampling the Chebyshev particles and for embedding them into the particle Metropolis-Hastings algorithm for hidden Markov models. In Section 4, we validate the algorithm with practical experiments and present their performance and error analysis. The summary of our contributions is outlined in Section 5.
## 2 Weighted Riesz Polarization Criterion
In this section, we provide the main idea of the discrete energy on rectifiable high-dimensional manifolds and propose a new criterion by maximizing the weighted Riesz polarization quantity. We then study the asymptotic behavior of the corresponding configuration, the bounds and the covering radius for this quantity.
### Discrete Weighted Riesz Polarization
Let \(\Omega\) denote a compact set in \(\mathbb{R}^{d}\) whose \(d\)-dimensional Borel measure, \(\mathbb{B}_{d}(\Omega)\subset(\Omega,\mathbb{R}^{d})\), is finite, and let \(K\) denote a bi-Lipschitz mapping from \(\Omega\times\Omega\) to \(\mathbb{R}\). For a collection of \(n(\geq 2)\) distinct points of a configuration in \(\Omega\), let \(X_{1:n}=\{x_{1},...,x_{n}\}\); we define the energy of \(X_{1:n}\) to be
\[E(X_{1:n}):=\sum_{i=1}^{n}\sum_{j=1,j\neq i}^{n}K(x_{i},x_{j})=\sum_{i\neq j} K(x_{i},x_{j}), \tag{1}\]
and let
\[\mathcal{E}(\Omega,n):=\inf\{E(X_{1:n}):X_{1:n}\subset\Omega,|X_{1:n}|=n\} \tag{2}\]
be the minimal discrete \(n\)-point energy of the configuration in \(\Omega\), where \(|X_{1:n}|\) represents the cardinality of the set \(X_{1:n}\). (I) For \(K(x_{i},x_{j})=-\text{log}\parallel x_{i}-x_{j}\parallel\), this was first proposed by Fekete, who explored the connection between polynomial interpolation and discretized manifolds [14]. In computational complexity theory, Smale [15] proposed the \(7th\) problem in his list of "Mathematical problems for the next century": how to design a polynomial-time algorithm for generating "nearly" optimal logarithmic energy points \(X_{1:n}^{*}\), also called Fekete points, on the unit sphere in \(\mathbb{R}^{3}\) that satisfy \(E(X_{1:n}^{*})-\mathcal{E}(\mathbb{S}^{2},n)\leq C_{1}\cdot\text{log}n\) for some universal constant \(C_{1}\); (II) when \(K(x_{i},x_{j})=\frac{1}{\|x_{i}-x_{j}\|^{m}},m\in\mathbb{R}^{+}\), let \(\mathcal{E}_{m}(\Omega,n)\) denote the Riesz \(m\)-energy; by Taylor's formula, for any \(m\in(0,+\infty)\), we have
\[\lim_{m\to 0^{+}}\mathcal{E}_{m}(\Omega,n)=\lim_{m\to 0^{+}}\frac{n(n-1)+m \mathcal{E}_{\text{log}}(\Omega,n)+\mathcal{O}(m)}{m}=\mathcal{E}_{\text{log} }(\Omega,n). \tag{3}\]
Consequently, the Fekete points set \(X_{1:n}^{(m)}\) can be considered as limiting cases of point sets that minimize the discrete Riesz energy, which is widely used to discretize manifolds via particle interactions in Euclidean space [12, 16].
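For concreteness, a small sketch (NumPy; the circle configuration is only an illustrative stand-in for a rectifiable set) of the discrete Riesz \(m\)-energy in (1)-(2):

```python
import numpy as np

def riesz_energy(X, m):
    """Discrete Riesz m-energy E(X) = sum over i != j of ||x_i - x_j||^(-m), as in (1)."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)            # drop the i == j terms
    return np.sum(dists ** (-m))

# 20 points on the unit circle, a rough stand-in for a 1-rectifiable set
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(riesz_energy(X, m=2.0))
```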
From the perspective of statistical high-dimensional sampling, we consider \(d\) sufficiently large and propose the maximum weighted Riesz polarization criterion with
\[\mathcal{E}_{\beta}(\Omega)=\max_{\Omega}\min_{x_{i},x_{j}}\left\{\sum_{i=1}^ {n-1}\sum_{j=i+1}^{n}\frac{\omega(x_{i},x_{j})}{\parallel x_{i}-x_{j}\parallel ^{m}}\right\}^{\frac{1}{m}},\omega(x_{i},x_{j})\propto e^{[\alpha\cdot\gamma(x _{i})\gamma(x_{j})+\beta\cdot\|x_{i}-x_{j}\|]^{-\frac{m}{2}}}. \tag{4}\]
As \(m\rightarrow\infty\), the formulation is convex under mild conditions and the denominator approximates \(\parallel x_{i}-x_{j}\parallel\); thus our criterion inherits the properties of the Riesz energy, and we term it the weighted Riesz polarization criterion. To obtain a finite collection of points distributed according to a specified non-uniform density, such as might be used for weighted integration or the design of complex surfaces where more points are required in regions of higher curvature, we introduce \(\omega(x_{i},x_{j})\) in (4), where \(\gamma(x)\propto-\ln f(x)\); the term \(\parallel x_{i}-x_{j}\parallel\) is included to ensure that the weight is locally bounded for \(\alpha=-1\), and \(\beta\) is the local discrepancy coefficient, taken positive to balance the local conflict among the distributed points when short-range interactions between points dominate. Thus, given a proper distribution \(f(x)\), we can use \(\mathcal{E}_{\beta}(\Omega)\) to generate a sequence of \(n\)-point configurations that are "well-separated" and have asymptotic distribution \(f(x)\).
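A hedged sketch of evaluating the inner quantity of (4) for a fixed configuration; the target density, the choice \(\alpha=1\) (the text uses \(\alpha=-1\)), and the absolute-value guard are illustrative assumptions made only to keep the toy computation real-valued and numerically stable:

```python
import numpy as np

def weighted_riesz_polarization(X, log_f, alpha=1.0, beta=1.0, m=4.0, d=1.0):
    """Evaluate {sum_{i<j} w(x_i, x_j) / ||x_i - x_j||^m}^(1/m), the inner quantity of (4)."""
    gamma = -log_f(X)                          # gamma(x) proportional to -ln f(x)
    total, n = 0.0, len(X)
    for i in range(n - 1):
        for j in range(i + 1, n):
            r = np.linalg.norm(X[i] - X[j])
            base = abs(alpha * gamma[i] * gamma[j] + beta * r)   # abs() keeps the toy example real-valued
            total += np.exp(base ** (-m / (2 * d))) / r ** m
    return total ** (1.0 / m)

# Illustrative target: standard normal density on R
log_f = lambda X: -0.5 * np.sum(X**2, axis=-1) - 0.5 * np.log(2 * np.pi)
X = np.random.default_rng(0).normal(size=(15, 1))
print(weighted_riesz_polarization(X, log_f))
```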
Our weighted Riesz polarization \(\mathcal{E}_{\beta}(\Omega,N)\) in (4) is continuous and differentiable with respect to the parameter \(\beta\in\mathbb{R}\); it provides a more flexible and versatile framework when we discretize submanifolds via particle interactions.
### Asymptotics for Extremal Weighted Riesz Polarization Criterion
**Properties of \(\omega(x_{i},x_{j}):\)** (I) \(\omega(x_{i},x_{j})\) is continuous as a function of \(\gamma(x)\propto-\ln f(x)\) when \(\beta\leq\beta_{0}\); it is a positive constant when \(\beta\geq\beta_{1}\); (II) There exists a neighborhood set \(P^{\prime}\), where \(x_{i}^{\prime},x_{j}^{\prime}\in P^{\prime},\omega(x_{i}^{\prime},x_{j}^{ \prime})\) is bounded and larger than zero; (III) \(\omega(x_{i},x_{j})\) is bounded on any closed and compact metric space \(\Omega\).
Assume the compact set \(\Omega\subset\mathbb{R}^{d}\), for high dimension \(m>d\), we define the generalized Borel measure on sets \(\mathbb{S}\subset\Omega\) with \(\mathcal{U}_{d}^{m}(\mathbb{S}):=\int_{\mathbb{S}}\omega(x_{i},x_{j})d\mathcal{ U}_{d}(x)\). It is bounded and the corresponding normalized form: \(u_{d}^{m}(\mathbb{S}):=\mathcal{U}_{d}^{m}(\mathbb{S})/\mathcal{U}_{d}^{m}(\Omega)\).
**Measure Metric** Consider a high-dimensional space \(\mathbb{R}^{d},d\geq 2\). For \(m>d\), let \(\mu(\sigma\text{-algebra}):=\cup_{d=2}^{\infty}\{\parallel x_{i}-x_{j}\parallel^{d}\}\) represent a Borel measure from the \(\sigma\)-algebra on \(\Omega\). A measure \(\phi\) on \(\Omega_{i}\) is a non-negative set function defined on \(\mu(\sigma\text{-algebra})\) and finite on all compact sets \(\Omega_{i}\subset\Omega,i\in[1,n]\). If \(\phi<\infty\), the measure \(\phi\) is called finite; in general we work with the smallest \(\sigma\)-algebra containing all compact subsets of \(\Omega_{i}\).
We have the following novel version of the Poppy-Seed Bagel Theorem [17] for the maximum weighted Riesz polarization using the measure theoretics [18].
**Theorem 2.2.1**.: Given a distribution \(f(x)\) with respect to a compact and \(d\)-rectifiable set \(\Omega\) embedded in Euclidean space \(\mathbb{R}^{d}\), with \(\omega(x_{i},x_{j})>0\) bounded and continuous on the closed Borel sets \(\mathbb{S}\subset\Omega\times\Omega\), for \(m>d\), the configurations on \(\Omega\) from \(\mathcal{E}_{\beta}(\Omega)\), in which the \(n\) points interact via the \(K_{\beta}(x_{i},x_{j})\) potential, satisfy
\[\lim_{n\to\infty}\frac{\mathcal{E}_{\beta}(\Omega)}{n^{\frac{1}{m}+\frac{1}{d}}}=\frac{C_{2}}{[\mathcal{U}_{d}^{m}(\mathbb{S})]^{\frac{1}{2}}}. \tag{5}\]
Moreover, if \(\mathcal{U}_{d}^{m}(\mathbb{S})>0\), any configuration \(X_{1:n},n>1\) generated by asymptotically maximizing the weighted Riesz polarization is uniformly distributed with respect to \(\mathcal{U}_{d}\), that is,
\[\lim_{n\to\infty}\frac{1}{n}\sum_{i=1,i\neq j}^{n}\parallel x_{i}-x_{j} \parallel=u_{d}^{m}(\mathbb{S}). \tag{6}\]
**Proof of Theorem 2.2.1** We divide the proof of Theorem 2.2.1 into two parts; The proof works by induction with Lemma 2.2.2 for (4) in the main text, and Lemmas 2.2.3, 2.2.4, 2.2.5, 2.2.6 and 2.2.7 for (5) in the main text. The second part will introduce the subadditivity and superadditivity properties using the measure theoretics [18].
**Lemma 2.2.2**.: Given a distribution \(f(x)\) with respect to \(d\)-rectifiable set \(\Omega\) embedded in Euclidean space, \(\omega(x_{i},x_{j})>0\) is bounded and continuous on the closed Borel sets \(\mathbb{S}\subset\Omega\times\Omega\), for \(m>d\) and \(\beta\in(-\infty,\beta_{0}]\cup[\beta_{1},+\infty),\beta_{0},\beta_{1},C_{3} \in\mathbb{R}\), the maximal weighted Riesz polarization configuration on \(\Omega\) from \(\mathcal{E}_{\beta}(\Omega)\) where the \(n\)-point interacts via the \(K_{\beta}(x_{i},x_{j})\) potential, have
\[\lim_{n\to\infty}\lim_{\beta\to\beta_{0}^{-}}\frac{\mathcal{E}_{\beta}(\Omega)}{n^{\frac{1}{m}+\frac{1}{d}}}=\frac{C_{3}}{[\mathcal{U}_{d}^{m}(\mathbb{S})]^{\frac{1}{2}}},\ \lim_{n\to\infty}\lim_{\beta\to\beta_{1}^{+}}\frac{\mathcal{E}_{\beta}(\Omega)}{n^{\frac{1}{m}+\frac{1}{d}}}=\frac{C_{3}}{[\mathcal{U}_{d}^{m}(\mathbb{S})]^{\frac{1}{2}}}. \tag{7}\]
**Proof \(\mathcal{E}_{\beta}(\Omega)\)** is strictly decreasing as \(\beta\) increases, this monotonicity makes it possible to analyze the asymptotics and extend it into high-dimensional sampling on the compact space \(\Omega\) under mild assumptions. Let \(h(\beta^{\prime}):=\lim_{n\to\infty}\lim_{\beta\to\beta^{\prime}}\mathcal{E}_{ \beta}(\Omega)\),
\[h(\beta)=\sum_{i=1}^{n}\sum_{j=1,j\neq i}^{n}\left[K_{\beta}(x_{i},x_{j}) \right]^{\frac{1}{n}}.\]
\(K_{\beta}(x_{i},x_{j})\) is also strictly decreasing as \(\beta\) increases, we firstly focus on \(\beta\in(-\infty,\beta_{0}]\cup[\beta_{1},+\infty),\beta_{0},\beta_{1}\in \mathbb{R}\), then relax this assumption later, define
\[K_{\beta^{\prime}}(x_{i},x_{j}):=\lim_{\beta\to\beta^{\prime}}\frac{\omega(x_{i },x_{j})}{\parallel x_{i}-x_{j}\parallel^{m}},\omega(x_{i},x_{j})>0,\]
if \(\beta\leq\beta_{0}\) is sufficiently small such that
\[\gamma(x_{i})\gamma(x_{j})\gg\beta_{0}\cdot\parallel x_{i}-x_{j}\parallel, \tag{8}\]
then,
\[K_{\beta_{0}^{-}}(x_{i},x_{j}):=\lim_{\beta\to\beta_{0}^{-}}K_{\beta}(x_{i},x_{ j})=\frac{e^{[-\gamma(x_{i})\gamma(x_{j})]^{-\frac{m}{2d}}}}{\parallel x_{i}-x_{j} \parallel^{m}}. \tag{9}\]
From Taylor's theorem
\[e^{z}=1+z+\frac{z^{2}}{2!}+\cdots+\frac{z^{k^{\prime}}}{k^{\prime}!},k^{\prime} \rightarrow\infty,z\in\mathbb{R}.\]
Let \(z=[-\gamma(x_{i})\gamma(x_{j})]^{-\frac{m}{2d}}\), substitute (8) into (9),
\[K_{\beta_{0}^{-}}(x_{i},x_{j})=\frac{1+z+\frac{z^{2}}{2!}+\cdots+\frac{z^{k^{\prime}}}{k^{\prime}!}}{\parallel x_{i}-x_{j}\parallel^{m}}\geq\frac{1}{\parallel x_{i}-x_{j}\parallel^{m}}+\frac{[-\beta_{0}\cdot\parallel x_{i}-x_{j}\parallel]^{-\frac{m}{2d}}}{\parallel x_{i}-x_{j}\parallel^{m}}+\cdots+\frac{[-\beta_{0}\cdot\parallel x_{i}-x_{j}\parallel]^{\frac{-mk^{\prime}}{2d}}}{k^{\prime}!\parallel x_{i}-x_{j}\parallel^{m}}.\]
For \(m>d\), the right-hand side terms are belonging to the classical Riesz-kernel model, from the Poppy-Seed Bagel Theorem [17], there exists a \(C_{4}\),
\[K_{\beta_{0}^{-}}(x_{i},x_{j})=\frac{C_{4}}{[\mathcal{U}_{d}^{m}(\mathbb{S})] ^{\frac{1}{24}}}\cdot n^{1+\frac{m}{d}}.\]
Thus,
\[h(\beta_{0}^{-}):=\sum_{i=1}^{n}\sum_{j=1,j\neq i}^{n}\left[K_{\beta_{0}^{-}} (x_{i},x_{j})\right]^{\frac{1}{m}}=\frac{C_{4}}{[\mathcal{U}_{d}^{m}(\mathbb{ S})]^{\frac{1}{2}}}\cdot n^{\frac{1}{m}+\frac{1}{d}}.\]
Similarly, if \(\beta\geq\beta_{1}\) is sufficiently large such that \(\gamma(x_{i})\gamma(x_{j})\ll\beta_{1}\cdot\parallel x_{i}-x_{j}\parallel\), then
\[K_{\beta_{1}^{+}}(x_{i},x_{j}):=\lim_{\beta\rightarrow\beta_{1}^{+}}K_{\beta}(x_{i},x_{j})=\frac{e^{[\beta_{1}\parallel x_{i}-x_{j}\parallel]^{-\frac{m}{2d}}}}{\parallel x_{i}-x_{j}\parallel^{m}}=\frac{1}{\parallel x_{i}-x_{j}\parallel^{m}}+\frac{[\beta_{1}\cdot\parallel x_{i}-x_{j}\parallel]^{-\frac{m}{2d}}}{\parallel x_{i}-x_{j}\parallel^{m}}+\cdots+\frac{[\beta_{1}\cdot\parallel x_{i}-x_{j}\parallel]^{\frac{-mk^{\prime}}{2d}}}{k^{\prime}!\parallel x_{i}-x_{j}\parallel^{m}}. \tag{10}\]
It provides a flexible framework to prove the asymptotics of the proposed weighted Riesz polarization criterion for (10) that we will frequently use for the following lemma and related proof.
For \(m>d\), the right-hand side terms belong to the classical Riesz-kernel model, from the Poppy-Seed Bagel Theorem [17], there exists a \(C_{5}\),
\[K_{\beta_{1}^{+}}(x_{i},x_{j})=\frac{C_{5}}{[\mathcal{U}_{d}^{m}(\mathbb{S})] ^{\frac{1}{24}}}\cdot n^{1+\frac{m}{d}}.\]
Thus,
\[h(\beta_{1}^{+}):=\sum_{i=1}^{n}\sum_{j=1,j\neq i}^{n}\left[K_{\beta_{1}^{+}} (x_{i},x_{j})\right]^{\frac{1}{m}}=\frac{C_{5}}{[\mathcal{U}_{d}^{m}(\mathbb{ S})]^{\frac{1}{2}}}\cdot n^{\frac{1}{m}+\frac{1}{d}}. \tag{11}\]
As \(h(\beta)\) is strictly decreasing, and continuous and derivative for \(\beta\in\mathbb{R}\), Consequently, There exists a \(C_{3}\),
\[\lim_{n\rightarrow\infty}\frac{\mathcal{E}_{\beta}(\Omega)}{n^{\frac{1}{m}+\frac{1}{d}}}=\frac{C_{3}}{[\mathcal{U}_{d}^{m}(\mathbb{S})]^{\frac{1}{2}}}.\]
Thus, (7) holds.
From Lemma 2.2.2, as \(n\rightarrow\infty\), the approximation of \(\mathcal{E}_{\beta}(\Omega)\) is not correlated with \(\beta\). That is, we are assuming that \(\beta\) approximates a specific real value, and for the convenience of introducing Taylor's theorem to derive, it does not affect the final limit value of \(\mathcal{E}_{\beta}(\Omega)\) for \(n\rightarrow\infty\).
Analogous to the proof of classical Poppy-Seed Bagel Theorem [17], we define
\[\mathcal{M}(d):=1+\frac{m}{d},n\geq 2.\]
\(\lambda(n):=n^{\mathcal{M}(d)}\), for \(n\geq 2\), \(\lambda(1):=1\). And define
\[\psi_{m,d}(\Omega):=\lim_{n\rightarrow\infty}\frac{\mathcal{E}_{\beta}^{m}( \Omega)}{\lambda(n)}, \tag{12}\]
let \(\psi_{m,d}^{\text{inf}}(\Omega)=\text{inf}(\psi_{m,d}(\Omega))\), \(\psi_{m,d}^{\text{supp}}(\Omega)=\text{sup}(\psi_{m,d}(\Omega))\) and decompose the \(d\)-rectifiable set \(\Omega\) into different subsets \(\Omega_{i},i\in\mathbb{R}^{+}\), satisfying \(\cup_{i=1}^{\infty}\Omega_{i}=\Omega\).
**Lemma 2.2.3**.: [17]\(\exists\alpha_{1},\alpha_{2}\in\mathbb{R}^{+}\), \(\mathcal{M}(d)\) is continuous and derivative for \(d\in\mathbb{R}^{+}\), the function \(U(t)=\min\{\alpha_{1}t^{\mathcal{M}(d)-1},\alpha_{2}(1-t)^{\mathcal{M}(d)-1}\}\) has the maximum for \(t\in[0,1]\) where occurs at the points \(t^{*}:=\frac{1}{1+(\frac{\alpha_{1}}{\alpha_{2}})^{\frac{1}{\mathcal{M}(d)-1}}}\) with \(U(t^{*})=\left[\alpha_{2}^{\frac{-1}{1-\mathcal{M}(d)}}+\alpha_{1}^{\frac{-1} {1-\mathcal{M}(d)}}\right]^{1-\mathcal{M}(d)}\).
The proof is straightforward from the first order derivative of the function \(\frac{dU(t)}{dt}=0\)[17].
We will introduce the subadditivity and superadditivity properties as follows.
**Lemma 2.2.4**.: \(\exists\Omega_{j},\Omega_{k}\subset\Omega\), and \(\Omega_{j},\Omega_{k}\not\subset\emptyset\), \(j\neq k\), \(\mathcal{M}(d)>1\) is continuous and derivative for \(d\in\mathbb{R}^{+}\), let \(\alpha_{3}=\frac{1}{1-\mathcal{M}(d)}\) for \(m>d\),
\[\psi_{m,d}^{\text{inf}}(\Omega_{j}\cup\Omega_{k})^{\alpha_{3}}\leq\psi_{m,d}^{ \text{inf}}(\Omega_{j})^{\alpha_{3}}+\psi_{m,d}^{\text{inf}}(\Omega_{k})^{ \alpha_{3}}. \tag{13}\]
**Proof** If \(\psi_{m,d}^{\text{inf}}(\Omega_{j})\) or \(\psi_{m,d}^{\text{inf}}(\Omega_{k})\) equals zero, or one of the quantities \(\psi_{m,d}^{\text{inf}}(\Omega_{j})\) or \(\psi_{m,d}^{\text{inf}}(\Omega_{k})\) approximates infinite, as the size of set increase, \(\mathcal{E}_{\beta}^{m}(\Omega)\) will increase, the lemma holds.
Hereafter we follow an argument in [17], we consider the general case of \(\psi_{m,d}^{\text{sup}}(\Omega_{j})\in(0,\infty),\psi_{m,d}^{\text{sup}}( \Omega_{k})\in(0,\infty)\), the distance of two set is defined with \(r:=\left\|a_{i}-b_{j}\right\|,a_{i}\in\Omega_{j},b_{j}\in\Omega_{k},i,j\in R ^{+}\). Motivated by Lemma 2.2.3 with \(\alpha_{1}=\psi_{m,d}^{\text{inf}}(\Omega_{j})\) and \(\alpha_{2}=\psi_{m,d}^{\text{inf}}(\Omega_{k})\), for a given \(n\) units, let \(X_{1:n}^{(i)}\cap\Omega_{j}\) and \(X_{1:n}^{(i)}\setminus\Omega_{j}\) be configurations of \(N_{j}:=\left\lfloor\tilde{p}\cdot n\right\rfloor\) and \(N_{k}:=n-N_{j}\) points, respectively, where
\[\tilde{p}=\frac{\psi_{m,d}^{\text{inf}}(\Omega_{k})^{-\alpha_{3}}}{\psi_{m,d} ^{\text{inf}}(\Omega_{j})^{-\alpha_{3}}+\psi_{m,d}^{\text{inf}}(\Omega_{k})^{ -\alpha_{3}}}, \tag{14}\]
and \(\left\lfloor x\right\rfloor\) is the floor function of \(x\). Let \(X_{j,k}=X_{1:n}^{\Omega_{j}}\cup X_{1:n}^{\Omega_{k}}\), from the measure theory [17; 18] the following inequalities hold:
\[\mathcal{E}_{\beta}^{m}(\Omega_{j}\cup\Omega_{k})\geq\mathcal{E}_{\beta}^{m}(X_{j,k})\geq\min\left\{\inf_{x\in\Omega_{j}}\sum_{y\in X_{j,k}}\frac{\omega(x,y)}{\|x-y\|^{m}},\ \inf_{x\in\Omega_{k}}\sum_{y\in X_{j,k}}\frac{\omega(x,y)}{\|x-y\|^{m}}\right\}. \tag{15}\]
Furthermore, if \(\psi^{\text{sup}}_{m,d}(\Omega_{j}),\psi^{\text{sup}}_{m,d}(\Omega_{k})\geq 0\) and at least one of these quantities is finite, then for any infinite subset \(N^{\prime}\) of \(\mathbb{N}\) and any sequence \(\{X_{1:n}\}_{n\in N^{\prime}}\) of \(n\)-point configurations in \(\Omega_{j}\cup\Omega_{k}\) such that
\[\lim_{n\to\infty}\frac{\mathcal{E}_{\beta}^{m}(X_{1:n})}{\lambda(n)}=(\psi^{ \text{sup}}_{m,d}(\Omega_{j})^{\alpha_{3}}+\psi^{\text{sup}}_{m,d}(\Omega_{k} )^{\alpha_{3}})^{\frac{1}{\alpha_{3}}} \tag{18}\]
holds, for a given \(n\) units, let \(X_{1:n}^{(i)}\cap\Omega_{j}\) and \(X_{1:n}^{(i)}\setminus\Omega_{j}\) be configurations of \(N_{j}:=\lfloor p_{j}\cdot n\rfloor\) and \(N_{k}:=n-N_{j}\) points, respectively, we have
\[p_{j}=\frac{\psi^{\text{sup}}_{m,d}(\Omega_{j})^{\alpha_{3}}}{\psi^{\text{ sup}}_{m,d}(\Omega_{j})^{\alpha_{3}}+\psi^{\text{sup}}_{m,d}(\Omega_{k})^{ \alpha_{3}}}. \tag{19}\]
**Proof** When \(\psi^{\text{sup}}_{m,d}(\Omega_{j})=\infty\) and \(\psi^{\text{sup}}_{m,d}(\Omega_{k})<\infty\), \(p_{j}\) would be \(0\), while when \(\psi^{\text{sup}}_{m,d}(\Omega_{j})<\infty\) and \(\psi^{\text{sup}}_{m,d}(\Omega_{k})=\infty\), \(p_{j}\) would be \(1\). We assume \(\psi^{\text{inf}}_{m,d}(\Omega)\) is bounded on the compact set \(\Omega_{j}\) and \(\Omega_{k}\). Let \(\tilde{N}\) be an infinite subsequence such that
\[\lim_{n\to\tilde{N}}\frac{\mathcal{E}_{\beta}^{m}(\Omega_{j}\cup\Omega_{k})}{n ^{\mathcal{M}(d)}}:=\psi^{\text{inf}}_{m,d}(\Omega_{j}\cup\Omega_{k}).\]
Then, inspired by [17, 18], for any \(n\in N^{\prime}\),
\[\mathcal{E}_{\beta}^{m}(X_{1:n})=\min\left\{\inf_{x\in\Omega_{j} }\sum_{y\in X_{1:n}}\frac{\omega(x,y)}{\|x-y\|},\inf_{x\in\Omega_{k}}\sum_{y \in X_{1:n}}\frac{\omega(x,y)}{\|x-y\|}\right\}\] \[=\min\Biggl{\{}\inf_{x\in\Omega_{j}}\left(\sum_{y\in X_{1:n}\cap \Omega_{j}}\frac{\omega(x,y)}{\|x-y\|}+\sum_{y\in X_{1:n}\cap\Omega_{k}}\frac{ \omega(x,y)}{\|x-y\|}\right),\] \[\qquad\qquad\inf_{x\in\Omega_{k}}\left(\sum_{y\in X_{1:n}\cap \Omega_{j}}\frac{\omega(x,y)}{\|x-y\|}+\sum_{y\in X_{1:n}\cap\Omega_{k}}\frac{ \omega(x,y)}{\|x-y\|}\right)\Biggr{\}}\] \[\leq\min\left\{\inf_{x\in\Omega_{j}}\sum_{y\in X_{1:n}}\frac{ \omega(x,y)}{\|x-y\|},\inf_{x\in\Omega_{k}}\sum_{y\in X_{1:n}}\frac{\omega(x,y )}{\|x-y\|}\right\}+n\cdot\|\omega(x,y)\|\] \[\leq\min\left\{\mathcal{E}_{\beta}^{m}(\Omega_{j}),\mathcal{E}_{ \beta}^{m}(\Omega_{k})\right\}+n\cdot\left\|\omega(x,y)\right\|.\]
Thus,
\[\psi^{\text{sup}}_{m,d}(\Omega_{j}\cup\Omega_{k})=\limsup_{n\to\infty}\frac{\mathcal{E}_{\beta}^{m}(\Omega_{j}\cup\Omega_{k})}{n^{\frac{m}{2}+1}}\geq\limsup_{n\to\infty}\min\left\{\left(\frac{N_{j}}{n}\right)^{\frac{m}{2}+1}\cdot\frac{\mathcal{E}_{\beta}^{m}(\Omega_{j})}{N_{j}^{\frac{m}{2}+1}},\left(\frac{N_{k}}{n}\right)^{\frac{m}{2}+1}\cdot\frac{\mathcal{E}_{\beta}^{m}(\Omega_{k})}{N_{k}^{\frac{m}{2}+1}}\right\}. \tag{20}\]
When \(\psi^{\text{sup}}_{m,d}(\Omega_{j})<\infty\) and \(\psi^{\text{sup}}_{m,d}(\Omega_{k})=\infty\) from (20), it follows that
\[\psi^{\text{sup}}_{m,d}(\Omega_{j}\cup\Omega_{k}) \leq\limsup_{n\to\infty}(\frac{N_{j}}{n})^{\frac{m}{2}+1}\cdot \frac{\mathcal{E}_{\beta}^{m}(\Omega_{j})}{N_{j}^{\frac{m}{2}+1}}\leq p_{j}^{- \alpha_{3}+1}\cdot\psi^{\text{sup}}_{m,d}(\Omega_{j})\leq\psi^{\text{sup}}_{m,d }(\Omega_{j}) \tag{21}\] \[=\left[\psi^{\text{sup}}_{m,d}(\Omega_{j})^{\alpha_{3}}+\psi^{ \text{sup}}_{m,d}(\Omega_{k})^{\alpha_{3}}\right]^{\frac{1}{\alpha_{3}}}.\]
Similarly, when \(\psi^{\text{sup}}_{m,d}(\Omega_{j})=\infty\) and \(\psi^{\text{sup}}_{m,d}(\Omega_{k})<\infty\), we obtain
\[\psi^{\text{sup}}_{m,d}(\Omega_{j}\cup\Omega_{k})\leq(1-p_{j})^{-\alpha_{3}+1} \cdot\psi^{\text{sup}}_{m,d}(\Omega_{j})\leq\psi^{\text{sup}}_{m,d}(\Omega_{j}) =\left[\psi^{\text{sup}}_{m,d}(\Omega_{j})^{\alpha_{3}}+\psi^{\text{sup}}_{m,d }(\Omega_{k})^{\alpha_{3}}\right]^{\frac{1}{\alpha_{3}}}. \tag{22}\]
Thus, both (21) and (22) imply (13).
When \(\psi^{\text{sup}}_{m,d}(\Omega_{j})<\infty\) and \(\psi^{\text{sup}}_{m,d}(\Omega_{k})<\infty\), (20) can be rewritten into
\[\psi^{\text{sup}}_{m,d}(\Omega_{j}\cup\Omega_{k})\leq\min\left\{p_{j}^{\frac{m}{2 }+1}\psi^{\text{sup}}_{m,d}(\Omega_{j}),(1-p_{j})^{\frac{m}{2}+1}\psi^{\text{sup}}_{m,d}(\Omega_{k})\right\}. \tag{23}\]
From Lemma 2.2.3, for the bounded \(N_{j}\) and \(N_{k}\), it follows that
\[\psi_{m,d}^{\text{sup}}(\Omega_{j}\cup\Omega_{k})^{\alpha_{3}}\geq\psi_{m,d}^{ \text{sup}}(\Omega_{j})^{\alpha_{3}}+\psi_{m,d}^{\text{sup}}(\Omega_{k})^{ \alpha_{3}}.\]
Thus, (13) holds. Combining (13) with (18), we get
\[\lim_{n\to\infty}\frac{\mathcal{E}_{\beta}^{m}(\Omega_{j}\cup\Omega_{k})}{ \lambda(n)}=\psi_{m,d}^{\text{sup}}(\Omega_{j}\cup\Omega_{k})^{\alpha_{3}}. \tag{24}\]
Assume both \(\psi_{m,d}^{\text{sup}}(\Omega_{j})\) and \(\psi_{m,d}^{\text{sup}}(\Omega_{k})\) are finite, from (18) and (23) with Lemma 2.2.3,
\[\begin{split}&\left[\psi_{m,d}^{\text{sup}}(\Omega_{j})^{\alpha_{3}}+ \psi_{m,d}^{\text{sup}}(\Omega_{k})^{\alpha_{3}}\right]^{\frac{1}{\alpha_{3}}} =\lim_{n\to\infty}\frac{\mathcal{E}_{\beta}^{m}(\Omega_{j}\cup\Omega_{k})}{ \lambda(n)}\\ &\leq\min\left\{p_{j}^{\frac{m}{2}+1}\psi_{m,d}^{\text{sup}}( \Omega_{j}),(1-p_{j})^{\frac{m}{2}+1}\psi_{m,d}^{\text{sup}}(\Omega_{k}) \right\}\\ &\leq\left[\psi_{m,d}^{\text{sup}}(\Omega_{j})^{\alpha_{3}}+\psi _{m,d}^{\text{sup}}(\Omega_{k})^{\alpha_{3}}\right]^{\frac{1}{\alpha_{3}}}. \end{split} \tag{25}\]
From Lemma 2.2.3, we obtain
\[p_{j}=\frac{\psi_{m,d}^{\text{sup}}(\Omega_{j})^{\alpha_{3}}}{\psi_{m,d}^{ \text{sup}}(\Omega_{j})^{\alpha_{3}}+\psi_{m,d}^{\text{sup}}(\Omega_{k})^{ \alpha_{3}}},\]
as claimed in (19). When \(\psi_{m,d}^{\text{sup}}(\Omega_{j})<\infty\) and \(\psi_{m,d}^{\text{sup}}(\Omega_{k})^{\alpha_{3}}=\infty\), from (21), we have
\[\begin{split}\psi_{m,d}^{\text{sup}}(\Omega_{j})&= \left[\psi_{m,d}^{\text{sup}}(\Omega_{j})^{\alpha_{3}}+\psi_{m,d}^{\text{sup}} (\Omega_{k})^{\alpha_{3}}\right]^{\frac{1}{\alpha_{3}}}=\lim_{n\to\infty} \frac{\mathcal{E}_{\beta}^{m}(\Omega_{j}\cup\Omega_{k})}{\lambda(n)}\\ &\leq p_{j}^{\frac{m}{2}+1}\psi_{m,d}^{\text{sup}}(\Omega_{j}) \leq\psi_{m,d}^{\text{sup}}(\Omega_{j}),\end{split} \tag{26}\]
which can only be held if
\[p_{j}=\frac{\psi_{m,d}^{\text{sup}}(\Omega_{j})^{\alpha_{3}}}{\psi_{m,d}^{ \text{sup}}(\Omega_{j})^{\alpha_{3}}+\psi_{m,d}^{\text{sup}}(\Omega_{k})^{ \alpha_{3}}}=1. \tag{27}\]
Similarly, when \(\psi_{m,d}^{\text{sup}}(\Omega_{j})=\infty\) and \(\psi_{m,d}^{\text{sup}}(\Omega_{k})^{\alpha_{3}}<\infty\), we obtain
\[p_{j}=\frac{\psi_{m,d}^{\text{sup}}(\Omega_{j})^{\alpha_{3}}}{\psi_{m,d}^{ \text{sup}}(\Omega_{j})^{\alpha_{3}}+\psi_{m,d}^{\text{sup}}(\Omega_{k})^{ \alpha_{3}}}=0. \tag{28}\]
Thus, (19) holds.
**Lemma 2.2.6**.: Suppose that \(m>d\), and \(\Omega\subset\mathbb{R}^{d}\) is a compact set with \(0<\mu(\Omega)<\infty\), \(\mathcal{M}(d)>1\) is continuous and derivative for \(d\in\mathbb{R}^{+}\). Furthermore, suppose that for any compact subset \(\Omega_{i}\subset\Omega\), the limit \(\psi_{m,d}(\Omega_{i}),i\in\mathbb{R}^{+}\) exists and is given by
\[\psi_{m,d}(\Omega_{i})=\frac{C_{6}}{\mathcal{U}_{d}^{m}(\Omega_{i})^{\mathcal{ M}(d)-1}}.\]
Then, \(\psi_{m,d}(\Omega)\) exists and is given by
\[\psi_{m,d}(\Omega)=\frac{C_{6}}{\mathcal{U}_{d}^{m}(\Omega)^{\mathcal{M}(d)-1}}. \tag{29}\]
Moreover, if a sequence of \(n\)-point configurations \(X_{1:n}\) is asymptotically weighted Riesz polarization maximizing on the set \(\Omega\) and \(\mu(\Omega)>0\), then
\[\lim_{n\to\infty}\frac{1}{n}\sum_{i=1,i\neq j}^{n}\parallel x_{i}-x_{j} \parallel\to u_{d}^{m}(\mathbb{S}). \tag{30}\]
**Proof** To prove (29), we firstly decompose the entire metric space \(\Omega\) into extremely small disconnected parts with diameter less than \(\epsilon>0\), according to the property of Borel metrics, then
\[\sum_{P\in\Omega_{i}}\mathcal{U}_{d}(P)\leq\mathcal{U}_{d}(\Omega). \tag{31}\]
Hereafter we follow an argument in [17], we define a sufficiently small space \(\Omega_{i}\) as follows. We consider the hyperplane \(\Omega^{\prime}\) consisting of all points, \((-l,l)\) is a cube embedded in \(\Omega^{\prime}\), we discretize the cube with tiny intervals for \(j\)-th ordinate,
\(-l=h_{0}^{j}<h_{1}^{j}\cdots<h_{k}^{j}=l,j\in i^{d},d^{\prime}\in(1,d),i=(i_{1},i_{2 }\cdots,i_{N})\), \(k\) is sufficiently large, \(\exists\left\|h_{k}^{j}-h_{k-1}^{j}\right\|<\epsilon\) such that (31) holds. \(\Omega_{i}\) can be written as
\[\Omega_{i}:=[h_{i_{1}^{1}}^{1},h_{i_{1}^{1}+1}^{1})\times\cdots\times[h_{i_{k }^{d}-1}^{d^{\prime}},h_{i_{k}^{d^{\prime}}}^{d^{\prime}}).\]
For \(\Omega_{i}\subset\Omega\), if \(\omega(x_{i},x_{j})\) is bounded, let
\[\overline{\omega}_{\Omega_{i}}=\sup_{x_{i},x_{j}\in\Omega_{i}}\omega(x_{i},x_ {j}),\text{ and }\underline{\omega}_{\Omega_{i}}=\inf_{x_{i},x_{j}\in\Omega_{i}} \omega(x_{i},x_{j}),\]
we introduce the radial basis functions \(\varphi(\cdot)\) to approximate the corresponding bounded \(\omega(x_{i},x_{j})\):
\[\begin{split}\overline{\omega}_{\Omega_{i}}(x_{i},x_{j})& =\sum_{P\in\Omega_{i}}\overline{\omega}_{P}\varphi(\|x_{i}-x_{j }\|),\\ \underline{\omega}_{\Omega_{i}}(x_{i},x_{j})&=\sum_ {P\in\Omega_{i}}\underline{\omega}_{P}\varphi(\|x_{i}-x_{j}\|).\end{split} \tag{32}\]
From Lemma 2.2.5, and (17), there exists a \(C_{6}\) satisfying
\[\begin{split}\psi_{m,d}^{\text{sup}}(\Omega)^{\alpha 3}&\geq\sum_{i=1}^{n}\psi_{m,d}^{ \text{sup}}(\Omega_{i})^{\alpha 3}\geq\sum_{i=1}^{n}\left[\overline{\omega}_{ \Omega_{i}}(x_{i},x_{j})\cdot\psi_{m,d}^{\text{sup}}(\Omega_{i})\right]^{ \alpha 3}=C_{6}\sum_{x_{i},x_{j}\in\Omega_{i}}\overline{\omega}_{\Omega_{i}}^{ \alpha 3}\cdot\mathcal{U}_{d}(\Omega_{i})\\ &\geq C_{6}\int_{x_{i},x_{j}\in\Omega_{i}}\overline{\omega}_{ \Omega_{i}}(x_{i},x_{j})^{\alpha 3}d\mathcal{U}_{d}(\Omega_{i}).\end{split}\]
From Lemma 2.2.4 and (13), similarly, we have
\[\begin{split}\psi_{m,d}^{\text{inf}}(\Omega)^{\alpha 3}&\leq\sum_{i=1}^{n}\psi_{m,d}^{\text{inf}}(\Omega_{i})^{\alpha 3}\leq\sum_{i=1}^{n}\left[\underline{\omega}_{\Omega_{i}}(x_{i},x_{j})\cdot \psi_{m,d}^{\text{inf}}(\Omega_{i})\right]^{\alpha 3}=C_{6}\sum_{x_{i},x_{j}\in\Omega_{i}} \underline{\omega}_{\Omega_{i}}^{\alpha 3}\cdot\mathcal{U}_{d}(\Omega_{i})\\ &\leq C_{6}\int_{x_{i},x_{j}\in\Omega_{i}}\underline{\omega}_{ \Omega_{i}}(x_{i},x_{j})^{\alpha 3}d\mathcal{U}_{d}(\Omega_{i}).\end{split}\]
Given a sufficiently small \(P\), taking limits in (32), we have
\[\begin{split}\overline{\omega}_{\Omega_{i}}(x_{i},x_{j})& =\sum_{P\in\Omega_{i}}\overline{\omega}_{P}\varphi(\|x_{i}-x_{j }\|)=\omega_{\Omega}(x_{i},x_{j}),\\ \underline{\omega}_{\Omega_{i}}(x_{i},x_{j})&=\sum_{P \in\Omega_{i}}\underline{\omega}_{P}\varphi(\|x_{i}-x_{j}\|)=\omega_{\Omega}(x _{i},x_{j}).\end{split}\]
Since \(\omega(x_{i},x_{j})\) is continuous on \(\Omega\), both \(\int_{x_{i},x_{j}\in\Omega_{i}}\overline{\omega}_{\Omega_{i}}(x_{i},x_{j})^{ \alpha 3}d\mathcal{U}_{d}(\Omega_{i})\) and \(\int_{x_{i},x_{j}\in\Omega_{i}}\underline{\omega}_{\Omega_{i}}(x_{i},x_{j})^{ \alpha 3}d\mathcal{U}_{d}(\Omega_{i})\) converge to \(\mathcal{U}_{d}^{m}(\Omega)\). Consequently, the limit \(\psi_{m,d}(\Omega_{i}),i\in\mathbb{R}^{+}\) exists and can be given by
\[\psi_{m,d}(\Omega_{i})=\frac{C_{6}}{\mathcal{U}_{d}^{m}(\Omega_{i})^{\mathcal{ M}(d)-1}}.\]
By Fatou's lemma and the monotone convergence theorem, (29) thus holds on \(\Omega\).
To prove (30), suppose that \(X_{1:n}\) is an asymptotically weighted Riesz polarization maximizing sequence of \(n\)-point configuration on \(\Omega\), the corresponding signed finite Borel measures \(\cup_{i=1}^{n}\mu_{d}^{m}(\Omega_{i})\) in \(\mathbb{R}^{d}\) converges weak\({}^{*}\) to a signed finite Borel measure \(\mu_{d}(\Omega)\), as \(n\rightarrow\infty\). Consequently, (30) is equivalent to the assertion that
\[\lim_{n\rightarrow\infty}\sum_{j=1}^{n}p_{j}=\cup_{i=1}^{n}\mu_{d}^{m}(\Omega_{ j})=\mu_{d}(\mathbb{S})\]
holds for any almost \(\sigma\)-algebra subset on \(\Omega\), let \(\Omega_{\sigma}=\cup_{i=1}^{n}\Omega_{i}\) be a subset of \(\sigma\)-algebra on \(\Omega\), for any Borel subset \(\Omega_{\sigma}\subset\Omega\). Since \(\Omega_{\sigma}\) and \(\Omega/\Omega_{\sigma}\) are the compact subsets of \(\Omega\), suppose \(\psi_{m,d}(\Omega_{\sigma})=\frac{C_{7}}{\mu(\Omega_{\sigma})^{-\frac{1}{ \alpha 3}}}\) and \(\psi_{m,d}(\Omega/\Omega_{\sigma})=\frac{C_{7}}{\mu(\Omega/\Omega_{\sigma})^{- \frac{1}{\alpha 3}}}\), for the asymptotically weighted Riesz polarization maximal sequence \(X_{1:n}\),
\[\begin{split}\lim_{n\rightarrow\infty}\frac{E^{m}(X_{1:n})}{\lambda (n)}&=C_{7}\cdot(\mu(\Omega))^{\frac{1}{\alpha 3}}=C_{7}\cdot(\mu(\Omega_{\sigma})+\mu(\Omega/\Omega_{\sigma}))^{\frac{1}{ \alpha 3}}\\ &=[\psi_{m,d}(\Omega_{\sigma})^{\alpha_{3}}+\psi_{m,d}(\Omega/ \Omega_{\sigma})^{\alpha_{3}})]^{\frac{1}{\alpha 3}}\,.\end{split}\]
Using (17) in Lemma 2.2.5 and (29) which holds for \(\Omega_{\sigma}\) and \(\Omega/\Omega_{\sigma}\), we have
\[\lim_{n\to\infty}\sum_{j=1}^{n}p_{j}=\frac{\psi_{m,d}(\Omega/\Omega_{\sigma})^{ -\alpha_{3}}}{\psi_{m,d}(\Omega_{\sigma})^{-\alpha_{3}}+\psi_{m,d}(\Omega/ \Omega_{\sigma})^{-\alpha_{3}}}=\frac{\mathcal{U}_{d}^{m}(\Omega_{\sigma})}{ \mathcal{U}_{d}^{m}(\Omega_{\sigma})+\mathcal{U}_{d}^{m}(\Omega/\Omega_{ \sigma})}=\mu_{d}^{m}(\mathbb{S}).\]
Thus, (30) holds.
### Bounds and Asymptotics
To derive the asymptotics, we will first provide the lower and upper estimates of the Borel measure in a restricted compact space for the maximum weighted Riesz polarization quantity and then prove the asymptotics on \(\mathbb{S}^{2}\).
**Theorem 2.3.1**.: If \(\Omega\subset\mathbb{R}^{d}\) is an infinite compact set, then
\[\mathcal{E}_{\beta}^{m}(\Omega)\geq\frac{1}{n-1}\cdot\min_{x_{i},x_{j}}\left\{ \sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\frac{1}{\parallel x_{i}-x_{j}\parallel^{m+1} }\right\}. \tag{33}\]
If \(\mathcal{U}_{d}^{m}(\mathbb{S})>0\), then there exists a constant \(C_{8}>0\) depending only on \(m\) such that
\[\mathcal{E}_{\beta}^{m}(\Omega)\leq\frac{C_{8}}{m-d+1}n^{\frac{m+1}{d}}. \tag{34}\]
**Proof** Inspired by [19], define
\[D_{n}(\Omega)=\min_{X_{1},\cdots,X_{n}\in\Omega}\frac{1}{n(n-1)}\sum_{i=1}^{n} \sum_{j=1,j\neq i}^{n}\frac{1}{\parallel x_{i}-x_{j}\parallel^{m+1}}, \tag{35}\]
we obtain [19]
\[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\frac{1}{\parallel x_{i}-x_{j}\parallel^{m+1} }\geq n\cdot D_{n}(\Omega). \tag{36}\]
From the definition (36) and (30), we have
\[\begin{split}\mathcal{E}_{\beta}^{m}(\Omega)&\geq \min_{x_{i},x_{j}}\left\{\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\frac{\omega(x_{i},x_{ j})}{\parallel x_{i}-x_{j}\parallel^{m}}\right\}\geq\min_{x_{i},x_{j}} \left\{\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\frac{1}{\parallel x_{i}-x_{j}\parallel ^{m+1}}\right\}\geq n\cdot D_{n}(\Omega)\\ &\geq\frac{1}{n-1}\cdot\min_{x_{i},x_{j}}\left\{\sum_{i=1}^{n-1} \sum_{j=i+1}^{n}\frac{1}{\parallel x_{i}-x_{j}\parallel^{m+1}}\right\}.\end{split} \tag{37}\]
Thus, (33) holds.
Inspired by [20, 21], to prove (34), let \(X_{1:n}=\{x_{1},...,x_{n}\}\) be a configuration of \(n\) points on \(\Omega\) that maximize the weighted Riesz polarization. Let \(r_{n}=C_{9}n^{-\frac{1}{d}}\), \(\Omega_{i}:=\Omega\setminus B(x,r_{n})\), where \(B(x,r_{n})\) is the open ball in \(\mathbb{R}^{d}\) with center \(x\) and radius \(r_{n}\), we have
\[\begin{split}&\mu(B(x,r_{n})\cap\Omega)\leq C_{9}r_{n}^{d},\\ &\mu(\Omega)\geq 1-\sum_{j=1}^{n}\mu(B(x_{j},r_{n}))\geq 1-C_{9}nr_{n} ^{d}.\end{split} \tag{38}\]
The inequality of the quantity \(\mathcal{E}_{\beta}^{m}(\Omega)\) can be expressed by
\[\begin{split}\mathcal{E}_{\beta}^{m}(\Omega)&\leq \frac{\parallel\omega(x_{i},x_{j})\parallel}{\mu(\Omega)}\int_{\Omega_{i}}\sum_ {j=1}^{n}\left\|x-x_{j}\right\|^{-(m+1)}d\mu(x)\\ &\leq\frac{1}{1-C_{9}nr_{n}^{d}}\sum_{j=1}^{n}\int_{\Omega_{i}} \left\|x-x_{j}\right\|^{-(m+1)}d\mu(x).\end{split} \tag{39}\]
From [20], we obtain the integral inequality
\[\begin{split}\int_{\Omega_{i}}\|x-x_{j}\|^{-(m+1)}\,d\mu(x)& =\int_{0}^{\infty}\mu\left\{x\in\Omega_{i}:\|x-x_{j}\|^{-(m+1)}>t \right\}dt\\ &\leq 1+\int_{1}^{r_{n}^{-(m+1)}}\mu(B(x_{j},t^{-\frac{1}{m+1}})\cap \Omega)dt\\ &\leq 1+C_{9}\int_{1}^{r_{n}^{-(m+1)}}t^{-\frac{d}{m+1}}dt,\end{split} \tag{40}\]
when \(n\) is sufficiently large such that \(r_{n}^{-(m+1)}>1\). For \(m>d\), thus (39) follows that
\[\begin{split}\mathcal{E}_{\beta}^{m}(\Omega)&\leq \frac{n}{1-C_{9}nr_{n}^{d}}(1+C_{9}\int_{1}^{r_{n}^{-(m+1)}}t^{-\frac{d}{m+1}} dt)\leq\frac{n}{1-C_{9}nr_{n}^{d}}(1+C_{9}\int_{1}^{r_{n}^{-(m+1)}}t^{-\frac{d}{m+1 }}dt)\\ &=\frac{n}{1-C_{9}nr_{n}^{d}}\cdot\frac{(m+1)(r_{n}^{d-m-1}-1)}{ m-d+1}\leq\frac{C_{8}}{m-d+1}n^{\frac{m+1}{d}}.\end{split} \tag{41}\]
Thus, (34) holds.
Regarding the asymptotic behavior of the weighted Riesz polarization as \(m\) approximate the \(d\), we have
**Theorem 2.3.2**.: For \(m>d\), if \(\Omega=\mathbb{S}^{d}\),
\[\lim_{m\to d^{+}}\liminf_{n\to\infty}\frac{\mathcal{E}_{\beta}^{m}(\Omega)}{n ^{\frac{m+1}{d}}}=\infty. \tag{42}\]
**Proof** From [20], let \(\mathcal{E}(\Omega):=\sum_{i=1}^{n}\sum_{j=i+1}^{n}\frac{1}{\|x_{i}-x_{j}\|^{ m}}\), we have
\[\liminf_{n\to\infty}\frac{\mathcal{E}(\Omega)}{n\log n}\geq\frac{c_{d}}{u_{d}( \mathbb{S}^{d})}=\frac{\Gamma(\frac{d+1}{2})}{\sqrt{\pi}d\cdot\Gamma(\frac{d} {2})}:=\tau_{d}, \tag{43}\]
when \(m=d\). Let \(\Omega=\mathbb{S}^{d}\), we have the estimate for \(x\in\mathbb{S}^{d}\)[21]
\[\mu(B(x,r)\cap\mathbb{S}^{d})\leq\tau_{d}r^{d}, \tag{44}\]
and for \(m>d\), from the estimate
\[\begin{split}\int_{\Omega\setminus B(x_{j},r_{n})}\|x-x_{j}\|^{ -(m+1)}\,d\mu(x)&=d\tau_{d}2^{-\frac{m+1}{2}}\int_{-1}^{1-\frac{ r_{n}^{2}}{2}}(1-t)^{-\frac{m+1}{2}+\frac{d}{2}-1}(1+t)^{\frac{d}{2}-1}dt\\ &\leq d\tau_{d}2^{-\frac{m+1}{2}+\frac{d}{2}-1}\int_{-1}^{1- \frac{r_{n}^{2}}{2}}(1-t)^{-\frac{m+1}{2}+\frac{d}{2}-1}dt\\ &=\frac{d\tau_{d}}{m+1-d}\left[r^{-m-1+d}-2^{-p-1+d}\right],r<2, \end{split} \tag{45}\]
Substitute (45) into (39), we have
\[\mathcal{E}_{\beta}^{m}(\Omega)\leq\frac{n}{1-C_{9}nr_{n}^{d}}\cdot\frac{d \tau_{d}}{m+1-d}\cdot r^{-m-1+d}. \tag{46}\]
The optimal value for \(r_{n}\) is
\[r_{n}=(\frac{m+1-d}{nm\tau_{d}+n\tau_{d}})^{\frac{1}{2}}. \tag{47}\]
Substitute (47) and (46) into (42), the inequality holds.
### Covering Radius
In this section, we state and prove the bound of the covering radius. And extend to deal with the weak* limit distribution of best-covering \(n\)-point configurations on rectifiable sets \(\Omega\). Suppose that \(\Omega\) is a compact infinite metric space with Euclidean metric \(r(x,y)=\|x-y\|\), \(\Omega\times\Omega\to[0,\infty)\), we define the covering radius of an \(n\)-point configuration \(X_{1:n}\) in a metric space \((\Omega,r)\) as \(\rho(X_{1:n},\Omega):=\max_{x\in\Omega}\min_{i=1,\dots,n}r(x,x_{i})\). From the geometrical perspective, the
covering radius of \(X_{1:n}\) can be considered as the minimal radius of \(n\) adjacent closed balls centered at \(X_{1:n}\) whose union contains the entire \(\Omega\). Among finite element analysis and approximation theory, this quantity is known as the best approximation of the set \(\Omega\) by the configuration \(X_{1:n}\)[17]. The optimal values of this quantity are also of interest and we define the minimal \(n\)-point covering radius of a set \(\Omega\) as
\[\rho_{n}(\Omega):=\min\{\rho(X_{1:n},\Omega):X_{1:n}\subset\Omega\}.\]
\(\rho_{n}(\Omega)\) is also called an \(n\)-point best-covering configuration for \(\Omega\)[10].
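A short sketch (NumPy, with \(\Omega\) approximated by a dense reference grid, an assumption made only for illustration) of the covering radius \(\rho(X_{1:n},\Omega)\):

```python
import numpy as np

def covering_radius(X, omega_points):
    """max over omega_points of the distance to the nearest point of the configuration X."""
    d = np.linalg.norm(omega_points[:, None, :] - X[None, :, :], axis=-1)  # (|Omega|, n)
    return d.min(axis=1).max()

# Example: Omega approximated by a dense grid on [0, 1], X a 5-point configuration
grid = np.linspace(0.0, 1.0, 1001)[:, None]
X = np.linspace(0.1, 0.9, 5)[:, None]
print(covering_radius(X, grid))   # about 0.1 for this evenly spaced configuration
```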
**Theorem 2.4.1**.: Suppose the compact set \(\Omega\subset\mathbb{S}^{d}\) with \(\mathcal{U}_{d}^{m}(\Omega)>0\), there exists a positive constant \(C_{10}\) such that for any \(n\) -point configuration \(X_{1:n}^{*}\) that is optimal for \(\mathcal{E}_{\beta}(\Omega)\), we have \(\rho(X_{1:n}^{*},\Omega)\leq C_{10}\cdot n^{-\frac{m-2d}{2(m-d)}}\), where \(C_{10}\propto\left(\frac{m}{m-d}\right)^{\frac{1}{m-d}}\).
**Proof** Since \(\Omega\subset\mathbb{S}^{d}\) is a compact set, there exists a finite family of set \(\{\Omega_{i}\}\), \(i=1,...,n^{\prime}\), with the following properties: (1) \(\Omega=\left\{\cup\Omega_{i}\right\},i=1,...,n^{\prime}\), and the interiors of the sets \(\Omega_{i}\) are disjoint where the measure \(\mu(\Omega_{i}\cap\Omega_{j})_{i\neq j}=0\). (2) There exist positive constants \(C_{11}\) and \(C_{12}\), that does not depend on \(n\), and the point \(x_{i}\in\Omega_{i}\), such that \(B(x_{i},C_{11}n^{-\frac{1}{2}-\frac{1}{m}})\cap\Omega\subset\Omega_{i}\subset B (x_{i},C_{12}n^{-\frac{1}{2}-\frac{1}{m}})\). Since \(\Omega_{i}\subset B(x_{i},n^{-\frac{1}{2}-\frac{1}{m}})\), there exists a \(\alpha^{\prime}\) such that the number of points from \(X_{1:n}^{*}\) is \(\#(\Omega_{i}\cap X_{1:n}^{*})\leq\alpha^{\prime}n\), where \(0<\alpha^{\prime}<1\).
Hereafter we follow an argument in [22]. Let \(y\in\Omega\) be such that \(\min\limits_{x_{k}\in X_{1:n}}|y-x_{k}|=\rho(X_{1:n},\Omega)\). Assume \(\rho(X_{1:n},\Omega)\geq C_{13}n^{-\frac{1}{2}-\frac{1}{m}}\), for every \(x_{i}\in\{\{X_{1:n}^{*}\}\cap\Omega_{i}\}\), we have
\[\begin{split}|y-x|&\leq|y-x_{i}|+|x_{i}-x|\leq|y-x_ {i}|+2C_{12}n^{-\frac{1}{2}-\frac{1}{m}}\leq|y-x_{i}|+\frac{2C_{12}}{C_{13}} \rho(X_{1:n},\Omega)\\ &\leq\frac{2C_{12}+C_{13}}{C_{13}}\left|y-x_{i}\right|,\end{split} \tag{48}\]
which implies
\[|y-x_{j}|^{-m}\leq\frac{2C_{12}+C_{13}}{C_{13}}\cdot\min\limits_{x\in\Omega_{ i}}\left|y-x\right|^{-m}. \tag{49}\]
The corresponding lower bound
\[\begin{split}|y-x|&\geq|y-x_{i}|-|x_{i}-x|\geq|y-x _{i}|-2C_{12}n^{-\frac{1}{2}-\frac{1}{m}}\leq|y-x_{i}|-\frac{C_{12}}{C_{13}} \rho(X_{1:n},\Omega)\\ &\geq\frac{C_{13}-C_{12}}{C_{13}}\rho(X_{1:n},\Omega).\end{split} \tag{50}\]
Consequently, \(\Omega\cap B(y,\frac{C_{13}-C_{12}}{C_{13}}\rho(X_{1:n},\Omega))\subset\Omega \setminus\bigcup_{x_{i}\in X_{1:n}^{*}}\Omega_{i}\). For each \(x_{i}\in\Omega_{i}\), from (49), we get
\[\frac{1}{\left|y-x_{i}\right|^{m}}\leq\left(\frac{2C_{12}+C_{13}}{C_{13}} \right)^{m}\frac{1}{\mu(\Omega)}\int_{\Omega_{i}}\frac{d\mu(x)}{\left|y-x \right|^{m}}. \tag{51}\]
Since \(B(x_{i},C_{12}n^{-\frac{1}{2}-\frac{1}{m}})\cap\Omega\subset\Omega_{i}\), we get \(\mu(\Omega)\geq C_{12}\cdot n^{-1}\), which implies
\[\begin{split}\mathcal{E}_{\beta}^{m}(\Omega)&=C_{ 12}\cdot n^{\frac{m}{2}+1}\leq n\cdot\sum\limits_{x_{i}\in X_{1:n}}\frac{1}{ \left|y-x_{i}\right|^{m}}\leq n\cdot\left(\frac{2C_{12}+C_{13}}{C_{13}} \right)^{m}\sum\limits_{x_{i}\in X_{1:n}}\frac{1}{\mu(\Omega_{i})}\int_{\Omega _{i}}\frac{d\mu(x)}{\left|y-x\right|^{m}}\\ &\leq n^{2}\left(\frac{2C_{12}+C_{13}}{C_{13}}\right)^{m}\int_{B (y,\frac{C_{13}-C_{12}}{C_{13}}\rho(X_{1:n},\Omega))}\frac{d\mu(x)}{\left|y- x\right|^{m}}\\ &\leq\frac{m}{m-d}\cdot\left(\frac{2C_{12}+C_{13}}{C_{13}}\right)^ {m}\cdot n^{2}\cdot\left(\frac{C_{13}-C_{12}}{C_{13}}\rho(X_{1:n},\Omega) \right)^{d-m},\end{split} \tag{52}\]
which implies
\[\left[\rho(X_{1:n},\Omega)\right]^{m-d}\leq\frac{m}{m-d}\cdot\left(\frac{2C_{12 }+C_{13}}{C_{13}}\right)^{m}\cdot n^{-\frac{m-2d}{d}}, \tag{53}\]
we get
\[\rho(X_{1:n},\Omega)\leq\left(\frac{m}{m-d}\right)^{\frac{1}{m-d}}\cdot\left( \frac{2C_{12}+C_{13}}{C_{13}}\right)^{\frac{m}{m-d}}\cdot n^{-\frac{m-2d}{2(m-d )}}. \tag{54}\]
A good estimate on the constant \(C_{10}\) for large values of \(m\) yields the following theorem regarding the asymptotic behavior of \(\mathcal{E}_{\beta}(\Omega)\) as \(m\to\infty\).
**Theorem 2.4.2**.: Suppose the compact set \(\Omega\subset\mathbb{S}^{d}\) or \(\Omega=[0,1]^{d}\). The quantities as defined in **Theorem 2.2.1**, the following limits exist as positive real numbers and satisfy
\[\lim_{m\to\infty}\frac{C_{2}}{[\mathcal{U}_{d}^{m}(\Omega)]^{\frac{1}{d}}}= \lim_{m\to\infty}\left(\lim_{n\to\infty}\frac{\mathcal{E}_{\beta}(\Omega)}{n^ {\frac{1}{m}+\frac{1}{2}}}\right)=\frac{1}{\lim_{n\to\infty}n^{\frac{1}{2}} \rho_{n}(\Omega)}. \tag{55}\]
**Proof** If \(A\subset\mathbb{S}^{d}\), there exist positive constant \(C_{11}\) and \(C_{12}\) such that \(C_{11}n^{-\frac{1}{2}-\frac{1}{m}}\leq\rho_{n}(\Omega)\leq C_{12}n^{-\frac{1} {2}-\frac{1}{m}}.\) Observe that
\[\mathcal{E}_{\beta}(\Omega)\geq\inf_{y\in\Omega}\sum_{x_{i}\in X_{1:n}^{*}} \frac{1}{|y-x_{i}|^{\frac{1}{m}+1}}=\frac{1}{\max_{y\in\Omega}\min_{x_{i}\in X _{1:n}^{*}}|y-x_{i}|^{\frac{1}{m}+1}}=\rho_{n}(\Omega)^{-1-\frac{1}{m}}\geq C_ {13}n^{\frac{1}{d}+\frac{1}{m}}. \tag{56}\]
Consequently,
\[\lim_{n\to\infty}\frac{\mathcal{E}_{\beta}(\Omega)}{n^{\frac{1}{m}+\frac{1}{2 }}}\geq\frac{1}{\liminf_{n\to\infty}(n^{\frac{1}{d}+\frac{1}{m}}\rho_{n}( \Omega))}, \tag{57}\]
which implies
\[\liminf_{m\to\infty}\left(\lim_{n\to\infty}\frac{\mathcal{E}_{\beta}(\Omega)} {n^{\frac{1}{m}+\frac{1}{2}}}\right)\geq\frac{1}{\liminf_{n\to\infty}(n^{ \frac{1}{d}}\rho_{n}(\Omega))}. \tag{58}\]
Using the same argument as [22], we now take an arbitrary point \(y\in\Omega\) such that
\[\min_{j=1,\cdots,n}|y-x_{i}|=\rho_{n}(X_{1:n}^{*}), \tag{59}\]
and set \(B_{i}:=B(y,i\cdot\rho_{n}(X_{1:n}^{*}))\setminus B(y,(i-1)\cdot\rho_{n}(X_{1: n}^{*}))\), where \(i\geq 2\). Since \(X_{1:n}^{*}\cap B(y,\rho_{n}(X_{1:n}^{*}))=\emptyset\), we have \(X_{1:n}^{*}\subset\bigcup_{i=2}^{\infty}B_{i}\). For any \(i\geq 2\) we have
\[\mathcal{E}_{\beta}(\Omega) =\max_{\Omega}\min_{x_{i},x_{j}}\left\{\sum_{i=1}^{n-1}\sum_{j=i+1 }^{n}\frac{\omega(x_{i},x_{j})}{\|\;x_{i}-x_{j}\;\|^{m}}\right\}^{\frac{1}{m}} \leq\max_{\Omega}\left\{\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\frac{\omega(x_{i},x_ {j})}{\|\;x_{i}-x_{j}\;\|^{m}}\right\}^{\frac{1}{m}} \tag{60}\] \[\leq|\omega(x_{i},x_{j})|^{\frac{1}{m}}\max\left\{\sum_{i=2}^{n-1 }\sum_{j=i+1}^{n}\frac{1}{\|\;x_{i}-x_{j}\;\|^{m}}\right\}^{\frac{1}{m}}.\]
By the property of \(B_{n}\), for any \(x\in B_{i}\), we have \(|y-x|\geq(i-1)\cdot\rho_{n}(X_{1:n}^{*})\), which implies
\[\mathcal{E}_{\beta}(\Omega)\leq|\omega(x_{i},x_{j})|^{\frac{1}{m}}\cdot\left[ \sum_{i=2}^{\infty}i^{d}\cdot(i-1)^{-1}\right]^{\frac{1}{m}}\cdot\rho_{n}(X_{1: n}^{*})^{-1}. \tag{61}\]
Dividing by \(n^{\frac{1}{m}+\frac{1}{2}}\), since \(\rho_{n}(X_{1:n}^{*})\geq\rho_{n}(\Omega)\), we get
\[\frac{\mathcal{E}_{\beta}(\Omega)}{n^{\frac{1}{m}+\frac{1}{2}}}\leq\frac{| \omega(x_{i},x_{j})|^{\frac{1}{m}}\cdot\left[\sum_{i=2}^{\infty}i^{d}\cdot(i-1 )^{-1}\right]^{\frac{1}{m}}}{n^{\frac{1}{m}+\frac{1}{2}}\rho_{n}(\Omega)}, \tag{62}\]
which implies
\[\lim_{n\to\infty}\frac{\mathcal{E}_{\beta}(\Omega)}{n^{\frac{1}{m}+\frac{1}{2} }}\leq\frac{|\omega(x_{i},x_{j})|^{\frac{1}{m}}\cdot\left[\sum_{i=2}^{\infty}i ^{d}\cdot(i-1)^{-1}\right]^{\frac{1}{m}}}{\limsup_{n\to\infty}\left(n^{\frac{1} {m}+\frac{1}{2}}\rho_{n}(\Omega)\right)}. \tag{63}\]
As \(m\rightarrow\infty\), we have
\[\limsup_{n\rightarrow\infty}\left(\lim_{n\rightarrow\infty}\frac{\mathcal{E}_{ \beta}(\Omega)}{n^{\frac{1}{m}+\frac{1}{2}}}\right)\leq\frac{1}{\limsup_{n \rightarrow\infty}\left(n^{\frac{1}{m}+\frac{1}{2}}\rho_{n}(\Omega)\right)}. \tag{64}\]
(58) and (64) imply that \(\limsup_{n\rightarrow\infty}\left(n^{\frac{1}{m}+\frac{1}{2}}\rho_{n}(\Omega)\right)\) and \(\limsup_{n\rightarrow\infty}\left(\lim_{n\rightarrow\infty}\frac{\mathcal{E} _{\beta}(\Omega)}{n^{\frac{1}{m}+\frac{1}{2}}}\right)\) exist and satisfy
\[\lim_{m\rightarrow\infty}\left(\lim_{n\rightarrow\infty}\frac{\mathcal{E}_{ \beta}(\Omega)}{n^{\frac{1}{m}+\frac{1}{2}}}\right)=\frac{1}{\lim_{n \rightarrow\infty}n^{\frac{1}{2}}\rho_{n}(\Omega)}. \tag{65}\]
Thus, (55) holds.
## 3 Weighted Chebyshev Particles MCMC
In this section, we develop a new sampler in which the propagation of particles is derived from maximizing the weighted Riesz polarization. Since this quantity inherits some properties of the Chebyshev constant, we call these points Chebyshev particles; the samples inherit the special features presented in Section 2 when traversing a discretized deterministic submanifold of the parameter space via pairwise interactions. We further extend it to sequential sampling in the particle Metropolis-Hastings framework for the inference of hidden Markov models, where the acceptance ratio is approximated by a pseudo-marginal Metropolis-Hastings algorithm.
### Sequential Chebyshev Particles Sampling
Finding optimal designs of configurations is nondeterministic, especially in high dimensions, where point-by-point traversal results in exponential growth of the computational load. A number of optimization algorithms have been proposed for the optimal design of different configurations. Park [23] proposed two-stage exchange and Newton-type algorithms for optimal designs that minimize the integrated mean squared error and maximize entropy, respectively. Ye [24] further extended this with a column-pairwise algorithm. Morris and Mitchell [25] adapted simulated annealing [26] to explore the unit in a reachable domain. Inspired by [25] and [27], we propose a constrained one-point-at-a-time greedy algorithm for developing the sequential designs of weighted Riesz particles as follows.
(I) The choice of the initial point is crucial since it is closely related to sampling the subsequent points. For the sake of numerical stability, we take the particle with the largest average value as the initial point. We have the expectation \(\mathbb{E}(x)=\int_{0}^{x}xf(x)dx,x\in\Omega\). The maximum point \(x_{0}\) can be obtained by \(x_{0}=\arg\max_{x}\mathbb{E}(x)\).
(II) After we get the initial point \(x_{0}\), we will generate \(x_{2},x_{3}...,x_{n}\) sequentially. Suppose we have \(n\) points using (4). Then the \((n+1)\)th point can be obtained by
\[x_{n+1}=\underset{x}{\text{arg}}\mathcal{E}_{\beta}(\Omega,N)=\text{arg}\max _{\Omega}\min_{x_{i},x_{j}}\left\{\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\frac{ \omega(x_{i},x_{j})}{\parallel x_{i}-x_{j}\parallel^{m}}\right\}^{\frac{1}{m}}. \tag{66}\]
(III) If \(|x_{n+1}-x_{n}|\geq r_{\text{min}}(x_{1:n}^{*})\), we further develop an acceptance criterion for \(x_{n+1}\): Given \(u\sim U(u\mid 0,1)\), if \(\frac{|x_{n+1}-x_{n}|}{|x_{n}|}\geq u\), we accept \(x_{n+1}\); otherwise, we reject it.
(IV) After we get \(n\) points, we can use some statistical techniques such as regression or kriging to estimate the underlying manifold, where the density can be updated with \(\hat{f}(x)\), and \(\gamma(x)=\hat{\gamma}(x)\), we can recursively continue to generate different configurations of discrete manifolds.
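A hedged sketch of one reasonable greedy reading of steps (I)-(III) over a finite candidate grid: the next point is placed where the weighted Riesz field of the already-selected points is weakest. The grid, the \(r_{\min}\) value, the sign of \(\alpha\) (taken positive here, with an absolute value, purely for numerical stability), and the overflow clipping are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def sequential_chebyshev_particles(log_f, candidates, n_points, beta=1.0, m=4.0, d=1.0,
                                   r_min=1e-3, rng=None):
    """Greedy one-point-at-a-time selection over a finite candidate grid."""
    rng = np.random.default_rng() if rng is None else rng
    dens = log_f(candidates)
    gamma = -dens                                       # gamma(x) proportional to -ln f(x)

    def pair_term(xi, gi, xj, gj):
        r = np.linalg.norm(xi - xj)
        base = abs(gi * gj + beta * r)                  # abs() keeps the sketch real-valued
        return np.exp(min(base ** (-m / (2 * d)), 50.0)) / max(r, 1e-12) ** m  # clip to stay finite

    chosen = [int(np.argmax(dens))]                     # (I) a simple stand-in for arg max E(x)
    while len(chosen) < n_points:
        # (II) field of the current configuration at every unused candidate
        field = np.array([sum(pair_term(candidates[c], gamma[c], candidates[j], gamma[j])
                              for j in chosen) ** (1.0 / m)
                          if c not in chosen else np.inf
                          for c in range(len(candidates))])
        best = int(np.argmin(field))                    # weakest-covered candidate
        x_new, x_last = candidates[best], candidates[chosen[-1]]
        gap = np.linalg.norm(x_new - x_last)
        # (III) separation / stochastic acceptance check
        if gap >= r_min and gap / (np.linalg.norm(x_last) + 1e-12) >= rng.uniform():
            chosen.append(best)
        else:
            break                                       # in this sketch a rejection simply stops early
    return candidates[chosen]

# Illustrative use: a standard normal target on a 1-D grid
candidates = np.linspace(-3.0, 3.0, 201)[:, None]
log_f = lambda X: -0.5 * np.sum(X**2, axis=-1)
print(sequential_chebyshev_particles(log_f, candidates, n_points=15).ravel())
```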
### Pseudo-marginal Metropolis-Hastings Sampling
Consider a hidden Markov model described by \(X_{t}\sim f_{\theta}(X_{t}\mid X_{t-1}),\ Y_{t}\mid X_{t}\sim g_{\theta}(y_{t}\mid X_{t})\). Given \(x_{0}\), \(X_{t}\ (t=1,2,\dots,n)\) is a latent variable, the measurements \(Y_{t}\) are assumed to be conditionally independent given \(X_{t}\), and the main objective is to estimate \(\{X_{1:t},\theta\}\). Particle Metropolis-Hastings [28] is an MCMC method that randomly "walks around" the assumed measurable \(\theta\) space and thus draws samples from the approximate posterior \(\hat{p}(X_{1:t},\theta\mid y_{1:t})\), since the closed form \(p(X_{1:t},\theta\mid y_{1:t})=p(\theta\mid y_{1:t})\cdot p(X_{1:t}\mid y_{1:t},\theta)\) is unreachable and cannot be evaluated pointwise exactly.
We will introduce how Chebyshev particles are embedded for the following steps: For the parameter that locates at \(\{\theta,X_{1:t}\}\), a new parameter \(\{\theta^{\prime},X_{1:t}^{{}^{\prime}}\}\) is proposed from a proposal \(q(\theta^{\prime},X_{1:t}^{{}^{\prime}}\mid\theta,X_{1:t})\) with the probability of acceptance
\[\alpha=\min\left\{1,\frac{p(X_{1:t}^{{}^{\prime}},\theta^{\prime} \mid y_{1:t})q(\theta,X_{1:t}\mid\theta^{\prime},X_{1:t}^{{}^{\prime}})}{p(X_ {1:t},\theta\mid y_{1:t})q(\theta^{\prime},X_{1:t}^{{}^{\prime}}\mid\theta,X_{1 :t})}\right\}=\min\left\{1,\frac{p(y_{1:t}\mid\theta^{\prime})p(\theta^{{}^{ \prime}})q(\theta\mid\theta^{\prime})}{p(y_{1:t}\mid\theta)p(\theta)q(\theta^ {\prime}\mid\theta)}\right\}. \tag{67}\]
The optimal importance density function that minimizes the variance of importance weights, conditioned upon \(X_{t-1}^{i}\) and \(y_{t}\) has been shown [29] to be
\[q(X_{t}\mid X_{t-1}^{i},y_{t})_{opt}=p(X_{t}\mid X_{t-1}^{i},y_{t})=\frac{p(y_ {t}\mid X_{t},X_{t-1}^{i})p(X_{t}\mid X_{t-1}^{i})}{p(y_{t}\mid X_{t-1}^{i})}.\]
Sampling from \(p(y_{t}\mid X_{t},X_{t-1}^{i})\), however, may not be straightforward. As \(X_{1:t}\) belongs to the "deterministic" part of the discrete manifolds of the space, \(X_{1:t}\in\Omega\), the importance density \(q(X_{t}|y_{t},X_{t-1}^{a_{t-1}^{(i)}})\) is chosen from the real minimum-energy configuration, where \(a_{t}^{(i)}\) denotes the ancestor of particle \(X_{t}^{i}\). If \(N\to\infty\), we have \(\lim_{N\to\infty}q(X_{t}|y_{t},X_{t-1}^{a_{t-1}^{(i)}})=p(X_{t}|X_{t-1}^{i},y_{t})\). Thus, our proposal converges to the optimal importance density, and we can obtain a stochastic estimator of \(p(y_{1:T}\mid\theta)\). This likelihood can be estimated by the weights
\[\hat{p}_{\theta}(y_{1:T})=\prod_{t=1}^{T}\left(\frac{1}{N_{x}}\sum_{i=1}^{N_{x}}\frac{p(X_{t}|X_{t-1}^{i},y_{t})}{q(X_{t}|y_{t},X_{t-1}^{a_{t-1}^{(i)}})}\right). \tag{68}\]
It can be shown [30] that \(\mathbb{E}[\hat{p}_{\theta}(y_{1:T})]=p_{\theta}(y_{1:T})\). The variance of the weights will be very small, this would be verified by the following experiments.
Combine (67) and (68), we can get the estimated acceptance ratio
\[\hat{\alpha}=\min\left\{1,\frac{\hat{p}(y_{1:t}\mid\theta^{\prime})p(\theta^{ {}^{\prime}})q(\theta\mid\theta^{\prime})}{\hat{p}(y_{1:t}\mid\theta)p(\theta)q (\theta^{\prime}\mid\theta)}\right\}.\]
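A compact sketch of the resulting pseudo-marginal Metropolis-Hastings loop; the symmetric random-walk proposal is an illustrative choice, and `loglik_hat` stands for any unbiased likelihood estimator such as (68), used on the log scale:

```python
import numpy as np

def pseudo_marginal_mh(loglik_hat, log_prior, theta0, n_iters, step=0.1, rng=None):
    """Metropolis-Hastings with an estimated likelihood (e.g. from a particle filter)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    ll = loglik_hat(theta)                               # estimate of log p(y_{1:T} | theta)
    chain = [theta.copy()]
    for _ in range(n_iters):
        prop = theta + step * rng.standard_normal(theta.shape)   # symmetric proposal: q cancels
        ll_prop = loglik_hat(prop)
        log_alpha = (ll_prop + log_prior(prop)) - (ll + log_prior(theta))
        if np.log(rng.uniform()) < log_alpha:
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)
```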
## 4 Experiments
In this part, we will introduce the simulations where Chebyshev particles are embedded into the sequential Monte Carlo and its extension to Bayesian analysis for both the linear and non-linear models. We ran the experiments on an HP Z200 workstation with an Intel Core i5 and an \(\#82-18.04.1-\) Ubuntu SMP kernel. The code is available at [https://github.com/986876245/ChebyshevParticles](https://github.com/986876245/ChebyshevParticles).
### Linear Gaussian State Space Model
The linear model is expressed by:
\[x_{t}\mid x_{t-1}\sim g(x_{t}|x_{t-1})dx_{t},\ \ y_{t}\mid x_{t}\sim f(y_{t}|x_{t})dy_{t},\]
where \(g(x_{t}|x_{t-1})=\phi x_{t-1}+e_{v}\), with tracking noise \(e_{v}\sim N(0,\delta_{v}^{2})\) and observation noise \(e_{o}\sim N(0,\delta_{o}^{2})\). Here we use (66) to compute \(\widehat{x}_{0:T}^{N}\) and \(\widehat{p}_{\theta}^{N}(y_{1:T})\) with Riesz particles instead, requiring only a few evaluations.
\[g(\widehat{x}_{t}|\widehat{x}_{t-1})=\underset{x}{\text{arg}}\mathcal{E}_{ \beta}(\Omega,N^{\prime})=\text{arg}\max_{x}\min_{x_{i},x_{j}}\left\{\sum_{x \neq\widehat{x}_{i},i=1}^{t-1}\frac{\omega(\widehat{x}_{i},x)}{\|\widehat{x}_{ i}-x\|^{m}}\right\}^{\frac{1}{m}},\]
\[\omega(\widehat{x}_{i},x)\propto e^{[\gamma(\widehat{x}_{i})\gamma(x)+\beta\|\widehat{x}_{i}-x\|]^{-\frac{m}{2d}}}.\]
For the linear Gaussian state space model, an optimal proposal distribution to propagate the particles \(x_{t}^{i},i=1,N\) can be derived [31] from
\[p_{\theta}^{\text{opt}}(x_{t}^{i}\mid x_{t-1}^{i},y_{t})\propto g_{\theta}(y_ {t}\mid x_{t}^{i})f_{\theta}(x_{t}^{i}\mid x_{t-1}^{i})=\mathbb{N}(x_{t}^{i}, \sigma^{2}[\sigma_{o}^{-2}y_{t}+\sigma_{v}^{-2}\phi x_{t-1}^{i}],\sigma^{2})\]
with \(\sigma^{-2}=\sigma_{v}^{-2}+\sigma_{o}^{-2}\). To ensure the stability of the algorithm and try to minimize the variance of the incremental particle weights at the current time step, we set \(\gamma(x)\propto p_{\theta}^{\text{opt}}(x_{t}^{i}\mid x_{t-1}^{i},y_{t})\). The latent state \(x_{t}\) can be estimated with an
unbiased quantity \(\widehat{x}_{t}^{N}=\frac{1}{N}\sum_{i=1}^{N}x_{t}^{i}\), where \(N\) is the number of particles used to estimate the state and \(N^{\prime}\) is the number of Chebyshev particles used to discretize the submanifolds. So that the Chebyshev particles are fully sampled, we assign to the specific particles \(x_{t}^{i}\) the indices given by the remainder of \(N\) divided by \(N^{\prime}\) \((N>N^{\prime})\).
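For orientation, a plain particle filter for this linear Gaussian model, using the locally optimal Gaussian proposal displayed above, can be sketched in Python as follows. This is only an illustrative baseline with assumed names; it does not include the Chebyshev/Riesz particle construction of (66), which is the contribution of this paper.

```
import numpy as np

def lgss_particle_filter(y, phi, dv, do, N=100, rng=None):
    """Fully adapted particle filter for x_t = phi*x_{t-1} + N(0, dv^2),
    y_t = x_t + N(0, do^2). Returns filtered state estimates and the
    log-likelihood estimate log p(y_{1:T})."""
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    sigma2 = 1.0 / (1.0 / dv**2 + 1.0 / do**2)      # proposal variance sigma^2
    x = np.zeros(N)                                  # particles, x_0 = 0
    x_hat = np.zeros(T)
    loglik = 0.0
    for t in range(T):
        # incremental weight p(y_t | x_{t-1}^i) = N(y_t; phi*x_{t-1}^i, dv^2 + do^2)
        pred_var = dv**2 + do**2
        logw = -0.5 * np.log(2 * np.pi * pred_var) - 0.5 * (y[t] - phi * x)**2 / pred_var
        w = np.exp(logw - logw.max())
        loglik += logw.max() + np.log(w.mean())
        # resample ancestors proportionally to the weights
        idx = rng.choice(N, size=N, p=w / w.sum())
        x = x[idx]
        # propagate through the locally optimal Gaussian proposal
        mean_prop = sigma2 * (y[t] / do**2 + phi * x / dv**2)
        x = mean_prop + np.sqrt(sigma2) * rng.standard_normal(N)
        x_hat[t] = x.mean()
    return x_hat, loglik
```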
We first conduct experiments with different numbers of Chebyshev particles to discretize the submanifolds. When mapped to a particular space, these particles approach different straight lines, as shown in Figure 1, and satisfy the uniform distribution of Theorem 1. The parameters of the objective for the Chebyshev particles are \(\{\beta=1,m=40,d=1\}\).
Then, we embed these particles into sequential Monte Carlo, where the states recursively interact within the Chebyshev particle set \(\mathbb{P}\). To compare with the ground truth, we provide a simulated data record from the model with \(T=250\) observations, initial value \(\phi=0.75,\delta_{v}=1.00,\delta_{o}=0.10\) and \(\widehat{x}_{0}=0\). The estimated log-bias and log-MSE for the Chebyshev particles embedded in sequential Monte Carlo when varying the number of particles \(N\) are shown in Table 1.
Here, we extend Chebyshev particles to the pseudo-marginal Metropolis-Hastings algorithm provided in Section 3.2 for Bayesian parameter inference in hidden Markov models. We estimate the posterior for \(\phi\), where \(\phi\in(-1,1)\) describes the persistence of the state, keep \(\delta_{v}=1.00,\delta_{e}=0.10\) fixed, and use the prior value \(\phi_{0}=0.75\). We specify the number of Chebyshev particles in the set as \(100\), which is far less than the number of iterations (\(\geq 2000\)); in this sense we have substantially scaled down the particle sets for our model, and only a few evaluations are needed to infer it. We conduct runs with different step sizes, \(h_{1}=0.05,h_{2}=0.1,h_{3}=0.5\); the posterior estimate, the burn-in process and the plots of the auto-correlation of the chain by lag are shown in Figure 2. The corresponding table is shown in Table 2.

Figure 1: Theoretical Quantiles for 40, 120 and 200 Chebyshev particles.
### Nonlinear State Space Model
We continue with a real application of our proposal: tracking stochastic volatility, a nonlinear state space model with Gaussian noise in which the log-volatility, treated as the latent variable, is an essential element in the analysis of financial risk management. The stochastic volatility model is given by
\[X_{0}\sim N(\mu,\frac{\sigma_{v}^{2}}{1-\rho^{2}}),\ \ X_{t}\mid X_{t-1}\sim N( \mu+\rho(X_{t-1}-\mu),\sigma_{v}^{2}),\ \ Y_{t}\mid X_{t}\sim N(0,exp(X_{t})\tau),\]
where the parameters \(\theta=\{\mu,\rho,\sigma_{v},\tau\}\), \(\mu\in\mathbb{R},\rho\in[-1,1]\), \(\sigma_{v}\) and \(\tau\in\mathbb{R}_{+}\), denote the mean value, the persistence in volatility, the standard deviation of the state process and the instantaneous volatility, respectively.
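As a minimal illustration (ours, with assumed names), this model can be simulated in Python as follows; we treat the second argument of each normal law above as a variance, which is an assumption.

```
import numpy as np

def simulate_sv(T, mu, rho, sigma_v, tau, rng=None):
    """Simulate T steps of the stochastic volatility model:
    X_0 ~ N(mu, sigma_v^2/(1-rho^2)), X_t | X_{t-1} ~ N(mu + rho*(X_{t-1}-mu), sigma_v^2),
    Y_t | X_t ~ N(0, exp(X_t)*tau), with exp(X_t)*tau read as a variance."""
    rng = np.random.default_rng() if rng is None else rng
    x, y = np.empty(T), np.empty(T)
    x[0] = rng.normal(mu, sigma_v / np.sqrt(1.0 - rho**2))
    y[0] = rng.normal(0.0, np.sqrt(np.exp(x[0]) * tau))
    for t in range(1, T):
        x[t] = rng.normal(mu + rho * (x[t - 1] - mu), sigma_v)
        y[t] = rng.normal(0.0, np.sqrt(np.exp(x[t]) * tau))
    return x, y

# e.g. log-volatilities and log-returns for one year of daily data:
# x, y = simulate_sv(250, mu=0.0, rho=0.95, sigma_v=0.2, tau=1.0)
```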
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Number of particles (N) & 10 & 20 & 50 & 100 & 200 & 5000 \\ \hline \hline Estimated posterior mean & 0.559 & 0.769 & 0.737 & 0.696 & 0.709 & 0.717 \\ Estimated posterior variance & 0.105 & 0.039 & 0.023 & 0.012 & 0.005 & 0.001 \\ \hline \end{tabular}
\end{table}
Table 2: The estimated posterior mean and variance when varying T.
Figure 2: Posterior estimate, burn-in process and ACF for different step sizes: \(h_{1}=0.05,h_{2}=0.1,h_{3}=0.5\).
The observations \(y_{t}=\log(p_{t}/p_{t-1})\), also called log-returns, denote the logarithm of the daily difference in the exchange rate \(p_{t}\); here, \(\{p_{t}\}_{t=1}^{T}\) are the daily closing prices of the NASDAQ OMXS30 index (a weighted average of the 30 most traded stocks at the Stockholm stock exchange) [31]. We extract the data from Quandl for the period between January 2, 2015 and January 2, 2016. The resulting log-returns are shown in Figure 3. We use SMC to track the persistent time-series volatility. Large variations are frequent, a phenomenon well known as volatility clustering in finance; by equation (42), the closer \(|\phi|\) is to \(1\) and the smaller the standard deviation, the more easily the volatility clustering effect occurs. Here, the parameters of the objective for the Chebyshev particles are \(\{\beta=1,m=40,d=1\}\), and the size of the Chebyshev particle set is \(200\). The initial values are \(\mu_{0}=0,\sigma_{0}=1,\phi_{0}=0.95,\sigma_{\phi}=0.05,\delta_{v_{0}}=0.2,\sigma_{v}=0.03\). We obtain good performance: the posterior estimate can be inferred from a few evaluations, which greatly reduces the computational load for high-dimensional sampling, as shown in Figure 3.
## 5 Conclusion
Markov chain Monte Carlo (MCMC) provides a feasible method for inferring hidden Markov models. However, it is often computationally prohibitive and especially constrained by the curse of dimensionality, since the Monte Carlo sampler traverses the parameter space randomly, taking small steps within uncertain regions. In this process, a large number of duplicate samples is burned, and these duplicates greatly increase the computational load. We have introduced a deterministic sampling mechanism in which all generated samples are derived from particle interactions under a weighted Riesz polarization maximizing criterion. All samples inherit the properties of both a well-separated distance and a bounded covering radius. We have embedded them into MCMC, where we achieved high performance in our experiments with a hidden Markov model. Only a few evaluations are required, and the method can be extended to high-dimensional sampling. For future research, we will develop a kernel for the Chebyshev particles and scale the model with a low computational complexity, from the perspective of equilibrium states in high-dimensional sampling.
## Acknowledgments
This work was supported in part by the BRBytes project.
|
2303.18116 | Pair Programming with Large Language Models for Sampling and Estimation
of Copulas | Without writing a single line of code by a human, an example Monte Carlo
simulation based application for stochastic dependence modeling with copulas is
developed using a state-of-the-art large language model (LLM) fine-tuned for
conversations. This includes interaction with ChatGPT in natural language and
using mathematical formalism, which, under careful supervision by a
human-expert, led to producing a working code in MATLAB, Python and R for
sampling from a given copula model, evaluation of the model's density,
performing maximum likelihood estimation, optimizing the code for parallel
computing for CPUs as well as for GPUs, and visualization of the computed
results. In contrast to other emerging studies that assess the accuracy of LLMs
like ChatGPT on tasks from a selected area, this work rather investigates ways
how to achieve a successful solution of a standard statistical task in a
collaboration of a human-expert and artificial intelligence (AI). Particularly,
through careful prompt engineering, we separate successful solutions generated
by ChatGPT from unsuccessful ones, resulting in a comprehensive list of related
pros and cons. It is demonstrated that if the typical pitfalls are avoided, we
can substantially benefit from collaborating with an AI partner. For example,
we show that if ChatGPT is not able to provide a correct solution due to a lack
of or incorrect knowledge, the human-expert can feed it with the correct
knowledge, e.g., in the form of mathematical theorems and formulas, and make it
to apply the gained knowledge in order to provide a solution that is correct.
Such ability presents an attractive opportunity to achieve a programmed
solution even for users with rather limited knowledge of programming
techniques. | Jan Górecki | 2023-03-31T15:02:48Z | http://arxiv.org/abs/2303.18116v1 | # Pair Programming with Large Language Models for Sampling and Estimation of Copulas
###### Abstract
Without writing a single line of code by a human, an example Monte Carlo simulation based application for stochastic dependence modeling with copulas is developed using a state-of-the-art large language model (LLM) fine-tuned for conversations. This includes interaction with ChatGPT in natural language and using mathematical formalism, which, under careful supervision by a human expert, led to producing working code in MATLAB, Python and R for sampling from a given copula model, evaluation of the model's density, performing maximum likelihood estimation, optimizing the code for parallel computing for CPUs as well as for GPUs, and visualization of the computed results. In contrast to other emerging studies that assess the accuracy of LLMs like ChatGPT on tasks from a selected area, this work rather investigates ways to achieve a successful solution of a standard statistical task in a collaboration of a human expert and artificial intelligence (AI). Particularly, through careful prompt engineering, we separate successful solutions generated by ChatGPT from unsuccessful ones, resulting in a comprehensive list of related pros and cons. It is demonstrated that if the typical pitfalls are avoided, we can substantially benefit from collaborating with an AI partner. For example, we show that if ChatGPT is not able to provide a correct solution due to a lack of or incorrect knowledge, the human expert can feed it with the correct knowledge, e.g., in the form of mathematical theorems and formulas, and make it apply the gained knowledge in order to provide a correct solution. Such an ability presents an attractive opportunity to achieve a programmed solution even for users with rather limited knowledge of programming techniques.
**Keywords:** human-AI collaboration, analytically intractable problems, prompt engineering, natural language
**MSC classification:** 65C60, 68N19, 68T50
## 1 Introduction
The recent progress in solving natural language processing (NLP) tasks using large language models (LLMs) resulted in models with previously unseen quality of text generation and contextual understanding. These models, such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019) and GPT-3 (Brown et al.,
2020), are capable of performing a wide range of NLP tasks, including text classification, question-answering, text summarization, and more. With more than 100 million users registered within two months after its release for public testing through a web portal1, ChatGPT2 is the LLM that currently most resonates in the artificial intelligence (AI) community. This conversational AI is fine-tuned from the GPT-3.5 series with reinforcement learning from human feedback (Christiano et al., 2017; Stiennon et al., 2020), using nearly the same methods as InstructGPT (Ouyang et al., 2022), but with slight differences in the data collection setup. In March 2023, ChatGPT's developer released the successor of GPT-3.5, GPT-4 (OpenAI, 2023). At the time of writing this paper, GPT-4 was not freely available, so our results do not include its outputs. However, as a technical report on some of the model's properties is available, we add the relevant information where appropriate.
Footnote 1: [https://www.demandsage.com/chatgpt-statistics/](https://www.demandsage.com/chatgpt-statistics/)
Footnote 2: [https://openai.com/blog/chatgpt/](https://openai.com/blog/chatgpt/)
A particular result of the ChatGPT's fine-tuning is that it can generate corresponding code in many programming languages given a task description in natural language. This can be exploited in _pair programming_(Williams, 2001) with ChatGPT, which then offers several benefits, including:
* Enhanced productivity: ChatGPT can help automate repetitive and time-consuming programming tasks, freeing up time for developers to focus on higher-level problem-solving and creative work. On average, a time saving of 55% was reported for the task of writing an HTTP server in JavaScript in the study conducted by the GitHub Next team3 for GitHub Copilot4. The latter is another code suggestion tool that generates code snippets based on natural language descriptions, powered by an LLM similar to ChatGPT, Codex (Chen et al., 2021). Footnote 3: [https://github.blog/2022-09-07-](https://github.blog/2022-09-07-)...
* Improved code quality: Pair programming with ChatGPT can help identify errors and bugs in the code before they become bigger problems. ChatGPT can also suggest improvements to code architecture and design.
* Knowledge sharing: ChatGPT can help less experienced developers learn from more experienced team members by providing suggestions and guidance.
* Better code documentation: ChatGPT can help create more detailed and accurate code documentation by generating comments and annotations based on the code.
* Accessibility: ChatGPT can make programming more accessible to people who may not have a programming background, allowing them to collaborate with developers and contribute to projects in a meaningful way. For example, having developed a new theory that requires computations, it might be appealing and time-effective for researchers to use tools like
ChatGPT to implement the solution without the need to involve typically expensive manpower in software engineering.
Currently, there appear several studies that assess the accuracy of LLMs like ChatGPT based on a set of tasks from a particular area. For example, multiple aspects of mathematical skills of ChatGPT are evaluated in Frieder et al. (2023), with the main observation that it is not yet ready to deliver high-quality proofs or calculations consistently. In Katz et al. (2023), a preliminary version of GPT-4 was experimentally evaluated against prior generations of GPT on the entire Uniform Bar Examination (UBE)5, and it is reported that GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over the GPT-3.5-based model and beating humans in five of seven subject areas. In Bang et al. (2023), an extensive evaluation of ChatGPT using 21 data sets covering 8 different NLP tasks such as summarization, sentiment analysis and question answering is presented. The authors found that, on the one hand, ChatGPT outperforms LLMs with so-called zero-shot learning (Brown et al., 2020) on most tasks and even out-performs fine-tuned models on some tasks. On the other hand, they conclude that ChatGPT suffers from hallucination problems like other LLMs and it generates more extrinsic hallucinations from its parametric memory as it does not have access to an external knowledge base. Interestingly, the authors observed in several tasks that the possibility of interaction with ChatGPT enables human collaboration with the underlying LLM to improve its performance.
Footnote 5: [https://www.ncbex.org/exams/ube/](https://www.ncbex.org/exams/ube/)
The latter observation is the main focus of this work. Rather than evaluating the accuracy of LLMs, we investigate ways to benefit from pair programming with an AI partner in order to achieve a successful solution of a task requiring intensive computations. Despite many impressive recent achievements of state-of-the-art LLMs, achieving a functional code is far from being straightforward; one of many unsuccessful attempts is reported at freeCodeCamp6. Importantly, successful attempts are also emerging. In Maddigan and Susnjak (2023), the authors report that LLMs together with the proposed prompts can offer a reliable approach to rendering visualisations from natural language queries, even when queries are highly misspecified and underspecified. However, in many areas, including computationally intensive solutions of analytically intractable statistical problems, a study that demonstrates benefits from pair programming with an AI partner is missing.
Footnote 6: [https://www.freecodecamp.org/news/pair-programming](https://www.freecodecamp.org/news/pair-programming)...
This work fills this gap and considers applications involving _copulas_(Nelsen, 2006; Joe, 2014) as models for stochastic dependence between random variables. These applications are known for their analytical intractability, hence, the Monte Carlo (MC) approach is most widely used to compute the involved quantities of interest. As the MC approach often involves large computation efforts, conducting a MC study requires one to implement all underlying concepts. We demonstrate how to make ChatGPT produce a working implementation for such an application by interacting with it in a natural language and using math
ematical formalism. To fully illustrate the coding abilities of ChatGPT, the human role is pushed to an extreme, and all the mentioned tasks are implemented without a single line of code written by the human or tweaking the generated code in any way. It is important to emphasize that even if the application under consideration relates to a specific area of probability and statistics, our observations apply in a wider scope as the tasks we consider (sampling from a given (copula) model, evaluation of the model's density, performing maximum likelihood estimation, optimizing the code for parallel computing and visualization of the computed results) commonly appear in many statistical applications. Also, we do not present just one way to achieve a successful solution for a given task. Most of the successful solutions are complemented with examples demonstrating which adjustments of our prompts for ChatGPT turn unsuccessful solutions to successful ones. This results in a comprehensive list of related pros and cons, suggesting that if the typical pitfalls are avoided, we can substantially benefit from a collaboration with LLMs like ChatGPT. Particularly, we demonstrate that if ChatGPT is not able to provide a correct solution due to limitations in its knowledge, it is possible to feed it with the necessary knowledge, and make ChatGPT apply this knowledge to provide a correct solution. Having all the sub-tasks of the main task successfully coded in a particular programming language, we also demonstrate how to fully exploit several impressive abilities of ChatGPT. For example, by a simple high-level prompt like "Now code it in Python.", ChatGPT correctly transpiles the code from one to another programming language in a few seconds. Also, if an error in the code produced by ChatGPT is encountered during execution, it is demonstrated that ChatGPT is not only able to identify the error, but even immediately produces a corrected version after the error message is copy-pasted to ChatGPT's web interface.
The paper is organized as follows. Section 2 presents the tasks we consider and sets up the way we interact with ChatGPT. Section 3 presents the development of the task via pair programming with ChatGPT. Section 4 summarizes the pros and cons observed during the task development, including a discussion on how to mitigate the latter, and Section 5 concludes.
## 2 Methodology
### The task
Let \((x_{ij})\in\mathbb{R}^{n\times d}\) be a sample of size \(n\) from a random vector \((X_{1},\ldots,X_{d})\sim F\), where \(F\) is a joint distribution function with the continuous univariate margins \(F_{1},\ldots,F_{d}\) and _copula_(Sklar, 1959)\(C\) implicitly given by \(F(x_{1},\ldots,x_{d})=C(F_{1}(x_{1}),\ldots,F_{d}(x_{d}))\) for \(x_{1},\ldots,x_{d}\in\mathbb{R}\). An explicit formula for \(C\) is \(C(u_{1},\ldots,\)\(u_{d})=F(F_{1}^{-1}(u_{1}),\ldots,F_{d}^{-1}(u_{d})),\ u_{1},\ldots,u_{d}\in[0,1]\). A typical application involving the MC approach and copulas assumes that \(C\) is unknown but belongs to a parametric family of copula models \(\{C_{\theta}:\theta\in\Theta\}\), where \(\Theta\) is an open subset of \(\mathbb{R}^{p}\) for some integer \(p\geq 1\). The following steps are then considered:
1. Estimate the true but unknown parameter \(\theta_{0}\in\Theta\) of \(C_{\theta_{0}}=C\), e.g., using
the pseudo maximum likelihood (ML) estimator \[\hat{\theta}=\operatorname*{argmax}_{\theta\in\Theta}\sum_{i=1}^{n}\log c_{\theta}(\hat{u}_{i1},\dots,\hat{u}_{id}), \tag{1}\] where \(c_{\theta}\) is the density of \(C_{\theta},\ \theta\in\Theta\), \(\hat{u}_{ij}=\hat{F}_{j}(x_{ij})\) and \(\hat{F}_{j}\) is an estimate of \(F_{j}\) for \(i\in\{1,\dots,n\},\ j\in\{1,\dots,d\}\). For some copula families, e.g., for Archimedean ones, evaluation of \(c_{\theta}\) for large \(d\) is already a challenge; see Hofert et al. (2013). For pair-copula constructions, the main challenge lies in computing (1), see Hobaek Haff (2013) or Schellhase and Spanhel (2018), typically done using numerical methods like gradient descent.
2. Generate a sample \((v_{ij})\in[0,1]^{N\times d}\) from \(C_{\hat{\theta}}\), typically with \(N\gg n\). For several popular copula families, this task is also challenging, and involves different techniques for efficiently sampling from \(C_{\hat{\theta}}\); see, e.g., Hofert (2010); Hofert et al. (2018) for sampling techniques related to Archimedean and Archimax copulas and their hierarchical extensions.
3. Compute a sample from an analytically intractable distribution, e.g., from the distribution of \(\bar{X}=\frac{1}{d}\sum_{j=1}^{d}X_{j}\). Compared to the previous two points, this is a trivial task: we just need to evaluate \(\bar{x}_{i}=\frac{1}{d}\sum_{j=1}^{d}\hat{F}_{j}^{-1}(v_{ij}),\ i\in\{1,\dots,N\}\).
4. Compute the desired quantity based on \(\bar{x}_{1},\dots,\bar{x}_{N}\). For example, if \(X_{1},\dots,\)\(X_{d}\) represent risk factor changes, a quantile of the distribution function of \(\bar{X}\) represents the _Value-at-Risk_\(\operatorname*{VaR}_{\alpha}\), commonly used in quantitative risk management; see McNeil et al. (2015). Approximating \(\operatorname*{VaR}\)\((X_{1}+\dots+X_{d})\) is also trivial as it just involves computing the order statistics \(\bar{x}_{(1)}\leq\dots\leq\bar{x}_{(N)}\) and then picking out \(\bar{x}_{(\lceil\alpha N\rceil)}\), where \(\alpha\in[0,1]\) is a desired confidence level. In the same realm, the quantity known as _expected shortfall_ involves computing the average of the values \(\bar{x}_{i}\) that are larger than \(\operatorname*{VaR}_{\alpha}\), so again a computationally trivial task.
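Steps 3 and 4 are computationally trivial; purely for concreteness, a small Python sketch (ours, with assumed names) of the empirical \(\operatorname{VaR}_{\alpha}\) and expected shortfall computations described above is:

```
import numpy as np

def var_es(xbar, alpha=0.95):
    """Empirical Value-at-Risk and expected shortfall of a Monte Carlo sample.
    xbar contains the N simulated values from step 3; VaR_alpha is the
    ceil(alpha*N)-th order statistic, ES the average of the values above it."""
    xbar = np.sort(np.asarray(xbar))
    N = len(xbar)
    k = int(np.ceil(alpha * N))        # 1-based index of the order statistic
    var = xbar[k - 1]
    tail = xbar[xbar > var]
    es = tail.mean() if tail.size else var
    return var, es
```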
In order to clearly see that the code generated by ChatGPT indeed works as expected without the need of an experienced programmer, we deviate a bit from the above outline, while keeping the non-trivial tasks, i.e., the sampling and estimation. We thus prompt ChatGPT to generate code that does the following:
1. Generate a sample from \(C_{\theta_{0}}\), where \(\theta_{0}\in\mathbb{R}\).
2. Based on this sample, compute the ML estimator \(\hat{\theta}\) of the true parameter \(\theta_{0}\) using (1).
Then, we repeat these two steps for several values of \(\theta_{0}\), e.g., linearly spaced on some convenient interval of \(\mathbb{R}\). If the plot of the pairs of \((\theta_{0},\hat{\theta})\) is close to the identity \((\theta-\theta)\) plot, then one has strong evidence of a correct statistical sampling and estimation procedure. Finally, to allow for scaling, we ask ChatGPT to optimize the generated code for parallel computing on CPUs as well as on GPUs.
### The communication protocol
When interacting with ChatGPT, we use the web portal provided by its development team7. Also, we set up and follow this _communication protocol_:
Footnote 7: chat.openai.com
1. We prompt ChatGPT to generate code for solving a selected task in natural language and using mathematical formalism, that is, we specify the task in plain text and do not use any specific formal language. For formulas, we use plain text like psi(t) = (1 + t)^(-1/theta).
2. If the solution generated by ChatGPT is wrong, that is, does not solve the given task, we communicate the problem to ChatGPT, and ask it to provide us with a corrected solution.
3. If this corrected solution is still wrong, we feed ChatGPT with the knowledge necessary to complete the task successfully, e.g., we provide it with theorems and formulas in plain text. For an example, see the third prompt in Section 3.4.
In this way, we simulate an interaction between two humans, e.g., a client sends by email a task to a software engineer, and we play the role of the client and ChatGPT the role of the software engineer. As it is typical that the client is not aware of all details required to solve the task at the beginning of the interaction, such a communication protocol may be frequently observed in practice. The client starts by providing the (subjectively) most important features of the problem in order to minimize her/his initial effort, and then, if necessary, she/he adds more details to get a more precise solution. Importantly, this communication protocol led to a successful completion of the aforementioned tasks, which is reported in Section 3.
With regards to passing ChatGPT the required knowledge, it is important to realize that ChatGPT does _not_ have any memory to remember the previous conversation with a user. Instead, the trick for ChatGPT to appear to remember previous conversations is to feed it the entire conversation history as a single prompt. This means that when a user sends a message, the previous conversation history is appended to the prompt and then fed to ChatGPT. This prompt engineering technique is widely used in conversational AI systems to improve the model's ability to generate coherent and contextually appropriate responses. However, it is just a trick used to create the _illusion_ of memory in ChatGPT.
If the previous conversation is too long (larger than 4096 tokens8, where a token is roughly 3/4 of an English word9), it may not fit entirely within the context window that ChatGPT uses to generate responses. In such cases, the model may only have access to a partial view of the conversation history, which
can result in the model seeming like it has forgotten some parts of the conversation. To mitigate this issue, conversational AI designers often use techniques like truncating or summarizing the conversation history to ensure that it fits within the context window. The way we solve this problem in our example task is re-introducing the parts that we referred to to ChatGPT. For example, when transpiling the code from Python (Appendix A) to R (Appendix B), we first copy-paste the Python code to ChatGPT's web interface and then ask it to transpile it to R. Without having this technical limitation in mind, it is unlikely to get a correct answer/solution if we refer to the conversion part that does not fit within the context window. Finally note that according to its technical report, GPT-4 uses the context window that is 8x larger than of ChatGPT, so it can contain roughly 25,000 words. This suggests that the limitation imposed by the context window length will become less and less of a concern.
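As a toy illustration of this prompt-engineering trick (written by us; this is not how the ChatGPT service itself is implemented, and a character count is used as a crude stand-in for the token limit), the conversation history could be maintained as follows:

```
def build_prompt(history, new_message, max_chars=4096 * 4):
    """Create the 'illusion of memory': concatenate the whole conversation into a
    single prompt and drop the oldest turns once the context budget is exceeded.

    history     : list of (role, text) tuples, oldest first
    new_message : the user's latest message
    max_chars   : rough stand-in for the token limit (about 4 characters per token)
    """
    history = history + [("user", new_message)]
    while sum(len(text) for _, text in history) > max_chars and len(history) > 1:
        history = history[1:]                  # truncate: forget the oldest turn
    return "\n".join(f"{role}: {text}" for role, text in history)
```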
### The copula family
In order to make the example task specified in Section 2.1 precise, we choose the parametric family \(\{C_{\theta}:\theta\in\Theta\}\) to be the popular family of Clayton copulas (Clayton, 1978), given by
\[C_{\theta}(u_{1},u_{2})=(\max(u_{1}^{-\theta}+u_{2}^{-\theta}-1,0))^{-\frac{1} {\theta}}, \tag{2}\]
where \(u_{1},u_{2}\in[0,1]\) and \(-1\leq\theta<\infty\), \(\theta\neq 0\). This family of copulas is used in a wide variety of applications. To mention several recent ones, e.g., Huang et al. (2022) use it to analyse the correlation between the residual series of a long short-term memory neural network and a wind-speed series. Particularly, the maximum likelihood estimation of the copula parameter is utilized, i.e., the procedure that ChatGPT implements here in Section 3.3. In the simulation study in Michimae and Emura (2022), where copula-based competing risks models for latent failure times are proposed, the authors utilize sampling from Clayton copulas, i.e., the procedure that ChatGPT implements here in Section 3.4.
For simplicity, as well as the fact that the models with \(\theta<0\), that is, those with negative dependence, are rarely used in practice, we restrict to \(\theta\in(0,\infty)\), which allows one to rewrite (2) to
\[C_{\theta}(u_{1},u_{2})=(u_{1}^{-\theta}+u_{2}^{-\theta}-1)^{-\frac{1}{\theta}}. \tag{3}\]
The technical reason for choosing this family is its simple analytical form, which makes easier for the reader to track all the formulas we ask for and get from ChatGPT, e.g., the probability density function (PDF). Another reason is ChatGPT's relatively limited knowledge of this family. By contrast, e.g., for the most popular family of Gaussian copulas, ChatGPT was not able to generate a sampling algorithm without being fed with some necessary theory. The latter simulates a realistic situation when ChatGPT is facing a new theory/concept, e.g., one recently developed by the user. However, we would like to encourage the reader to experiment with any family of interest or even with a task that differs from our example.
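As a quick sanity check of (3), the uniform margins \(C_{\theta}(u,1)=u\) and \(C_{\theta}(1,v)=v\) can be verified in a few lines of Python (written by us, not by ChatGPT):

```
import numpy as np

def clayton_cdf(u, v, theta):
    """Clayton copula C_theta(u, v) for theta > 0, cf. (3)."""
    return (u**(-theta) + v**(-theta) - 1.0)**(-1.0 / theta)

# boundary conditions of a copula: C(u, 1) = u and C(1, v) = v
u = np.linspace(0.05, 0.95, 10)
assert np.allclose(clayton_cdf(u, 1.0, 2.0), u)
assert np.allclose(clayton_cdf(1.0, u, 2.0), u)
```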
## 3 Pair programming with ChatGPT
Pair programming (Williams, 2001) is a software development technique in which two programmers work together at one workstation. One, the _driver_, writes code while the other, the _navigator_ reviews each line of code as it is typed in and considers the "strategic" direction of the work. In what follows, we, the human-part of the pair, put ourselves in the role of the navigator, giving specific tasks to the driver, ChatGPT, representing the AI-part of the pair.
We begin prompting ChatGPT with tasks that require rather less lines of code in order to not overwhelm the reader with the amount of results, while still demonstrating as many abilities of ChatGPT as possible. Then, we gradually move to larger tasks until we get all of them implemented. Our prompts are in sans serif. Responses of ChatGPT are either in verbatim, if the response is code, or in _italic_, otherwise. In its first occurrence, we indicate this explicitly. Note that as ChatGPT is quite a loquacious LLM, we mostly limit the length of its responses, otherwise we would be overwhelmed with answers of unnecessary detail. For the same reason, when asking ChatGPT for code, we mostly omit its comments on the produced code. Also note that when commenting on ChatGPT's responses, we speak of it as if it were a human, e.g., it "understands", "knows" or "is aware of" something, which should be interpreted by the reader in the sense that ChatGPT produced a response that (typically very well) _mimics_ a corresponding human reaction. As ChatGPT's responses are by default non-deterministic, i.e., giving it the same prompt again, the response might slightly differ. To take this feature into account, we re-generate the response for each of our prompts three times, and if these responses are factually different from each other, we indicate it accordingly. Finally note that the whole interaction is conducted in one chat window. Once we observe that ChatGPT starts to forget the previous context due to the reasons described in Section 2.2, we re-introduce it as we describe in the same section.
In the rest of this section, we first investigate the knowledge of ChatGPT on the topic under consideration, and then we prompt it to generate code for evaluation of the density of the Clayton copula, for ML estimation of its parameter, for sampling from the copula, for creating a visualization of the example Monte Carlo approach, and for optimizing the code for parallel computations.
### Warm up
(Prompt:) What is the Clayton bivariate copula, why is it interesting, which is its analytical form and is it an Archimedean copula? Respond to these questions in at most 100 words.
(Response:) The Clayton copula is a bivariate copula function that is widely used in statistics and finance to model dependence between variables. It is interesting because it has a simple form and can capture positive dependence between variables while allowing for asymmetric tails. The analytical form of the Clayton copula is C(u, v; theta) = (max(u^(-theta) + v^(-theta) - 1, 0))^(-1/theta), where theta is the Clayton copula parameter. The Clayton copula is an Archimedean copula, meaning that it can be expressed as the generator function of a particular class of symmetric distributions._
We see that ChatGPT can save our time by quickly and concisely summarizing basic facts about the topic of our interest. We can also limit the size of the answer, which is satisfied in this 91 words long answer. The information about the positive dependence probably follows from what we have already stated before: the negatively dependent models are rarely used in practice, which is probably reflected in ChatGPT's training data. However, several details of the answer can be discussed. In lines 3 and 4, "random variables" instead of just "variables" would be more precise. From the last sentence, it follows that an Archimedean copula can be expressed as the generator function for some symmetric distributions. This is at least confusing as Archimedean copulas are rather a particular class of copulas admitting a certain functional form _based_ on so-called generator functions. Finally, _symmetric_ distributions have their precise meaning: such a distribution is unchanged when, in the continuous case, its probability density function is reflected around a vertical line at some value of the random variable represented by the distribution. Whereas Archimedean copulas posses a kind of symmetry following from their exchangeability, they do not belong to symmetric distributions.
To investigate the limits of ChatGPT's knowledge, let us prompt it with two further questions. According to the previous response, we can speculate that it has limited knowledge on the Clayton models with negative dependence.
If the Clayton copula parameter is negative, e.g., theta = -1/2, is the corresponding Clayton copula a model for positive dependence between variables? Respond with one word.
_No._
_Correction: Yes._
_Yes._
We prompted ChatGPT to answer the same question three times and got contradicting answers. The first answer is correct, however, after asking again, ChatGPT changed its mind. Before commenting on that, let us try once again, with a bit more complex concept.
If the Clayton copula parameter is positive, e.g., theta = 1, is the survival copula of the corresponding Clayton copula a model with lower or with upper tail dependence?
_When the Clayton copula parameter is positive, i.e., theta \(>\) 0, the survival copula of the corresponding Clayton copula is a model with lower tail dependence._
If \((U_{1},U_{2})\sim C\), then the survival copula of \(C\) is the distribution of \((1-U_{1},1-U_{2})\), and thus the properties of the lower tail of \(C\) are the properties of the upper tail of the survival copula. Hence, we got an incorrect answer. After asking again, we got this response.
_For the Clayton copula, if the parameter theta is positive, the survival copula will have upper tail dependence. This is because the Clayton copula exhibits positive lower tail dependence, which means that when one variable has a very low value, the other variable tends to have a low value as well. This implies that when one variable has a very high survival time, the other variable also tends to have a high survival time, leading to upper tail dependence in the survival copula._
Again, we got contradicting answers. Based on this observation, the reader could raise the following question.
Would you agree with the following statement? ChatGPT is a large language model, and thus, if we drastically simplify it, it basically makes inference on which word comes next. However, it makes no inference on the underlying concepts.
_As an AI language model, ChatGPT is indeed designed to make predictions on the next word based on the context and patterns in the input text. However, it is also capable of capturing and inferring underlying concepts from
the text, to some extent. While it is true that ChatGPT is not capable of understanding concepts in the same way as humans do, it can still recognize patterns and relationships between words and phrases, and use this information to make inferences and generate responses that reflect some level of understanding of the underlying concepts._
This response confirms what we have seen so far, hence, any user should take these limitations into account with the utmost seriousness and be extremely careful when asking ChatGPT for some reasoning. The examples above also well illustrate that the current version of ChatGPT is definitely _not_ an appropriate tool for reasoning, which is as also observed by Frieder et al. (2023) and Bang et al. (2023). However, this by no means implies that it cannot serve as a helpful AI partner for pair programming.
### The density
It can be easily shown that the density \(c_{\theta}\) of \(C_{\theta}\) is
\[c_{\theta}(u,v)=\frac{\partial^{2}C_{\theta}(u,v)}{\partial u\partial v}=( \theta+1)(uv)^{-(\theta+1)}(u^{-\theta}+v^{-\theta}-1)^{-\frac{1}{\theta}-2} \tag{4}\]
for \(\theta>0\). Before we ask ChatGPT to generate code evaluating the Clayton copula density, it would be good to ask for a plain formula.
What is the analytical form of the probability density function of the bivariate Clayton copula? Output the formula using LaTeX.
\[c_{\theta}(u,v)=\frac{\partial^{2}C_{\theta}(u,v)}{\partial u\partial v}= \frac{\theta+1}{\theta}\bigg{(}u^{-\theta-1}v^{-\theta-1}(u^{-\theta}+v^{- \theta}-1)^{-\frac{\theta+2}{\theta+1}}\bigg{)}.\]
Out of many factually distinct responses, we present this one to illustrate how dangerous it could be to believe that ChatGPT knows or can derive the right formula. Even if it looks quite similar to (4), this is _not_ the density of \(C_{\theta}\).
This formula is not correct. Which is the analytical form of the probability density function of the bivariate Clayton copula? Output the formula using LaTeX.
\[c(u,v)=\Big{[}(u^{-\theta}+v^{-\theta}-1)^{-\frac{1}{\theta}-2}\Big{]}\cdot \big{(}u^{-\theta}v^{-\theta}\big{)}^{-\frac{1}{\theta}-1}\]
Another formula quite similar to (4), but also this one is incorrect. As already mentioned before, ChatGPT is not a good option when it comes to reasoning. So, to get the right formula, some symbolic tool is definitely preferred. However, note that ChatGPT plugins10 has been recently announced, and particularly Code interpreter, which is an experimental ChatGPT model that can use Python, handle uploads and downloads, allows for symbolic computations. Even if not freely available yet, this might be also a possible way to mitigate the problem, directly in the ChatGPT environment.
Footnote 10: [https://openai.com/blog/chatgpt-plugins](https://openai.com/blog/chatgpt-plugins)
Following our communication protocol, let us feed ChatGPT with the right formula, and ask for a corresponding function in three programming languages: 1) MATLAB11, which represents proprietary software, 2) Python12, an open-source language popular in the AI community, and 3) R13, an open-source language popular in the statistical community. Note that in cases when the output is too wide, we adjust it in order to fit on the page; otherwise we do not adjust it in any other way.
Footnote 11: We used version R2020a.
Footnote 12: Version 3.9.
This is the correct formula: c(u,v) = (1 + theta) * (u * v)^(-theta-1) * (u^(-theta) + v^(-theta) - 1)^(-2-1/theta). Code it in MATLAB as ClaytonCopulaDensity(u, v, theta).
_Here's the MATLAB code for the probability density function of the Clayton copula with parameter theta evaluated at (u, v):_
function dens = ClaytonCopulaDensity(u, v, theta)
dens = (1 + theta) * (u * v)^(-theta - 1) *...
(u^(-theta) + v^(-theta) - 1)^(-2 - 1/theta);
end
_You can call this function with your desired values of u, v, and theta to evaluate the density of the Clayton copula at those values._
After feeding ChatGPT with the right formula, it immediately generated a functional code. Notice that we used quite a natural and relaxed form of conversation, e.g., like in an email.
Code ClaytonCopulaDensity in Python.
```
def ClaytonCopulaDensity(u, v, theta):
    density = ((1 + theta) * (u * v)**(-theta - 1) *
               (u**(-theta) + v**(-theta) - 1)**(-2 - 1/theta))
    return density
```
And in R.
```
ClaytonCopulaDensity <- function(u, v, theta) {
  density <- ((1 + theta) * (u * v)^(-theta - 1) *
                (u^(-theta) + v^(-theta) - 1)^(-2 - 1/theta))
  return(density)
}
```
As ChatGPT takes into account the previous conversation, we could afford to be extremely concise with our prompts and still get correct solutions. In what follows, we ask for code only in MATLAB to save space. However, the equivalent code in Python and R is shown in the appendices, where all the functions can be easily identified by their names.
### The estimation
Having this ClaytonCopulaDensity(u, v, theta) function, could you write a MATLAB code for the maximum likelihood estimator of the parameter theta?

```
function thetaMLE = ClaytonCopulaMLE(u, v)
    loglik = @(theta) sum(log(ClaytonCopulaDensity(u, v, theta)));
    thetaMLE = fminsearch(@(theta) -loglik(theta), 1);
end
```
By contrast to our struggles with the PDF, we immediately got a correct solution. This may be due to the fact that code snippets computing ML estimators occur more frequently in ChatGPT's training data. This pattern (the more general the task, the more frequently we receive a working solution on first trial) is observed also in other examples throughout this work.
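For comparison only, and not as a ChatGPT output, an equivalent Python estimator built on the ClaytonCopulaDensity function from above can replace fminsearch by a bounded scalar optimizer; the search interval below is our assumption:

```
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_copula_mle(u, v):
    """Maximize sum(log c_theta(u_i, v_i)) over theta > 0, mirroring the
    fminsearch-based MATLAB function above (ClaytonCopulaDensity as defined earlier)."""
    def neg_loglik(theta):
        return -np.sum(np.log(ClaytonCopulaDensity(u, v, theta)))
    res = minimize_scalar(neg_loglik, bounds=(1e-3, 50.0), method="bounded")
    return res.x
```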
### The sampling
In MATLAB, code a function ClaytonSample(theta, n) that generates a sample of size n from the bivariate Clayton copula with parameter theta.

```
function u = ClaytonSample(theta, n)
    u1 = rand(n, 1);
    u2 = rand(n, 1);
    v1 = (u1.^(-theta) + u2.^(-theta) - 1).^(-1/theta);
    v2 = (u1.^(-theta) + u2.^(-theta) - 1).^(-1/theta);
    u = [v1, v2];
end
```
We see that u1 and u2 are drawn from the standard uniform distribution, which is typically used in such sampling algorithms. However, they are then just used as arguments of (3), resulting in two identical one-dimensional samples from the distribution of \(C(U_{1},U_{2})\), where \(U_{1}\) and \(U_{2}\) are two independent random variables with the standard uniform distribution. Note that if the random vector \((U_{1},U_{2})\) were distributed according to \(C\), then the distribution function of \(C(U_{1},U_{2})\) would be the distribution known as the Kendall function; see Joe (2014, pp. 419-422). So we are witnessing a mixture of approaches related to copula modelling, which, however, does not lead to a correct solution. A sample generated with this code for \(\theta=2\) is shown at the left-hand side of Figure 1, and it is clearly _not_ a sample from the Clayton copula \(C_{\theta}\) with parameter \(\theta=2\).
In ClaytonSample(theta, n), v1 and v2 are two identical vectors. Hence, u = [v1, v2] cannot be a sample from the bivariate Clayton copula with parameter theta. Could you code a correct version of ClaytonSample(theta, n)?
```
function u = ClaytonSample(theta, n)
    w = rand(n, 1);
    v = rand(n, 1);
    for i = 1:n
        v(i) = (w(i)^(-theta) * (1 - v(i))^(-theta/(1+theta)))^(-1/theta);
    end
    u = [w, v];
end
```

Figure 1: A sample from the random vector \((C(U_{1},U_{2}),C(U_{1},U_{2}))\) (left). A sample from another incorrect sampling trial of ChatGPT (center). A sample generated by the correct sampling procedure based on theory fed to ChatGPT (right). The sample was obtained by calling ClaytonSample(2, 1000).
In the second try, ChatGPT also failed to produce a correct solution; see the sample generated by this code at the center of Figure 1. Here, the reasoning behind it is even less clear. These two trivial examples again illustrate that we must be extremely careful about results produced by ChatGPT. On the other hand, this gives us a chance to show that ChatGPT can interactively learn new concepts by feeding it with the necessary theory, as we will demonstrate next.
A standard approach for efficient sampling from Archimedean copulas was introduced in Marshall and Olkin (1988). Let us feed ChatGPT with the related theory and ask it for a correct solution.
Let psi(t) = (1 + t)^(-1/theta) be a function with parameter theta in (0, infty). Let E1 and E2 be two independent samples from the standard exponential distribution, and V be an independent sample from the Gamma distribution with the shape parameter 1/theta and the scale parameter 1. Then (psi(E1/V), psi(E2/V)) is a sample from the bivariate Clayton copula with parameter theta. Code a MATLAB function ClaytonSample(theta, n), which generates n samples from the bivariate Clayton copula with parameter theta using the provided theory.

```
function u = ClaytonSample(theta, n)
    % Generate two samples from the exponential distribution
    E = exprnd(1, 2, n);
    % Generate a sample from the gamma distribution
    V = gamrnd(1/theta, 1, 1, n);
    % Define the function psi(t)
    psi = @(t) (1+t).^(-1/theta);
    % Apply the transformation
    u = [psi(E(1,:)./V); psi(E(2,:)./V)]';
end
```
Apart from producing a correct solution, which generated the sample at the right-hand side of Figure 1, this is a clear demonstration of the ability of ChatGPT to _learn_ during the interaction with the user (having in mind that this capacity is just a quite convincing illusion enabled by the prompt engineering
technique applied to ChatGPT, as discussed in Section 2.2). In contrast to the previous example, where it "only" translated the fed formula for the PDF to a more-or-less similarly looking code, this example shows that ChatGPT is able to understand even relatively complex concepts.
Such an ability makes ChatGPT a feasible tool also in cases when it faces unknown concepts. This is essential, e.g., in cases when a new theory developed by a researcher is about to be coded. Also, notice that ChatGPT saves us time by mapping our concepts, e.g., the standard exponential and gamma distribution, to existing functions in the considered programming language. Particularly, without explicitly asking, ChatGPT avoided a loop iterating over 1 to \(n\) in which one sample from the copula would be generated, which is typically slow, but generates directly \(n\) samples of E1 and E2 from the standard exponential distribution (exprnd(1, 2, n), where 1 denotes the parameter of the exponential distribution). We can thus avoid probably the most boring part of coding when we are browsing the documentation for a particular function in available libraries.
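For reference, a Python counterpart of the same Marshall-Olkin construction, written by us as a sketch only (ChatGPT's own Python code belongs to Appendix A), illustrates the same vectorization idea:

```
import numpy as np

def clayton_sample(theta, n, rng=None):
    """n pairs from the bivariate Clayton copula via Marshall-Olkin:
    V ~ Gamma(1/theta, 1), E1, E2 ~ Exp(1), U_j = (1 + E_j/V)^(-1/theta)."""
    rng = np.random.default_rng() if rng is None else rng
    E = rng.standard_exponential((n, 2))
    V = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))
    return (1.0 + E / V)**(-1.0 / theta)
```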
Finally, let us perform a simple check of all the previously generated functions.
Generate a script that draws a sample of size 1000 from the Clayton copula with theta = 2 and then computes an estimate theta_hat of theta using the ML estimator.
U = ClaytonSample(2, 1000); theta_hat = ClaytonCopulaMLE(U(:,1), U(:,2));
Being in the context of the previous conversation, ChatGPT exploited the ClaytonCopulaMLE function generated in Section 3.3. After executing the simple check code, we got the following error.
Error using * Incorrect dimensions for matrix multiplication. Check that the number of columns in the first matrix matches the number of rows in the second matrix. To perform elementwise multiplication, use '.*'.
Error in ClaytonCopulaDensity (line 2) dens = (1 + theta) * (u * v)^(-theta - 1) *...
Error in ClaytonCopulaMLE>@(theta)sum(log(... ClaytonCopulaDensity(u,v,theta))) (line 2) loglik = @(theta) sum(log(ClaytonCopulaDensity(u, v, theta)));
Error in ClaytonCopulaMLE>@(theta)-loglik(theta) (line 3) thetaMLE = fminsearch(@(theta) -loglik(theta), 1);
Error in fminsearch (line 200) fv(:,1) = funfcn(x,varargin{:});
Error in ClaytonCopulaMLE (line 3) thetaMLE = fminsearch(@(theta) -loglik(theta), 1);
We copy-pasted the error message to ChatGPT, e.g., gave it the prompt
We got this error message: ***
where *** was the error message, and got the following response.
_The error message suggests that the code is trying to perform matrix multiplication with the * operator, but the dimensions of the matrices do not match. This is likely happening in the line dens = (1 + theta) * (u * v)^(-theta - 1) *...._
_To fix this, we need to use element-wise multiplication with the.* operator instead of matrix multiplication with *. Here is the corrected version of the function:_
function dens = ClaytonCopulaDensity(u, v, theta)
    dens = (1 + theta) * (u .* v).^(-theta - 1) .* ...
        (u.^(-theta) + v.^(-theta) - 1).^(-2 - 1/theta);
end
_Note the use of.* instead of * and.^ instead of ^ to indicate element-wise operations._
ChatGPT not only detected what is wrong, but provided a corrected solution together with an explanation of the problem. Given that these trivial mistakes occur quite often in daily practice, such help that points out to the problem or even solves it can significantly save time. After executing the simple check with the corrected version of ClaytonCopulaDensity, we got theta_hat = 2.12. So far so good.
Write MATLAB code that computes and creates a plot of 20 pairs of (theta, theta_hat), where theta is from (0.1, 3) and theta_hat is the ML estimate of theta (computed using ClaytonCopulaMLE(u, v)) based on a sample of size 1000 from the Clayton bivariate copula with parameter theta (generated using U = ClaytonSample(theta, n))
n = 1000;
thetas = linspace(0.1, 3, 20);
theta_hats = zeros(size(thetas));

for i = 1:length(thetas)
    U = ClaytonSample(thetas(i), n);
    theta_hats(i) = ClaytonCopulaMLE(U(:,1), U(:,2));
end

plot(thetas, theta_hats, 'o'); hold on;
plot([0, 3], [0, 3], '--');
xlim([0, 3]); ylim([0, 3]);
xlabel('\theta'); ylabel('\theta_{hat}');
title(['Maximum likelihood estimates for ' ...
    'Clayton copula parameter']);
legend('Estimates', 'True values');

The plot generated by the response is depicted on the left-hand side of Figure 2. As the \((\theta,\hat{\theta})\) pairs are close to the identity, this gives evidence that all previously generated code works properly. We would like to highlight the following, none of which ChatGPT was explicitly instructed to do:
1. The parameters \(\theta\) are _linearly_ spaced in the desired interval. This is a typical choice for many visualizations.
2. The perfect identity is shown as a line, which is also the typical choice for an ideal benchmark. This clearly demonstrates that ChatGPT at least partially understands the underlying concepts, i.e., that we are estimating the true value of some parameter.
3. Typical time-consuming trivialities like limits, labels, title and legend are also shown.
All in all, this is the type of tasks where the user can substantially benefit from a collaboration with tools like ChatGPT.
However, we would like to note that two iterations of our last prompt were done before we got the presented one. In the first one, we omitted the content of the two parentheses with the function names. In that case, the output of ClaytonSample did not match the dimension of the input of ClaytonCopulaMLE. In the second iteration, we added those parentheses to the prompt, but without "U = ", and got a very similar error. We copy-pasted the error message to ChatGPT, but this time it did not succeed in providing a correct version. Finally, we added "U = " to the prompt with the intuition that ChatGPT was not aware of the right dimensions of the output, and this way we got a working solution. The main take-away from this example is that obtaining a correct solution is an iterative process requiring careful prompt engineering. The connection to human-human collaboration is clear: until the software engineer understands what the researcher exactly wants, she/he will probably deliver unsatisfactory solutions.
Figure 2: The plots produced by the code generated by ChatGPT in MATLAB (top-left), Python (top-right) and R (bottom).
The response is shown in Appendix A and the corresponding plot is depicted at the right-hand side of Figure 2. When a project is migrated from one programming language to another, this ability of ChatGPT could reduce the related effort/expenses dramatically.
And in R.
In this case, the transpilation to R was not successful as we again got a problem with the dimensions matching of the input of ClaytonCopulaMLE and output of ClaytonSample. We were also unsuccessful with copy-pasting the error message to ChatGPT. Clearly, we could be more precise by specifying these dimensions. However, we were deliberately underspecifying the task in order to illustrate what separates successful solutions from unsuccessful ones.
To resolve the task, we finally used a rather "brute-force" approach consisting of copy-pasting the whole Python code to ChatGPT and asking it for the same code in R. The response is shown in Appendix B and the corresponding plot is depicted at the bottom of Figure 2. Another way we obtained a working solution was to feed ChatGPT with explicit information about the dimensions of the inputs and outputs, i.e., using a similar approach as before by adding "U = ". On the one hand, this approach is more elegant than the brute-force one; on the other hand, the brute-force approach allowed us to get a working solution in less time. A choice between them thus depends on the user's priorities.
Basically we now have an implementation of our example task in every language supported by ChatGPT. Note that an implementation of sampling from the Clayton copula in Python appeared relatively recently as a serious effort presented in Boulin (2022). Here, we got an equivalent implementation completely created by an AI. Clearly, with the abilities of ChatGPT, achieving a solution in distinct programming languages will become much less valuable.
### The parallelization
Note that as the codes for the following tasks in Python and R were rather long, we present only the MATLAB versions.
#### 3.6.1 CPUs
In MATLAB, optimize the code for computing that 20 pairs of (theta, theta_hat) for parallel computing on CPUs. Create a demonstration that shows how the time for performing the same job decreases as the number of workers increases.
% Define the range of theta values
% (the rest of the response defined numWorkers = 1:4, timed the computation of
%  the 20 (theta, theta_hat) pairs with tic/toc for an increasing number of
%  workers, and plotted the measured run-times; see the discussion below)
* Notice that we did not have to spell out the demonstration in detail, e.g., "measure time for this and that, store it in the array and then plot it in the way that...". Since such tasks probably occur many times in ChatGPT's training data, overly specifying is not necessary, which, again, can save a lot of time.
* Notice the 4 workers in numWorkers = 1:4. It is a common number of CPUs of an average office PC. This number is thus not just a pure random guess, but rather a most occurring one in ChatGPT's training data.
Note that apart from our office PC, we executed the code on an available server with 32 CPUs, which generated the plot at the right-hand side of Figure 3. As only 20 theta values (thetas) are considered, we observe that the improvement in time does not continue from the 21st worker onwards.
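Purely as an illustration of the same experiment outside MATLAB, and not as ChatGPT's response, the timing loop can be sketched in Python with multiprocessing; it reuses the clayton_sample and clayton_copula_mle sketches given earlier (assumed to live in the same module) and mirrors numWorkers = 1:4:

```
import time
import numpy as np
from multiprocessing import Pool

def estimate_for_theta(theta, n=1000):
    # one (theta, theta_hat) pair: sample from the Clayton copula, then re-estimate
    U = clayton_sample(theta, n)
    return clayton_copula_mle(U[:, 0], U[:, 1])

if __name__ == "__main__":
    thetas = np.linspace(0.1, 3.0, 20)
    for n_workers in range(1, 5):                     # mirrors numWorkers = 1:4
        t0 = time.perf_counter()
        with Pool(processes=n_workers) as pool:
            theta_hats = pool.map(estimate_for_theta, thetas)
        print(f"{n_workers} worker(s): {time.perf_counter() - t0:.2f} seconds")
```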
#### 3.6.2 GPUs
In MATLAB, optimize the function ClaytonSample for parallel computing on GPUs. Call the optimized function ClaytonSampleGPUs. Then create a script that measures the run-time of ClaytonSample(2, 1000000) and of ClaytonSampleGPUs(2, 1000000).
We got two pieces of code as response. First, ChatGPT provided an optimization of ClaytonSample for GPUs.
Figure 3: Two plots demonstrating the improvement in run-time when adding new workers, generated by the code produced by ChatGPT. The output of the code when executed on a local machine is shown on the left side. On the right side, the same when executed on an available server with 32 CPUs. For the latter, we adjusted the code with numWorkers = 1:32 and n = 10000.
function [U, V] = ClaytonSampleGPUs(theta, n)
% Generate random variables on the GPU
E1 = gpuArray(single(exprnd(1, n, 1)));
E2 = gpuArray(single(exprnd(1, n, 1)));
V = gpuArray(single(gamrnd(1/theta, 1, n, 1)));
psi = @(t) (1+t).^(-1/theta);
U = gather(psi(E1./V));
V = gather(psi(E2./V));
end
Then, a code for its demonstration followed.
% Define parameters
n = 10000000;
theta = 2;
% Time CPU version
tic
data_cpu = ClaytonSample(theta, n);
time_cpu = toc;
% Time GPU version
tic
data_gpu = ClaytonSampleGPUs(theta, n);
time_gpu = toc;
% Display results
disp(['CPU time: ' num2str(time_cpu) ' seconds']);
disp(['GPU time: ' num2str(time_gpu) ' seconds']);
After executing the latter, the output was:
CPU time: 0.17493 seconds
GPU time: 0.093107 seconds
As can be observed, the optimization of ClaytonSample, ClaytonSampleGPUs, is based on the addition of gpuArray, which assures that the sampling from the standard exponential and gamma distributions as well as the remaining computations are performed directly on available GPUs. The outputs U and V are then gathered from the GPUs onto the client by gather. As the output of the demonstration part of the code shows, this roughly halved the non-optimized run-time.
Apart from a more efficient implementation, ChatGPT also saves our time by exempting us from inspecting whether or not each involved function is supported on GPUs. We should also not forget the educational aspect of the matter. As these optimization techniques are rather advanced, an inexperienced user
genuinely learns from these outputs (keeping in mind they might not always be correct). For example, without explicitly mentioning it in our prompt, the values of theta and n are stored in separate variables before they are used in ClaytonSample and ClaytonSampleGPUs. This is an example of proper coding practice.
## 4 Summary and discussion
During the development of the working code solving our example task, we observed a considerable list of advantages from which we can benefit while pair programming with ChatGPT. In particular:
1. ChatGPT can save time by quickly and concisely summarizing basic facts about the topic of our interest, e.g., formulas or application examples, as illustrated in Section 3.1.
2. If ChatGPT is not able to provide a correct solution due to a lack of or incorrect knowledge, we can feed it with the correct knowledge, and make it use it to provide a correct solution. In Section 3.2, this approach led ChatGPT to produce a function evaluating the PDF of the copula model in three different programming languages. In Section 3.4, a working code for sampling from the copula model is generated once ChatGPT was fed by the related non-trivial theory. Particularly the latter example shows that ChatGPT is able to understand even relatively complex concepts, and clearly demonstrates that it can be applied in cases when it faces unknown concepts.
3. ChatGPT saves time by mapping simple concepts, e.g., sampling from the standard exponential and gamma distributions, to existing code (libraries, APIs, or functions) available for a given programming language, as illustrated in Section 3.4.
4. The more common the task to solve, the more successful ChatGPT is in generating a correct solution. This is illustrated, e.g., in Section 3.3, where we immediately obtained code implementing the maximum likelihood estimator with a simple prompt like write code for the maximum likelihood estimator of that parameter. Another example is the transpilation of the MATLAB solution to Python in Section 3.5, or the optimization of existing code for parallel computing on CPUs and GPUs in Section 3.6.
5. ChatGPT can help in cases when an error is thrown after executing the generated code. In Section 3.4, we have seen that it not only detected what was wrong, but provided a corrected solution. Apart from saving time needed to search and fix the error, this can be crucial particularly for less experienced programmers, who could find the error too complex and eventually give up. ChatGPT helped us roughly with 1/3 of the errors we encountered. Even if not perfect, this is substantially better than no help at all.
6. ChatGPT can help with creating visualizations. In Section 3.5, it generated a visualization suggesting that all previously generated code is correct. Even if we have not asked for it, the visualization included all the typical trivia like labels, benchmarks, limits, legends, etc.
7. ChatGPT at least partially understands the underlying concepts of what we are doing. Without asking it to do so, it added the plot of the identity to the visualization (see Section 3.5), suggesting that it is aware that we are trying to estimate the true value of some parameter.
8. ChatGPT can transpile code from one programming language to another also with high-level prompts like Code it in Python and And in R, demonstrated in Section 3.5. The same section also shows that if the transpilation fails (which happened with the transpilation to R), it is possible to use a quick "brute-force" solution that also accomplished the task.
9. ChatGPT can optimize the already generated code, e.g., for parallel computations. By prompting optimize that for parallel computing on CPUs, we immediately got the optimized version of the sample-estimate procedure developed in Section 3.5; see Section 3.6.1. The same section also shows that a high-level prompt like Create a demonstration of this optimization can result in code showing the impact of the optimization, again including the typically tedious but necessary trivia like labels, etc. Similarly, such an optimization together with a simple demonstration was generated also for computations on GPUs; see Section 3.6.2.
10. ChatGPT follows proper coding techniques, so the user can genuinely learn them too. We observed that the produced code is properly commented, indented, modularized, avoids code duplicities, etc.
11. ChatGPT helps the user to get familiar with the produced code faster. When providing code, ChatGPT typically surrounds it by further information explaining its main features. To save space, we mostly cut this out, however, an example can be found, e.g., in Section 3.4 in connection to the error message thrown by the simple check code.
We have also seen that pair programming with ChatGPT brings several disadvantages, which should be carefully considered. Let us summarize them and discuss possibilities to mitigate them:
1. ChatGPT in its current version (early 2023) is poor at reasoning; see Section 3.1. Using two examples, we demonstrated how it responds with contradictory answers to the same question. We particularly highlight the case when it first answered _yes_ and then _no_ to the same question. Also, we demonstrated how dangerous this could be in quantitative reasoning, where it generated incorrect formulas that looked very similar to correct ones; see the PDF derivation in Section 3.2. A lot of effort is currently being invested in mitigating this problem. One of the most
promising examples in the direction of quantitative reasoning is Minerva (Lewkowycz et al., 2022), an LLM based on the PaLM general language models (Chowdhery et al., 2022) with up to 540 billion of parameters. This model, released in June 2022, gained its attention by scoring 50% on questions in the MATH data set, which was a significant improvement of the state-of-the-art performance on STEM evaluation datasets; see Table 3 therein. In other works, the authors develop models fine-tuned for understanding mathematical formulas (Peng et al., 2021), or employ deep neural networks in mathematical tasks like symbolic integration or solving differential equations (Lample and Charton, 2019). Another way of mitigating the problem can be trying to exploit at maximum the current LLMs by carefully adjusting the prompt in order to get more reliable answers. This increasingly popular technique, called _prompt engineering_, involves special techniques to improve reliability when the model fails on a task14, and can substantially improve the solution, e.g., for simple math problems, just by adding "Let's think step by step." at the end of the prompt. Note that we tried this technique in the example considering the tail dependence of the survival Clayton copula in Section 3.1, however, without success, probably because the underlying concepts go beyond simple mathematics. Footnote 14: [https://github.com/openai/openai-cookbook/blob/main/techniques_to_improve_reliability.md](https://github.com/openai/openai-cookbook/blob/main/techniques_to_improve_reliability.md)
2. If ChatGPT lacks the necessary knowledge or possesses incorrect knowledge, it may generate an incorrect solution without any indication to the user. As illustrated in Section 3.4, after asking it for code for sampling from a Clayton copula model, ChatGPT first generated two routines, which were resembling proper sampling algorithms, but were entirely incorrect. Due to the opacity of the current state-of-the-art LLMs that contain tens or even hundreds of billions of parameters, the correctness of the solution can hardly be guaranteed in all cases. While there may be efforts to develop more explainable LLMs, it is unlikely that the fundamental challenges related to the complexity of language and the massive amounts of data required for training will be completely overcome. Therefore, it is essential for a human expert in the field to _always_ verify the output generated by the model.
3. Specifically, ChatGPT tends to be less successful in producing accurate solutions for tasks that are less common. This means that the opposite of advantage 4. also applies. In Section 3.2, this is demonstrated through the probability density function (PDF) of the copula model. In Section 3.4, through the sampling algorithm. To solve these issues, we provided the required theory to ChatGPT, which led to a correct solution, see the same two sections.
4. ChatGPT does not have any memory. If the conversation is too long, and thus does not fit within ChatGPT's context window, it seems that the model has forgotten some parts of the conversation. This, together with ways how to mitigate this issue, has already been discussed in Section 2.2.
Apart from ChatGPT, there are several other language models that are capable of generating code solutions from natural language inputs. One notable example is AlphaCode (Li et al., 2022), which achieved on average a ranking of top 54.3% in competitions with more than 5,000 participants on recent programming competitions on the platform Codeforces. Recently, AlphaCode has been made publicly available15, including example solutions from the mentioned contest. Another example is OpenAI Codex16, already mentioned in the introduction. In contrast to ChatGPT, these models have been developed particularly for code generation. On the one hand, it is thus possible that one can generate solutions that are better than those generated with ChatGPT. It would thus be interesting future research to compare, e.g., the successfulness of these models for solving the tasks considered in this work.
Footnote 15: [https://github.com/deepmind/code_contests](https://github.com/deepmind/code_contests)
Footnote 16: [https://openai.com/blog/openai-codex/](https://openai.com/blog/openai-codex/)
On the other hand, ChatGPT might be more convenient for many users than these models as it allows for interaction during the coding process. Unlike AlphaCode and OpenAI Codex, which generate code snippets based on natural language inputs without any further interaction, ChatGPT allows users to provide feedback and adjust the generated code in real-time. This interaction can be beneficial for several reasons. First, it allows users to clarify their intent and ensures that the generated code aligns with their goals. For example, as we have seen in Section 3.4 that considers the sampling from a Clayton copula model, if a user requests a specific functionality and the generated code does not quite match what they had in mind, the user can provide feedback to ChatGPT to adjust the code accordingly. Second, the interaction with ChatGPT can help users learn more about programming and improve their coding skills. By engaging in a dialogue with ChatGPT, users can gain insights into the logic and structure of the code they are generating, and learn how to improve their code in the future. For example, in Section 3.6.2, we could genuinely learn how to convert existing code for parallel computing on GPUs. Finally, the interaction with ChatGPT can help users troubleshoot errors and debug their code more effectively. As we have seen in Section 3.4, ChatGPT can recognize common programming mistakes, and provide feedback that helps users to identify and fix errors in their code. These reasons, together with the fact that ChatGPT can be conveniently accessed through a web portal, led us to choose ChatGPT as our pair programming AI partner.
## 5 Conclusion
In a human-AI collaboration, we developed working code that implements sampling from a copula model, estimation of its parameter, visualization suggesting that the last two tasks worked properly, and a parallelization of the code for CPUs as well as for GPUs. To illustrate the coding abilities of the AI part, represented by ChatGPT, all the mentioned tasks were implemented without a single line of code written by the human. In addition to presenting how to
achieve a successful solution for a given task, we also showed additional examples demonstrating which modifications of our prompts for ChatGPT turned failed solutions to successful ones. This resulted in a comprehensive list of related pros and cons, suggesting that if typical pitfalls can be avoided, we can substantially benefit from a collaboration with an AI partner like ChatGPT.
## Acknowledgments
The author thanks the Czech Science Foundation (GACR) for financial support for this work through grant 21-03085S. The author also thanks Martin Holena and Marius Hofert for constructive comments and recommendations that helped to improve the readability and quality of the paper.
## Appendix A The solution in Python
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fmin

def ClaytonSample(theta, n):
    E1 = np.random.exponential(size=n)
    E2 = np.random.exponential(size=n)
    V = np.random.gamma(1/theta, 1, size=n)
    psi = lambda t: (1+t)**(-1/theta)
    U = psi(E1/V)
    V = psi(E2/V)
    return np.column_stack((U, V))

def ClaytonCopulaDensity(u, v, theta):
    return ((1+theta)*(u*v)**(-theta-1)*
            (u**(-theta)+v**(-theta)-1)**(-2-1/theta))

def ClaytonCopulaMLE(data):
    n = data.shape[0]
    u, v = data[:, 0], data[:, 1]
    loglik = lambda theta: np.sum(np.log(
        ClaytonCopulaDensity(u, v, theta)))
    thetaMLE = fmin(lambda theta: -loglik(theta), 1,
                    disp=False)
    return thetaMLE[0]

thetas = np.linspace(0.1, 3, 20)
theta_hats = []
for theta in thetas:
    data = ClaytonSample(theta, 1000)
    theta_hat = ClaytonCopulaMLE(data)
    theta_hats.append(theta_hat)

plt.plot(thetas, theta_hats, 'o')
plt.plot(thetas, thetas, '--')
plt.xlabel('True theta')
plt.ylabel('Estimated theta')
plt.show()
```
An example of redundant code is n = data.shape[0]: as can be observed, the variable n is never used in ClaytonCopulaMLE.
## Appendix B The solution in R
```
ClaytonSample <- function(theta, n) {
  E1 <- rexp(n)
  E2 <- rexp(n)
  V <- rgamma(n, 1/theta, 1)
  psi <- function(t) (1+t)^(-1/theta)
  U <- psi(E1/V)
  V <- psi(E2/V)
  return(cbind(U, V))
}

ClaytonCopulaDensity <- function(u, v, theta) {
  return((1+theta)*(u*v)^(-theta-1)*
         (u^(-theta)+v^(-theta)-1)^(-2-1/theta))
}

ClaytonCopulaMLE <- function(data) {
  u <- data[, 1]
  v <- data[, 2]
  loglik <- function(theta) -sum(log(
    ClaytonCopulaDensity(u, v, theta)))
  thetaMLE <- optimize(loglik, c(0.1, 3), maximum=FALSE)$minimum
  return(thetaMLE)
}

thetas <- seq(0.1, 3, length.out=20)
theta_hats <- c()
for (theta in thetas) {
  data <- ClaytonSample(theta, 1000)
  theta_hat <- ClaytonCopulaMLE(data)
  theta_hats <- c(theta_hats, theta_hat)
}

plot(thetas, theta_hats, pch=19, col="blue", ylim=c(0, 3),
     xlab="True theta", ylab="Estimated theta")
abline(a=0, b=1, lty=2)
```
Interestingly, even though this code is a direct transpilation of the code from Appendix A, the redundant code from the Python version of ClaytonCopulaMLE is not present. This hints at the ability of ChatGPT to keep only the code that is relevant.
|
2309.13018 | Dynamic ASR Pathways: An Adaptive Masking Approach Towards Efficient
Pruning of A Multilingual ASR Model | Neural network pruning offers an effective method for compressing a
multilingual automatic speech recognition (ASR) model with minimal performance
loss. However, it entails several rounds of pruning and re-training needed to
be run for each language. In this work, we propose the use of an adaptive
masking approach in two scenarios for pruning a multilingual ASR model
efficiently, each resulting in sparse monolingual models or a sparse
multilingual model (named as Dynamic ASR Pathways). Our approach dynamically
adapts the sub-network, avoiding premature decisions about a fixed sub-network
structure. We show that our approach outperforms existing pruning methods when
targeting sparse monolingual models. Further, we illustrate that Dynamic ASR
Pathways jointly discovers and trains better sub-networks (pathways) of a
single multilingual model by adapting from different sub-network
initializations, thereby reducing the need for language-specific pruning. | Jiamin Xie, Ke Li, Jinxi Guo, Andros Tjandra, Yuan Shangguan, Leda Sari, Chunyang Wu, Junteng Jia, Jay Mahadeokar, Ozlem Kalinli | 2023-09-22T17:30:28Z | http://arxiv.org/abs/2309.13018v2 | Dynamic ASR Pathways: An Adaptive Masking Approach Towards Efficient Pruning of a Multilingual ASR Model
###### Abstract
Neural network pruning offers an effective method for compressing a multilingual automatic speech recognition (ASR) model with minimal performance loss. However, it entails several rounds of pruning and re-training needed to be run for each language. In this work, we propose the use of an adaptive masking approach in two scenarios for pruning a multilingual ASR model efficiently, each resulting in sparse monolingual models or a sparse multilingual model (named as _Dynamic ASR Pathways_). Our approach dynamically adapts the sub-network, avoiding premature decisions about a fixed sub-network structure. We show that our approach outperforms existing pruning methods when targeting sparse monolingual models. Further, we illustrate that _Dynamic ASR Pathways_ jointly discovers and trains better sub-networks (pathways) of a single multilingual model by adapting from different sub-network initializations, thereby reducing the need for language-specific pruning.
Jiamin Xie\({}^{*,1}\), Ke Li\({}^{2}\), Jinxi Guo\({}^{2}\), Andros Tjandra\({}^{2}\), Yuan Shangguan\({}^{2}\),
Leda Sari\({}^{2}\), Chunyang Wu\({}^{2}\), Junteng Jia\({}^{2}\), Jay Mahadeokar\({}^{2}\), Ozlem Kalinli\({}^{2}\)+\({}^{1}\)Center for Robust Speech Systems (CRSS), University of Texas at Dallas, USA
\({}^{2}\)Meta AI, USA
[email protected], [email protected]

**Index Terms**: Multilingual, Automatic Speech Recognition, Sparsity, Pruning
Footnote †: Work done while Jiamin Xie was an intern at Meta AI.
## 1 Introduction
Automatic speech recognition (ASR) has become a key feature in smart devices, serving a diverse customer base [1, 2, 3]. For a successful on-device deployment, the ASR model must operate within the storage and computational constraints while delivering an optimal performance. Furthermore, the ASR model needs to support multiple languages [4, 5] to interact with users worldwide. Neural network pruning [6, 7, 8] is an effective technique for reducing the size of an ASR model with minimal performance loss [9, 10]. However, the pruning process, such as Iterative Magnitude Pruning (IMP) [6, 11] and Lottery Ticket Hypothesis (LTH) [8], involves multiple iterations of pruning and re-training to achieve the best performance. The pruning step identifies a task-specific sub-network within the original dense neural network. Subsequently, the re-training step trains this sub-network with task-specific data, mitigating the performance loss introduced in pruning. This iterative process continues until the target sparsity level is reached.
Pruning a multilingual ASR model presents specific challenges. When pruning a pre-trained dense multilingual ASR model, it can result in two scenarios, as discussed in [12]. In the first scenario, the model is fine-tuned and pruned for each language separately, resulting in multiple language-specific sparse models. While this approach optimizes performance in each language, it can increase storage requirements due to maintaining different monolingual models. In the second scenario, the multilingual model is fine-tuned and pruned using a multilingual dataset, creating a single sparse model by finding a language-agnostic pruning mask. While multilingual training can promote knowledge transfer across languages [13, 14], data imbalance [15, 16] may cause performance degradation in some languages when training a single language-agnostic sub-network. Mixing languages in a training batch can also create conflicts in weight updates with different languages fighting for model capacity, known as the negative interference effect [17, 18], making it challenging to identify an optimal language-agnostic sub-network. A recent study [12] proposes to train language-specific sub-networks (referred to as pathways) jointly within the original dense multilingual model instead of training a language-agnostic sub-network. This method employs monolingual data in a batch to fine-tune the respective pathway without interference from other languages. As these pathways overlap, the weights are updated either in a language-specific or a language-agnostic manner, surpassing the performance of language-agnostic methods. However, a drawback of the pathways method is acquiring each pathway in a separate stage that performs monolingual training and pruning, incurring a computational cost that scales linearly with the number of languages. These pathways, once obtained, remain fixed throughout the training process, lacking adaptation to the multilingual data.
In this study, we introduce an adaptive masking approach for adapting language-specific sub-networks in monolingual or multilingual pruning situations. Our proposed method re-evaluates the pruning mask dynamically during training, allowing the sub-network to align better with the training data comparing to a fixed masking approach. We first assess the benefit of applying this technique to the monolingual case, obtaining sparse monolingual ASR models. We then prune and adapt pathways by employing our approach in multilingual training, evaluating the performance of a jointly fine-tuned and pruned multilingual ASR model.
## 2 Related Works
**Multilingual Training**. The concept of training sub-networks was proposed in the context of multi-task learning [19] and has since found applications in multilingual training [20, 21, 22]. This approach has demonstrated an efficacy across various domains, including self-supervised learning (SSL) [20], machine translation [21], and language modeling [22]. Our research builds upon a recent study [12] that emphasized the effectiveness of training language-specific sub-networks for the supervised multilingual ASR task.
**Adaptive Pruning.** Previous research on adaptive pruning can be broadly categorized based on whether the pruning masks are made trainable. One approach [23] involves fine-tuning the trainable pruning masks on downstream tasks while keeping the original model weights fixed, demonstrating performance improvements over traditional fine-tuning methods. Another approach [24] focuses on re-learning the pruned weights by lifting the mask during training, allowing adjustments to the pruning mask without learning it directly. This technique was applied to fine-tune a multilingual speech SSL model for monolingual ASR tasks. In contrast to the latter approach, our study applies adaptive masking to the supervised multilingual ASR task with structured pruning, introducing novel strategies for attaining and adapting sub-networks during multilingual training.
## 3 Methodology
We first recap the concept of pruning (Section 3.1). We then illustrate current pruning methods that are foundational to our proposed approach (Section 3.2). Finally, we described our adaptive masking approach for monolingual and multilingual pruning (Section 3.3).
### Pruning recap
For a dense neural network \(f(x;\theta_{0})\) trained with input sample \(x\) and parameters \(\theta_{0}\), we denote a sub-network \(f(x;m\odot\theta_{0})\) with a binary pruning mask \(m\) and the element-wise product \(\odot\). The pruning goal is to identify the sub-network \(f(x;m\odot\theta)\) through additional training, where \(\theta\) can be the parameters obtained at any stage of training. We consider a progressive pruning schedule [7], where pruning starts from a low sparsity and incrementally steps up to the target sparsity.
### Current pruning methods
#### 3.2.1 IMP, LTH, and LAP
The iterative magnitude pruning (IMP) method [6] involves fine-tuning a dense model for a specified number of steps denoted as \(T\) while making pruning decisions based on the magnitude of weights. Here, the magnitude of a weight reflects its significance to the task, with larger values indicating higher importance. For structured pruning, we use the block-wise scheme similar to that in [25], following a block pattern of 8 \(\times\) 1. This pattern implies that eight consecutive weights within a column are pruned simultaneously. We evaluate magnitudes by the L2 norm of a weight block. To initiate the IMP procedure, we initialize model parameters \(\theta\) with pre-trained dense weights \(\theta_{0}\) and set the binary pruning mask \(m\) to all ones \(\mathbf{1}\), where \(m\in\{0,1\}^{|\theta|}\). The IMP procedure is illustrated as follows:
**Repeat**:
1. Train \(f(x;m\odot\theta)\) for \(T\) steps, resulting in \(f(x;m\odot\theta_{T})\).
2. Prune \(p\%\) of total weights from \(m\odot\theta_{T}\) that have the smallest magnitudes, setting the pruned positions in \(m\) to 0.
3. Assign \(\theta_{T}\) to \(\theta\) for the next iteration.

**Until** \(m\) reaches the target sparsity
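To make the block-wise magnitude criterion used in this procedure concrete, the following sketch shows how a pruning mask could be computed for a single 2-D weight matrix. It is an illustration under our reading of the 8 \(\times\) 1 block pattern (eight consecutive weights within a column scored by their L2 norm), not the authors' implementation; the matrix sizes are arbitrary.

```
import numpy as np

def block_l2_scores(w, block=8):
    # L2 norm of every block of `block` consecutive weights within a column.
    rows, cols = w.shape
    assert rows % block == 0
    return np.linalg.norm(w.reshape(rows // block, block, cols), axis=1)

def magnitude_prune_mask(w, sparsity, block=8):
    # Binary mask that removes the `sparsity` fraction of blocks with the smallest norms.
    scores = block_l2_scores(w, block)
    threshold = np.quantile(scores, sparsity)
    keep_blocks = (scores > threshold).astype(w.dtype)
    return np.repeat(keep_blocks, block, axis=0)  # broadcast block decisions back to rows

w = np.random.randn(512, 2048).astype(np.float32)  # toy weight matrix
m = magnitude_prune_mask(w, sparsity=0.2)
print("achieved sparsity:", 1.0 - m.mean())
```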
We note that the properties of a pruning mask depend on the training data. When monolingual data is used in IMP, the procedure yields a language-specific pruning mask \(m_{l}\) for a language \(l\). For multilingual data, it results in a language-agnostic pruning mask, referred to as language-agnostic pruning (LAP). Importantly, the pruning mask remains fixed at any step within the \(T\) training steps, meaning the pruning decision is irreversible.
The lottery ticket hypothesis (LTH) method [8] modifies the Step 3 of the IMP procedure by assigning the pre-trained dense weights \(\theta_{0}\) to \(\theta\) instead of \(\theta_{T}\), referred to as a re-winding step. It assumes that a sub-network capable of achieving performance similar to the original dense network exists within the original dense architecture. Therefore, the LTH method leads to the identification of a sub-network embedded within the original dense model weights.
#### 3.2.2 ASR Pathways
The ASR Pathways [12] provides a method to fine-tune a multilingual ASR model using the language-specific sub-networks (or pathways) identified through IMP, LTH, or other pruning methods. These sub-networks are attained at the target sparsity level and remain fixed throughout training. The mini-batch is configured as monolingual, while the training includes a mixture of languages across mini-batches. This setup ensures that each mini-batch activates one pathway and updates the weights underlying the pruning mask of this pathway, denoted as \(m_{l}\odot\theta\). Since each language-specific sub-network is a part of the original dense multilingual model and gets fine-tuned together, the training process results in a final sparse multilingual ASR model.
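As a rough illustration of how a monolingual mini-batch only trains its own pathway, the sketch below masks the weights of a single toy layer with the corresponding language mask before the forward pass, so gradients only reach the unpruned weights of that pathway while the stored dense weights remain shared across languages. The layer, masks, and loss are placeholders, not the actual RNN-T model.

```
import torch
import torch.nn as nn

layer = nn.Linear(16, 16)  # stands in for one shared weight matrix of the dense model
masks = {lang: (torch.rand_like(layer.weight) > 0.7).float() for lang in ["EN", "FR"]}
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

def pathway_step(lang, x, y):
    w = layer.weight * masks[lang]                 # activate only this language's pathway
    out = nn.functional.linear(x, w, layer.bias)
    loss = nn.functional.mse_loss(out, y)
    opt.zero_grad()
    loss.backward()                                # chain rule masks the weight gradients
    opt.step()
    return loss.item()

for lang in ["EN", "FR", "EN"]:                    # monolingual batches, mixed across steps
    pathway_step(lang, torch.randn(4, 16), torch.randn(4, 16))
```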
Figure 1: Flowchart of the training and pruning process with adaptive masking enabled for monolingual data
Figure 2: Flowchart of the training and pruning process with adaptive masking enabled for multilingual data
### The adaptive masking approach
#### 3.3.1 Monolingual pruning
We propose an adaptive masking approach for monolingual pruning, yielding a language-specific pruning mask adapted with the data. We illustrate this approach as a flowchart shown in Figure 1. Within the framework of the IMP procedure, we introduce a mask adaptation step denoted as \(n\) (where \(n<T\)). During the adaptation step, we re-evaluate the sub-network configuration (_adapt_) by pruning from all weights in \(\theta_{n}\) with a portion \(p\)% that maintains the sparsity level of the current pruning mask. Next, we prune "softly" by setting the pruned weights to zero, denoted as \((\mathbf{1}-m)\ \odot\ \theta_{n}\), and make them trainable (_masked-out_). Since the masked-out weights receive updates from training, they can form new connections within the network and reveal an optimal configuration of the sub-network as the training evolves. For the pruning step, we simply raise the sparsity level and prune from all weights in \(\theta_{n}\) as opposed to pruning from weights in \(m\odot\theta_{T}\) in the IMP procedure.
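A minimal sketch of the adaptation step described above: every \(n\) steps the mask is recomputed from the current weights at the held sparsity level, the newly masked-out weights are set to zero ("soft" pruning), and, since no gradient masking is applied, those weights keep receiving updates and may re-enter the sub-network later. For brevity the sketch scores individual weights rather than 8 \(\times\) 1 blocks, and the toy model and loss are placeholders.

```
import torch
import torch.nn as nn

layer = nn.Linear(64, 64)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
sparsity, adapt_every = 0.5, 100

def recompute_mask(w, sparsity):
    k = int(sparsity * w.numel())
    threshold = w.abs().flatten().kthvalue(k).values
    return (w.abs() > threshold).float()

for step in range(1, 1001):
    if step % adapt_every == 0:
        mask = recompute_mask(layer.weight.data, sparsity)  # adapt: re-evaluate the sub-network
        with torch.no_grad():
            layer.weight.mul_(mask)                         # soft prune: zero, but keep trainable
    x = torch.randn(8, 64)
    loss = nn.functional.mse_loss(layer(x), x)
    opt.zero_grad(); loss.backward(); opt.step()            # masked-out weights also get updates
```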
#### 3.3.2 Multilingual pruning
We propose an adaptive masking approach for multilingual pruning based on the pathways training method described in [12], named _Dynamic ASR Pathways_. We use a similar adaptation step to the monolingual pruning and illustrate it in a flowchart shown in Figure 2. When a mini-batch in language \(z\) is processed, we train the sub-network of this language \(z\) and a "residual" sub-network, excluding other language-specific sub-networks. Given a language set \(L\) representing all languages in the data, we denote this pruning mask as,
\[m_{z,r}=m_{z}\cup\big{(}\mathbf{1}-\cup_{l\in L,\,l\neq z}\,m_{l}\big{)} \tag{1}\]
During the adaptation step, we re-evaluate the language-specific sub-network by pruning from weights in \(m_{z,r}\odot\theta_{n}\) while holding its current sparsity level. Since the adaptation step is monolingual, the newly adapted sub-network can become more language-specific than before. During the pruning step, we simultaneously prune sub-networks by pruning from weights in \(m_{z,r}\odot\theta_{T}\), iterating over each language \(z\) in the language set \(L\). Because different languages share the "residual" sub-network depending on the data distribution, this pruning step promotes parameter sharing among sub-networks, compensating for potential reductions in sharing caused by the adaptation step.
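Equation (1) amounts to simple boolean mask algebra: language \(z\)'s own mask, united with every weight that no other language claims. A small sketch, assuming one binary mask per language stored as a NumPy array:

```
import numpy as np

def residual_pathway_mask(masks, z):
    # m_{z,r}: language z's mask plus the weights outside every other language's pathway.
    others = [m for lang, m in masks.items() if lang != z]
    union_others = np.maximum.reduce(others)
    return np.maximum(masks[z], 1 - union_others)

masks = {"EN": np.array([1, 1, 0, 0]),
         "FR": np.array([0, 1, 1, 0]),
         "IT": np.array([1, 0, 1, 0])}
print(residual_pathway_mask(masks, "FR"))  # [0 1 1 1]: FR's mask plus the unclaimed last weight
```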
## 4 Experimental Setup
### Dataset
We conduct our experiments using the multilingual Librispeech (MLS) [26] dataset, which consists of multilingual speech derived from audiobooks. Our study focuses on four languages: English (EN), French (FR), Italian (IT), and Dutch (NL), with respective training audio length of 44.7k hrs, 1.1k hrs, 0.2k hrs, 1.6k hrs.
### Implementation details
We employ a streaming RNN-T model for the dense multilingual model, using 30 Emformer layers [27] with 512 input dimensions, 2048 feed-forward dimensions, and encoder layers with convolutional blocks [28]. This model has about 180 million parameters. We utilize word pieces to recognize spoken words in all four languages, totaling 1548 items. For consistency, we use the same output layer size for all training setups. The learning rate schedule is tri-stage [29] with a peak learning rate of 1e-3. For monolingual models, we conduct training for 100K, 80K, 50K, and 80K steps for EN, FR, IT, and NL, respectively. The multilingual pathway model undergoes training for 200K and 100K steps for the IMP and LTH methods, respectively. We also conduct a bilingual experiment for the multilingual pathway models, where the number of training steps is 80K. We employ a uniform data sampling scheme for multilingual training when the _Dynamic ASR Pathways_ method is compared, and a non-uniform sampling scheme otherwise. The pruning step \(T\) was set to 8% of the total training steps for each setup, with an adaptation step \(n\) of \(100\) and a pruning portion \(p\) of 20% across all experiments. Pruning was applied exclusively to the linear layers in the encoder Emformer layers and the predictor LSTM layers, with a uniform sparsity across all prunable layers [25]. We implement group lasso regularization as previously mentioned [12]. We use 16 GPUs for monolingual training and 32 GPUs for multilingual training, with a per-GPU batch size of 28.
## 5 Results
We first show baseline results from using current pruning methods (Section 5.1). We then compare the adaptive masking approach for monolingual pruning to its relative baseline (Section 5.2). Finally, we compare the adaptive masking approach for multilingual pruning (_Dynamic ASR Pathways_) to the ASR Pathways baseline (Section 5.3).
| Stage | Model | Mask can change? | Sparsity | Monolingual or Multilingual training? | EN | FR | IT | NL | Avg. |
|---|---|---|---|---|---|---|---|---|---|
| Ref. | 56M Dense | / | 0% | Monolingual | 12.15 | 16.00 | 27.62 | 23.23 | 19.75 |
| (1) | 187M Dense | / | 0% | Multilingual | 12.91 | 10.90 | 16.94 | 17.56 | 14.58 |
| (2) | LAP | No | 70% | Multilingual | 13.82 | 11.98 | 27.71 | 19.32 | 18.21 |
| (2) | IMP | No | 70% | Monolingual | 10.74 | 11.26 | 17.90 | 18.38 | 14.57 |
| (2) | LTH | No | 70% | Monolingual | 10.80 | 10.38 | 18.44 | 17.48 | 14.28 |
| (3) | ASR Pathways (IMP-70%) | No | 70% | Multilingual | 11.15 | 10.68 | 17.53 | 16.90 | 14.06 |
| (3) | ASR Pathways (LTH-70%) | No | 70% | Multilingual | 11.39 | 10.20 | 17.58 | 15.84 | 13.75 |
| (2) | IMP | Yes | 70% | Monolingual | **10.07** | 10.90 | 17.21 | 16.98 | 13.79 |
| (2) | LTH | Yes | 70% | Monolingual | 10.54 | **9.91** | **17.06** | **16.63** | **13.53** |

Table 1: WER (%) results on the MLS test set, pruning a dense multilingual ASR model. The proposed approach allows the mask to change in training and is compared to other pruning methods for the monolingual training scenario.
### Baselines
In Table 1, we present the results of existing methods for pruning a multilingual ASR model. We breakdown these methods into three stages: 1) training a dense multilingual ASR model, 2) pruning the dense multilingual ASR model, and 3) training a sparse multilingual model. For reference, we include results of dense monolingual model. Both the IMP and LTH language-specific pruning methods achieve matching performance to the original dense multilingual model and surpass the dense monolingual models. The ASR Pathways method outperforms other methods using the language-specific masks obtained in Stage (2), promoting parameter sharing among languages.
### Adaptive masking in monolingual pruning
In the last two rows of Table 1, we present the results of using adaptive masking for monolingual pruning. Applying adaptive masking to the IMP and LTH language-specific pruning methods of Stage (2) achieved a consistent 5.3% relative WER reduction averaged across languages. Comparing the adapted sub-networks to the fixed ones, we noticed about an 80% similarity and a 20% difference, indicating that the effective adaptation occurs within a small part of the pruning masks. Our adaptively masked Stage (2) models also outperform the sparse multilingual model obtained in Stage (3), providing an efficient alternative when storing multiple models is not a concern.
### Adaptive masking in multilingual pruning
In Table 2, we show the results of a bilingual experiment when using adaptive masking for multilingual pruning. We initialized the training with the LTH or the LAP masks at the target sparsity level (70%) and achieved a consistent improvement when only adaptation is enabled. Notably, adapting the LAP-70% mask achieves a 12.1% relative WER reduction, indicating the adaptation step has effectively turned the LAP mask to become more language-specific. We noticed a similar but improved performance when using the LTH-70% masks, suggesting these masks may be robust at a high sparsity level.
We observed the best overall performance using mask initialization at a middle sparsity level (50%) when both pruning and adaptation steps are enabled. For the LTH-50% mask initialization, our
_Dynamic ASR Pathways_ method outperformed the respective ASR Pathways baseline with a 5.8% relative WER reduction. From an analysis, we find this model results in an even lower union ratio1 (0.34) compared to its baseline (0.36), indicating that a better multilingual performance is achieved using even fewer total effective model parameters. We believe this effect can be attributed to the pruning step introduced in our approach that increases parameter sharing (Section 3.3.2). For the LAP mask initializations, we consistently noticed a significant performance gain compared to the respective baseline. Further, the performance almost matches that of the ASR Pathways baseline using the LTH-70% masks, showing an efficiency benefit since the language-specific pruning rounds are eliminated.
Footnote 1: The union ratio indicates the ratio between surviving parameters in the union of all masks and the total parameters of the network [12]
In Table 3, we present the extended results of applying _Dynamic ASR Pathways_ to pruning for more languages, initializing from the LTH-50% masks. Our proposed approach outperforms the ASR Pathways baseline with a 2% relative WER reduction1 on average across four languages. When considering the performance across FR, IT, and NL, it achieves a notable 5.5% relative WER reduction. When initializing at a 50% sparsity level, we saved additional rounds of training and pruning for achieving a target sparsity level, showing the efficacy of applying our approach towards efficient pruning of a multilingual ASR model.
Footnote 2: Due to time limitation, this result is inferred at an early checkpoint, subject to a better future improvement
## 6 Conclusions
In conclusion, we proposed an adaptive masking approach for both monolingual and multilingual pruning. In the former case, our proposed method achieved a consistent 5.3% relative WER reduction averaged across languages and outperformed the sparse multilingual model obtained from going through an additional stage, offering a convenient trade-off between storage and efficiency. In the latter case, we showed the efficacy of our approach in pruning and adapting from different pruning mask initializations. When initialized from language-agnostic pruning masks, our Dynamic ASR Pathways method showed a consistent and comparable performance to the best performance of the ASR Pathways method that uses language-specific pruning masks, indicating a benefit of efficiency with our approach. When initialized from language-specific pruning masks at a 50% sparsity level, our Dynamic ASR Pathways method outperforms the ASR Pathways method, ranging from a 2% to 5.8% relative WER reduction. For future work, we want to scale our research of multilingual pruning for more languages and explore the option to make pruning masks learnable.
|
2309.04564 | When Less is More: Investigating Data Pruning for Pretraining LLMs at
Scale | Large volumes of text data have contributed significantly to the development
of large language models (LLMs) in recent years. This data is typically
acquired by scraping the internet, leading to pretraining datasets comprised of
noisy web text. To date, efforts to prune these datasets down to a higher
quality subset have relied on hand-crafted heuristics encoded as rule-based
filters. In this work, we take a wider view and explore scalable estimates of
data quality that can be used to systematically measure the quality of
pretraining data. We perform a rigorous comparison at scale of the simple data
quality estimator of perplexity, as well as more sophisticated and
computationally intensive estimates of the Error L2-Norm and memorization.
These metrics are used to rank and prune pretraining corpora, and we
subsequently compare LLMs trained on these pruned datasets. Surprisingly, we
find that the simple technique of perplexity outperforms our more
computationally expensive scoring methods. We improve over our no-pruning
baseline while training on as little as 30% of the original training dataset.
Our work sets the foundation for unexplored strategies in automatically
curating high quality corpora and suggests the majority of pretraining data can
be removed while retaining performance. | Max Marion, Ahmet Üstün, Luiza Pozzobon, Alex Wang, Marzieh Fadaee, Sara Hooker | 2023-09-08T19:34:05Z | http://arxiv.org/abs/2309.04564v1 | # When Less is More:
###### Abstract
Large volumes of text data have contributed significantly to the development of large language models (LLMs) in recent years. This data is typically acquired by scraping the internet, leading to pretraining datasets comprised of noisy web text. To date, efforts to prune these datasets down to a higher quality subset have relied on hand-crafted heuristics encoded as rule-based filters. In this work, we take a wider view and explore scalable estimates of data quality that can be used to systematically measure the quality of pretraining data. We perform a rigorous comparison at scale of the simple data quality estimator of perplexity, as well as more sophisticated and computationally intensive estimates of the Error L2-Norm and memorization. These metrics are used to rank and prune pretraining corpora, and we subsequently compare LLMs trained on these pruned datasets. Surprisingly, we find that the simple technique of perplexity outperforms our more computationally expensive scoring methods. We improve over our no-pruning baseline while training on as little as 30% of the original training dataset. Our work sets the foundation for unexplored strategies in automatically curating high quality corpora and suggests the majority of pretraining data can be removed while retaining performance.
## 1 Introduction
A reigning belief in machine learning is that more data leads to better performance. Recent years of progress in scaling large language models (LLMs) have shown strong evidence to support this with remarkable gains in language understanding and generation capabilities (Brown et al., 2020; Touvron et al., 2023; Kaplan et al., 2020; Anil et al., 2023). When training language models, common practice is to use massive datasets such as C4 (Raffel et al., 2020), RefinedWeb (Penedo et al., 2023), and The Pile (Gao et al., 2021). These datasets are typically compiled by scraping raw web pages from the internet, leading to a substantial portion of the text being noisy and of low quality (Dodge et al., 2021; Kreutzer et al., 2022; Luccioni and Viviano, 2021).
Practitioners have established a number of standard filtering techniques to remove low-quality examples from these datasets. These techniques are predominantly rule-based heuristics: removing
documents containing repetitive text (Zhang et al., 2022; Raffel et al., 2020; Rae et al., 2022; Hernandez et al., 2022; Penedo et al., 2023), special characters, or non-English text (Wenzek et al., 2020); ignoring data from a manually curated list of "blocklist" websites (Dodge et al., 2021; Rae et al., 2022); or eliminating documents based on certain length thresholds. While these hand-curated filters can eliminate certain noisy examples, they are not a substitute for a measure of "quality" for individual training examples, for which there are currently no established best practices (Mitchell et al., 2023).
In this work, we take a wider view and ask if we can arrive at a rigorous estimator of data quality through _data pruning_.
Data pruning attempts to isolate a subset of a larger training dataset such that a model trained on said subset preserves or improves performance over a model trained on the full dataset. To date, the majority of work on data pruning has centered on supervised computer vision settings (Qin et al., 2023; Sorscher et al., 2023; Raju et al., 2021; Paul et al., 2023; He et al., 2023), with far fewer works focusing on language. Those that have either studied the fine-tuning setting, which typically has an order of magnitude less data and thus tolerates more computational complexity (Fayyaz et al., 2022; Attendu & Corbeil, 2023; Cao et al., 2023) or based their method on hand picking high-quality corpora (Gao, 2021; Wenzek et al., 2020; Brown et al., 2020). Specifically, we try to answer the following: _Can we remove the least impactful examples from a pretraining dataset and achieve similar or better performance? Do simpler techniques for estimating data quality outperform more sophisticated and computationally expensive methods? What aspects of training dynamics signal data quality the best?_
We answer these questions by rigorously evaluating three automatic pruning metrics: one simple estimator of quality, perplexity, and two more complex and computationally intensive ones, EL2N (Paul et al., 2023) and memorization factor. These methods all rely solely on model outputs and do not require a preselected high-quality
Figure 1: Demonstration of our pruning methodology. For each sequence \(z_{i}\), sized equal to the model's context length, a pruning algorithm \(\xi\) generates a score \(s_{i}\). We then choose which subset of the distribution of scores to keep: bottom, middle, or top. Finally, a new model is pretrained with the pruned data \(\hat{\mathcal{D}}_{\xi}\).
dataset. This lack of dependence on human judgments of data quality makes them a promising direction for the automatic selection of high-quality corpora. We perform extensive experiments evaluating models ranging from 124M to 1.5B parameters across different pretraining corpora. Our contributions are the following:
1. We extensively benchmark data pruning based on perplexity, EL2N, and memorization in the LLM pretraining setting. **Surprisingly, we find the simple technique of ranking examples based on their perplexity outperforms far more complex techniques such as memorization.** A model trained on 50% of the dataset pruned based on perplexity achieves 1.33% and 1.77% improvement over the most performant models pruned to 50% of the dataset with EL2N and memorization factor respectively. A model trained on 30% of the dataset pruned with perplexity achieves a 2.1% and 1.6% improvement over the most performant models pruned to 30% of the dataset with EL2N and memorization factor.
2. To comprehensively cover multiple facets of data pruning, we provide a unified and general framework to identify and treat different data subsets present in a dataset. We compare models trained on datasets pruned to 10, 30, 50, and 70% of the training set while retaining either the bottom, middle, or top of the pruning scores' distributions. We test seven different reference models across pruning variations, investigating the impact of parameter count, training dataset, and total training steps on the reference models' pruning capabilities. Finally, we finetune a selection of our models on six tasks from the GLUE benchmark (Wang et al., 2019) to evaluate the effect of pruning on downstream generalization.
3. We test our pruning methods at scale, achieving a 1% improvement in test set perplexity using half of the dataset over a baseline model trained on the entire dataset. We show this scales to 1.5B parameter models, achieving 1.5% improvement in test set perplexity over a no-pruning baseline of the same size.
## 2 Methodology
Given a large-scale dataset \(\mathcal{D}\), we tokenize all documents and append a special <eod> token to their end. We then concatenate and split them into \(n\) sequences \(z_{i}\) of fixed length \(t\) equal to the model's context length: \(\mathcal{D}=\{z_{1},\dots,z_{n}\}\). Consider the subset of training instances \(\mathcal{P}_{\xi}\) where \(\xi\) refers to the algorithm used to select the subset. We build this subset by computing the pruning score \(Score_{\xi}(z_{i})\) for each data point \(z_{i}\). We then populate \(\mathcal{P}_{\xi}\) with instances that fit our selection criteria:
\[\mathcal{P}_{\xi}=\{z_{i}\in\mathcal{D}\ |\ Criteria(Score_{\xi}(z_{i}))\} \tag{1}\]
By removing \(\mathcal{P}_{\xi}\) from \(\mathcal{D}\), the remaining instances are described as:
\[\hat{\mathcal{D}}_{\xi}=\mathcal{D}\setminus\mathcal{P}_{\xi} \tag{2}\]
Our goal is to choose the pruning algorithm \(\xi\) such that when training a language model on the remaining subset of training instances, \(\hat{\mathcal{D}}_{\xi}\), the model's performance is not diminished:
\[\mathbb{P}_{\tau}(\mathcal{M}_{\hat{\mathcal{D}}_{\xi}})\geq\mathbb{P}_{\tau} (\mathcal{M}_{\mathcal{D}}) \tag{3}\]
where \(\mathcal{M}_{\hat{\mathcal{D}}_{\xi}}\) is the model trained on \(\hat{\mathcal{D}}_{\xi}\) and \(\mathbb{P}_{\tau}\) is the performance on task \(\tau\). We explore three metrics, perplexity, Error L2-Norm (EL2N), and memorization which we detail below in Section 2.1, and evaluate the different ways in which the metric can be employed to determine \(\mathcal{P}_{\xi}\).
In particular, we evaluate different reference models \(\tilde{\mathcal{M}}\) that are used to calculate pruning scores. Both reference models \(\tilde{\mathcal{M}}\) and trained models \(\mathcal{M}\) share the same context length to ensure consistency between the contexts for which pruning metrics are calculated and trained models are trained.
For each metric, we consider three different selection criteria to determine \(\mathcal{P}_{\xi}\) as seen in Equation 1: isolating the top, middle, or bottom percentiles of \(\mathcal{D}\) as the data to be kept. We pretrain separate models using these criteria with different percentages of the dataset to understand the dynamics and impact of each pruning metric. Since the effectiveness of these metrics in this specific context remains uncertain, we opt for these contrasting subsets to clarify the relationship between each metric and the overall model performance. Figure 1 demonstrates our experimental setup. We focus on static pruning, in which data is pruned once before training. This is in contrast to adaptive pruning, in which data is pruned as training is happening, such as in (Fayyaz et al., 2022; Park et al., 2022).
### Pruning Methods
Here, we briefly describe data pruning algorithms that we benchmark in this work. Our goal is to rigorously compare simple and computationally inexpensive ranking approaches such as perplexity and random ranking against more sophisticated and computationally expensive techniques such as memorization scores and EL2N.
#### 2.1.1 Selection via Perplexity
Perplexity measures how probable a given piece of text is based on a particular language model. For each instance \(z_{i}\) in \(\mathcal{D}\), we compute the perplexity metric as:
\[PPL(z_{i})=\exp\big{(}\frac{1}{|z_{i}|}\sum_{t_{j}\in z_{i}}NLL(t_{j})\big{)} \tag{4}\]
where \(NLL(t_{j})\) is the negative log likelihood of token \(t_{j}\) in sequence \(z_{i}\):
\[NLL(t_{j})=-\log P(t_{j}|t_{<j};\theta) \tag{5}\]
A lower perplexity score indicates that the model assigns a high probability to the text.
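Equations (4)–(5) reduce to exponentiating the mean token-level cross-entropy of a sequence under the reference model. A minimal PyTorch sketch, assuming the reference model has already produced next-token `logits` for one tokenized sequence (the shift by one position is our assumption about how the logits align with the targets):

```
import torch
import torch.nn.functional as F

def sequence_perplexity(logits, tokens):
    # PPL(z_i) = exp(mean NLL); logits[j] is the prediction for tokens[j + 1].
    nll = F.cross_entropy(logits[:-1], tokens[1:], reduction="mean")
    return torch.exp(nll).item()

t, vocab = 2048, 100                       # toy sequence length and vocabulary size
logits = torch.randn(t, vocab)             # stand-in for reference model outputs
tokens = torch.randint(vocab, (t,))
print(sequence_perplexity(logits, tokens))
```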
#### 2.1.2 Selection via EL2N
The Error L2-Norm (EL2N) score was originally proposed in a computer vision setting to identify which samples are important for learning (Paul et al., 2023). It measures each sample's importance using the model's early learning signals. We define the EL2N score on text sequences as the average \(L_{2}\) norm of the error vector, where \(\hat{y}_{j}\) is the reference model's predicted probability distribution over the vocabulary for the \(j\)-th token and \(y_{j}\) is the one-hot encoded representation of the ground truth token:

\[\text{EL2N}(z_{i})=\frac{1}{t}\sum_{j=1}^{t}\|\hat{y}_{j}-y_{j}\|_{2} \tag{6}\]
We first evaluate the pruning efficacy of EL2N scores obtained from a single reference model at two different checkpoints, trained on 14% and 55% of the training dataset \(\mathcal{D}\) corresponding to 250 and
1000 steps respectively, to determine the number of training steps needed before a usable pruning signal emerges. We then train ten different reference models with different random initializations and average the EL2N score from all ten models to obtain our final EL2N score. The authors suggest that examples exhibiting a low EL2N score are typically those the model learns in its early stages of training, likely because they are relatively easier. Inversely, examples with higher EL2N scores are hypothesized to indicate that the model continues to incur a significant loss for them and may require additional iterations to learn.
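A sketch of Equation (6) for a single sequence, given the reference model's per-token logits; averaging the resulting score over the ten differently-initialized reference models then gives the final EL2N score. The toy inputs are placeholders for real model outputs.

```
import torch
import torch.nn.functional as F

def el2n_score(logits, tokens):
    # Mean L2 norm of (predicted distribution - one-hot target) over the sequence.
    probs = F.softmax(logits, dim=-1)
    one_hot = F.one_hot(tokens, num_classes=logits.size(-1)).float()
    return (probs - one_hot).norm(dim=-1).mean().item()

t, vocab = 2048, 100
print(el2n_score(torch.randn(t, vocab), torch.randint(vocab, (t,))))
```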
#### 2.1.3 Memorization Ranking
Memorization in language models is a well-studied phenomenon (Carlini et al., 2023, 2021; Biderman et al., 2023a). In this work we explore memorization scores applied as a data pruning ranking. We use the memorization score as defined by Biderman et al. (2023a):
\[score(M,N)=\frac{1}{N}\sum_{i}^{N}1(z_{M+i}=\hat{z}_{M+i}) \tag{7}\]
where \(z\) is a data point, \(\hat{z}\) is a sequence of tokens predicted by the reference model, and \(1(\cdot)\) is an indicator function. A reference model is prompted with the first \(M\) tokens of a data point \(z\) to calculate the memorization score. We then greedily generate \(N\) additional tokens, \(\hat{z}\). The memorization score is the fraction of the \(N\) greedily generated tokens (\(\hat{z}_{M:M+N}\)) that match exactly with the original data point (\(z_{M:M+N}\)). For our experiments, \(M=N=32\). We note that the authors did not originally propose this as data pruning metric, but we hypothesize that it can be a valuable ranking to identity examples which require additional learning. We use reference models guaranteed to have seen the full training set to ensure the applicability of memorization scores. A high memorization score indicates the model reproduces more of the text verbatim.
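Equation (7) is an exact-match rate between a greedy continuation and the original text. The sketch below spells this out with a toy next-token model; `greedy_generate` and the toy model are illustrative stand-ins, not the evaluation code used in the paper.

```
import torch
import torch.nn as nn

def greedy_generate(model, prompt, num_tokens):
    # Greedily extend `prompt` one token at a time from the model's next-token logits.
    seq = prompt.clone()
    for _ in range(num_tokens):
        logits = model(seq.unsqueeze(0))[0, -1]   # assumes a (1, len, vocab) output
        seq = torch.cat([seq, logits.argmax().view(1)])
    return seq[len(prompt):]

def memorization_score(model, z, M=32, N=32):
    # Fraction of the N greedily generated tokens matching the original continuation.
    generated = greedy_generate(model, z[:M], num_tokens=N)
    return (generated == z[M:M + N]).float().mean().item()

vocab = 100
toy_model = nn.Sequential(nn.Embedding(vocab, 32), nn.Linear(32, vocab))
z = torch.randint(vocab, (64,))
print(memorization_score(toy_model, z))
```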
#### 2.1.4 Random Pruning
We also evaluate a lower bound of expected performance: pruning a random selection of samples. This allows us to ask the question "are proposed pruning methods any better than a _random guess_?"
## 3 Experiments
### Model
We train autoregressive decoder-only Transformer models (Vaswani et al., 2023) with a standard language modeling objective. Given an input sequence of \(z_{i}=[r_{1},\cdots,r_{t}]\) from training data \(\mathcal{D}\), a language model with parameters \(\theta\) is trained to minimize the negative log-likelihood loss as defined in Equation 5. Our language models follow the traditional GPT-style architecture (Radford et al., 2018).
While training our models, we use AdamW (Loshchilov and Hutter, 2019) with linear cosine scaling and a batch size of 2048. The 124M parameter models are trained for 8000 steps, which amounts to a total of 33B tokens with a learning rate that linearly increases from 0 to 1.5e-4 over the course of training. This is approximately 4.4 epochs over the unpruned dataset. We tokenize the data with Byte Pair Encoding (Sennrich et al., 2016) with a vocabulary of 51200. Due to the memory and
computational costs of training 1.5B parameter models, our experiments at this size are trained with a batch size of 512 for 14568 steps. As such, the models see only 7.6B tokens, equivalent to a single epoch of our unpruned dataset. The learning rate for 1.5B parameter models linearly increases from 0 to 1.2e-4 over the course of training. All models use a context window length of 2048.
### Data
We use a random sample of the May 2022 snapshot of CommonCrawl1 in our experiments. After downsampling, the unpruned dataset has 7.6B tokens, about 20% of the full snapshot. This downsampling is required due to the computational cost of our various ablation experiments, which each require pretraining a new model from random initialization. This dataset is _prefiltered_ using a combination of automatic and hand-crafted filters, as we aim to further improve data quality beyond common rule-based filters. The filters exclude repetitive documents, documents with high percentages of special characters, and documents that contain explicit words and toxic text, similar to the filtering steps seen in Taylor et al. (2022); Kocetkov et al. (2022). Our Wikipedia dataset contains 5.3M tokens and only includes English pages.
Footnote 1: [https://data.commoncrawl.org/](https://data.commoncrawl.org/)
### Ablations
For all techniques, we compare performance when only 10%, 30%, 50%, and 70% of all data is preserved. We compare retaining the top, middle, and bottom subsets according to the pruning ranking, e.g., when retaining 30% of the bottom of the pruning metric's distribution over the training set, we calculate the 30th percentile of the pruning metric's distribution and remove all data points with scores above it. When retaining the middle 30%, we calculate the 35th and 65th percentiles and remove all data points above and below those numbers respectively. Each ablation study (pruning method, percent data remaining, section of distribution preserved) **requires training a new model from random initialization**. We train a minimum of nine models with 124M parameters from scratch for each experimental variant.
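The top/middle/bottom selection can be written directly in terms of percentiles of the score distribution, independent of which pruning metric produced the scores. A small NumPy sketch of this selection criterion (function and variable names are ours):

```
import numpy as np

def select_subset(scores, keep_frac, subset):
    # Boolean mask over examples: keep `keep_frac` of them from the chosen part of the distribution.
    if subset == "bottom":
        return scores <= np.percentile(scores, 100 * keep_frac)
    if subset == "top":
        return scores >= np.percentile(scores, 100 * (1 - keep_frac))
    if subset == "middle":
        lo, hi = np.percentile(scores, [50 - 50 * keep_frac, 50 + 50 * keep_frac])
        return (scores >= lo) & (scores <= hi)
    raise ValueError(subset)

scores = np.random.rand(10_000)                    # e.g. per-sequence perplexities
kept = select_subset(scores, keep_frac=0.3, subset="middle")
print(kept.mean())                                 # roughly 0.3 of the data remains
```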
Table 1 summarizes the perplexity pruning variations we explore in this paper. For perplexity, we
\begin{table}
\begin{tabular}{l l} \hline \hline Experimental axes & Choices \\ \hline Pruning Metric & Perplexity, EL2N, Memorization \\ Pct. Data Remaining & 10, 30, 50, 70 \\ Pruning Subset & Bottom, Middle, Top \\ Reference Model Size & 124M, 6B, 13B, 52B \\ Reference Model Epoch Perc. & 14\%, 55\%, 440\%, Full \\ Reference Model Tr. Data & CC, Wiki, Web-scale \\ Trained Model Size & 124M, 1.5B \\ \hline \hline \end{tabular}
\end{table}
Table 1: Pruning choices explored in the experiments. Under “Reference Model Training Steps”, “Full” refers to the fully trained Cohere LLMs. Under “Reference Model Training Data”, “Web-scale” refers to the significantly larger training datasets used by the Cohere reference models.
use a model separate from the one trained on the pruned data to compute perplexity. We call the models used to compute the perplexity ranking _reference models_ and the models trained on the pruned datasets _pruned models_. We conduct a rigorous evaluation of what impacts the quality of the ranking by varying different factors that affect the perplexity distribution (a sketch of the per-document perplexity computation is given after the list):
1. **Reference Model Size** To explore how reference model size impacts the rating quality, we compare perplexity computations using 6B, 13B, and 52B Cohere models trained on full web-scale datasets.
2. **Reference Model Training Data** To isolate the impact of training data, we compute perplexity using 124M parameter reference models trained on either CommonCrawl or Wikipedia.
3. **Total Reference Model Training Steps** To isolate the impact of early training signals, we compute perplexity and EL2N using 124M parameter models trained on CommonCrawl data for approximately 14% and 55% of total training steps. Reference models trained on CommonCrawl are trained on a non-overlapping subset from the CommonCrawl dataset that is pruned and used to train the student model.
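The per-document perplexity that these reference models provide can be computed as below. This is a sketch only: it assumes a Hugging Face-style causal LM whose forward pass returns `.logits`, and it scores one pre-tokenized document at a time.

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def reference_perplexity(model, input_ids: torch.Tensor) -> float:
    """Perplexity of one tokenized document under a causal reference model.

    input_ids: tensor of shape [1, seq_len]; each position t is predicted
    from positions < t, so logits are shifted against the targets.
    """
    logits = model(input_ids).logits                     # [1, seq_len, vocab]
    nll = F.cross_entropy(
        logits[:, :-1, :].reshape(-1, logits.size(-1)),  # predictions for t >= 1
        input_ids[:, 1:].reshape(-1),                    # targets for t >= 1
    )
    return math.exp(nll.item())
```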
### Evaluation
We report perplexity on a test set from the same CommonCrawl snapshot with identical prefiltering as the training data. This test set contains 266M tokens, equivalent to about 3.5% of the training set.
We also finetune a subset of our models on six different classification tasks from GLUE (Wang et al., 2019). We do not prune the task datasets, as our aim is to analyze the pruning methods' effects on pretraining. We compare performance after 8000 steps (approximately 4.4 epochs of the pretraining dataset), chosen so that models have saturated their capacity by training enough steps to plateau on validation metrics.
## 4 Results and Discussion
### Removing Easy Instances Improves Performance
Though the most competitive variant for each pruning method varies based on the subset of the scoring distribution retained (top, middle, or bottom), we observe a consistent pattern: the best-performing variants are _not_ the subsets that correspond to the "easier" data. The interpretation of the term "easy" varies according to the metric employed. For the Perplexity metric, it refers to the bottom samples with the lowest perplexity. For the EL2N metric, it likewise refers to the bottom samples with the lowest initial loss. In the context of memorization, it refers to the top samples that have been most thoroughly memorized.
Figure 2 demonstrates this pattern when using Perplexity. In contrast to the middle or top subsets, the bottom subset has much less variance in results between reference models of varying sizes, indicating the bottom subset may not be suitable for training. The middle experiments achieve consistently low test set perplexities for various reference model sizes and pruning ratios. Generally, performance monotonically degrades as the amount of data remaining shrinks - except
for the middle subset for the best-performing reference models. In these cases, retaining only 50% and even 30% of the dataset outperforms retaining 70% of the dataset.
Next, Figure 3(a) shows the results for the EL2N metric. The middle subset is also the best variant for EL2N. While the best-performing run does not outperform the baseline, the best performance is achieved when retaining 50% of the middle subset, outperforming the model trained on 70% of the dataset, similar to the results when using perplexity. As the middle subset grows, it begins to overlap with the easiest examples, degrading performance. In Section 4.5, we discuss how different reference model checkpoints influence the effectiveness of the EL2N metric.
Finally, when using the memorization factor as a pruning metric, keeping the least memorized samples (bottom subset) generally performs best. Figure 3(b) shows model performances for this metric. We observe that the most competitive variant of the memorization metric is the bottom 70% of the distribution. Memorization never outperforms the no-pruning baseline.
Figure 3: Evaluation of different subset selection criteria for two pruning metrics: (a) EL2N and (b) Memorization.
Figure 2: The effect of employing reference models of different sizes on the computation of pruning perplexity scores and its subsequent influence on test set perplexity. The three subset selection approaches for each set of experiments are showcased separately (keeping bottom, middle, or top of the pruning score distribution).
### Simple Pruning Metrics Outperform More Sophisticated Approaches
In Figure 4 we present results comparing the performance of the best variant of each pruning metric: (1) retaining the middle of the distribution of Perplexity scores from the fully trained 52B reference model, (2) retaining the bottom of the distribution of the Memorization Factor (least memorized samples), and (3) retaining the middle of the distribution of EL2N scores from the 1000-step checkpoint. We also include results for our baselines: a model trained on the entirety of the training data \(\mathcal{D}\) and models trained on randomly pruned data. Our results show that training on the middle subset using Perplexity outperforms other pruning metrics across all dataset sizes. For some variants, it also outperforms training on the entire dataset: at 30% and 50% of the original dataset size, Perplexity outperforms training on the full dataset. Compared with the no-pruning baseline, pruning to the middle 50% of the perplexity distribution leads to a 0.97% improvement in perplexity. Using only the middle 30% of the data achieves nearly the same performance, with a 0.80% improvement over the no-pruning baseline.
Compared with random selection, pruning using Perplexity results in significantly higher model performance than random pruning across all data ratios (Figure 4). For memorization and EL2N pruning metrics, both achieve similar performances to random pruning despite being far more computationally expensive.
### Pruning Benefits from Using Larger Reference Models
Given that the most competitive variant, perplexity, uses a reference model to compute scores, we expect the size of the reference model to have a significant impact on the data pruned. Figure 2 shows the trained model performances after pruning with perplexity calculated by reference models ranging from 124M to 52B parameters. We find that increasing reference model
Figure 4: The top performing variants of the different pruning methods, compared across various dataset sizes. Random pruning and no-pruning are included as baselines. Perplexity-based pruning consistently surpasses both alternative metrics and the no pruning experiments. See Section 4.2 for details on the featured variants.
size improves trained model performance over the no-pruning baseline when either the middle or top subsets are used. Data pruning using the perplexity scores generated from a 52B parameter reference model achieves a 2.2% improvement in perplexity over the best-performing trained model from the 124M parameter reference model experiments. Furthermore, for 13B and 52B reference models, we observe better performances with less training data when keeping the middle and top subsets. For both of these larger models, retaining the middle 30% and 50% of the training data produces pruned models that outperform the pruned models trained on the middle 70% of the training set.
We note that the effects of subset selection, such as the bottom subset performing worse, approximately scale with the size of the reference models. The larger reference models' bottom subset training runs perform even worse than their smaller counterparts when retaining the same percentage of the training set. This overall points to the consistent finding that larger models are better calibrated at computing a useful data pruning ranking.
### Improved Pruning Signals Result from Reference Models Trained on Cleaner Data
In this section we ask: _does the data the reference model is trained on impact the quality of the ranking?_ We compare the perplexity rankings generated by reference models trained on two different corpora: Wikipedia and CommonCrawl. We investigate whether a model trained on Wikipedia, a dataset frequently hand-picked as a high-quality dataset (Xie et al., 2023; Wenzek et al., 2020), generates more effective pruning signals for perplexity rankings. In Figure 5, we compare the performance of the two variants across different pruning percentages and subset selections. We observe that in the two optimal selection variants from the general reference models (middle and top) a model trained on Wikipedia consistently yields lower validation perplexity compared to a model trained on CommonCrawl. Wikipedia's best variant, pruning to the middle 70%, outperforms
Figure 5: Performance of different pruning strategies using two different reference models: one trained on Wikipedia and one trained on CommonCrawl. A reference model trained on Wikipedia (an example of a clean noise-free corpus) achieves consistently lower validation perplexity compared to a reference model trained on a noisier CommonCrawl in our two robust settings (middle and top).
CommonCrawl's best variant, also pruning to the middle 70%, by 0.69%. This finding overall suggests that investing in a high quality reference model to generate rankings results in more effective data pruning. Reference models trained on higher quality data are better at identifying a subset of data points most conducive to model performance.
### Early Reference Model Checkpoints Serve as Effective Scoring Models
Motivated by several works that have found a useful signal in early training checkpoints (Paul et al., 2023; Agarwal et al., 2022; Siddiqui et al., 2022), we investigate whether an early checkpoint of a reference model offers an adequate signal for calculating discriminative pruning scores. We study perplexity and EL2N scores obtained from two early checkpoints: after training on approximately 14% and 55% of the full training dataset (250 and 1000 training steps, respectively). Figure 6 showcases the results of these experiments. Examining the 14% checkpoint for both perplexity and EL2N, we notice minimal variance across percentages and subset selection criteria. Performance across subsets changes considerably less than with either the 55% checkpoint or the fully trained models.
Given this, we deduce that training on only 14% of the data is inadequate for our reference model to offer precise pruning scores. In contrast, the 55% reference models behave similarly to the fully trained models, performing best with the middle subset, worst with the bottom subset, and comparably with the top subset. Fully training the reference model is thus not necessary to achieve comparable performance; halving the reference model training steps proves effective, enabling the use of early checkpoints. In practice, we expect many practitioners to use off-the-shelf models for computing perplexity, and they may not need to bear the cost of pretraining a reference model from random initialization.
We also show performance for EL2N scores averaged across 10 reference models, initialized with different random seeds. We selected the 55% reference models given our previous result.
While the best pruned models using the averaged EL2N score did not outperform the best pruned models trained on only one reference model's EL2N score, the pattern of performance more similarly mirrors what we see with the larger, fully trained reference models. Specifically, in the middle subset, using 50% of the dataset outperforms using 70%. When constrained to the bottom subset, performance more clearly monotonically degrades when using less data than when using the 55% reference model, whereas the earlier checkpoint has comparable performance when retaining 30, 50, and 70% of the data. This implies that averaging scores across reference models helps hone the
Figure 6: The impact of using an early checkpoint of the reference model in pruning based on Perplexity and EL2N metrics.
pruning signal, identifying "easy" or "hard" subsets in ways more similar to the larger models.
### Perplexity-based Pruning Improvements Generalize to Larger Scale Models
We take our strongest pruning variant - perplexity computed using a 52B parameter reference model while retaining the middle subset - to explore the robustness of our findings at a larger scale by validating them on a 1.5B model. Figure 7 shows pruning scaling from 124M to 1.5B parameter models. Training a 1.5B model, we observe that random pruning performs remarkably well, even reaching perplexities below the no-pruning run. Nonetheless, perplexity-based pruning achieves better results than random pruning across all pruning percentages. The improvement of perplexity-based pruning over random pruning follows a consistent pattern for both the 124M and 1.5B models. This demonstrates the scalability of our approach to a large-scale pretraining setting.
### Downstream Evaluation on GLUE
Previously, we demonstrated various ways of pruning the pretraining data and training models with different data sizes. Considering that the pretraining stage primarily focuses on knowledge acquisition (Zhou et al., 2023), we inquire about the potential ripple effects of pruning data during pretraining when these models are subsequently finetuned on downstream tasks. To analyze the impact of different pruning strategies on LLM capabilities, we finetune and evaluate models on a subset of the GLUE tasks (Wang et al., 2019). Results are presented in Table 2. We observe that pruning the pretraining dataset consistently improves performance across all tasks. While no single pruning strategy (combining both pruning metric and percentage of remaining data) stands out as superior across all tasks, the absence of a universally dominant approach is consistent with earlier findings in the literature (Gao, 2021). We observe that retaining only 30% of the least memorized instances yields optimal results for SST2 and WNLI tasks. With perplexity based pruning, the best performance is obtained on QQP and QNLI tasks by keeping 50% and 70% of the training data, respectively. Even random pruning shows improvements in certain tasks, underscoring the significance of downsampling when handling noisy data during the pretraining stage to mitigate
Figure 7: Comparing the best performing pruning method (keeping the middle subset using a 52B parameter reference model) with random pruning at two distinct pruned model scales. The improvement in performance of a perplexity-based pruning approach carries from 124M to 1.5B parameter models.
potential learning degradation.
## 5 Related Work
### Rule-Based Data Pruning in NLP
Significant portions of web-scraped data used for language model pretraining have been shown to be of low quality, e.g., machine-generated spam or pornographic content (Kreutzer et al., 2022). Selection processes to determine what should be included in large-scale datasets have centered on rule-based filters and heuristics (Bane et al., 2022), such as keeping only text written in English (Raffel et al., 2020; Rae et al., 2022) or removing sequences containing blocklisted words (Raffel et al., 2020). There are also quality-based rules such as removing duplicated samples (Zhang et al., 2022) or filtering out sentences that do not contain a certain number of words (Raffel et al., 2020; Rae et al., 2022). Rule-based approaches for data filtering have shown mixed effects on model performance, with some works reporting improvements in language modeling capabilities (Penedo et al., 2023; Raffel et al., 2020), while others do not (Black et al., 2022; Biderman et al., 2023). Moreover, heuristics are prone to undesired outcomes due to their simplicity. For instance, Dodge et al. (2021) show how removing blocklisted words disproportionately removes text from and about minority individuals.
### Metric-Based Data Pruning in NLP
Recent work on metric-based pruning has mainly focused on pruning data from the fine-tuning stage of LLMs (Attendu and Corbeil, 2023; Xie et al., 2023) most probably due to the prohibitive cost of
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & **Data Remaining** & **SST2** & **MRPC** & **QQP** & **QNLI** & **RTE** & **WNLI** \\ \hline \hline
**No Pruning** & 100\% & 78.15\({}_{0.002}\) & 64.32\({}_{0.021}\) & 76.55\({}_{0.001}\) & 65.40\({}_{0.006}\) & 49.69\({}_{0.024}\) & 51.56\({}_{0.040}\) \\ \hline & 70\% & 77.92\({}_{0.002}\) & 65.21\({}_{0.017}\) & 76.58\({}_{0.002}\) & 65.11\({}_{0.006}\) & 49.69\({}_{0.013}\) & 48.44\({}_{0.038}\) \\
**Random** & 50\% & 78.19\({}_{0.003}\) & 65.16\({}_{0.020}\) & 76.40\({}_{0.001}\) & 65.44\({}_{0.006}\) & 49.92\({}_{0.009}\) & 49.69\({}_{0.062}\) \\
**Pruning** & 30\% & 77.29\({}_{0.007}\) & **66.04\({}_{0.017}\)** & 76.36\({}_{0.001}\) & 65.22\({}_{0.005}\) & 51.33\({}_{0.024}\) & 50.31\({}_{0.057}\) \\ & 10\% & 76.44\({}_{0.006}\) & 65.83\({}_{0.021}\) & 75.91\({}_{0.001}\) & 64.40\({}_{0.007}\) & 50.70\({}_{0.007}\) & 50.62\({}_{0.016}\) \\ \hline & 70\% & 77.29\({}_{0.006}\) & 64.38\({}_{0.016}\) & 76.42\({}_{0.001}\) & 66.03\({}_{0.007}\) & 49.06\({}_{0.021}\) & 49.06\({}_{0.042}\) \\
**Memorization** & 50\% & 77.89\({}_{0.006}\) & 65.47\({}_{0.017}\) & 76.51\({}_{0.001}\) & 65.99\({}_{0.005}\) & 49.77\({}_{0.013}\) & 50.31\({}_{0.048}\) \\
**Bottom subset** & 30\% & **78.52\({}_{0.004}\)** & 65.89\({}_{0.016}\) & 76.48\({}_{0.001}\) & 65.91\({}_{0.006}\) & 50.31\({}_{0.009}\) & **54.38\({}_{0.061}\)** \\ & 10\% & 76.64\({}_{0.004}\) & 65.16\({}_{0.015}\) & 76.11\({}_{0.001}\) & 64.61\({}_{0.006}\) & 50.39\({}_{0.016}\) & 51.88\({}_{0.059}\) \\ \hline & 70\% & 78.61\({}_{0.008}\) & 66.46\({}_{0.018}\) & 76.93\({}_{0.001}\) & 67.00\({}_{0.005}\) & 48.67\({}_{0.017}\) & 50.00\({}_{0.058}\) \\
**EL2N** & 50\% & 79.17\({}_{0.007}\) & 65.42\({}_{0.016}\) & 76.35\({}_{0.001}\) & 62.43\({}_{0.007}\) & 51.41\({}_{0.028}\) & 51.56\({}_{0.049}\) \\
**Middle subset** & 30\% & 78.98\({}_{0.005}\) & 65.41\({}_{0.012}\) & **77.47\({}_{0.001}\)** & 68.63\({}_{0.005}\) & 49.69\({}_{0.022}\) & 55.31\({}_{0.067}\) \\ & 10\% & 78.31\({}_{0.006}\) & 63.38\({}_{0.016}\) & 76.93\({}_{0.001}\) & 65.34\({}_{0.006}\) & **51.95\({}_{0.021}\)** & 51.25\({}_{0.064}\) \\ \hline & 70\% & 78.40\({}_{0.004}\) & 64.43\({}_{0.020}\) & 76.68\({}_{0.001}\) & **66.74\({}_{0.007}\)** & 50.16\({}_{0.023}\) & 49.06\({}_{0.012}\) \\
**Perplexity (52B)** & 50\% & 78.01\({}_{0.006}\) & 64.37\({}_{0.021}\) & 76.82\({}_{0.001}\) & 66.00\({}_{0.004}\) & 50.62\({}_{0.023}\) & 50.31\({}_{0.021}\) \\
**Middle subset** & 30\% & 77.34\({}_{0.005}\) & 64.84\({}_{0.023}\) & 76.76\({}_{0.001}\) & 65.89\({}_{0.002}\) & 50.86\({}_{0.009}\) & 50.94\({}_{0.031}\) \\ & 10\% & 77.66\({}_{0.006}\) & 65.36\({}_{0.017}\) & 76.40\({}_{0.001}\) & 66.52\({}_{0.007}\) & 51.17\({}_{0.012}\) & 53.44\({}_{0.040}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean accuracy and standard deviation of the best variants of each pruning algorithm for GLUE classification tasks. Underlined results surpass the baseline performance with no pruning. The best results for each task are marked in bold. Results are reported for 5 runs of each model, trained for 3 epochs with a learning rate of \(1e-5\). |
2309.10539 | OpenMSD: Towards Multilingual Scientific Documents Similarity
Measurement | We develop and evaluate multilingual scientific documents similarity
measurement models in this work. Such models can be used to find related works
in different languages, which can help multilingual researchers find and
explore papers more efficiently. We propose the first multilingual scientific
documents dataset, Open-access Multilingual Scientific Documents (OpenMSD),
which has 74M papers in 103 languages and 778M citation pairs. With OpenMSD, we
pretrain science-specialized language models, and explore different strategies
to derive "related" paper pairs to fine-tune the models, including using a
mixture of citation, co-citation, and bibliographic-coupling pairs. To further
improve the models' performance for non-English papers, we explore the use of
generative language models to enrich the non-English papers with English
summaries. This allows us to leverage the models' English capabilities to
create better representations for non-English papers. Our best model
significantly outperforms strong baselines by 7-16% (in mean average
precision). | Yang Gao, Ji Ma, Ivan Korotkov, Keith Hall, Dana Alon, Don Metzler | 2023-09-19T11:38:39Z | http://arxiv.org/abs/2309.10539v1 | # OpenMSD: Towards Multilingual Scientific Documents Similarity Measurement
###### Abstract
We develop and evaluate multilingual _scientific documents similarity measurement_ models in this work. Such models can be used to find related works in different languages, which can help multilingual researchers find and explore papers more efficiently. We propose the first multilingual scientific documents dataset, _Open-access Multilingual Scientific Documents_ (Open-MSD), which has 74M papers in 103 languages and 778M citation pairs. With Open-MSD, we pretrain science-specialized language models, and explore different strategies to derive "related" paper pairs to fine-tune the models, including using a mixture of _citation_, _co-citation_, and _bibliographic-coupling_ pairs. To further improve the models' performance for non-English papers, we explore the use of _generative language models_ to enrich the non-English papers with English summaries. This allows us to leverage the models' English capabilities to create better representations for non-English papers. Our best model significantly outperforms strong baselines by 7-16% (in mean average precision).
## 1 Introduction
Although English is the predominant language in scientific publications Liu (2017), _diversity and internationalization_ in the scientific community has attracted more attention in recent years Uzuner (2008); Marquez and Porras (2020). Over 75% of researchers use English as a foreign language Baskaran (2016), and they often need to search related papers in both their native languages and in English. Think tanks and decision-making agencies also need to find related works in different languages on the same topic, e.g., natural resource management and biodiversity studies, to ensure their analyses and decisions are unbiased and consider all affected countries Steigerwald et al. (2022). As the volume of non-English papers has rapidly grown since 2000, steadily accounting for 5-10% of all scientific publications Fortunato et al. (2018); Bornmann et al. (2021); Moskaleva and Akoev (2019), the scientific community has an ever stronger need for multilingual _scientific documents similarity measurement_ (SDSM) models, so as to help researchers find, discover, and explore scientific publications in different languages more efficiently. This paper focuses on the development and evaluation of multilingual SDSM models.
The state-of-the-art SDSM models, e.g, Cohan et al. (2020); Ostendorff et al. (2022); Mysore et al. (2022), use Transformer-based Vaswani et al. (2017) text encoders to create dense representations for the papers. Starting from a _pretrained science-specialized language model_ (e.g., SciBERT Beltagy et al. (2019)), they fine-tune a _dual encoder_Gillick et al. (2018) with _contrastive learning objectives_Chopra et al. (2005); Wu et al. (2018), by using "related" and "unrelated" pairs of papers derived from citation-based heuristics or _graph embedding_ algorithms Perozzi et al. (2014); Lerer et al. (2019). These models show promising performance on several SDSM tasks, e.g., citation prediction and paper recommendation. However, all of these SDSM models were trained with English data (e.g., the S2ORC Lo et al. (2020) dataset) and hence only work for English papers.
We identify three main challenges to develop multilingual SDSM models. **(i)** There are no multilingual scientific documents datasets to train and evaluate multilingual SDSM models. **(ii)** There are no science-specialized multilingual language models. **(iii)** As the citation graphs' structures for English and multilingual papers are very different (e.g., non-English papers have much fewer citation links than English papers, see Di Bitterti and Ferreras (2017) and Table 1), it is unclear whether the "related" and "unrelated" pairs extracted by the existing methods are still effective for training
multilingual SDSM models.
In this paper, we propose both data and novel methods for the _multilingual SDSM_ problem. For data, we build the _Open-access Multilingual Scientific Documents_ (OpenMSD) dataset, which has 74M papers and 778M citations. Key statistics of OpenMSD are presented in Table 1. Three SDSM tasks - _citation_, _co-citation_[13], and _bibliographic-coupling_[14] prediction - are derived from OpenMSD. To the best of our knowledge, OpenMSD is the first multilingual scientific documents and citation relations dataset. Scripts for reconstructing the OpenMSD dataset are available at [https://github.com/google-research/google-research/tree/master/OpenMSD.1](https://github.com/google-research/google-research/tree/master/OpenMSD.1)
Footnote 1: We did not directly release the dataset due to copyright and license restrictions.
To develop multilingual SDSM models, we make explorations on three directions. **(i)** Since there are no science-specialized multilingual language models, we systematically explore different training objectives and data sources for developing such models, and benchmark their performance on multilingual SDSM tasks. **(ii)** We systematically investigate the effectiveness and limitations of the latest SDSM models, e.g., Specter [15] and SciNCL [16], in the multilingual setup, and propose new methods to enhance their performance, e.g., use a mixture of different citation-based heuristics to create training examples. **(iii)** To further improve the performance for non-English papers, we propose to use generative language models to create English summaries for non-English papers, and concatenate the summaries to the original (non-English) text, so as to leverage the model's English capabilities to create better representations for non-English papers. Our best models significantly outperform strong baselines (SOTA SDSM models on translated text) by 7-16% (in mean average precision).
## 2 Related Works
Scientific documents dataset. Several scientific documents datasets have been compiled with open-access papers. The _arXiv Dataset_ (arXiv.org, 2023) contains the metadata and PDFs of 1.7M papers, and the _PMC Open Access Subset_[1] contains the full contents of 8M papers from PubMed. Papers on the _ACL Anthology2_ have also been used to build datasets, e.g., the _ANN dataset_[11] with 14K papers and 55K citations, the _ACL ARC dataset_[1] with 11K papers, and the upcoming _ACL 60-60 dataset_[1], which will provide machine translation of 10K paper titles and abstracts randomly selected from the ACL Anthology from 2017-2021, and all the titles and abstracts from ACL 2022 (1.3K) into 60 languages. The Allen AI Institute has published the _S2ORC_ dataset [12] with 81M papers, the _SciDocs_ dataset [15] with over 120K papers and several categories of scientific tasks (classification, SDSM, recommendation), and the _S2AG_ API [13], which allows registered users to get access to the metadata (e.g., title, authors, abstract, but no full content) of 206M papers and their citations (2.5B). However, these datasets either lack citations (the PubMed- and arXiv-based datasets) or only include English papers (the other mentioned datasets).
Footnote 2: [https://aclanthology.org/](https://aclanthology.org/)
OpenMSD is the first dataset with both multilingual papers and their citations. Compared to _S2ORC_, OpenMSD has a comparable number of papers (74M in OpenMSD vs 81M in S2ORC) but 3x more full-content papers (38M in OpenMSD vs 12M in S2ORC) and 2x more citation pairs (759M in OpenMSD vs 381M in S2ORC).
Multilingual Language Models.With the huge success of Transformer-based [13] language models for English tasks, a number of multilingual variants have also been proposed. They mostly follow the same recipe (e.g., architecture, learning objectives, etc.) as their
\begin{table}
\begin{tabular}{l l l} \hline \hline & \#Papers & \\ & \(\bullet\) w/ abstracts & 74M \\ & \(\bullet\) w/ citations & 53M \\ Papers & \(\bullet\) w/ full content & 38M \\ (74M) & \(\bullet\) in English & 65M \\ & \#Abstract avg tokens & 288 \\ & \#Content avg tokens & 5448 \\ & \#Total tokens & 228B \\ & \#Languages & 103 \\ & \#Categories & 340 \\ \hline \multirow{3}{*}{Citation Pairs (778M)} & \#En\(\rightarrow\)En & 759M \\ & \#En\(\rightarrow\)nonEn & 6M \\ \cline{1-1} & \#nonEn\(\rightarrow\)En & 11M \\ \cline{1-1} & \#nonEn\(\rightarrow\)nonEn & 2.5M \\ \hline \hline \end{tabular}
\end{table}
Table 1: Key statistics of the OpenMSD dataset.
original English versions, but are pretrained with multilingual texts. Widely used models include encoder-only models like mBERT Devlin et al. (2019), XLM-R Conneau et al. (2020) and mDEBERTa He et al. (2021), encoder-decoder models like mT5 Xue et al. (2021) and mBART Tang et al. (2020), and decoder-only models like XGLM Lin et al. (2021) and BLOOM Scao et al. (2022). These models are benchmarked on multilingual datasets like XTREME Hu et al. (2020) and SuperGLUE Wang et al. (2019), which include a wide range of tasks like named entity recognition, natural language inference, and question answering. Some of them have also been fine-tuned to tackle downstream science-related tasks, e.g., multilingual acronym extraction in scientific papers Veyseh et al. (2022), and multilingual bias evaluation in social science papers Talat et al. (2022). However, there are no pretrained multilingual language models specialized for scientific documents, and there are no datasets to benchmark their performance on multilingual SDSM tasks.
SDSM models.A classic method to measure the similarity and relatedness between papers is _citation analysis_Zunde (1971); Nicolaisen (2007). Based on the citation links between papers, heuristics have been developed, e.g., _co-citation_ (two papers both cited by some common papers Small (1973)), and _bibliographic-coupling_ (two papers both cite some common papers Kessler (1963)) to find related papers. However, these methods do not work well for papers with sparse citation links, e.g., papers that are newly published, in less-studied topics, or in non-English languages.
Neural-based SDSM methods use different strategies to derive "related" and "unrelated" pairs from the citation relations, and use them to fine-tune science-specialized language models, e.g., SciBERT Beltagy et al. (2019). For example, in the Specter Cohan et al. (2020) method, if paper A cites paper B, B cites C but A does not cite C, then (A, B) is used as a positive pair while (A, C) is used as a negative pair. Ostendorff et al. (2022) proposed the _Scientific documents Neighborhood Contrastive Learning_ (SciNCL) method, which learns _graph embeddings_Lerer et al. (2019) from S2ORC's citation network; with the learned embeddings, they can measure the distance between papers on the citation graph, and hence derive "related" and "unrelated" papers. Mysore et al. (2022) proposed the _Aspire_ method, which considers the papers that are co-cited in the same sentence as positive pairs, because close proximity provides a more precise indication of the relatedness of the papers. Furthermore, as the citing sentences typically describe how the co-cited papers are related, they use the citing sentences as an additional signal to guide the model to learn on which aspects the papers are related. However, Aspire requires tools to parse the citations in papers content, which are unavailable for multilingual scientific documents. Also, all these methods are designed for English SDSM; it remains unclear whether they can be used to train multilingual SDSM models.
## 3 The OpenMSD Dataset
Data sources. The scientific documents in OpenMSD are extracted from two open-access data sources: the 202203 version of the _Unpaywall snapshot3_ (with 140M data entries) and the 2022 April snapshot of the _CrossRef Metadata_ Crossref (2022) (with 134M data entries). Each data entry includes the title, Digital Object Identifier (DOI), URLs and some additional meta information for a scientific publication. 130 million papers occur in both data sources, matched by DOIs. We scrape and clean the contents from the URLs, and remove the papers for which no text is scraped; 74M papers are retained, among which 38M have full content. Citation relations in OpenMSD are extracted from the 2022 October snapshot of the _OpenCitations_ dataset Peroni and Shotton (2020). It has 1.4 billion unique citation pairs, each pair identified by the DOIs of its citing and cited paper. 96% of the DOIs appearing in OpenCitations can be found in Unpaywall or CrossRef. We only keep paper pairs that have both the citing and cited papers' abstracts extracted (as papers without abstracts cannot be used to train SDSM models; see §5), obtaining 778M citation pairs in the end.
Footnote 3: [https://unpaywall.org/products/snapshot](https://unpaywall.org/products/snapshot)
Footnote 4: [https://github.com/google/cld3](https://github.com/google/cld3).
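A simplified sketch of the joining-and-filtering logic described above; the in-memory dictionaries and field names are illustrative stand-ins for the actual snapshot-scale pipeline:

```python
def build_openmsd(unpaywall, crossref, opencitations, abstracts):
    """Join the metadata sources by DOI and keep citation pairs whose citing
    and cited papers both have an extracted abstract.

    unpaywall / crossref: dicts mapping DOI -> metadata record.
    opencitations: iterable of (citing_doi, cited_doi) pairs.
    abstracts: dict mapping DOI -> scraped abstract text ('' if none found).
    """
    papers = {**crossref, **unpaywall}  # entries present in either source
    usable = {doi for doi, text in abstracts.items() if text.strip()}
    citation_pairs = [
        (citing, cited)
        for citing, cited in opencitations
        if citing in papers and cited in papers
        and citing in usable and cited in usable
    ]
    return papers, citation_pairs
```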
Languages & Categories. We use _cld3_4 to detect the languages from papers' titles and abstracts. 103 languages were found, with English (65M) being the predominant language. Fig. 1 shows the sizes of the top 20 languages. Papers' category labels are extracted from CrossRef; 76% of papers have category labels, with each paper
having 1.4 category labels on average. 340 categories are found in total, and the sizes of the top 20 categories are presented in Fig. 2.
We note that OpenMSD is dominated by English resources, which account for 88% papers and 98% of citation pairs (see Table 1). A common strategy to mitigate the data imbalance is to down-sample the English papers Conneau et al. (2020), but it only works well in very large datasets like _mC4_ ((Xue et al., 2021), with 6.6B pages and 6.3T tokens). Recent works, e.g., Wang et al. (2022), even suggest that the English-predominance in the training set does not necessarily hurt the multilingual performance, because fine-tuning multilingual models only with English data can yield strong performance on multilingual tasks. For these reasons, we do not perform any down-sampling over the English resources in OpenMSD. Also, as scientific papers share many common characteristics regardless of their categories, we do not manipulate the category distributions in OpenMSD.
Data split.To use OpenMSD to develop and evaluate multilingual SDSM models, we first remove all papers that do not have citation links with any other papers, as we cannot find their "related" papers; this leaves us with 53M papers in 65 languages. To split these papers into train and test sets, a simple strategy is to randomly sample papers with a predefined ratio (e.g., 10000:1). However, the test set built with this strategy will be dominated by English papers and citations (see Table 1) and hence can hardly be used to evaluate models' performance for non-English papers. Also, the _variance_ of the evaluation results on such test sets will be high, because some small languages only have a few examples in the test set. Furthermore, such test sets cannot be used to evaluate how well the multilingual models can generalize to unseen languages, because most languages will appear in both train and test.
To tackle the aforementioned problems, we split the data into train, _in-distribution test_ (IDT), and _out-of-distribution test_ (ODT) sets. To create the train and IDT sets, we sample papers in the top-30 languages according to their distributions in the papers pool, and ensure that train and IDT have the same set of languages. The remaining papers in the top-30 languages, together with all the papers in the other (35) languages (around 5.5K), are used to build the ODT. In addition, to avoid English-predominance in the ODT set, for English papers, we only keep those that are citing or cited by some non-English papers in the ODT set. The final train, IDT, and ODT sets have 53M, 247K and 85K papers, respectively. The languages in each split are presented in Table 2. With this split strategy, IDT can be used to evaluate the performance of multilingual models in a more "realistic" setup (as its language distribution is close to the real language distribution of the scientific documents), while ODT can be used to benchmark the models' performance for papers in non-English and unseen languages.
With the data splits, we derive three types of related paper pairs in each data split: _direct citations_ (DC), _co-citations_ (CC) and _bibliographic-coupling_ (BC) (see SS2 for definitions). These relations are widely used in citation analysis Nicolaisen (2007) as indicators for related documents. We remove pairs between papers across different splits to avoid data leakage. Also, we remove all English to English pairs in ODT, to make sure that
Figure 1: Top 20 languages in OpenMSD.
Figure 2: Top 20 categories in OpenMSD.
ODT is focused on pairs involving non-English papers. The numbers of mono-lingual and cross-lingual pairs of each relation type and in each data split are presented in Table 3.
We note that finding the related papers in IDT and ODT is much more challenging than in existing datasets like SciDocs Cohan et al. (2020) (see SS2). In SciDocs' SDSM tasks (cite, co-cite, co-view and co-read), papers are segmented into small groups, each with an anchor paper, five related papers to the anchor, and 25 randomly sampled papers; at evaluation time, models need to find the related papers for the anchor just from its group. But in IDT/ODT, papers are not segmented into groups; hence, for each paper, the models need to find its related papers from the whole paper pool (247K papers for IDT and 85K papers for ODT). We believe the setup used in IDT/ODT can better reflect the real use cases.
## 4 Pretraining Multilingual Science-Specialized Language Models
In this section, we develop science-specialized language models, which can be used as starting points to fine-tune multilingual SDSM models. We use _mT5_ Xue et al. (2021) as the baseline and our initial checkpoint, as it is one of the SOTA multilingual language models and its encoder can be easily used for the SDSM task. mT5 is pre-trained on the _mC4_ dataset with the _corrupted span recovery_ (CSR) objective. In CSR, consecutive spans of input tokens are replaced with a mask token and the model is trained to reconstruct the masked-out tokens. We use mT5-base because existing SDSM models Cohan et al. (2020); Ostendorff et al. (2022) use SciBERT, which has a comparable size.
Further Pretraining. As our target task is SDSM, we aim to develop multilingual language models optimized for SDSM. Hence, besides CSR, we also consider the _contrastive loss_ (CL) with sampled in-batch negatives Henderson et al. (2017). CL encourages the model to pull positive examples closer and push negative examples apart. Formally, let \(\{(p_{i},q_{i})\}_{i=1}^{n}\) be a training batch of size \(n\), where \((p_{i},q_{i})\) is the \(i\)th pair of related documents; CL is then defined as
\[\mathcal{L}_{CL}(\theta)=-\log\frac{\exp[(f_{\theta}(p_{i})\cdot f_{\theta}(q_{i}))/\tau]}{\sum_{j=1}^{n}\exp[(f_{\theta}(p_{i})\cdot f_{\theta}(q_{j}))/\tau]}, \tag{1}\]
where \(f\) is a neural encoder parameterized by \(\theta\), '\(\cdot\)' denotes the vector dot-product, and \(\tau\) is the softmax temperature. CL has shown strong performance in both pretraining Lee et al. (2019) and fine-tuning Giorgi et al. (2021); Izacard et al. (2022) dense representation models. To construct the training example pairs \((p_{i},q_{i})\), we randomly extract snippets from the abstracts and contents of all the documents in the train set; the length of each snippet is between 10 and 256 mT5 SentencePiece tokens. Snippets extracted from the same document are treated as positive example pairs. We apply average-pooling to the output of the top transformer layer to get the document representation.
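A minimal sketch of Eq. (1) with in-batch negatives; tensor shapes and names are ours, and `p_emb` / `q_emb` stand for the average-pooled snippet embeddings described above:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(p_emb: torch.Tensor,
                              q_emb: torch.Tensor,
                              tau: float = 1.0) -> torch.Tensor:
    """Contrastive loss of Eq. (1), averaged over a batch of n positive pairs.

    p_emb, q_emb: [n, d] embeddings of the two sides of each pair; for the
    i-th anchor p_i, every q_j with j != i acts as an in-batch negative.
    """
    logits = p_emb @ q_emb.T / tau                           # [n, n] dot products
    targets = torch.arange(p_emb.size(0), device=p_emb.device)
    return F.cross_entropy(logits, targets)                  # -log softmax of the true pair
```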
With the two learning objectives (CSR and CL) and two available datasets (mC4 and OpenMSD), we consider four setups to further pretrain mT5, as summarized in Table 4. We use the same hyperparameters as in mT5: the initial learning rate is 0.001 and decayed using the inverse square-root
\begin{table}
\begin{tabular}{l l l} \hline \hline Split & Languages & \\ \hline Train (53M) \& & \begin{tabular}{l} En, De, Fr, Ja, Es, Pt, Tr, Ru, Id, \\ It, NI, Pl, Uk, Ko, nn, Zh, Cs, \\ Hu, Lt, Da, Sv, fr, Af, Ms, Vi, \\ SI, Fi, Ro, Ar, Gl, \\ \end{tabular} \\ ODT (85K) &
\begin{tabular}{l} En, Sr, De, Fr, He, Es, Pt, Ja, Fa, \\ Ca, Lv, Tr, La, Sk, Su, Zh, Ru, \\ It, Eu, Pi, Ni, Id, Et, Ko, Cs, Bs, \\ Hu, sq, Is, No, Hi, Uk, Tl, Az, \\ Af, Lt, Bs, Mr, Ms, Sv, Be, Da, \\ Co, Mi, Oc, Vi, Cy, Fi, Ia, Kk, \\ Ku, Mk, Ro, Sl, Gl, Ga, Aa, Co, \\ Fo, Ka, El, Ky, Sw, Th, Uz \\ \end{tabular} \\ \hline \hline \end{tabular}
\end{table}
Table 2: Languages (ISO 639-1 code) in different splits of OpenMSD, ordered by their sizes in each split.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & & Train & IDT & ODT \\ \hline \multirow{4}{*}{DC} & \#En\(\rightarrow\)En & 759M & 229K & 0 \\ & \#En\(\rightarrow\)nonEn & 6M & 2K & 3K \\ & \#nonEn\(\rightarrow\)En & 11M & 2K & 6K \\ & \#nonEn\(\rightarrow\)nonEn & 3M & 2K & 3K \\ \hline \multirow{4}{*}{CC} & \#En\(\leftrightarrow\)En & 12B & 117K & 0 \\ & \#En\(\leftrightarrow\)nonEn & 208M & 2K & 1K \\ & \#nonEn\(\leftrightarrow\)nonEn & 21M & 0.4K & 1K \\ \hline \multirow{4}{*}{BC} & \#En\(\leftrightarrow\)En & 63B & 1M & 0 \\ & \#nonEn\(\leftrightarrow\)nonEn & 1B & 11K & 7K \\ \cline{1-1} & \#nonEn\(\leftrightarrow\)nonEn & 29M & 0.4K & 1K \\ \hline \hline \end{tabular}
\end{table}
Table 3: Sizes of direct citation (DC), co-citation (CC) and bibliographic-coupling (BC) pairs in each data split. DC is a directed relation (denoted by \(\rightarrow\)), while CC/BC are non-directional relations (denoted by \(\leftrightarrow\)).
strategy, the batch size is 1K, the temperature \(\tau\) is 1, and the models are trained with 1M steps. 0.1% of the training data are randomly sampled and left out as the dev set; the checkpoints with the best performance on the dev set are used as the final models. All our experiments are performed on a cloud machine with eight TPUv3s.
Results. We compare the mT5-based models against SciBERT-base (Beltagy et al., 2019). To use SciBERT on multilingual papers, we translate the non-English papers' titles and abstracts to English with the Google Translate API5, and use SciBERT to encode the translated text. In line with (Cohan et al., 2020; Ostendorff et al., 2022), we use the title and abstract of each document as input to the pretrained models, and measure the performance of the models by their mean average precision (MAP) and nDCG@10.
Footnote 5: [https://cloud.google.com/translate](https://cloud.google.com/translate).
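For reference, a minimal binary-relevance sketch of the two metrics, computed per anchor paper over its ranked candidate list and then averaged across anchors (function names and interfaces are ours):

```python
import math
from typing import Sequence, Set

def average_precision(ranking: Sequence[str], relevant: Set[str]) -> float:
    """Average precision of one ranked list of document ids."""
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(len(relevant), 1)

def ndcg_at_k(ranking: Sequence[str], relevant: Set[str], k: int = 10) -> float:
    """Binary-relevance nDCG@k of one ranked list."""
    dcg = sum(1.0 / math.log2(rank + 1)
              for rank, doc_id in enumerate(ranking[:k], start=1)
              if doc_id in relevant)
    ideal = sum(1.0 / math.log2(rank + 1)
                for rank in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal > 0 else 0.0
```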
Results on OpenMSD's IDT and ODT sets are presented in the top blocks of Tables 5 and 6, respectively.6 First, we find that vanilla mT5 outperforms SciBERT, and we believe this is mainly because mT5 is trained with more data: SciBERT was pretrained with 3.17B tokens, while mT5 was pretrained with 6.3T tokens. Second, all further pretrained mT5-based models outperform the vanilla mT5, suggesting that all the considered data-objective combinations can benefit mT5's performance on the downstream SDSM tasks. Third, we find that the CL objective only works well with large data; this is reflected by the large performance boost from mT5 to mT5CL (which uses the large mC4 data) and the relatively small improvement from mT5 to mT5SCL (which uses the smaller OpenMSD data). The improvement from mT5CL to mT5CL2 is mostly negligible, which also suggests that the model fails to learn much from the second round of CL training with the (relatively small) OpenMSD data. That said, mT5CL2 is still the model with the best average performance, and hence we use it as the initial checkpoint to fine-tune our SDSM models in the remainder of the paper.
Footnote 6: We also test all models on SciDocs; the results are presented in Table 8 in Appendix A.
## 5 Multilingual Specter Models
Specter (Cohan et al., 2020) is the first Transformer-based method specialized for English scientific NLP tasks, including SDSM. It uses the _triplet hinge loss_ to fine-tune SciBERT (Beltagy et al., 2019). Formally, given a triplet \((p_{i},q_{i}^{+},q_{i}^{-})\), where \(p_{i}\) is the anchor paper, \(q_{i}^{+}\) the positive example to the anchor, and \(q_{i}^{-}\) the negative example, the loss function is
\[\mathcal{L}_{TL}(\theta)=\max\{0,\;sim(f_{\theta}(p_{i}),f_{\theta}(q_{i}^{-}))-sim(f_{\theta}(p_{i}),f_{\theta}(q_{i}^{+}))+m\}, \tag{2}\]
where the hyper-parameter \(m\) denotes the margin, and the training examples are derived with citation-based heuristics (see §2).
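A minimal sketch of Eq. (2), taking dot products as the similarity function for concreteness (the original implementation's choice of `sim` may differ):

```python
import torch

def triplet_hinge_loss(anchor: torch.Tensor,
                       positive: torch.Tensor,
                       negative: torch.Tensor,
                       margin: float = 1.0) -> torch.Tensor:
    """Triplet hinge loss of Eq. (2), averaged over a batch of triplets.

    anchor, positive, negative: [n, d] embeddings f(p_i), f(q_i^+), f(q_i^-);
    margin is the hyper-parameter m.
    """
    sim_pos = (anchor * positive).sum(dim=-1)   # sim(f(p_i), f(q_i^+))
    sim_neg = (anchor * negative).sum(dim=-1)   # sim(f(p_i), f(q_i^-))
    return torch.clamp(sim_neg - sim_pos + margin, min=0.0).mean()
```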
To get multilingual SDSM models, we use the Specter strategy to fine-tune mT5CL2 (see §4). To further improve its performance, instead of only using direct citations (DCs) to extract positive pairs, we explore using co-citations (CCs) and bibliographic-coupling pairs (BCs) in addition to DCs.
* **Use the union of DC, CC, and BC pairs.** For example, we can use both DC and CC pairs as positives, denoted as \(DC\cup CC\). Because in the train set, the number of CC/BC pairs is much larger than DC (see Table 3), we down-sample the over-represented relations so as to have the same number of pairs from each relation type.
* **Use the intersection of DC, CC, and BC pairs**. Suppose a paper A cites paper B and they are both cited by another paper C, then (A, B) is both a DC and CC pair. Pairs fall into more than one relation types at the same time may have higher similarity level, compared to the pairs that only fall into one type of relation. We consider all (four) possible intersection combinations of the relation pairs
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Name & InitCkpt & Obj. & Data & Notes \\ \hline mT5 & Random & CSR & mC4 & Vanilla mT5 \\ mT5CL & mT5 & CL & mC4 & mT5 optimized for generic text similarity measurement \\ mT5Sci & mT5 & CSR & OM & mT5 optimized for scientific texts \\ mT5SCL mT5 & CL & OM & mT5 optimized for scientific similarity measurement \\ mT5CL2 mT5CL & CL & OM & Further pretrain mT5CL with scientific documents \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparing mT5 and its further pretrained models. _OM_ stands for OpenMSD.
to build positive pairs: \(DC\cap CC\), \(DC\cap BC\), \(CC\cap BC\), and \(DC\cap CC\cap BC\).
We use the same strategy as Specter to extract the hard negatives. But instead of using hinge loss (Eq. (2)), we use the the CL objective (Eq. (1)) during fine-tuning, because CL can contrast each positive pair with more negative examples (as all other documents in the batch are used as negatives). Research has shown that replacing hinge loss with CL can significantly improve the performance (Giorgi et al., 2021; Izacard et al., 2022), especially when the batch size is large (Chen et al., 2020). We denote the resulting models _Multilingual Specter (mSpt)_, as they are multilingual models extending and generalizing Specter.
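A sketch of how such a mixed positive-pair pool can be assembled, with down-sampling of over-represented relation types; the sampling details here are our simplification:

```python
import random

def mixed_positive_pairs(relation_pairs, relations=("DC", "CC"), seed=0):
    """Union of citation-relation pairs with per-relation down-sampling.

    relation_pairs: dict mapping relation name ('DC', 'CC', 'BC') to a list of
    (paper_a, paper_b) pairs; every selected relation contributes the same
    number of pairs as the smallest selected relation.
    """
    rng = random.Random(seed)
    pools = [list(relation_pairs[name]) for name in relations]
    target = min(len(pool) for pool in pools)
    positives = []
    for pool in pools:
        rng.shuffle(pool)
        positives.extend(pool[:target])
    return positives

# e.g. the DC ∪ CC variant:
# positives = mixed_positive_pairs({"DC": dc, "CC": cc, "BC": bc}, relations=("DC", "CC"))
```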
Baselines.We compare the mSpt models against two SOTA baselines: Specter (Cohan et al., 2020) and SciNCL (Ostendorff et al., 2022), both applied to the translated texts. To ensure a fair comparison, we re-implement these models, replace SciBERT with T5CL2 (the English-version of mT5CL2, based on T5 (Raffel et al., 2020) and further pretrained with C4 and English papers in OpenMSD), and fine-tune it with OpenMSD's training data using the CL objective. When implementing SciNCL, we increase the graph embedding dimension from 768 (their original setup) to 2048, as the larger dimension size yields better performance and 2048 is the largest dimension we manage to train with reasonable time and resources. More discussions about the SciNCL implementations are in SS6. Our preliminary experiments show that the re-implemented versions outperform the original versions by more than 30% in both MAP and nDCG. We do not re-implement Aspire (Mysore et al., 2022) because its reported performance is close to SciNCL and it needs to use papers cited in the same sentence as positive pairs (see SS2); we are not aware of tools that can reliably parse such information from multilingual papers.
To find the optimal hyper-parameters, we have
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Citation} & \multicolumn{2}{c}{Co-citation} & \multicolumn{2}{c}{Bib-couple} & \multicolumn{2}{c}{Average} \\ & MAP & nDCG & MAP & nDCG & MAP & nDCG & MAP & nDCG \\ \hline \multicolumn{8}{l}{**Pretrained Language Models**} \\ SciBERT w/ translate & 0.81 & 1.13 & 0.42 & 0.90 & 0.44 & 1.38 & 0.56 & 1.14 \\ mT5 & 0.95 & 1.32 & 0.47 & 1.01 & 0.47 & 1.47 & 0.63 & 1.27 \\ mT5Sci & 1.41 & 1.93 & 0.71 & 1.48 & 0.66 & 2.01 & 0.93 & 1.81 \\ mT5SCL & 1.35 & 1.86 & 0.62 & 1.33 & 0.62 & 1.92 & 0.86 & 1.70 \\ mT5CL & 10.11 & 13.28 & **4.26** & **7.82** & **3.55** & **8.40** & **5.97** & 9.83 \\ mT5CL2 & **10.24** & **13.38** & 4.20 & 7.78 & 3.48 & 8.37 & **5.97** & **9.84** \\ \hline \multicolumn{8}{l}{**SOTA Baselines**} \\ Cohan et al. (2020) w/ translation & **17.87** & **22.74** & **7.16** & **12.23** & **6.15** & **12.30** & **10.39** & **15.76** \\ Ostendorff et al. (2022) w/ translation & 10.33 & 13.58 & 4.41 & 8.04 & 3.64 & 8.42 & 6.13 & 10.01 \\ \hline \multicolumn{8}{l}{**Multilingual Specter (mSpt)**} \\ mSpt\({}_{DC}\) & 18.52 & 23.51 & 7.38 & 12.52 & 6.23 & 12.41 & 10.71 & 16.15 \\ mSpt\({}_{CC}\) & 16.83 & 21.48 & 7.19 & 12.05 & 5.91 & 11.57 & 9.98 & 15.03 \\ mSpt\({}_{BC}\) & 13.05 & 16.99 & 5.49 & 9.61 & 4.97 & 10.27 & 7.84 & 12.29 \\ mSpt\({}_{DC\cup CC}\) & **19.06** & **24.15** & **7.67** & **12.81** & **6.38** & **12.44** & **11.04** & **16.47** \\ mSpt\({}_{DC\cup BC}\) & 18.70 & 23.73 & 7.29 & 12.34 & 6.16 & 12.14 & 10.72 & 16.07 \\ mSpt\({}_{CC\cup BC}\) & 15.38 & 19.74 & 6.69 & 11.35 & 5.54 & 11.10 & 9.20 & 14.06 \\ mSpt\({}_{DC\cup CC\cup BC}\) & 17.77 & 22.61 & 7.24 & 12.21 & 6.07 & 11.94 & 10.36 & 15.59 \\ mSpt\({}_{DC\cap CC}\) & 18.73 & 23.77 & 7.28 & 12.41 & 6.02 & 12.31 & 10.68 & 16.10 \\ mSpt\({}_{DC\cap BC}\) & 18.67 & 23.73 & 7.20 & 12.27 & 6.08 & 12.18 & 10.65 & 16.06 \\ mSpt\({}_{CC\cap BC}\) & 17.30 & 22.07 & 7.17 & 12.13 & 6.14 & 11.99 & 10.20 & 15.40 \\ mSpt\({}_{DC\cap NC\cap BC}\) & 18.62 & 23.66 & 7.25 & 12.39 & 6.06 & 12.15 & 10.64 & 16.07 \\ \hline \multicolumn{8}{l}{**mSpt + Enriched Documents**} \\ mSpt\({}_{DC\cup CC}\) + TopNSumm\({}_{64}\) & 19.03 & 24.13 & 7.47 & 12.53 & 6.33 & 12.33 & 10.94 & 16.33 \\ mSpt\({}_{DC\cup CC}\) + PaLM2Summ\({}_{64}\) & 19.08 & 24.19 & 7.64 & 12.82 & **6.39** & 12.43 & 11.04 & 16.48 \\ mSpt\({}_{DC\cup CC}\) + TopNSumm\({}_{128}\) & 19.09 & 24.21 & 7.67 & 12.88 & 6.38 & 12.44 & 11.05 & 16.51 \\ mSpt\({}_{DC\cup CC}\) + PaLM2Summ\({}_{128}\) & **19.22** & **24.38** & **7.70** & **12.92** & 6.38 & **12.45** & **11.10** & **16.58** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance (in %) on IDT. All results are averaged over 5-10 runs with different random seeds.
used batch sizes 256, 512, 1K, 2K and 4K, and initial learning rates \(10^{-n}\), where \(n=1,2,\cdots,7\). The inverse square-root learning rate decay strategy is used, with decay factor \(5\times 10^{-5}\), and the minimum learning rate is set to \(10^{-8}\). We find that batch sizes \(\geq 1K\) yield similar performance, and learning rate \(10^{-2}\) yields the best performance on the dev set. Each model (including mSpt and the re-implemented SOTA baselines) is fine-tuned for up to 100K steps, in which the first 1.5K steps are used for warm-up. Checkpoints with the best performance on the dev set are used in the end.
Results.The results on IDT and ODT are presented in Table 5 and 6, respectively. We find that the mSpt models' performance is significantly better7 than all pretrained models, suggesting that using either DC, CC or BC to fine-tune mT5CL2 can benefit the performance.8 Compared to the SOTA baselines, on both IDT and ODT, the best mSpt model yields significantly better performance. In particular, on IDT, the best mSpt model is around 5% better than the best baseline, while on ODT the margin is increased to 8%, suggesting that mSpt performs particularly better for non-English and unseen-language papers. Among the mSpt models, using both DC and CC as positives (i.e., \(DC\cup CC\)) yields the best performance, better than using of any of the relation types alone or in intersections. We believe this is because different citation relations have complementary characteristics; learning from a proper mixture of relations can help the model learn from each relation type, yielding more robust performance even with out-of-distribution data. This finding is significant as existing works only use DC [14] or
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Citation} & \multicolumn{2}{c}{Co-citation} & \multicolumn{2}{c}{Bib-couple} & \multicolumn{2}{c}{Average} \\ & MAP & nDCG & MAP & nDCG & MAP & nDCG & MAP & nDCG \\ \hline \multicolumn{10}{l}{**Pretrained Language Models**} \\ SciBERT w/ translate & 1.53 & 1.95 & 0.81 & 1.41 & 0.40 & 0.62 & 0.91 & 1.33 \\ mT5 & 1.62 & 2.07 & 0.99 & 1.73 & 0.46 & 0.72 & 1.02 & 1.51 \\ mT5Sci & 1.97 & 2.48 & 1.34 & 2.13 & 0.59 & 0.94 & 1.30 & 1.85 \\ mT5SCL & 2.02 & 2.52 & 1.27 & 2.05 & 0.56 & 0.85 & 1.28 & 1.81 \\ mT5CL & **7.83** & **9.51** & **4.04** & **5.92** & 1.61 & 2.55 & 4.49 & 5.99 \\ mT5CL2 & 7.81 & 9.45 & 4.01 & 5.81 & **1.77** & **2.75** & **4.53** & **6.00** \\ \hline \multicolumn{10}{l}{**SOTA Baselines**} \\ Cohan et al. (2020) w/ translation & **16.54** & **19.91** & **6.89** & **9.58** & **3.47** & **5.02** & **8.97** & **11.50** \\ Ostendorff et al. (2022) w/ translation & 7.92 & 9.62 & 4.11 & 6.00 & 1.76 & 2.71 & 4.60 & 6.11 \\ \hline \multicolumn{10}{l}{**Multilingual Specter (mSpt)**} \\ mSpt\({}_{DC}\) & 17.64 & 21.15 & 7.15 & 9.89 & 3.70 & 5.33 & 9.50 & 12.12 \\ mSpt\({}_{CC}\) & 15.21 & 18.39 & 6.41 & 8.92 & 3.22 & 4.71 & 8.28 & 10.67 \\ mSpt\({}_{BC}\) & 11.72 & 14.35 & 5.08 & 7.28 & 3.00 & 4.33 & 6.60 & 8.65 \\ mSpt\({}_{DC\cup CC}\) & **18.03** & **21.63** & **7.42** & **10.11** & 3.65 & 5.26 & **9.70** & **12.33** \\ mSpt\({}_{DC\cup BC}\) & 17.77 & 21.35 & 7.09 & 9.69 & **3.73** & 5.28 & 9.53 & 12.11 \\ mSpt\({}_{CC\cup BC}\) & 14.32 & 17.32 & 6.13 & 8.48 & 3.22 & 4.70 & 7.89 & 10.17 \\ mSpt\({}_{DC\cup CC\cup BC}\) & 17.03 & 20.40 & 6.75 & 9.34 & 3.63 & 5.20 & 9.14 & 11.65 \\ mSpt\({}_{DC\cap CC}\) & 17.29 & 20.84 & 6.85 & 9.55 & 3.51 & 5.08 & 9.22 & 11.82 \\ mSpt\({}_{DC\cap BC}\) & 17.51 & 21.00 & 6.97 & 9.69 & 3.74 & **5.42** & 9.41 & 12.04 \\ mSpt\({}_{CC\cap BC}\) & 15.19 & 18.24 & 6.41 & 8.75 & 3.28 & 4.70 & 8.29 & 10.56 \\ mSpt\({}_{DC\cap NC\cap BC}\) & 16.87 & 20.16 & 6.59 & 9.37 & 3.53 & 5.10 & 9.00 & 11.54 \\ \hline \multicolumn{10}{l}{**mSpt + Enriched Documents**} \\ mSpt\({}_{DC\cup CC}\) + TopNSumm\({}_{64}\) & 18.30 & 22.02 & 7.23 & 9.87 & 3.74 & 5.35 & 9.76 & 12.41 \\ mSpt\({}_{DC\cup CC}\) + PaLM2Summ\({}_{64}\) & 18.85 & 22.68 & 7.29 & 9.97 & 3.97 & 5.68 & 10.04 & 12.78 \\ mSpt\({}_{DC\cup CC}\) + TopNSumm\({}_{128}\) & 18.55 & 22.31 & 7.23 & 9.88 & 3.99 & 5.72 & 9.92 & 12.64 \\ mSpt\({}_{DC\cup CC}\) + PaLM2Summ\({}_{128}\) & **19.46** & **23.40** & **7.64** & **10.53** & **4.05** & **5.81** & **10.38** & **13.24** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance (in %) on ODT. All results are averaged over 5-10 runs with different random seeds.
CC (Mysore et al., 2022) pairs as positive training examples.
## 6 The Applicability of SciNCL on Multilingual SDSM
Although SciNCL (Ostendorff et al., 2022) is reported to achieve the SOTA performance in the (English-only) SciDocs benchmark, our experiments in §5 show that it significantly underperforms the other fine-tuned models on multilingual SDSM. We investigate the reason in this section.
SciNCL uses _graph embedding_ models to derive training pairs. They first run BigGraph (Lerer et al., 2019) on the citation graph in S2ORC, so as to learn an embedding for each node (i.e., paper). With the nodes' embeddings, they use fast nearest neighbor search algorithms (e.g., (Xiong et al., 2020)) to find the top-\(K\) neighbors for each node, and extract positive and negative nodes therefrom: For example, for each paper, its \(i\)-th closest to \((i+n)\)-th closest papers are used as positives, while its \(k\)-th to \((k+n)\)-th closest papers are used as (hard) negatives, where \(i,k,n\in\mathbb{N}^{+}\) are hyper-parameters. With systematic hyper-parameter search, they find that \(i=20\), \(k=2000\) and \(n=5\) yield the best performance. When we re-implement SciNCL, we explore some other hyper-parameters but find the ones used in the original work yield the best performance.
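To make this pair-derivation scheme concrete, the sketch below selects positives and hard negatives from precomputed node embeddings with a brute-force nearest-neighbor search. It is only an illustration of the strategy described above, not SciNCL's actual implementation: the random embedding matrix stands in for the BigGraph output, and a production pipeline would use an approximate nearest-neighbor index instead of a dense similarity matrix.

```python
import numpy as np

def sample_pairs(embeddings, i=20, k=2000, n=5):
    """Derive positives/hard negatives per paper from graph embeddings.

    Positives are the i-th to (i+n)-th nearest neighbors and hard negatives
    the k-th to (k+n)-th, mirroring the strategy described above.
    """
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                  # brute-force cosine similarity
    np.fill_diagonal(sims, -np.inf)           # exclude the paper itself
    order = np.argsort(-sims, axis=1)         # neighbors sorted by similarity

    pairs = []
    for paper in range(embeddings.shape[0]):
        positives = order[paper, i - 1:i - 1 + n]
        negatives = order[paper, k - 1:k - 1 + n]
        pairs.append((paper, positives, negatives))
    return pairs

# Toy usage with random vectors standing in for BigGraph node embeddings.
rng = np.random.default_rng(0)
toy_embeddings = rng.normal(size=(3000, 32))
print(sample_pairs(toy_embeddings)[0])
```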
From the analyses above, we can view the graph embedding model as a _teacher model_ and SciNCL as the _student_. Hence, to understand why the student models perform poorly, we benchmark the teacher model by running the graph embedding algorithm on the train set of OpenMSD, and evaluate its performance on ODT. Table 7 presents the performance with different graph embedding sizes. Comparing its performance with the other systems on ODT (see Table 6), even with dimension size 2048 (the largest dimension size we can run in reasonable time), the graph embedding's performance is worse than that of all the other fine-tuned models, and we believe this causes the poor performance of the student SciNCL model. We speculate that one reason for the poor performance of the graph embedding models is that the citation graph in OpenMSD is highly _heterogeneous_: for example, the citation graph is much denser in the areas with English papers (each English paper has, on average, 12 out-going and 12 in-going citation links; see Table 1) than in the areas with non-English papers (each non-English paper has, on average, only one out-going and one in-going citation link in OpenMSD). Hence, the related/unrelated pairs derived from the graph embeddings fail to generalize well to papers in different languages. More rigorous investigations are required to better understand the reasons, e.g., analyzing the topological structures of the citation graphs and systematically comparing different graph embedding algorithms. This is beyond the scope of this work, and we encourage future work on it.
## 7 Enrich the Non-English Documents with English Summaries
Because OpenMSD is dominated by English papers and pairs (see §3), models trained with OpenMSD are exposed more to English training examples. We aim to leverage the model's (relatively stronger) English capabilities to improve its (relatively weaker) performance on non-English documents. To this end, inspired by the recent works on _cross-lingual summarization_ (Zhu et al., 2019; Wang et al., 2022), we propose to create English summaries for the non-English papers, and concatenate the summaries to the original (non-English) text to create _enriched documents_.
As there are no cross-lingual scientific document summarization datasets or models available, we decide to use two _zero-shot_ methods to generate English summaries. **(i)** Using the English translation of the top-N tokens as the summary. This is a simple yet strong baseline widely used in summarization (Gao et al., 2020; Bao et al., 2022). **(ii)** Prompting a large generative language model to write English summaries. We use _Flan-PaLM2_ (Anil et al., 2023) (version _Otter_ on Google Cloud API9), because recent work by Zhang et al. (2023) suggests that even smaller Flan-tuned language models can generate high-quality summaries, better than their larger but non-Flan-tuned counterparts. The English summary is then concatenated to the original text in the following format: _Title: {title_text}. Abstract: ({English_summary_text}) {abstract_text}_. Note that English papers are not augmented with any summaries.
Footnote 9: [https://cloud.google.com/vertex-ai](https://cloud.google.com/vertex-ai)
We consider summaries with two different lengths: 64 and 128 tokens. To get the top-N translation summaries, we simply truncate the translated abstracts to the target lengths. To prompt Flan-PaLM2 to generate summaries, we experiment with a few prompts and finally use two prompts to generate the short and long summaries, respectively: **(i)** _Summarize the passage below with no more than 30 words in English._ **(ii)** _Extract the three most important findings from the passage below, and translate them to English._ We find that the model tends to generate over-length summaries: the average token numbers of the summaries generated with the two prompts above are 71 and 138, respectively. Over-length tokens are removed to get the final summaries.
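A minimal sketch of how an enriched document could be assembled is shown below. Only the truncation lengths and the concatenation format follow the description above; `summarize_in_english` is a hypothetical placeholder for the Flan-PaLM2 prompting (or, for the baseline, truncated translation), and the language check is simplified.

```python
MAX_SUMMARY_TOKENS = 64  # or 128 for the longer summary variant

def build_enriched_document(title, abstract, lang, summarize_in_english):
    """Concatenate an English summary to a non-English paper's text."""
    if lang == "en":
        # English papers are used as-is, without any summary.
        return f"Title: {title}. Abstract: {abstract}"
    summary = summarize_in_english(abstract)          # zero-shot summary
    tokens = summary.split()
    if len(tokens) > MAX_SUMMARY_TOKENS:              # drop over-length tokens
        summary = " ".join(tokens[:MAX_SUMMARY_TOKENS])
    return f"Title: {title}. Abstract: ({summary}) {abstract}"

# Toy usage with a dummy summarizer standing in for Flan-PaLM2.
doc = build_enriched_document(
    "Ein Beispieltitel", "Eine lange deutsche Zusammenfassung ...", "de",
    summarize_in_english=lambda text: "A short English summary of the paper.")
print(doc)
```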
The enriched documents are used to train and test mSpt\({}_{DC\cup CC}\), the strongest variant of mSpt. The results of the proposed method on IDT and ODT are presented in Tables 5 and 6, respectively. Firstly, we find that using the Flan-PaLM2-generated summaries consistently yields better performance than the top-N translation summaries; we believe this is because Flan-PaLM2 considers the whole abstract when generating the summaries, and hence its summaries are more informative and comprehensive than the top-N translation summaries. Secondly, compared to the SOTA baselines, using the enriched documents significantly boosts the MAP scores, by 7% and 16% in IDT and ODT, respectively. Compared to the vanilla mSpt\({}_{DC\cup CC}\), using Flan-PaLM2-generated summaries yields marginally (not significantly) better performance in IDT, and significantly better performance in ODT. These results suggest that enriching the non-English papers with high-quality English summaries can significantly improve the multilingual models' performance for papers in non-English and unseen languages.
## 8 Conclusion
In this work, we proposed both datasets and novel methods for the multilingual _scientific documents similarity measurement_ (SDSM) problem. For data, we built _OpenMSD_, the first multilingual scientific documents dataset, and derived three SDSM tasks therefrom. For methods, we developed science-specialized multilingual language models optimized for the SDSM tasks, and fine-tuned them with related paper pairs derived from different strategies. Unlike existing works that use either citation (Cohan et al., 2020) or co-citation (Mysore et al., 2022) pairs alone, we found that using a mixture of them yields better performance. To further improve the model's performance for non-English documents, we explored the use of _generative language models_ to enrich the non-English papers with English summaries. Compared to SOTA baselines, our best model improves the performance by 7-16% in MAP.
Our dataset and methods can be applied to other tasks beyond SDSM. For example, OpenMSD can be used to pretrain general-purpose large language models to improve their performance in reasoning and science-related tasks (Taylor et al., 2022; Singhal et al., 2022). The technique of enriching non-English documents with English summaries can be applied to tasks like _multilingual document clustering_ (Wei et al., 2008) and _cross-lingual information retrieval_ (Vulic and Moens, 2015). More generally, we believe it provides a novel paradigm for leveraging generative models to enhance text similarity measurement models; we hope this work can encourage more research in this direction.
|
2309.11627 | GenLayNeRF: Generalizable Layered Representations with 3D Model
Alignment for Multi-Human View Synthesis | Novel view synthesis (NVS) of multi-human scenes imposes challenges due to
the complex inter-human occlusions. Layered representations handle the
complexities by dividing the scene into multi-layered radiance fields, however,
they are mainly constrained to per-scene optimization making them inefficient.
Generalizable human view synthesis methods combine the pre-fitted 3D human
meshes with image features to reach generalization, yet they are mainly
designed to operate on single-human scenes. Another drawback is the reliance on
multi-step optimization techniques for parametric pre-fitting of the 3D body
models that suffer from misalignment with the images in sparse view settings
causing hallucinations in synthesized views. In this work, we propose,
GenLayNeRF, a generalizable layered scene representation for free-viewpoint
rendering of multiple human subjects which requires no per-scene optimization
and very sparse views as input. We divide the scene into multi-human layers
anchored by the 3D body meshes. We then ensure pixel-level alignment of the
body models with the input views through a novel end-to-end trainable module
that carries out iterative parametric correction coupled with multi-view
feature fusion to produce aligned 3D models. For NVS, we extract point-wise
image-aligned and human-anchored features which are correlated and fused using
self-attention and cross-attention modules. We augment low-level RGB values
into the features with an attention-based RGB fusion module. To evaluate our
approach, we construct two multi-human view synthesis datasets; DeepMultiSyn
and ZJU-MultiHuman. The results indicate that our proposed approach outperforms
generalizable and non-human per-scene NeRF methods while performing at par with
layered per-scene methods without test time optimization. | Youssef Abdelkareem, Shady Shehata, Fakhri Karray | 2023-09-20T20:37:31Z | http://arxiv.org/abs/2309.11627v1 | # GenLayNeRF: Generalizable Layered Representations with 3D Model Alignment for Human View Synthesis
###### Abstract
Novel view synthesis (NVS) of multi-human scenes imposes challenges due to the complex inter-human occlusions. Layered representations handle the complexities by dividing the scene into multi-layered radiance fields, however, they are mainly constrained to per-scene optimization, making them inefficient. Generalizable human view synthesis methods combine the pre-fitted 3D human meshes with image features to reach generalization, yet they are mainly designed to operate on single-human scenes. Another drawback is the reliance on multi-step optimization techniques for parametric pre-fitting of the 3D body models that suffer from misalignment with the images in sparse view settings causing hallucinations in synthesized views. In this work, we propose GenLayNeRF, a generalizable layered scene representation for free-viewpoint rendering of multiple human subjects which requires no per-scene optimization and very sparse views as input. We divide the scene into multi-human layers anchored by the 3D body meshes. We then ensure pixel-level alignment of the body models with the input views through a novel end-to-end trainable module that carries out iterative parametric correction coupled with multi-view feature fusion to produce aligned 3D models. For NVS, we extract point-wise image-aligned and human-anchored features which are correlated and fused using self-attention and cross-attention modules. We augment low-level RGB values into the features with an attention-based RGB fusion module. To evaluate our approach, we construct two multi-human view synthesis datasets: DeepMultiSyn and ZJU-MultiHuman. The results indicate that our proposed approach outperforms generalizable and non-human per-scene NeRF methods while performing at par with layered per-scene methods without test time optimization.
## 1 Introduction
Novel view synthesis (NVS) of scenes with human subjects has numerous applications in telepresence, virtual reality, etc. The extensions [6, 15, 26, 37] of the well-known NeRF [21] architecture achieved competitive synthesis results using sparse views, yet suffered with human subjects due to their complex motions. NeuralBody [25] anchored NeRF with pre-fitted 3D human models to regularize the training producing more photo-realistic output. A main constraint was the inefficient per-scene optimization requirement. Recently, state-of-the-art human-based synthesis methods [3, 12, 20, 44] merged the concepts of the human model anchors and the image features to generalize to unseen poses and human identities. However, they were only designed to operate on scenes with single human subjects. Multi-human scenes introduce additional challenges due to how humans occlude each other and the complexity of their close interactions. Layered scene representations [42] are a possible solution to operate in the complex multi-person setting. Shuai et al. [30] utilized a layered architecture by representing the human entities using NeuralBody [25] and weakly supervising the human instance segmentation. Nevertheless, the method suffers from the per-scene optimization problem which hinders its applicability to wider real-world domains. Another issue with existing Human NVS methods [12, 30, 44] is the reliance on multi-step optimization methods [2, 29, 43] for the estimation of pre-fitted 3D body models. Such methods hinder the ability of end-to-end learning and suffer from error accumulation throughout the fitting steps which lead to inaccurate parameter fitting and misaligned body models and consequently hurts the synthesis quality of the novel views.
In this paper, we propose generalizable layered neural radiance fields to achieve free-viewpoint rendering of multi-human subjects, while requiring no test-time optimization for novel subjects or poses. We fuse the concepts of implicit feature aggregation and layered scene representations to synthesize novel views of complex human interactions from very sparse input streams. Specifically, we divide the scene into a set of human layers anchored by the 3D human body meshes. We then introduce a novel end-to-end trainable human-image alignment module that utilizes an iterative feedback loop [41] to correct parametric errors in
the pre-fitted human models and produces pixel-aligned human layers for better synthesis quality. For view synthesis, we extract a set of point-wise image-aligned and human-anchored features for all views and effectively aggregate them using self-attention and cross-attention modules. We also include an RGB fusion module that embeds the fused features with low-level pixel information from the images for retaining high-frequency details.
Our main contributions are summarized as follows:
* We propose a generalizable layered representation with a novel combination of three attention-based feature fusion modules for free-viewpoint rendering of multi-human scenes from sparse input views while operating on novel human subjects and poses.
* We present a novel human-image alignment module that corrects misalignment errors in the pre-fitted human models through an end-to-end trainable iterative feedback loop coupled with multi-view self-attention feature fusion.
* We surpass state-of-the-art generalizable and non-human per-scene NeRF methods while performing at par with the multi-human per-scene methods without requiring long per-scene training procedures.
## 2 Related Work
### Neural View Synthesis
Recent progress has been made in utilizing neural networks along with differentiable rendering for novel view synthesis [1, 5, 13, 32, 33, 36, 38]. NeRF [21] encapsulated the full continuous 5D radiance field of scenes inside a Multi-Layer Perceptron (MLP). They achieved photo-realistic results but failed to work on highly deformable scenes with non-static subjects. Deformable NeRF methods [23, 26] modeled the dynamic subjects by training a deformation network that transforms 3D points to a canonical space before querying the MLP. Yet, they show poor synthesis quality for human subjects with complex deformations. NeuralBody [25] anchored NeRF with a deformable human model [18] to provide a prior over the human body shape and correctly render self-occluded regions. However, they lacked generalization capabilities for novel scenes. Per-scene optimization NeRF methods [21, 25, 26, 30] need to be trained from scratch on each scene, which is often impractical due to the large time and computational costs. Generalizable NeRF methods [34, 35, 39] offer a solution by conditioning NeRF on pixel-aligned features generated from the input images which enhanced the results for unseen scenes with sparse input views. Recently, NHP [12] combined the 3D human mesh with image features to accurately represent complex body dynamics and generalize to novel human subjects and poses. HumanNeRF [44] enhanced the quality through efficient fine-tuning procedures and neural appearance blending techniques. However, the blending module operates on pre-scanned synthetic data with accurate depth maps and cannot be extended to real-world data. One limitation of state-of-the-art generalizable human methods [3, 12, 44] lies in the inability to be extended to multi-human scenes which are challenging due to the inter-human occlusions and interactions.
Layered scene representations [19] were proposed to handle complex scenes with multiple human subjects. ST-NeRF [42] modeled each human layer using a deformable model similar to D-NeRF [26] to achieve editable free-viewpoint rendering. Recently, Shuai et al. [30] extended ST-NeRF by modeling the human subjects using NeuralBody [25] and predicted human segmentation masks as part of the network training. The restriction of both methods is requiring per-scene training procedures for learning, yielding them inefficient to use. We tackle the existing research gap by proposing a generalizable layered scene representation for synthesizing novel views of multi-human subjects through a combination of image features and layered neural radiance fields. We achieve free-viewpoint rendering for scenes with an arbitrary number of humans from very sparse input views, while generalizing to novel subjects and poses at test time without extra optimization.
### Human Mesh Recovery
Mesh Recovery of human subjects has grabbed significant research attention due to its adoption in 3D geometry reconstruction and novel view synthesis. One direction of approaches solves the task through a multi-step optimization process which fits the parametric human models (i.e. SMPL [18]) based on 2D observations such as keypoints or silhouettes [8, 31]. Bogo et al. [2] utilized 2D joint predictions from monocular input to guide the SMPL fitting process for single-human scenes. Zhang et al. [43] tackled a more challenging multi-person setting by leveraging triangulated 3D keypoints and a two-step parametric fitting process for enhanced results. The main issues with multi-step methods are breaking the end-to-end learning and the error accumulation throughout the steps, especially in sparse-view datasets. Specifically, 2D keypoints predictions could suffer from inaccurate joints in certain views which hurts the triangulation process leading to low-quality 3D keypoint predictions. The parametric model fitting is subject to errors due to the abundance of hyperparameters [43] that require meticulous fine-tuning and the accumulated errors from the previous steps. On the other hand, regression-based approaches aim for better human-image alignment by directly regressing the body models from input images [10, 11, 16, 17, 40, 41]. PyMAF [41] introduced a feedback loop with multi-scale contexts to correct parametric deviations for producing highly aligned meshes from monocular input images for single-humans.
Existing Human NVS approaches [3, 12, 20, 25, 44] utilize pre-fitted 3D observations computed using multi-step optimization approaches [29, 43]. However, in sparse-view settings, the pre-fitted predictions suffer from misalignment errors that consequently hurt the quality of the synthesized views. Mihajlovic et al. [20] utilized 3D keypoints instead of body models to avoid parametric fitting errors. L-NeRF [30] introduced a time-synchronization step that accounts for the multi-view image de-synchronization by producing a per-view body model using predicted time offsets. However, they do not account for parametric errors occurring in the multi-step fitting process. In this work, we propose a novel regression-based human-image alignment module that ensures the correction of parametric errors, leading to aligned body models with multi-view input.
## 3 Methodology
### Problem Definition
Given a synchronized set \(\Omega\) of frames \(I\) taken from \(B\) sparse input viewpoints of a scene with an arbitrary number \(N\) of humans, such that \(\Omega=\{I_{1},..,I_{B}\}\), our target is to synthesize a novel view frame \(\{I_{q}\}\) of the scene from a query viewing direction \(q\). Each input viewpoint \(b\) is represented by the corresponding camera intrinsics \(K\), and camera rotation \(R\) and translation \(t\), where \(b=\{K_{b},[R_{b}|t_{b}]\}\). The \(N\) pre-fitted 3D human body meshes are given for each input frame. Each human \(h\) is represented using the SMPL [18] model, a deformable skinned model defined in terms of pose and shape parameters \(\Theta^{0}_{h}\); it is also vertex-based, with each model \(s_{h}\) consisting of 6,480 vertices, such that \(s_{h}\in\mathbf{R}^{6,480\times 3}\). For an input view image \(I_{b}\in\mathbf{R}^{H\times W\times 3}\) with height \(H\) and width \(W\), we extract a multi-scale feature pyramid \(I^{{}^{\prime}}_{b,\{0:T-1\}}\) with \(T\) levels using a ResNet34 [9] backbone network \(f\), pre-trained on ImageNet, such that \(I^{{}^{\prime}}_{b,\{0:T-1\}}=f(I_{b})\). The operation is carried out for all input views \(b\) in \(\{1,..,B\}\). A full overview of the proposed architecture is shown in Fig. 1.
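As a rough sketch of this feature-extraction step, the snippet below builds a small feature pyramid from a ResNet34 backbone. Which intermediate stages form the \(T\) pyramid levels (and their channel widths) is an assumption made for illustration, not a detail taken from the paper.

```python
import torch
import torchvision

class FeaturePyramid(torch.nn.Module):
    """Extract a T-level feature pyramid I'_{b,0:T-1} from each input view I_b."""

    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet34(weights="IMAGENET1K_V1")
        self.stem = torch.nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        # Assumption: the first three residual stages serve as the T = 3 levels.
        self.stages = torch.nn.ModuleList(
            [backbone.layer1, backbone.layer2, backbone.layer3])

    def forward(self, image):                  # image: (B, 3, H, W), B views
        feats, x = [], self.stem(image)
        for stage in self.stages:
            x = stage(x)
            feats.append(x)                    # progressively coarser maps
        return feats                           # [I'_{b,0}, ..., I'_{b,T-1}]

views = torch.randn(3, 3, 256, 256)            # B = 3 sparse input views
print([f.shape for f in FeaturePyramid()(views)])
```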
### Human-Image Alignment Module
Pre-fitted human body models can suffer from misalignment with the input images due to error accumulation throughout the multi-step fitting process [43], especially in sparse view settings, which causes hallucinations in synthesized views. We propose an alignment module that is end-to-end trainable with our NVS architecture and carries out iterative parametric correction with closed feedback [41] to ensure a better alignment of the SMPL models with the multi-view input images. The module takes
Figure 1: Overview of the GenLayNeRF approach. We consolidate a layered scene representation where each human subject is modeled using the SMPL model. Regarding our alignment module, at a step \(l\), low and high-resolution feature planes \(I^{\prime}_{1:B,\{l-1,l\}}\) are concatenated and the SMPL vertices are projected on them to produce feature-embedded vertices \(v^{l}_{1:B}\) (Concat & Project). We then diffuse the vertices to continuous spaces and query them at downsampled vertex locations to generate multi-view human features \(\tilde{v}^{l}_{1:B}\) (Diffuse & Query), which are fused using self-attention and passed along with parameters \(\Theta^{l}\) to predict the adjusted parameters \(\Theta^{l+1}\). In our NVS architecture, we project rays through the aligned scene layers and sample per-layer 3D points within the intersections areas with the layers (shown in the top view). Point-wise features are extracted and fused to output the final fused features \(\tilde{g}^{x}_{1:B}\), which are passed to the density network to predict the volume density \(\sigma(x)\), whereas the color network uses the raw RGB values \(\mathbf{r}^{x}_{1:B}\) and \(q\) to predict the color \(\mathbf{e}(x,q)\).
the pre-fitted SMPL parameters \(\Theta_{h}^{0}\) as input and returns the aligned and adjusted parameters \(\Theta_{h}^{L}\). Specifically, we employ an iterative process with \(L\) steps, such that, for a step \(l>0\), low-resolution features \(I_{b,l-1}^{{}^{\prime}}\) from level \(l-1\) for view \(b\) are upsampled using deconvolution [22] and concatenated with the high-resolution feature plane \(I_{b,l}^{{}^{\prime}}\) at level \(l\), resulting in a contextualized and localized feature plane \(I_{b,l}^{{}^{\prime\prime}}\). Human vertices \(s_{h}^{l}\) are embedded with image features by projection on the multi-view feature map, such that \(v_{h,b}^{l}=I_{b,l}^{{}^{\prime\prime}}[K_{b}((R_{b}s_{h}^{l})+t_{b})]\). \(v_{h,b}^{l}\in\mathbf{R}^{6,480\times C_{1}}\) represents the features of the vertices projected on feature map \(I_{b,l}^{{}^{\prime\prime}}\) for human \(h\). The preceding part corresponds to "Concat & Project" in Fig. 1.
Our target is to retrieve a compact and continuous per-human feature representation to be used for parameter adjustment. For that reason, the sparse human vertices \(v_{h,b}^{l}\) need to be diffused into a continuous space that can be queried at any location. We incorporate the SparseConvNet [7, 25] architecture which utilizes 3D sparse convolution to diffuse the vertex features into different nearby continuous spaces for every human and view. The diffused vertices are denoted as \(d_{h,b}^{l}\). To obtain the per-human features, we downsample the vertices \(s_{h}^{l}\), such that \(\tilde{s}_{h}^{l}\in\mathbf{R}^{431\times 3}\), and query the diffused vertex spaces at the downsampled locations to obtain the multi-view per-human vertex features which are then processed and flattened to obtain a compact version denoted as \(\tilde{v}_{h,b}^{l}\in\mathbf{R}^{1\times C_{2}}\). The preceding part corresponds to "Diffuse & Query" in Fig. 1. Afterward, we effectively correlate the multi-view human features using a self-attention module, such that,
\[\begin{split} mv_{h}^{l}=soft(\frac{1}{\sqrt{d_{k_{1}}}}query(\tilde{v}_{h,1:B}^{l})\cdot key(\tilde{v}_{h,1:B}^{l})^{T}),\\ \hat{v}_{h,1:B}^{l}=mv_{h}^{l}\cdot val_{1}(\tilde{v}_{h,1:B}^{l})+val_{2}(\tilde{v}_{h,1:B}^{l}),\\ mv_{h}^{l}\in\mathbf{R}^{B\times B},\;\hat{v}_{h,1:B}^{l}\in\mathbf{R}^{B\times C_{2}},\end{split} \tag{1}\]
where \(key\), \(query\), and \((val_{1},val_{2})\) represent the key, query, and value embeddings of the corresponding argument features respectively, and \(d_{k_{1}}\) denotes the dimensionality of the key embedding. \(soft\) denotes the softmax operation. We carry out view-wise averaging for multi-view fusion on the view-aware human features such that, \(\hat{v}_{h}^{l}=\frac{1}{B}\sum_{b}\hat{v}_{h,b}^{l}\). Lastly, the fused per-human features are concatenated (\(\oplus\)) with the current SMPL parameters and passed to a correction MLP that predicts parameter alignment offsets \(\Delta\Theta_{h}^{l}\) which are added to the current parameters, such that,
\[\begin{split}\Delta\Theta_{h}^{l}=MLP_{align}([\hat{v}_{h}^{l}\oplus\Theta_{h}^{l}]),\\ \Theta_{h}^{l+1}=\Theta_{h}^{l}+\Delta\Theta_{h}^{l},\end{split} \tag{2}\]
The updated parameters \(\Theta_{h}^{l+1}\) are used to retrieve the adjusted SMPL vertices \(s_{h}^{l+1}\) and are passed to the next step \(l+1\). After \(L\) steps, the aligned SMPL parameters \(\Theta_{h}^{L}\), vertices \(s_{h}^{L}\), and diffused spaces \(d_{h,1:B}^{L}\) are passed to our layered NVS architecture.
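The sketch below condenses one step of this correction loop (Eqs. (1)-(2)) into a single module. It is an illustration under simplifying assumptions: the compact per-human, per-view features are fed in directly as placeholders for the Concat & Project and Diffuse & Query stages, the SMPL forward pass is omitted, and the feature and parameter dimensions are arbitrary.

```python
import torch

class AlignmentStep(torch.nn.Module):
    """One iteration of the human-image alignment loop (sketch of Eqs. 1-2)."""

    def __init__(self, feat_dim=128, theta_dim=82):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(feat_dim, num_heads=1,
                                                batch_first=True)
        self.correction_mlp = torch.nn.Sequential(
            torch.nn.Linear(feat_dim + theta_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, theta_dim))

    def forward(self, per_view_feats, theta):
        # per_view_feats: (1, B, C2) compact per-human features from B views.
        fused, _ = self.attn(per_view_feats, per_view_feats, per_view_feats)
        fused = fused + per_view_feats            # residual value term
        pooled = fused.mean(dim=1)                # view-wise averaging
        delta = self.correction_mlp(torch.cat([pooled, theta], dim=-1))
        return theta + delta                      # adjusted SMPL parameters

# Toy run: B = 3 views, C2 = 128 features, 82-dim pose + shape vector.
step = AlignmentStep()
theta = torch.zeros(1, 82)
for _ in range(2):                                # L correction steps
    view_feats = torch.randn(1, 3, 128)           # placeholder human features
    theta = step(view_feats, theta)
print(theta.shape)
```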
### Layered Scene Representation
Scenes with multiple humans suffer from inter-human occlusions that become evident when subjects closely interact together. A practical solution to handle complex multi-human scenarios is dividing the scene into distinct layers where each layer models an entity using a neural radiance field [19, 42]. Entities can be humans, objects, or background. Our proposed approach focuses mainly on human layers represented using the SMPL [18] model which is responsible for preserving the local geometry and appearance of humans making it possible to model their complex deformations and occluded areas.
Our aim is to render the full novel view image \(I_{q}\) from a query viewpoint \(q\). To achieve that, we first use the camera-to-world projection matrix, defined as \(\mathbf{P}^{-1}=[R_{q}|t_{q}]^{-1}K_{q}^{-1}\), to march 3D rays across the multi-layered scene. In practice, we have a ray for each pixel \(p\) in the final image, where the ray origin \(r_{0}\in\mathbf{R}^{3}\) is the camera center and the ray direction is given as \(i=\frac{\mathbf{P}^{-1}p-r_{0}}{||\mathbf{P}^{-1}p-r_{0}||}\). 3D points \(x\) are sampled across the rays at specific depth values \(z\), where \(x=r(z)=r_{0}+zi\). Since we have several human layers in the scene, we determine the intersection areas of the rays with the humans using the 3D bounding box around each layer defined by the minimum and maximum vertex points of the aligned SMPL meshes \(s_{1:N}^{L}\). We then sample depth values within the \(n_{p}\) intersecting areas only such that \(z\in[[z_{near_{1}},z_{far_{1}}],..,[z_{near_{n_{p}}},z_{far_{n_{p}}}]]\). This guarantees that the sampled points lie within the areas of the relevant human subjects as clear in the top view shown in Fig. 1.
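The depth sampling restricted to human layers can be sketched with a standard axis-aligned bounding-box (slab) intersection test, as below. The boxes come from the minimum and maximum of the aligned SMPL vertices; the slab-test code and the fixed number of samples per box are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ray_box_depths(ray_o, ray_d, box_min, box_max, samples_per_box=16):
    """Sample depth values z only where a ray intersects a human layer's box.

    box_min/box_max come from the min/max of the aligned SMPL vertices s_h^L.
    Uses the standard slab test; returns an empty array if the ray misses.
    """
    inv_d = 1.0 / np.where(np.abs(ray_d) < 1e-8, 1e-8, ray_d)
    t0 = (box_min - ray_o) * inv_d
    t1 = (box_max - ray_o) * inv_d
    z_near = np.max(np.minimum(t0, t1))
    z_far = np.min(np.maximum(t0, t1))
    if z_far <= max(z_near, 0.0):
        return np.empty(0)
    return np.linspace(max(z_near, 0.0), z_far, samples_per_box)

# One ray against two human layers; sampled points are x = r_0 + z * i.
ray_o, ray_d = np.zeros(3), np.array([0.0, 0.0, 1.0])
boxes = [(np.array([-0.5, -1.0, 2.0]), np.array([0.5, 1.0, 3.0])),
         (np.array([-0.4, -1.0, 4.0]), np.array([0.6, 1.0, 5.2]))]
z_all = np.sort(np.concatenate([ray_box_depths(ray_o, ray_d, *b) for b in boxes]))
points = ray_o + z_all[:, None] * ray_d
print(points.shape)
```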
### Feature Extraction and Attention-Aware Fusion
In our proposed approach, we extract multi-view image features for each query point \(x\) and effectively merge them using attention-based fusion modules to derive the needed spatially-aligned feature vectors. This enables us to extrapolate to novel human subjects and poses by learning implicit correlations between the independent human layers.
#### 3.4.1 Image-aligned And Human-anchored Features
Image-aligned point-wise features are extracted by projecting the point \(x\) on all the feature maps \(I_{b,L}^{{}^{\prime\prime}}\) to collect the corresponding image-aligned features for each view \(b\) denoted as \(p_{b}^{x}\). In addition, human-anchored features are beneficial for maintaining the complex geometric structure of the human body by anchoring the network on the available SMPL body priors. Existing layered scene representations [30] follow the approach of NeuralBody [25] by encoding
the vertices of human layers using learnable embeddings that are unique to each layer in each training scene. In our approach, we utilize the vertices \(v_{1:N,1:B}^{L}\) embedded with image features from the alignment module to enable a generalizable approach conditioned on the input images. The radiance field predictor is queried using continuous 3D sampled points. For that reason, we utilize the diffused vertex spaces \(d_{h,1:B}^{L}\) for each human \(h\) and transform \(x\) to the SMPL coordinate space of its corresponding human layer. Trilinear interpolation is then utilized to retrieve the corresponding human-anchored features \(g_{b}^{x}\) from the diffused spaces of each view \(b\).
#### 3.4.2 Attention-Aware Feature Fusion
To fuse the point-wise feature representations \(g_{1:B}^{x}\), \(p_{1:B}^{x}\) for point \(x\), one strategy is a basic averaging approach [27, 28]. This leads to smoother output and ineffective utilization of the information seen from distinct views. To learn effective cross-view correlations, we employ a self-attention module that attends between all the multi-view human-anchored features \(g_{1:B}^{x}\) where each feature in one view is augmented with the extra features seen from the other views. Each view feature is first concatenated with its corresponding viewing direction \(d_{b}^{\prime}\). The formulation is the same as the one shown in Eq. (1). The produced view-aware human-anchored features are denoted as \(\hat{g}_{1:B}^{x}\).
We additionally make use of the rich spatial information in the image-aligned features by carrying out cross-attention from the view-aware human-anchored features to the image-aligned features. The similarity between the multi-view image features and the per-view vertex features is used to re-weigh the image features and embed them with the vertex features. The fused features \(\tilde{g}_{1:B}^{x}\) are calculated with a formulation similar to Eq. (1). The detailed formulation of our cross-attention and self-attention modules are shown in the supplementary material. Afterward, we carry out view-wise averaging, such that \(\tilde{g}^{x}=\frac{1}{B}\sum_{b}\tilde{g}_{b}^{x}\), to generate the final fused feature representation for \(x\).
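A compact sketch of this two-stage fusion is given below: self-attention correlates the human-anchored features across views, and cross-attention then re-weighs the image-aligned features with them. The single-head formulation and the tensor dimensions are assumptions; the paper's exact key/query/value parameterization is given in its supplementary material.

```python
import torch

class PointFeatureFusion(torch.nn.Module):
    """Fuse per-point multi-view features g_{1:B}^x and p_{1:B}^x (sketch)."""

    def __init__(self, dim):
        super().__init__()
        self.self_attn = torch.nn.MultiheadAttention(dim, 1, batch_first=True)
        self.cross_attn = torch.nn.MultiheadAttention(dim, 1, batch_first=True)

    def forward(self, g, p):
        # g: (P, B, C) human-anchored, p: (P, B, C) image-aligned features.
        g_view_aware, _ = self.self_attn(g, g, g)       # correlate across views
        fused, _ = self.cross_attn(g_view_aware, p, p)  # attend to image feats
        return fused.mean(dim=1)                        # view-wise averaging

P, B, C = 1024, 3, 64                                   # points, views, channels
fusion = PointFeatureFusion(C)
g_feats, p_feats = torch.randn(P, B, C), torch.randn(P, B, C)
print(fusion(g_feats, p_feats).shape)                   # (P, C) fused features
```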
### Radiance Field Predictor
**Color Network**. To predict the color \(\mathbf{c}\) of point \(x\), we use the query viewing direction \(q\) to model the view-dependent effects [21]. In addition, we explicitly augment the high-level features with low-level pixel-wise information to leverage the high-frequency details in the images. This has been achieved with an RGB fusion module which concatenates the high-level features with the encoded raw RGB values \(\mathbf{r}_{b}^{x}\) for each view \(b\). RGB values from closer input views are assigned higher weights by cross-attending \(q\) with the input viewing directions \(d_{1:B}^{\prime}\) such that,
\[\begin{gathered}\tilde{c}^{x}=MLP_{c_{1}}(\tilde{g}_{1:B}^{x};\gamma(q);p_{1:B}^{x}),\\ \tilde{c}_{1:B}^{x}=\{[\tilde{c}^{x}\oplus\gamma(\mathbf{r}_{1}^{x})],...,[\tilde{c}^{x}\oplus\gamma(\mathbf{r}_{B}^{x})]\},\\ rgb_{att}^{x}=soft(\frac{1}{\sqrt{d_{k_{2}}}}query(q)\cdot key(d_{1:B}^{\prime})^{T}),\\ \mathbf{c}(x,q)=MLP_{c_{2}}(rgb_{att}^{x}\cdot val_{1}(\tilde{c}_{1:B}^{x})),\\ rgb_{att}^{x}\in\mathbf{R}^{1\times B},\end{gathered} \tag{3}\]
**Density Network**. We predict volume density \(\sigma(x)\) for point \(x\) using the fused feature \(\tilde{g}^{x}\), such that, \(\sigma(x)=MLP_{\sigma}(\tilde{g}^{x})\).
\(MLP_{\sigma}\), \(MLP_{c_{1}}\), and \(MLP_{c_{2}}\) consist of fully connected layers described in the supplementary material. \(\gamma:\mathbf{R}^{3}\rightarrow\mathbf{R}^{(6\times l)+3}\) denotes a positional encoding [21] with \(2\times l\) basis functions and \(d_{k_{2}}\) is set to 16.
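For reference, a standard NeRF-style implementation of the positional encoding \(\gamma\) with \(2\times l\) basis functions is sketched below; the octave-spaced frequencies are an assumption.

```python
import torch

def positional_encoding(x, num_freqs):
    """NeRF-style encoding gamma: R^3 -> R^{6*l + 3} with 2*l basis functions.

    Octave-spaced frequencies are an assumption; the raw coordinates are
    concatenated so that 3 + 3 * 2 * l output channels are produced.
    """
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi         # (l,)
    scaled = x[..., None, :] * freqs[:, None]                  # (..., l, 3)
    enc = torch.cat([torch.sin(scaled), torch.cos(scaled)], dim=-2)
    return torch.cat([x, enc.flatten(start_dim=-2)], dim=-1)

pts = torch.randn(5, 3)
print(positional_encoding(pts, num_freqs=4).shape)             # (5, 27) for l = 4
```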
### Layered Volumetric Rendering and Loss Functions
Layered volumetric rendering is used to accumulate the predicted RGB and density for all points across human layers. The points in the \(n_{p}\) intersecting areas of the layers are sorted based on their depth value \(z\) before accumulation. The detailed formulation is shown in the supplementary material. Given a ground truth novel view image \(I_{q}^{gt}\), all network weights are supervised using the L2 Norm (\(||.||\)) photometric loss. In addition, we include two losses to explicitly supervise the training of our alignment module weights. Given a set of pseudo ground truth 2D keypoints \(J^{gt}\), we derive the predicted 2D keypoints \(\tilde{J}\) from the adjusted vertices \(s^{L}\) following PyMAF [41] and minimize the keypoint difference weighted by the ground truth confidence of each body joint. We also include a regularization term on the SMPL parameters to avoid large parametric deviations. The final loss function for our network is written as,
\[\mathbf{L}=\lambda_{ph}||I_{q}^{gt}-I_{q}||+\lambda_{kpts}||J^{gt}-\tilde{J}||+\lambda_{reg}||\Theta^{L}||, \tag{4}\]
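A minimal sketch of this combined objective is shown below. The \(\lambda\) weights are placeholders, and the per-joint confidence weighting is folded into the keypoint term in a simplified way.

```python
import torch

def total_loss(pred_img, gt_img, pred_kpts, gt_kpts, kpt_conf, theta,
               lam_ph=1.0, lam_kpts=1e-3, lam_reg=1e-4):
    """Combined objective of Eq. (4); the lambda values are placeholders."""
    loss_ph = torch.norm(gt_img - pred_img)                        # photometric
    loss_kpts = torch.norm(kpt_conf[..., None] * (gt_kpts - pred_kpts))
    loss_reg = torch.norm(theta)                                   # SMPL prior
    return lam_ph * loss_ph + lam_kpts * loss_kpts + lam_reg * loss_reg

loss = total_loss(torch.rand(3, 64, 64), torch.rand(3, 64, 64),
                  torch.rand(24, 2), torch.rand(24, 2),
                  torch.rand(24), torch.zeros(82))
print(loss)
```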
## 4 Experiments
In this section, we introduce the datasets, baselines, experimental results, and ablation studies. Details about our training procedure are in the supplementary material.
### Datasets
The existence of readily-available open-source multi-human view synthesis datasets is limited. To solve this challenge, we construct two new datasets, ZJU-MultiHuman and DeepMultiSyn. Both datasets will be published to be used by multi-human view synthesis methods. We also include a subset of the single-human ZJU-MoCap dataset for diversity. Extra details on the datasets are included in the supplementary material.
**DeepMultiSyn.** The DeepMultiSyn dataset is an adaptation of the 3D reconstruction dataset published by DeepMultiCap [45]. We take the raw real-world multi-view sequences and process them for novel view synthesis. There exist 3 video sequences of scenes containing 2 to 3 human subjects captured from 6 synchronized cameras. Following NeuralBody [25], we use EasyMoCap [29] to fit the SMPL human models for all the subjects in the available frames. Additionally, we predict the human segmentation masks following [14] to separate the humans from the background. This dataset is considered challenging due to the existence of close interactions and complex human actions such as boxing, and dancing activities.
**ZJU-MultiHuman.** The ZJU-MultiHuman dataset consists of one video sequence with 600 frames taken from 8 uniformly distributed synchronized cameras. The video sequence was published online [29] with the calibration files. The captured scene contains 4 different human subjects. Similar to DeepMultiSyn, we predict the SMPL models and segmentation masks utilizing [14, 29].
### Baselines
We compare our proposed approach with generalizable and per-scene NeRF methods.
**Comparison with generalizable NeRF methods.** Generalizable human-based NeRF methods [3, 12, 20, 44] operate only on scenes with single humans. We choose to compare against NHP [12] after adjusting it to work on multi-human scenes by using the segmentation masks to render a separate image for each individual in the scene. We then superimpose the human images based on their depth to render the novel view image. Regarding non-human methods, PixelNeRF [39] is the first to condition NeRF on pixel-aligned features for generalization. IBRNet [35] and SRF [4] additionally utilize image-based rendering and stereo correspondences, respectively, to achieve generalizable properties. All methods were trained on all human scenes simultaneously.
**Comparison with per-scene methods.** We evaluate our performance compared to the multi-human layered scene representation approach [30], denoted as L-NeRF. We also compare against D-NeRF [26] and the original NeRF [21] method. All of the mentioned approaches are trained on each scene separately with the same train-test splits.
### Experimental Results
Our evaluation spans three generalization settings as follows:
**Seen Models, Seen Poses.** In this setting, we test on the same human subjects and poses that the model is trained on. Tab. 1a reports the results for the per-scene and generalizable baselines. Regarding the generalizable approaches, our method exhibits the best overall performance on both datasets on all metrics. For the per-scene approaches, our proposed method performs at par with the state-of-the-art per-scene baseline (L-NeRF), while effectively saving computational and time resources by taking 50 hours to converge on all the scenes simultaneously compared to 144 hours for per-scene training. After per-scene finetuning, our method surpasses L-NeRF on both datasets. Qualitative comparisons for the per-scene methods are included in the supplementary material.
**Pose Generalization.** We additionally test all approaches on the same human subjects seen during training, but with novel poses. L-NeRF is a human-based method that generalizes to novel poses; therefore, we include it in the comparison. On both datasets, Tab. 1b shows that our approach outperforms all the generalizable NeRF methods on all metrics. L-NeRF lags behind our method on the DeepMultiSyn dataset due to the complex novel poses, which validates the pose generalization ability of our method on challenging motions. In Fig. 2 and Fig. 4, IBRNet fails to model the full body of the human subjects properly, while NHP fails to represent areas of occlusions where
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline \multirow{3}{*}{} & \multirow{3}{*}{Method} & \multicolumn{2}{c|}{DeepMultiSyn} & \multicolumn{2}{c|}{ZJUMultiHuman} \\ \cline{3-6} & & PSNR & SSIM & PSNR & SSIM \\ \hline \multicolumn{6}{|l|}{_(a)_ _Seen Models,_ _Seen Poses_} \\ \hline \multirow{3}{*}{\(S\)} & NeRF & 15.49 & 0.497 & 16.42 & 0.525 \\ & D-NeRF & 17.08 & 0.702 & 18.53 & 0.748 \\ & L-NeRF* & 24.04 & 0.858 & 25.10 & 0.903 \\ \cline{2-6} & **Ours\({}_{ft}\)** & **25.05** & **0.889** & **25.21** & **0.916** \\ \hline \multirow{3}{*}{\(G\)} & PixelNeRF & 14.81 & 0.534 & 19.74 & 0.629 \\ & SRF & 20.39 & 0.724 & 17.87 & 0.657 \\ \cline{1-1} & IBRNet & 19.45 & 0.741 & 20.03 & 0.766 \\ \cline{1-1} & NHP* & 20.91 & 0.698 & 21.75 & 0.813 \\ \cline{1-1} \cline{2-6} & **Ours** & **24.01** & **0.859** & **25.02** & **0.901** \\ \hline \hline \multicolumn{6}{|l|}{_(b)_ _Seen Models,_ _Unseen Poses_} \\ \hline \(S\) & L-NeRF* & 22.12 & 0.825 & 23.02 & 0.871 \\ \hline \multirow{3}{*}{\(G\)} & PixelNeRF & 14.14 & 0.520 & 16.88 & 0.560 \\ & SRF & 18.07 & 0.663 & 17.93 & 0.680 \\ \cline{1-1} & IBRNet & 18.01 & 0.710 & 19.84 & 0.772 \\ \cline{1-1} & NHP* & 20.26 & 0.677 & 20.64 & 0.791 \\ \cline{1-1} \cline{2-6} & **Ours** & **23.45** & **0.862** & **23.76** & **0.882** \\ \hline \hline \multicolumn{6}{|l|}{_(c)_ _Unseen Models,_ _Unseen Poses_} \\ \hline \multirow{3}{*}{\(G\)} & PixelNeRF & 13.12 & 0.457 & \multirow{3}{*}{Not Applicable} \\ & SRF & 13.95 & 0.548 & \\ \cline{1-1} & IBRNet & 18.80 & 0.672 & \\ \cline{1-1} & NHP* & 19.51 & 0.678 & \\ \cline{1-1} \cline{2-6} & **Ours** & **21.03** & **0.802** & \\ \hline \end{tabular}
\end{table}
Table 1: Comparison with generalizable and per-scene NeRF methods on the DeepMultiSyn and ZJU-MultiHuman Datasets. ”G” and ”S” denote generalizable and per-scene methods, respectively. ”*” refers to human-based methods. PSNR and SSIM metric values are the greater the better. ”ft” refers to finetuning.
subjects highly overlap. However, our method successfully models the body shapes and can handle overlapping areas which validates the effectiveness of the layered scene representation in the generalizable multi-human setting. Fig. 3 shows how L-NeRF fails to properly render the appearance of subjects when presented with complex unseen poses.
**Human Generalization.** A challenging setting would be testing on human subjects and poses not seen during training. This was done on the DeepMultiSyn dataset by leaving out two different scenes for testing. Tab. 1c validates that our method has the best generalization capability as it outperforms all other methods by a large margin. The bottom row of Fig. 2 and Fig. 5 show that our method better represents the main body features of the novel human subjects. IBRNet fails to render some body parts like the legs, while NHP suffers from more blur artifacts, especially in overlapping areas. In the supplementary material, we show that our method surpasses NHP by a large margin even on single humans in the ZJU-MoCap dataset for both pose and human generalization settings.
### Ablation Studies
**Effect of Human-Image Alignment.** We evaluate the impact of the proposed human-image alignment module on the synthesis quality. Quantitatively, Tab. 2 shows the clear improvement offered by the alignment module (align) on both metrics. In Fig. 6, we demonstrate the large misalignment between the pre-fitted SMPL model and the image
Figure 4: Comparison with generalizable methods on **seen models/unseen poses** for the ZJU-MultiHuman Dataset.
Figure 5: Qualitative comparison on **unseen models/unseen poses** on the DeepMultiSyn dataset.
Figure 3: Comparison with a per-scene multi-human method [30] on **seen models/unseen poses** on the DeepMultiSyn Dataset.
Figure 2: Comparison with generalizable methods on **seen models/unseen poses** [top row] and **unseen models/unseen poses** [bottom row] for the DeepMultiSyn Dataset.
which caused severe hallucinations in the synthesized image (areas with red boxes). Our module successfully aligns the SMPL model with the images leading to higher-quality synthesis results. We include additional results of our module in the supplementary material.
**Effect of Fusion Modules.** We assess the effect of the different fusion modules on the synthesis results. In Tab. 2, the second row uses the cross-attention module (crs) in Sec. 3.4.2, and it shows a noticeable improvement over doing basic average pooling in the first row. This indicates the effectiveness of the correlation learned between the vertex and image features. The addition of the self-attention module (slf) in Sec. 3.4.2 in the third row led to the incorporation of multi-view aware features and achieved a slight enhancement on both metrics. The fourth row adds the raw RGB fusion module (rgb) in the Color Network presented in Sec. 3.5. It enhances the performance, especially on the SSIM metric, validating the importance of utilizing low-level information.
**Effect of Number of Views.** We evaluate the performance of our proposed approach when given a different number of input views at test time. Tab. 2 indicates that using 4 views leads to an enhancement in both metrics due to the extra information available. Decreasing the number of views gradually degrades the performance. However, using only one input view, our method outperforms all the generalizable NeRF methods in Tab. 1 that use 3 input views.
## 5 Limitations & Future Work
Several enhancements to our proposed method could be investigated further. Although our two proposed datasets were sufficient to show the generalization capability of our method, there is room for improvement by increasing the diversity in terms of the number of scenes, camera views, distinct humans, and complex actions. This would lead to better generalization capabilities in broader challenging scenarios. Furthermore, our method suffers from blur artifacts when representing human clothing details such as skirts, as seen in Fig. 3. One could experiment with integrating a deformation model [26] to represent small deformations such as textured clothing. In addition, adjustments could be made to allow for human-image alignment for more complex body models such as SMPL-X [24]. Lastly, a research direction could explore the optimization of the body model parameters from scratch with multi-view time synchronization taken into consideration.
## 6 Conclusion
We introduce a generalizable layered scene representation for free-viewpoint rendering of multi-human scenes using very sparse input views while operating on unseen poses and subjects without test time optimization. We additionally present a novel end-to-end human-image alignment module that corrects parametric errors in the pre-fitted body models leading to pixel-level alignment of human layers with the input images. Regarding view synthesis, we divide the scene into a set of multi-human layers. We then generate point-wise image features and human-anchored features and utilize a combination of cross-attention and self-attention modules that effectively fuse the information seen from different viewpoints. In addition, we introduce an RGB fusion module to embed low-level pixel values into the color prediction for higher-quality results. We assess the efficacy of our approach on two newly proposed multi-human datasets. Experimental results show that our method outperforms state-of-the-art generalizable NeRF methods in different generalization settings and performs at par with layered per-scene methods without long per-scene optimization runs. We also validate the effectiveness of our alignment module by showing its significant enhancement on the synthesis quality. Our module could be integrated with existing SMPL-based synthesis methods to elevate the performance by improving the human-image alignment.
\begin{table}
\begin{tabular}{|c c c c c|c c|} \hline crs & slf & rgb & align & V. & PSNR \(\uparrow\) & SSIM \(\uparrow\) \\ \hline & & & & 3 & 20.92 & 0.7860 \\ \hline ✓ & & & & 3 & 21.45 & 0.8005 \\ \hline ✓ & ✓ & & & 3 & 21.98 & 0.8093 \\ \hline ✓ & ✓ & ✓ & & 3 & 22.19 & 0.8361 \\ \hline ✓ & ✓ & ✓ & ✓ & 3 & **23.45** & **0.8620** \\ \hline \hline ✓ & ✓ & ✓ & ✓ & 1 & 21.98 & 0.8091 \\ \hline ✓ & ✓ & ✓ & ✓ & 2 & 22.32 & 0.8379 \\ \hline ✓ & ✓ & ✓ & ✓ & 4 & 23.72 & 0.8711 \\ \hline \end{tabular}
\end{table}
Table 2: Ablation study results on **seen models** and **unseen poses** for the DeepMultiSyn dataset. "# V.” denotes the number of views.
Figure 6: Visualization of the output of our human-image alignment module (\(SMPL_{aligned}\)) given the misaligned pre-fitted model (\(SMPL_{original}\)). "\(tgt\_walign\)” and ”\(tgt\_w/oalign\)” denote the rendered image with/without our alignment module. |
2309.09068 | Recovering Missing Node Features with Local Structure-based Embeddings | Node features bolster graph-based learning when exploited jointly with
network structure. However, a lack of nodal attributes is prevalent in graph
data. We present a framework to recover completely missing node features for a
set of graphs, where we only know the signals of a subset of graphs. Our
approach incorporates prior information from both graph topology and existing
nodal values. We demonstrate an example implementation of our framework where
we assume that node features depend on local graph structure. Missing nodal
values are estimated by aggregating known features from the most similar nodes.
Similarity is measured through a node embedding space that preserves local
topological features, which we train using a Graph AutoEncoder. We empirically
show not only the accuracy of our feature estimation approach but also its
value for downstream graph classification. Our success embarks on and implies
the need to emphasize the relationship between node features and graph
structure in graph-based learning. | Victor M. Tenorio, Madeline Navarro, Santiago Segarra, Antonio G. Marques | 2023-09-16T18:23:14Z | http://arxiv.org/abs/2309.09068v1 | # Recovering Missing Node Features With Local Structure-Based Embeddings
###### Abstract
Node features bolster graph-based learning when exploited jointly with network structure. However, a lack of nodal attributes is prevalent in graph data. We present a framework to _recover completely missing node features for a set of graphs_, where we only know the signals of a subset of graphs. Our approach incorporates prior information from both graph topology and existing nodal values. We demonstrate an example implementation of our framework where we assume that node features depend on local graph structure. Missing nodal values are estimated by aggregating known features from the most similar nodes. Similarity is measured through a node embedding space that preserves local topological features, which we train using a Graph AutoEncoder. We empirically show not only the accuracy of our feature estimation approach but also its value for downstream graph classification. Our success embarks on and implies the need to emphasize the relationship between node features and graph structure in graph-based learning.
Victor M. Tenorio\({}^{*}\), Madeline Navarro\({}^{\dagger}\), Santiago Segarra\({}^{\dagger}\), and Antonio G. Marques\({}^{*}\)

\({}^{*}\) Dept. of Signal Theory and Communications, King Juan Carlos University, Madrid, Spain

\({}^{\dagger}\) Dept. of Electrical and Computer Engineering, Rice University, Houston, USA

_Index Terms_—Graph Signal Processing, Local Structure Embeddings, Missing Feature Generation
## 1 Introduction
For practical applications in chemistry [1, 2], medicine [3], and many others [4], data can be naturally represented as interconnected entities using graphs. Supervised learning on graphs aims to predict characteristics using both graph structure and, in some cases, node features, also known as graph signals. These features can improve graph-based predictions when jointly used with graphs, not only when the nodal values are semantically relevant but also when the observations on nodes and the graph structure are dependent [5].
The relationship between node features and their underlying graph is well-studied for node-level tasks. These tasks typically entail predicting nodal characteristics for a single graph where a subset of features is known. Classifying nodes in the semi-supervised learning setting requires the influence of the underlying graph on nodal values to propagate known information to unlabeled nodes [6].
Graph signal reconstruction is a common task in graph signal processing in which partially observed features and a known graph signal model are applied together for downstream tasks, such as approximating hidden node values [7, 8, 9]. In these cases, a portion of the features is known, even if a minority, and the graph topology informs how existing values provide information about those hidden.
The task of recovering completely missing graph signals for a given graph is far less explored. As nodal values are critical for predictions and necessary for the implementation of graph neural networks (GNNs), we require a method to accurately estimate unknown node features [5, 10]. Unlike node-level predictions for which we have partial nodal observations, we cannot use the underlying graph structure to propagate existing information and infer missing values [6, 11]. In such cases, the graph may belong to a family of graphs whose structural and nodal characteristics are related. Many graph-level tasks consist of such data, such as predicting molecular structures and identifying characteristics of social networks [2, 4]. Existing works often characterize graph families by shared random graph models [12] or latent embedding spaces [6]. However, these works enforce a global relationship between the graphs and their signals, requiring knowledge of the entire graph.
Even under the assumption of a shared graph family, the relationship between each of the graphs and its node features is largely decoupled for graph-level learning tasks. For example, methods that interpolate between labeled graphs for improving classification typically treat graphs and node features separately [13, 14, 15]. Approaches that do aim to replace missing graph signals typically rely on values that possess solely topological features with no additional nodal information [5, 16], and many use unrelated values that do not incorporate structure [17, 18]. We empirically show that such approaches
Figure 1: Embeddings for nodes in graphs from the AIDS molecule dataset. Each point is a node embedding based on local structural characteristics, such as degree. Nodes corresponding to graphs of different classes are shifted in the embedding space, implying that local structure is correlated with molecule class for the AIDS dataset.
are suboptimal, even when we include structural information.
Given a set of graphs where only a subset has known node features, we present a framework to recover completely missing node features. In particular, we propose _node feature recovery for graph-level tasks incorporating both graph topology and known feature values_. We demonstrate an implementation of this approach assuming that node features depend on local graph structural characteristics, differently from previous approaches that largely rely on aggregating information from a node's neighborhood, for example via low-pass graph filters [19, 20]. Thus, we train a Graph AutoEncoder (GAE) to learn a node embedding space that preserves the local structural characteristics for each node, visualized in Fig. 1. In this setting, feature values are assumed to be closer when node neighborhoods are similar, so nodes with known features can effectively provide the most realistic feature estimates for those with similar local topologies.
Our contributions are as follows.
1. We present an approach to learn completely missing node features whose values are assumed to be dependent on graph structure, and we exhibit the approach in practice through the setting where features depend on local structure.
2. We demonstrate that for many graph classification benchmark datasets, local node structure is indeed indicative of class.
3. We empirically validate the ability of our method to not only accurately learn missing node features using a set of graphs with known features, but we also demonstrate the value of recovering accurate node features for downstream tasks.
## 2 Background
In this section, we provide the necessary background on graph-based learning, along with a review of existing approaches on node representation learning and addressing missing node features. We start with some basic notation. A graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) comprises a set of nodes \(\mathcal{V}=\{1,\ldots,N\}\) and a set of edges \(\mathcal{E}=\{(n_{1},n_{2})|n_{1},n_{2}\in\mathcal{V}\}\). Graphs can be conveniently represented by the so-called adjacency matrix \(\mathbf{A}\). For edges in \(\mathcal{G}\), \((n_{1},n_{2})\in\mathcal{E}\) iff \(A_{n_{1},n_{2}}=1\). In machine learning and signal processing setups, data is often associated with each of the nodes. In particular, let \(\mathbf{X}\in\mathbb{R}^{N\times F}\) be a data matrix, whose entry \(X_{n,f}\) represents the value of feature (signal) \(f\) at node \(n\). The \(n\)th row of \(\mathbf{X}\) is typically referred to as the data features associated with node \(n\) and the \(f\)th column of \(\mathbf{X}\) as the \(f\)th graph signal. On top of these _node features_, one can also associate a number of _topological features_, such as centrality values or clustering coefficients, with each of the nodes [21].
**Node representation learning.** Learning node representations (embeddings) has been a prevalent topic of research in the GSP literature almost since its inception [22, 23]. Since the adjacency matrix \(\mathbf{A}\) is an alternative representation of \(\mathcal{G}\), each node can be (perfectly) represented in an \(N\)-dimensional vector space using the corresponding row of \(\mathbf{A}\). As a result, node representation algorithms typically aim to learn representations in a lower-dimensional space, that is, they aim at learning a matrix \(\mathbf{Z}\in\mathbb{R}^{N\times P}\) with \(P<N\). The ultimate goal when designing \(\mathbf{Z}\) is to sufficiently characterize nodal behavior in the context of the graph application at hand. Countless approaches to learn nodal representations include algorithms from random walks [24] to GNNs [6, 16].
Most of these approaches learn node embeddings based on node proximity: the closer nodes \(n\) and \(n^{\prime}\) are in the graph, the more similar their embeddings \(\mathbf{z}_{n}\) and \(\mathbf{z}_{n^{\prime}}\) are. Recent works have begun to emphasize learning node embeddings based on the role of each node in the graph, guided by its topological features [5, 21]. Under this setting, we may transform structural similarities between nodes into geometric relationships in the embedding space. Inspired by this concept, we draw on such structure-based embeddings to identify which nodes are similar for sharing feature values.
**Missing node features.** Previous works dealing with missing feature data consider partially missing node features, where only some entries of the feature matrix \(\mathbf{X}\) are observed. In such formulations, the task, known as feature imputation [25, 26, 27] or graph signal interpolation [7, 8, 9], is to learn the missing entries of \(\mathbf{X}\). The full matrix \(\hat{\mathbf{X}}\) (which contains now both the given and estimated values) is then applied to a downstream task, usually for node-level tasks on a single graph, such as node classification. Feature imputation has been approached by graph spectral approaches [7], kernel approaches [9], propagating the known features [8, 25] or using GNNs [26, 27]. Differently from these works, we aim to solve a more difficult graph-level version of this problem, where for a subset of (or all the) features, we do not have access to any of the nodes, and we must infer the entire set of nodal values (either multiple columns of \(\mathbf{X}\) or the full matrix itself) from data associated with other graphs. As detailed in Sec. 3, this paper considers the case of having access to a set of graphs, a subset of which have no observed node features (that is, no access to any of the values of \(\mathbf{X}\)), which is common in social networks, for example [10].
Figure 2: Schematic of the proposed methodology. First, we compute a feature matrix \(\mathbf{F}^{(i)}\) for each graph \(\mathcal{G}^{(i)}\) based on structural characteristics. Then, we train a GAE on \(\mathbf{F}^{(i)}\) to produce node embeddings \(\mathbf{Z}^{(i)}\) and corresponding graph embeddings. The latter are used to measure similarity among graphs, and find the most similar graphs to \(\mathcal{G}^{(i)}\), collected in \(\mathcal{Q}^{(i)}\). The node embeddings are used to find, among the nodes of the graphs in \(\mathcal{Q}^{(i)}\), those nodes which are the closest to each node \(n\) from \(\mathcal{G}^{(i)}\), collected in \(\mathcal{N}_{n}^{(i,j)}\). Finally, we estimate node features in \(\mathcal{G}^{(i)}\) by averaging the features of the closest graphs and their nodes, resulting in realistic yet accurate node feature estimates.
In the setting of completely missing node features, other works replace these features with carefully crafted random matrices [17, 18], position-dependent values [28, 29, 30], or structural properties, such as the degree [5, 16], surprisingly showing that GNNs are still able to obtain great performance without meaningful node features and only using the graph structure. However, random and constant features are independent of class labels, and we empirically demonstrate that using random or structural node features is suboptimal. Moreover, our approach using a realistic estimate of node features exhibits superior performance.
## 3 Methodology
We introduce our proposed approach to estimate missing node features using structural information and known node features, which is visualized in Fig. 2. While the schematic in Fig. 2 illustrates the use of local structure for sharing nodal values, other assumptions can easily be made to associate nodes for feature learning.
Consider a graph dataset \(\mathcal{T}=\{(\mathcal{G}^{(i)},\mathbf{X}^{(i)},y^{(i)})\}_{i=1}^{T}\), where for the \(i\)-th sample we have the graph \(\mathcal{G}^{(i)}\) with \(N_{i}\) nodes, \(\mathbf{X}^{(i)}\in\mathbb{R}^{N_{i}\times F}\) is a matrix of node features of length \(F\), and \(y^{(i)}\) is the associated label. Let \(\mathcal{T}_{\mathrm{miss}}\subset\mathcal{T}\) be a subset of \(\mathcal{T}\) with missing features, where for every \((\mathcal{G},\mathbf{X},y)\in\mathcal{T}_{\mathrm{miss}}\) we only know the pair \((\mathcal{G},y)\), and define \(\mathcal{T}_{\mathrm{full}}=\mathcal{T}\setminus\mathcal{T}_{\mathrm{miss}}\). Our focus in this work is to recover the missing features \(\mathbf{X}\) for every \((\mathcal{G},\mathbf{X},y)\in\mathcal{T}_{\mathrm{miss}}\), resulting in a set \(\hat{\mathcal{T}}_{\mathrm{miss}}\) consisting of triplets \((\mathcal{G},\hat{\mathbf{X}},y)\) with approximated features \(\hat{\mathbf{X}}\). Subsequently, we may use \(\hat{\mathcal{T}}=\mathcal{T}_{\mathrm{full}}\cup\hat{\mathcal{T}}_{\mathrm{miss}}\) for downstream tasks such as graph classification.
Our approach consists of two steps: we first learn a node embedding space preserving graph structural information through which we compute node similarity, and then we predict the values of missing node features using nearby node embeddings. While we select local structural characteristics as the topological features of interest, note that our framework is amenable to any choice of embedding space that allows us to relate nodes based on similarity.
**Node embedding space.** We first obtain a latent space with which we compute node similarity. We train a GAE to generate node embeddings characterized by node roles, that is, their local structure [21]. More precisely, the GAE consists of a graph convolutional network (GCN) as the encoder to learn from structural characteristics, both global and local, and a multilayer perceptron (MLP) as the decoder to invert the embedding process. For a graph \(\mathcal{G}^{(i)}\), the GCN encoder takes as input \(\mathcal{G}^{(i)}\) and a corresponding feature matrix \(\mathbf{F}^{(i)}\in\mathbb{R}^{N_{i}\times F}\) containing structural information from \(F\) features. We apply the features in [21], including local characteristics such as node degree and clustering coefficient, although any features may be used to emphasize different structural behavior. The parameters \(\mathbf{\Theta}\) of the GAE \(f_{\mathbf{\Theta}}\) are trained to minimize the loss between the output of the GAE and the input feature matrix
\[\min_{\mathbf{\Theta}}\|\mathbf{F}^{(i)}-f_{\mathbf{\Theta}}(\mathbf{F}^{(i)},\mathcal{G}^{(i)})\|_{\mathcal{F}}^{2},\]
where \(f_{\mathbf{\Theta}}\) represents the GAE, whose output is computed as \(f_{\mathbf{\Theta}}(\mathbf{F},\mathcal{G}^{(i)})=\mathrm{MLP}_{\mathbf{ \Theta}_{2}}(\mathrm{GNN}_{\mathbf{\Theta}_{1}}(\mathbf{F},\mathcal{G}^{(i)}))\), where \(\mathbf{\Theta}=\{\mathbf{\Theta}_{1},\mathbf{\Theta}_{2}\}\) and the GNN is defined via the following recursion [6]
\[\mathbf{H}^{(\ell)}=\sigma\left(\tilde{\mathbf{A}}\mathbf{H}^{(\ell-1)} \mathbf{\Theta}^{(\ell)}\right), \tag{1}\]
where \(\mathbf{H}^{(\ell)}\) are the hidden features at layer \(\ell\); \(\mathbf{\Theta}_{1}=\{\mathbf{\Theta}^{(\ell)}\}_{\ell=1}^{L}\) are the learnable parameters of the \(L\) layers; and \(\sigma\) is a pointwise non-linearity. We let \(\mathbf{A}\) denote the adjacency matrix of the input graph \(\mathcal{G}^{(i)}\), and we define \(\hat{\mathbf{A}}=\mathbf{A}+\mathbf{I}\), \(\hat{\mathbf{D}}=\mathrm{diag}(\hat{\mathbf{A}}\mathbf{1})\), and \(\tilde{\mathbf{A}}=\hat{\mathbf{D}}^{-1/2}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-1/2}\). Note that the matrix multiplication in (1) can be understood as low-pass graph filtering [19], where the nodes average their own value with the values of their neighbors. As a result, the embeddings based on (1) will promote similar representations for nodes whose local structural features are similar. A visualization of the resultant node embeddings from graphs in the molecular classification dataset AIDS [31] is shown in Fig. 1.
Given the GAE, we obtain node embeddings for graph \(\mathcal{G}^{(i)}\) as \(\mathbf{Z}^{(i)}=\mathrm{GNN}_{\mathbf{\Theta}_{1}}(\mathbf{F}^{(i)}, \mathcal{G}^{(i)})\in\mathbb{R}^{N_{i}\times P}\), where \(P\) is the dimension of the embedding space. We further define graph embeddings \(\mathbf{z}^{(i)}\in\mathbb{R}^{P}\) by computing the average across the node dimension, that is, \(\mathbf{z}^{(i)}=\frac{1}{N_{i}}\sum_{k=1}^{N_{i}}\mathbf{z}^{(i)}_{k}\in \mathbb{R}^{P}\) where \(\mathbf{z}^{(i)}_{k}\in\mathbb{R}^{P}\) is \(k\)th row of \(\mathbf{Z}^{(i)}\).
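As a concrete illustration of this embedding step, the following minimal PyTorch sketch trains a GCN encoder with an MLP decoder to reconstruct a structural feature matrix and then extracts node and graph embeddings. The dense adjacency, two-layer depth, layer widths, and toy graph are illustrative choices for the example, not the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class StructuralGAE(nn.Module):
    """GCN encoder + MLP decoder that reconstructs structural node features."""
    def __init__(self, in_dim, hid_dim, emb_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)   # weights of GCN layer 1
        self.w2 = nn.Linear(hid_dim, emb_dim)  # weights of GCN layer 2
        self.decoder = nn.Sequential(nn.Linear(emb_dim, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, in_dim))

    @staticmethod
    def normalize(adj):
        # \tilde{A} = \hat{D}^{-1/2} (A + I) \hat{D}^{-1/2}
        a_hat = adj + torch.eye(adj.shape[0])
        d_inv_sqrt = a_hat.sum(1).pow(-0.5)
        return d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]

    def encode(self, feats, adj):
        a = self.normalize(adj)
        h = torch.relu(a @ self.w1(feats))   # Eq. (1), first layer
        return a @ self.w2(h)                # node embeddings Z

    def forward(self, feats, adj):
        return self.decoder(self.encode(feats, adj))

# One training step on a toy 3-node graph with 5 structural features per node
gae = StructuralGAE(in_dim=5, hid_dim=32, emb_dim=16)
opt = torch.optim.Adam(gae.parameters(), lr=1e-3)
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
feats = torch.rand(3, 5)                       # structural feature matrix F^(i)
opt.zero_grad()
loss = ((feats - gae(feats, adj)) ** 2).sum()  # squared reconstruction error
loss.backward()
opt.step()

z_nodes = gae.encode(feats, adj)               # Z^(i), one row per node
z_graph = z_nodes.mean(dim=0)                  # graph embedding z^(i)
```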
With the node and graph embeddings for every graph in \(\mathcal{T}\), we associate \(\mathcal{G}^{(i)}\in\mathcal{T}_{\mathrm{miss}}\) with nearby graphs and nodes with respect to the embedding space. Let \(\mathcal{Q}^{(i)}\subset\mathcal{T}_{\mathrm{full}}\) be the set of the \(\bar{Q}\) nearest graphs to \(\mathcal{G}^{(i)}\) with the same label, that is, for every \(\mathcal{G}^{(j)}\in\mathcal{Q}^{(i)}\), we have that \(y^{(i)}=y^{(j)}\). More specifically, all the graphs in \(\mathcal{Q}^{(i)}\) are closer to \(\mathcal{G}^{(i)}\) than those with the same label but not in \(\mathcal{Q}^{(i)}\) using as distance the error between their embeddings \(\|\mathbf{z}^{(i)}-\mathbf{z}^{(j)}\|_{2}\). For nearby nodes, we similarly let \(\mathcal{N}^{(i,j)}_{n}\) be the set containing the \(\bar{N}\) nodes of graph \(\mathcal{G}^{(j)}\in\mathcal{Q}^{(i)}\) closest to node \(n\) of graph \(\mathcal{G}^{(i)}\in\mathcal{T}_{\mathrm{miss}}\). That is, all of the \(\bar{N}\) nodes from graph \(\mathcal{G}^{(j)}\) in \(\mathcal{N}^{(i,j)}_{n}\) are closer (in terms of the same distance previously defined) to node \(n\) in graph \(\mathcal{G}^{(i)}\) than those not in \(\mathcal{N}^{(i,j)}_{n}\).
**Predicting missing graph signals.** Once we are able to compute node similarity, we predict the missing node features using the nearest graphs and nodes. We propose to generate the features \(\hat{\mathbf{X}}^{(i)}\) of \(\mathcal{G}^{(i)}\in\mathcal{T}_{\mathrm{miss}}\) as the average of the features in the closest nodes and graphs to \(\mathcal{G}^{(i)}\) in \(\mathcal{T}_{\mathrm{full}}\) with respect to their embeddings. Let \(\mathbf{C}^{(i,j)}\in\mathbb{R}^{N_{i}\times N_{j}}\) be the transformation matrix mapping the features from \(\mathcal{G}^{(j)}\in\mathcal{Q}^{(i)}\) to \(\mathcal{G}^{(i)}\), where the entry at the \(n\)th row and \(\ell\)th column is
\[C^{(i,j)}_{n,\ell}=\left\{\begin{array}{ll}\frac{1}{|\mathcal{N}^{(i,j)}_{n}|}&n\in\{1,\ldots,N_{i}\},\ \ell\in\mathcal{N}^{(i,j)}_{n}\\ 0&\mathrm{otherwise}\end{array}\right. \tag{2}\]
\begin{table}
\begin{tabular}{c|c c c c c c} & MUTAG & AIDS & PROTEINS & ENZYMES & ogbg-molbbbp & ogbg-molbace \\ \hline Zeros & \(1.00\pm 0.00\) & \(1.00\pm 0.00\) & \(1.00\pm 0.00\) & \(1.00\pm 0.00\) & \(1.00\pm 0.00\) & \(1.00\pm 0.00\) \\ Ones & \(2.450\pm 0.00\) & \(6.083\pm 0.00\) & \(1.414\pm 0.00\) & \(1.414\pm 0.00\) & \(0.806\pm 0.00\) & \(0.803\pm 0.00\) \\ Random & \(1.510\pm 0.004\) & \(3.550\pm 0.003\) & \(0.965\pm 0.002\) & \(0.961\pm 0.003\) & \(0.896\pm 0.000\) & \(0.895\pm 0.000\) \\ Degree & \(0.938\pm 0.001\) & \(1.414\pm 0.006\) & \(0.899\pm 0.003\) & \(0.891\pm 0.001\) & \(0.975\pm 0.000\) & \(0.984\pm 0.000\) \\ LSE-NG (\(\bar{Q}=1\)) & \(0.631\pm 0.040\) & \(0.790
For each node \(n\) in \(\mathcal{G}^{(i)}\), the product \(\mathbf{C}^{(i,j)}\mathbf{X}^{(j)}\) computes the average of the features from the nodes in \(\mathcal{N}_{n}^{(i,j)}\), that is, the closest nodes to \(n\) from \(\mathcal{G}^{(j)}\). Given the set of closest graphs \(\mathcal{Q}^{(i)}\) to \(\mathcal{G}^{(i)}\), we compute our node feature estimates for \(\mathcal{G}^{(i)}\) as
\[\hat{\mathbf{X}}^{(i)}=\frac{1}{\bar{Q}}\sum_{j\in\mathcal{Q}^{(i)}}\mathbf{C }^{(i,j)}\mathbf{X}^{(j)}. \tag{3}\]
These features complete the triplet \((\mathcal{G}^{(i)},\hat{\mathbf{X}}^{(i)},y^{(i)})\) for every estimate in \(\hat{\mathcal{T}}_{\mathrm{miss}}\), allowing us to use these graphs for downstream tasks such as graph classification.
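To make the estimation step explicit, the sketch below implements the nearest-graph and nearest-node averaging of (2) and (3) with NumPy. It assumes the node and graph embeddings have already been computed and that the candidate set has already been restricted to graphs sharing the label of \(\mathcal{G}^{(i)}\); the brute-force distance search and the variable names are illustrative.

```python
import numpy as np

def estimate_features(z_nodes_i, z_graph_i, full_set, Q=3, N_bar=1):
    """Estimate missing node features for one graph.

    z_nodes_i : (N_i, P) node embeddings of the graph with missing features
    z_graph_i : (P,)     its graph embedding (mean of node embeddings)
    full_set  : graphs with known features and the same label, each a dict
                with keys 'z_nodes' (N_j, P), 'z_graph' (P,), and 'X' (N_j, F)
    """
    # Q^(i): the Q nearest graphs in the embedding space
    dists = [np.linalg.norm(z_graph_i - g['z_graph']) for g in full_set]
    nearest = [full_set[j] for j in np.argsort(dists)[:Q]]

    X_hat = np.zeros((z_nodes_i.shape[0], nearest[0]['X'].shape[1]))
    for g in nearest:
        # C^(i,j): average the N_bar closest nodes of graph j for each node n (Eq. 2)
        C = np.zeros((z_nodes_i.shape[0], g['z_nodes'].shape[0]))
        for n, z_n in enumerate(z_nodes_i):
            node_dists = np.linalg.norm(g['z_nodes'] - z_n, axis=1)
            C[n, np.argsort(node_dists)[:N_bar]] = 1.0 / N_bar
        X_hat += C @ g['X']
    return X_hat / len(nearest)   # Eq. (3): average over the Q nearest graphs
```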
## 4 Results
We showcase the capabilities of our proposed approach for missing feature generation. We demonstrate our method in comparison with several baselines in numerical experiments, both for node feature learning and downstream graph classification.
**Datasets.** We use six real-world data benchmarks: MUTAG, AIDS, PROTEINS and ENZYMES from the TUDataset collection [31], and ogbg-molbace and ogbg-molbbbp from the OGB collection [32]. Not only are these standard benchmark datasets for classification tasks, they also provide node features, which allows us to test the hypothesis that node features are relevant for the classification task as well as to evaluate our proposed approach. The datasets contain either graphs representing molecules (MUTAG, AIDS, ogbg-molbace and ogbg-molbbbp), where nodes represent atoms and edges represent chemical bonds, or proteins (PROTEINS) and enzymes (ENZYMES), where nodes represent structural elements and edges encode node proximity.
**Experimental setup.** We split the dataset with 10% of the data for validation; 10% for testing; 30% for training, or \(\mathcal{T}_{\mathrm{full}}\); and 50% as missing, that is, the set of graphs with missing node features \(\mathcal{T}_{\mathrm{miss}}\). We conduct 15 random realizations of these splits, and the results presented in Tables 1 and 2 list the mean and standard deviation of the metric of interest (to be defined next) across every realization.
We demonstrate the efficacy of our method on both feature generation and downstream graph classification. Our approach that uses the local structure-based node embeddings of nearest neighbors, denoted "LSE-NN", is compared to several baselines. Alternatives to "LSE-NN" include classical approaches: "Degree" denoting node degree, "Ones" for features of all ones, "Zeros" for features of all zeros, "Random" with features sampled uniformly at random on \([0,1]\).
Our method "LSE-NN" exploits not only similar graphs for estimating missing node features but also nodes with similar local structures. We highlight the benefits of such an approach by also comparing it to a modification denoted "LSE-NG". For this variant, we obtain the nearest graphs \(\mathcal{Q}^{(i)}\) for \(\mathcal{G}^{(i)}\) as for "LSE-NN", but we then assign feature values to each node \(k\) in \(\mathcal{G}^{(i)}\) by taking node features uniformly at random from the nodes of graphs \(\mathcal{G}^{(j)}\in\mathcal{Q}^{(i)}\). This is equivalent to replacing \(\mathbf{C}^{(i,j)}\) in (3) with a random permutation matrix \(\mathbf{P}^{(i,j)}\in\{0,1\}^{N_{i}\times N_{j}}\). Thus, "LSE-NG" predicts node features using similar graphs but does not align nodes by local structure.
**Feature generation performance.** We first compare the ability of each method to recover the original node features. The results presented in Table 1 show the node feature estimation error as \(\|\hat{\mathbf{X}}-\mathbf{X}\|_{F}^{2}/\|\mathbf{X}\|_{F}^{2}\), where \(\hat{\mathbf{X}}\) denotes the estimated features and \(\mathbf{X}\) the true ones. We see that our architecture consistently beats the alternatives, achieving a lower error in every dataset considered, in some cases by a large margin. This shows that the true node features, which are assumed to be the optimal features for downstream tasks, are best recovered by our architecture, without the need for partial observations on the graphs with missing features.
**Graph classification performance.** We also assess the utility of the estimated features for graph classification. The results are shown in Table 2, where we present label prediction accuracy using a GNN model trained with the estimated features. We choose the Graph Isomorphism Network (GIN) [16] as our GNN architecture, a standard model for graph classification. For this task, we also add an additional baseline "Not using \(\mathcal{T}_{\mathrm{miss}}\)", where we train the GIN only on the subset of graphs \(\mathcal{T}_{\mathrm{full}}\) with known node features, ignoring the graphs in \(\mathcal{T}_{\mathrm{miss}}\) with missing features. In all cases but the MUTAG dataset, the best performance is achieved using the learned GAE embeddings, either from randomly copying nodes from the nearest graphs "LSE-NG", which enjoys the best performance on PROTEINS dataset, or by using the nearest nodes "LSE-NN", which obtains superior performance for all other datasets. Thus, not only do we infer missing node features accurately, but the estimates are sufficiently realistic to bolster classification performance when we do not observe the node features of many graphs.
## 5 Conclusion
In this work, we proposed a framework to recover completely missing node features for a set of graphs. We implemented this framework for estimating features that are characterized primarily by local graph structure. To this end, we presented a node embedding space using only local topological features. The embedding space provided a node similarity metric with which we estimated missing node features using similar nodes from nearby graphs. Our estimates aid graph classification when features are missing, emphasizing the need for accurate nodal characteristics. In the future, we will generalize to applications such as graph data augmentation, where we can generate synthetic graphs with realistic node features. Our work connecting node features and graph structure can bolster the success of graph-based learning by exploiting not only structural information but also values explicitly embedded therein.
\begin{table}
\begin{tabular}{c|c c c c c c} & MUTAG & AIDS & PROTEINS & ENZYMES & ogbg-molbpp & ogbg-molbace \\ \hline True Features & 83.25 \(\pm\) 9.52 & 97.00 \(\pm\) 1.49 & 72.74 \(\pm\) 3.75 & 36.17 \(\pm\) 4.97 & 85.53 \(\pm\) 2.38 & 74.38 \(\pm\) 3.49 \\ \hline Zeros & 78.75 \(\pm\) 9.34 & 95.92 \(\pm\) 1.87 & 70.09 \(\pm\) 5.25 & 25.50 \(\pm\) 7.15 & 77.15 \(\pm\) 2.63 & 53.33 \(\pm\) 5.25 \\ Ones & 82.25 \(\pm\) 9.55 & 94.45 \(\pm\) 1.86 & 69.25 \(\pm\) 5.26 & 24.08 \(\pm\) 5.10 & 77.77 \(\pm\) 3.26 & 53.55 \(\pm\) 4.50 \\ Random & 81.50 \(\pm\) 8.23 & 94.30 \(\pm\) 2.80 & 70.75 \(\pm\) 4.66 & 22.92 \(\pm\) 6.34 & 77.61 \(\pm\) 3.46 & 53.73 \(\pm\) 4.20 \\ Degree & 81.00 \(\pm\) 12.41 & 90.60 \(\pm\) 4.88 & 69.65 \(\pm\) 5.38 & 25.42 \(\pm\) 6.32 & 79.71 \(\pm\) 3.40 & 56.34 \(\pm\) 4.77 \\ Not using \(\mathcal{T}_{\mathrm{miss}}\) & **84.75**\(\pm\) **8.73** & 95.62 \(\pm\) 1.56 & 71.28 \(\pm\) 4.87 & 21.83 \(\pm\) 6.75 & 83.27 \(\pm\) 2.13 & 67.23 \(\pm\) 5.27 \\ \hline LSE-NG (\(Q=1\)) & 83.00 \(\pm\) 7.48 & 95.75 \(\pm\) 1.65 & 71.68 \(\pm\) 3.76 & 24.08 \(\pm\) 5.01 & 82.94 \(\pm\) 1.90 & 69.02 \(\pm\) 4.10 \\ LSE-NG (\(Q=3\)) & 81.50 \(\pm\) 9.89 & 95.17 \(\pm\) 1.70 & **72.79**\(\pm\) **4.89** & 24.42 \(\pm\) 5.30 & 83.72 \(\pm\) 1.37 & 70.28 \(\pm\) 2.94 \\ \hline LSE-NN (\(Q=1\)) & 81.50 \(\pm\) 9.63 & **96.10**\(\pm\) **1.44** & 71.15 \(\pm\) 4.57 & 23.83 \(\pm\) 6.10 & 83.01 \(\pm\) 1.54 & **71.63**\(\pm\) **4.12** \\ LSE-NN (\(Q=3\)) & 78.25 \(\pm\) 8.26 & 95.97 \(\pm\) 1.63 & 70.53 \(\pm\) 4.07 & **26.00**\(\pm\) **6.13** & **84.08**\(\pm\) **2.10** & 70.33 \(\pm\) 3.57 \\ \end{tabular}
\end{table}
Table 2: Accuracy in the test dataset obtained by the baselines and by the approach presented in this work in the downstream task (graph classification). The best performances (excluding those obtained using the true features) are **bolded**. |
2309.05652 | An Effective Two-stage Training Paradigm Detector for Small Dataset | Learning from the limited amount of labeled data to the pre-train model has
always been viewed as a challenging task. In this report, an effective and
robust solution, the two-stage training paradigm YOLOv8 detector (TP-YOLOv8),
is designed for the object detection track in VIPriors Challenge 2023. First,
the backbone of YOLOv8 is pre-trained as the encoder using the masked image
modeling technique. Then the detector is fine-tuned with elaborate
augmentations. During the test stage, test-time augmentation (TTA) is used to
enhance each model, and weighted box fusion (WBF) is implemented to further
boost the performance. With the well-designed structure, our approach has
achieved 30.4% average precision from 0.50 to 0.95 on the DelftBikes test set,
ranking 4th on the leaderboard. | Zheng Wang, Dong Xie, Hanzhi Wang, Jiang Tian | 2023-09-11T17:43:11Z | http://arxiv.org/abs/2309.05652v1 | # An Effective Two-stage Training Paradigm Detector for Small Dataset
###### Abstract
Learning from the limited amount of labeled data to the pre-train model has always been viewed as a challenging task. In this report, an effective and robust solution, the two-stage training paradigm YOLOv8 detector (TP-YOLOv8), is designed for the object detection track in VIPriors Challenge 2023. First, the backbone of YOLOv8 is pre-trained as the encoder using the masked image modeling technique. Then the detector is fine-tuned with elaborate augmentations. During the test stage, test-time augmentation (TTA) is used to enhance each model, and weighted box fusion (WBF) is implemented to further boost the performance. With the well-designed structure, our approach has achieved \(30.4\%\) average precision from \(0.50\) to \(0.95\) on the DelftBikes test set, ranking 4th on the leaderboard.
## 1 Introduction
Object detection is a versatile technology that finds extensive use in numerous fields. However, training detectors requires a large amount of annotated data, which is labor-intensive and expensive to obtain. In order to improve data efficiency, the 4th Visual Inductive Priors for Data-Efficient Deep Learning Workshop (VIPriors) is introduced as an ICCV 2023 workshop. In particular, the use of model weights pre-trained on large-scale datasets is not allowed, so contestants must train a detector on the given dataset from scratch. The dataset used in the object detection track is DelftBikes[3], which contains 10,000 bike pictures, with 22 densely annotated parts per image, where some parts may be missing. The dataset is split into 8,000 training and 2,000 testing images.
We experiment with various models. The challenge does, however, limit the usage of extra data and pre-trained weights. Finally, the lightweight detector YOLOv8[4] is selected as our baseline. To address the data deficiency problem, a two-stage training method is designed in which the backbone of the detector is pretrained with the unsupervised masked image modeling (MIM)[7, 2] method and the detector is then fine-tuned with elaborate data augmentations. In the pretraining stage, an MIM technique named SparK is utilized to help the detector learn better features by integrating prior knowledge about positions into the detector. Then we load the pretrained encoder as the backbone and fine-tune it with elaborate augmentation methods such as Color Jitter, Mosaic[1], and Mix-up[8]. The visual examples of our data augmentation methods are shown in Figure 1. In the testing stage, TTA and weighted box fusion[6] are implemented to further boost the detection performance. Our approach achieves \(30.4\%\) AP on the DelftBikes test set, ranking 4th on the leaderboard.
Figure 1: Visual examples of our data augmentation methods. (a) The original image and ground truth boxes. (b) The image after color jittering. (c) The image is cut and pieced together from segments of three additional pictures. (d) The image is mixed with an additional picture to a certain proportion.
The implementation details of our method are introduced in the following sections.
## 2 Methodology
For a given image, we first use the MIM pre-trained encoder to extract image features. Then we send the extracted features to YOLOv8 to detect the objects. In this section, we first illustrate the pre-training method named SparK[7] in Sec. 2.1. Then the augmentations used in the fine-tuning stage are introduced in Sec. 2.2, and the TTA and model ensembling strategies are presented in Sec. 2.3. The framework is depicted in Figure 2.
### Pre-training Stage
The challenge dataset DelftBikes is relatively small. Experiments show that direct training on the training set leads to overfitting. In order to avoid this issue and fully utilize the positional prior knowledge, we follow the popular masked image modeling approach to pre-train the backbone of YOLOv8.
Masked image modeling, which initially extended the success of BERT from language transformers to vision transformers, has largely increased the performance of ViTs. However, the masking approach is difficult to apply to hierarchical convolutional models. SparK treats unmasked patches as sparse voxels and uses sparse convolution to encode them. SparK also employs a hierarchical decoder to make full use of the convolutional network's hierarchy, which makes masked modeling well-suited for any convolutional network and brings a performance leap on downstream tasks.
To adapt convolution to irregular masked input, visible patches are gathered into a sparse image and encoded by sparse convolution. To pre-train a hierarchical encoder, SparK engages a UNet-style architecture[5] to decode multi-scale sparse feature maps, where all empty positions are filled with mask embedding. SparK's reconstruction target is per-patch normalized pixels with an \(L^{2}\)-loss.
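For intuition, the following sketch shows the masked-reconstruction objective in a heavily simplified form: it zeroes out masked patches and uses ordinary dense convolutions instead of SparK's sparse encoding and hierarchical decoder, so it illustrates only the per-patch normalized \(L^{2}\) target. The 60% mask ratio follows our setting, while the 32-pixel patch size and the one-layer stand-in network are assumptions made for the example.

```python
import torch

def mim_loss(encoder_decoder, images, patch=32, mask_ratio=0.6):
    """Per-patch normalized-pixel L2 loss on masked patches (H, W divisible by patch)."""
    B, C, H, W = images.shape
    gh, gw = H // patch, W // patch
    mask = (torch.rand(B, 1, gh, gw, device=images.device) < mask_ratio).float()
    pixel_mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    recon = encoder_decoder(images * (1.0 - pixel_mask))   # the network only sees visible pixels

    # Reconstruction target: pixels normalized within each patch
    patches = images.unfold(2, patch, patch).unfold(3, patch, patch)  # (B,C,gh,gw,p,p)
    mu = patches.mean(dim=(-1, -2), keepdim=True)
    sd = patches.var(dim=(-1, -2), keepdim=True).add(1e-6).sqrt()
    target = ((patches - mu) / sd).permute(0, 1, 2, 4, 3, 5).reshape(B, C, H, W)

    err = (recon - target) ** 2
    return (err * pixel_mask).sum() / pixel_mask.sum().clamp(min=1) / C

# Toy usage: a single conv layer stands in for the encoder-decoder pair
toy_net = torch.nn.Conv2d(3, 3, 3, padding=1)
loss = mim_loss(toy_net, torch.rand(2, 3, 128, 128))
loss.backward()
```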
### Fine-tuning Stage
After pre-training, we discard the decoder in SparK and use the encoder as the backbone of the detector. We also follow the augmentation methods in the YOLO series, such as Mosaic and Mix-up, to improve the generalization of the detector.
The size of some object categories in the DelftBikes dataset is relatively small. With multi-scale training and testing and tweaking the architecture to detect on a larger feature map, i.e., the YOLOv8-p2, our model achieves better performance, especially on small objects. The ablation study is displayed in Sec.4.2.
### Testing Stage
Before testing, we augment the images in the test set with methods like RandomFlip and Resize, which bring more scale-wise diversity. After acquiring all the detection results, we sort the models by AP on the validation set and apply weighted box fusion to the best-performing ones.
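The following single-class sketch conveys the fusion rule behind WBF: overlapping boxes predicted by different models are clustered and replaced by confidence-weighted averages. It is a simplified illustration (our experiments rely on a standard WBF implementation), and the IoU threshold and the way cluster scores are aggregated here are illustrative choices.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def weighted_box_fusion(boxes, scores, iou_thr=0.55):
    """Fuse overlapping single-class boxes collected from several models."""
    clusters = []   # each cluster keeps its members and a running fused box
    for idx in np.argsort(scores)[::-1]:
        b, s = boxes[idx], scores[idx]
        for cl in clusters:
            if box_iou(b, cl['fused']) > iou_thr:
                cl['members'].append((b, s))
                w = np.array([m[1] for m in cl['members']])
                bs = np.array([m[0] for m in cl['members']], dtype=float)
                cl['fused'] = (bs * w[:, None]).sum(0) / w.sum()  # confidence-weighted box
                cl['score'] = w.mean()
                break
        else:
            clusters.append({'fused': np.array(b, dtype=float),
                             'members': [(b, s)], 'score': s})
    return [(c['fused'], c['score']) for c in clusters]

fused = weighted_box_fusion(
    boxes=[[10, 10, 50, 50], [12, 11, 52, 49], [200, 200, 240, 240]],
    scores=[0.9, 0.7, 0.6])
```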
Figure 2: Our two-stage training paradigm TP-YOLOv8. To perceive relative position distribution between bicycle parts, we first pre-train the backbone with the MIM technique in the blue part. Then the pre-trained weight is loaded for fine-tuning on the detection task using elaborate augmentations within the yellow background. In the testing stage, TTA and model ensembling methods are implemented to further boost the detection performance depicted in the green area.
## 3 Experiment
### Implement Detail
All of our experiments are conducted on 1 V100 GPU. In the pre-training stage, the backbone of YOLOv8 is pre-trained by reconstructing images from multi-scale encoded features with a batch size of 64 and a mask ratio of 60% for 1,000 epochs. After pre-training, we load the pre-trained weight and fine-tune the detector with annotated images. During the fine-tuning stage, we set the learning rate of the backbone to 0.001 and that of other parts of the detector to 0.01. The detector YOLOv8 is trained for 100 epochs with several different settings. In the testing stage, models with different hyperparameters are evaluated on the validation set and we take the best-performing models for weighted box fusion. In the end, we take the top 30 models for model ensembling, improving the mAP by \(1.2\%\).
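The two learning rates can be realized with optimizer parameter groups. The sketch below uses a toy module with a `backbone` attribute as a stand-in for the detector (the real training loop is handled by the YOLOv8 framework); only the 0.001/0.01 learning rates come from our setting, while the momentum value and toy layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Stand-in for the detector: a pre-trained backbone plus a detection head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 22 * 5, 1)   # 22 parts x (box + score), illustrative

    def forward(self, x):
        return self.head(self.backbone(x))

model = ToyDetector()
backbone_ids = {id(p) for p in model.backbone.parameters()}
param_groups = [
    {"params": list(model.backbone.parameters()), "lr": 0.001},   # MIM pre-trained encoder
    {"params": [p for p in model.parameters() if id(p) not in backbone_ids],
     "lr": 0.01},                                                 # randomly initialized parts
]
optimizer = torch.optim.SGD(param_groups, momentum=0.9)
```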
### Result Analysis
In the 2023 VIPriors object detection challenge, our team, wokots, secured 4th position, as shown in Table 1. Analyzing our scores in detail, we achieved an Average Precision (AP) score of 0.30 over Intersection over Union (IoU) thresholds ranging from 0.50 to 0.95, just behind the 3rd place team, HHHa, with an AP of 0.31. At an IoU threshold of 0.50, our model demonstrated an AP score of 0.63, indicating a strong ability to detect objects with significant overlap with the ground truth. For the more stringent IoU threshold of 0.75, our AP score stood at 0.25, reflecting the model's capability to maintain precision under stricter evaluation criteria. Regarding size-specific evaluations, we achieved an AP of 0.11 for small objects, which often pose challenges due to their limited visibility and intricate details, an AP of 0.27 for medium-sized objects, and an AP of 0.24 for large objects, which require the model to capture broader contexts.
We evaluated different YOLO variants on the validation set under the same set of hyperparameters, including Yolov8M/L, Yolov5M/L, and the scale-enhanced versions of these models. As shown in Table 3, the scale-enhanced configurations, especially Yolov8L-p2, consistently outperformed their base counterparts across all metrics. Finally, we choose Yolov8L-p2 as our baseline model.
In our ablation study, we evaluated the effectiveness of our augmentation strategies and the MIM pre-training. We also experiment with a bigger mask ratio and pre-training on images with the bikes cut out. The ablation experiments are illustrated in Table 2.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Rank & User & [email protected]:0.95 & [email protected] & [email protected] & AP@(small) & AP@(medium) & AP@(large) \\ \hline
1 & w12 & 0.35 (1) & 0.66 & 0.29 & 0.15 & 0.31 & 0.25 \\
2 & GroundTruth & 0.33 (2) & 0.69 & 0.28 & 0.16 & 0.27 & 0.27 \\
3 & HHHa & 0.31 (3) & 0.64 & 0.25 & 0.12 & 0.28 & 0.24 \\
**4** & **wokots** & **0.30 (4)** & **0.63** & **0.25** & **0.11** & **0.27** & **0.24** \\ \hline \hline \end{tabular}
\end{table}
Table 1: The 2023 VIPriors object detection challenge final leaderboard.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & mosaic & mixup & pre-train & mask ratio & [email protected]:0.95(\%) & [email protected](\%) & [email protected](\%) \\ \hline Yolov8-p2 & ✗ & ✗ & ✗ & - & 30.0 & 62.4 & 24.2 \\ - & ✗ & ✗ & ✗ & - & 30.1 & 62.5 & 24.4 \\ - & ✗ & ✗ & ✗ & - & 30.1 & 62.4 & 24.7 \\ - & ✗ & ✗ & ✗ & cut 0.75 & 30.2 & 62.5 & 24.6 \\ - & ✗ & ✗ & ✗ & cut 0.60 & 30.1 & 62.3 & 25.0 \\ - & ✗ & ✗ & ✗ & whole 0.75 & 30.2 & 62.8 & 24.8 \\ - & ✗ & ✗ & ✗ & whole 0.60 & 30.3 & 62.6 & 24.9 \\ Ensemble & - & - & - & - & 31.4 & 64.5 & 26.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The ablation experiments of our method.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & [email protected]:0.95 & [email protected] & [email protected] \\ \hline Yolov8M & 29.5 & 62.4 & 23.0 \\ Yolov8L & 29.6 & 62.4 & 24.1 \\ Yolov5M & 29.6 & 61.9 & 24.1 \\ Yolov5L & 29.5 & 62.0 & 23.8 \\ Yolov5L-p2 & 30.0 & 62.7 & 24.4 \\ Yolov8M-p2 & 30.1 & 62.5 & 24.7 \\ Yolov8L-p2 & 30.2 & 62.6 & 24.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experimental results on DelftBikes validation set.
## 4 Conclusion
In our method, a two-stage training paradigm named TP-YOLOv8 is proposed. The use of a masked image modeling unsupervised pre-training strategy injects prior knowledge about the relative positions of bicycle parts into the model, greatly improving the performance of the models. Besides, test-time augmentation and weighted box fusion are implemented to further boost the performance. These combined methodologies and techniques have proven their efficacy, as evidenced by our securing the 4th position in the VIPriors object detection challenge. This accomplishment underscores the potential and effectiveness of our proposed method for object detection.
|
2301.01216 | An end-to-end multi-scale network for action prediction in videos | In this paper, we develop an efficient multi-scale network to predict action
classes in partial videos in an end-to-end manner. Unlike most existing methods
with offline feature generation, our method directly takes frames as input and
further models motion evolution on two different temporal scales. Therefore, we
solve the complexity problems of the two stages of modeling and the problem of
insufficient temporal and spatial information of a single scale. Our proposed
End-to-End MultiScale Network (E2EMSNet) is composed of two scales which are
named segment scale and observed global scale. The segment scale leverages
temporal difference over consecutive frames for finer motion patterns by
supplying 2D convolutions. For observed global scale, a Long Short-Term Memory
(LSTM) is incorporated to capture motion features of observed frames. Our model
provides a simple and efficient modeling framework with a small computational
cost. Our E2EMSNet is evaluated on three challenging datasets: BIT, HMDB51, and
UCF101. The extensive experiments demonstrate the effectiveness of our method
for action prediction in videos. | Xiaofa Liu, Jianqin Yin, Yuan Sun, Zhicheng Zhang, Jin Tang | 2022-12-31T06:58:41Z | http://arxiv.org/abs/2301.01216v1 | # An End-to-End Multi-Scale Network for Action Prediction in Videos
###### Abstract
In this paper, we develop an efficient multi-scale network to predict action classes in partial videos in an end-to-end manner. Unlike most existing methods with offline feature generation, our method directly takes frames as input and further models motion evolution on two different temporal scales. Therefore, we solve the complexity problems of the two stages of modeling and the problem of insufficient temporal and spatial information of a single scale. Our proposed End-to-End Multi-Scale Network (E2EMSNet) is composed of two scales which are named segment scale and observed global scale. The segment scale leverages temporal difference over consecutive frames for finer motion patterns by supplying 2D convolutions. For the observed global scale, a Long Short-Term Memory (LSTM) is incorporated to capture motion features of observed frames. Our model provides a simple and efficient modeling framework with a small computational cost. Our E2EMSNet is evaluated on three challenging datasets: BIT, HMDB51, and UCF101. The extensive experiments demonstrate the effectiveness of our method for action prediction in videos.
**Index terms: action prediction, multi-scale network, end-to-end method.**
## I Introduction
The goal of action prediction in videos is to predict the class label of an ongoing action from the part of it observed so far along the temporal axis[1]. It is a subset of a broader research domain on human activity analysis. Different from conventional action recognition with fully executed actions[2][3][4], it is more challenging to predict the action label of ongoing actions due to the incompleteness of actions and their continuous evolution. It has attracted a lot of research attention because of its wide application in scenarios with high real-time requirements, such as human-machine interaction, security surveillance, etc.
Although previous work has achieved promising results by adopting a two-stage approach, it generally suffers from complex modeling and feature redundancy. Previous methods separated feature extraction from predictive modeling[5][6][7][8][9][10][11][12]. This separation means that the obtained spatio-temporal representation may deviate from what action prediction requires, and it complicates the model design. Moreover, because the features are generated offline, the complete action must be divided into fixed segments in advance, which not only results in redundancy of the features in the time dimension but also is not applicable to the evolving action.
Therefore, in this paper, we propose an end-to-end method, which effectively reduces the complexity of the model and introduces more fine-grained spatio-temporal information. We design the end-to-end network from three aspects: the sampling method, the local spatio-temporal information representation, and the long-term temporal fusion. First, in order to adapt the end-to-end structure to evolving motion, we change the preprocessing and feature generation method, which will be described in Section III. Second, to reduce the computational cost of the end-to-end structure, we use 2D convolutions instead of two-stream networks or 3D convolutions to extract local spatio-temporal features. Finally, to enhance the temporal information of action evolution, we present an observed global scale to fuse the historical evolution information of actions.
Similar to the application of spatial multi-scale methods in the image domain, multi-scale research in the temporal dimension is also increasing in video analytics. Compared to images, the variation of temporal scales in videos poses additional challenges. How to effectively utilize the motion evolution information at different time scales has gradually gained attention in video motion analysis. Feichtenhofer et al. [4] proposed the SlowFast network for video recognition. Their method utilizes two branches, a slow pathway with low frame rate and a fast pathway with high frame rate, to capture spatial semantics and motion at fine temporal resolution. Wang et al. [13] proposed an efficient multi-scale model for action recognition, which utilizes short-term and long-term temporal difference modules to better capture both short-term and long-term motion information.
Most of the existing action prediction methods pay insufficient attention to multiple temporal scales, which makes them fail to capture fine-grained temporal information. They use a fixed frame rate to sample each partial video, and use a fixed temporal scale for feature generation and modeling[1][5][6][7][8][9][11]. Although these methods simplify the processing of the input for feature generation and reduce the computation to a certain extent, they ignore the evolution of the action. Too much fine-grained information is lost, and the spatio-temporal information in the video cannot be fully utilized.
Our method takes both the local evolution information between adjacent frames and the global evolution information of the entire observed video sequence into account. Therefore, we design two temporal scales to increase fine-grained temporal information. Firstly, the segment scale uses RGB frames with temporal differences to capture temporal information in each segment. Secondly, the observed global scale uses an LSTM module to fuse all the observed action evolution information. Through modeling on short-term and long-term time scales, our method can mine more fine-grained temporal information without increasing the computational load.
Our E2EMSNet provides a simple yet effective framework for the problem of ongoing action prediction in videos. In summary, our main contributions lie in the following three aspects:
\(\bullet\) We propose a simple end-to-end approach for action prediction in videos. To the best of our knowledge, this is the first work focusing on this problem.
\(\bullet\) We investigate two scales in the temporal dimension to model the evolution of actions, and propose a segment summarization and propagation framework. The segment scale is used to model the local evolution of the action, and the observed global scale is used to model the global evolution of the action.
\(\bullet\) We achieve a trade-off of efficiency and effectiveness. We achieve state-of-the-art performance on several datasets while using only 2D convolutions framework and RGB format of features.
## II Related Work
### _Action Recognition_
Action recognition methods take fully observed videos as input and output labels of human actions. Action recognition has been extensively studied in past few years[2][3][4][13][14]. These studies can be roughly divided into two categories. Methods in the first category are two-stream CNNs, which was first proposed in[15]. It used two inputs of RGB and optical flow to model appearance and motion information separately in videos with a late fusion. In addition, follow-up research has adopted two RGB inputs sampled at different FPS or carefully designed temporal modules for efficiency, including Non-local Net[16], STM[17], SlowFast[4], and Correlation Net[18]. The second method is to use 3D CNNs[19][20]. It proposed 3D convolution and pooling to learn spatiotemporal features from videos directly. Several variants adopted a 2D + 1D paradigm to reduce the computation cost of 3D convolution, which implement by decomposing 3D CNNs into a 2D convolution and a 1D temporal convolution[21][22][23]. Several works focused on designing more powerful and efficient temporal modules, such as TSM[14], TAM[24], TEA[25], and TDN[13]. More recent works tried clip-based architecture search for video recognition, focusing on capturing appearance and motion or context information in a more fine-grained and efficient manner[13][26]. Although these methods mainly learned features for the videos with full action executions, their core ideas have certain reference significance for ongoing action prediction in videos.
### _Action Prediction_
Action prediction methods were proposed to predict the action given a partially observed video. [9] was the first work along these lines, they formulated the problem probabilistically and proposed a dynamic bag-of-words approach, modeling how feature distributions of activities change as observations increase. In the last decade, researchers approach this task from various perspectives and can be grouped into three major divisions[27]. The first method can be formulated as one-shot mappings from partial observations to groundtruth labels of full observations. The basic assumption underlying these methods is that a partial observation of an action video provides sufficient information to define the appropriate overall action class regardless of the unobserved part. Follow-up research work[28][29][6][30] adopted more robust features, hierarchical extractions, and learning-based classifiers to perform more fine-grained analysis of an initial partial observation for better performance. The second division is knowledge distillation-based methods. These methods distill the information from the full observations into partial observations[31][5][11][32]. These methods attempted to lend power from unobserved data in training to either enrich the feature representation of partial data or encourage the classifiers to easily recognize partial data. Another way to exploit future information is by propagating the partial observation into the future in a temporal extrapolation fashion[33][34][12][35][36]. For example, [12] learned to propagate frame-wise residuals in feature space to complete partial observation.
Fig. 1: Relevant definitions in action prediction in videos: full video, partial video, segments, and observation ratio.
### Multiple temporal scales for action analysis in videos
Temporal sequence forecasting usually faces the following situations for scenarios with insignificant periodic motion: long-term forecasts need to consider trend information (long-term dependencies), and short-term forecasts need to consider fine-grained volatility (short-term dependencies). The current difficulty is how to model long-term dynamic dependencies while considering both long-term and short-term dependencies. There are currently two families of methods. The main existing method is hierarchical modeling, which is achieved by establishing hidden layers of different granularities[37][38][39][40][41] or decomposing the original data to obtain data of different granularities[42][43]. The second method is designing a gating mechanism, which is achieved by modifying the internal structure of the RNN[44]. We inherit the idea that both long-term and short-term dependencies in video must be carefully considered, and a trade-off approach is adopted.
## III Our Method
In this section, we detail our approach to mining ongoing action evolution information in videos using multiple scales in an end-to-end fashion. Specifically, we first describe the problem formulation. Then, we elaborate on our end-to-end framework and method for multi-scale modeling of ongoing action sequences.
### Problem formulation
Given a video containing human motion (the video may contain an arbitrarily incomplete motion), the goal is to predict its class label. We follow the problem formulation in [31], which has been widely adopted in subsequent work[5][7][11]. As shown in Fig. 1, given a _full video_ \(X[1:T]\) with complete action execution, 1 represents the first frame of the video, and \(T\) represents the last frame. We use \(x[1:t],t\in[1,T]\) to simulate the action execution in the video from frame 1 to frame \(t\), defined as a _partial video_. In order to facilitate quantitative experiments, we usually divide a full video into \(K\) segments, each containing \(T/K\) frames. Assuming that the action is executed to the \(k\)-th segment, \(k\in\{1,2,\ldots,K\}\), the _observation ratio_ is defined as \(r=k/K\). As defined above and shown in Fig. 1, the full video \(X\) is divided into \(K\) segments. Among them, the partial video marked in green has an observation ratio \(r=k/K=2/10=0.2\), and it can be considered that its action has been executed to 20%.
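The notation can be made concrete with a small sketch, assuming the video is available as an ordered list of frames; the uniform segment boundaries follow the split described above.

```python
def split_into_segments(frames, K=10):
    """Split a full video X[1:T] into K equally sized segments of frames."""
    T = len(frames)
    return [frames[k * T // K:(k + 1) * T // K] for k in range(K)]

def partial_video(segments, k):
    """Return x[1:t] after the action has evolved through the first k segments."""
    observed = [f for seg in segments[:k] for f in seg]
    observation_ratio = k / len(segments)   # r = k / K
    return observed, observation_ratio
```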
### Data processing
We adopt a data processing method different from the previous method. As shown in Fig. 2, the upper part is the data processing method used in the previous method. They first divided a complete video \(X\) into \(K\) segments, and combined segments into partial videos to simulate action evolution. Then the partial video is sampled to extract the spatio-temporal representation. The problem caused by this is that each partial video needs to be separately extracted for spatio-temporal representation, which divides the continuous evolution of action. The feature extraction of partial videos with higher observation rates cannot use the previous partial videos with lower observation rates. It will cause redundancy in the time dimension. At the same time, with the increase in the observation rate, the temporal information will become more and more sparse. Compared with them, we directly extract the local spatio-temporal representations of each segment. In this way, the previous spatio-temporal information can be continuously used with the evolution of actions. This makes our model more robust to action duration, and more abundant spatio-temporal information can be obtained.
Fig. 2: Differences in data processing between our method and previous methods. The upper is the data processing method used in the previous method, and the lower is the data processing strategy used in our method.
### _Network architectures_
In this subsection, we elaborate on our network structure. Due to the data processing method mentioned in the previous section and the design of network structure, we can model action evolution in a finer-grained manner without increasing the computational load. First, we introduce how to extract short-term features for short time windows, which we call the segment scale. Then, we introduce how to fuse the segment scale to generate observed global features for the observed local videos.
**Segment scale.** Compared with images, video is a dynamic sequence of pictures arranged in time, so the temporal context relationship of frames and the spatial relationship organization of a single frame need to be considered simultaneously. For extracting and fusing these two kinds of relations in local time windows, directly stacking frames as input would bring a lot of redundant information. This method is inefficient. Moreover, it would introduce too much noise and reduce the robustness of the model. If only a single image frame is used as input, the dynamic information of the temporal window is lost. The RGB temporal difference turned out to be an efficient alternative modality to optical flow as a motion representation [45][13]. To extract the spatio-temporal features of each local temporal window, we adopt the idea in [13] as a short-term feature extraction module. Different from action recognition, in the action prediction problem we cannot access the spatio-temporal information after the current frame, so we only keep the short-term TDM (temporal difference module) in [13]. Specifically, for each segment, we randomly sample 5 frames \(I=\{I_{t-2},I_{t-1},I_{t},I_{t+1},I_{t+2}\}\), then the RGB difference information of these frames is down-sampled, and a 2D convolutional network is used to obtain the depth feature \(S(I_{i})\), as expressed in Equation (1).
\[S(I_{i})=\textit{Upsample}(\textit{CNN}(\textit{Downsample}(D(I_{i})))) \tag{1}\]
At the same time, to preserve the original frame-level representation as much as possible, we fuse the original features \(I_{i}\) with \(S(I_{i})\) after convolutions (in our actual experiment, the original feature passes through a layer of 2D CNN, as shown in Equation (2)).
\[S(\textit{fuse})=S(I_{i})+\textit{CNN}(I_{i}) \tag{2}\]
The fused feature is fused again with the feature from RGB difference (Equation (3)). Finally, the feature of each segment is obtained, which is the representation of segment scale.
\[S(out)=\textit{CNN}(S(\textit{fuse}))+\textit{CNN}(\textit{Downsample}(D(I_{i}))) \tag{3}\]
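A simplified PyTorch sketch of Eqs. (1)-(3) is given below. Single convolution layers stand in for the ResNet stages used in practice, channel counts and the 2x down/up-sampling factor are placeholders, and the second term of Eq. (3) is upsampled back to the original resolution so that the two feature maps can be added in this toy setting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentScale(nn.Module):
    """Fuse a center frame with the RGB temporal differences of its segment (Eqs. 1-3)."""
    def __init__(self, ch=64):
        super().__init__()
        self.diff_cnn = nn.Conv2d(4 * 3, ch, 3, padding=1)   # CNN on stacked RGB differences
        self.frame_cnn = nn.Conv2d(3, ch, 3, padding=1)      # CNN on the center frame
        self.fuse_cnn = nn.Conv2d(ch, ch, 3, padding=1)
        self.down_cnn = nn.Conv2d(4 * 3, ch, 3, padding=1)

    def forward(self, frames):
        # frames: (B, 5, 3, H, W), the five sampled frames of one segment
        center = frames[:, 2]
        diffs = (frames[:, 1:] - frames[:, :-1]).flatten(1, 2)      # D(I): four RGB differences
        small = F.avg_pool2d(diffs, 2)                              # Downsample(D(I))
        s_i = F.interpolate(self.diff_cnn(small), scale_factor=2)   # Eq. (1)
        s_fuse = s_i + self.frame_cnn(center)                       # Eq. (2)
        down_feat = F.interpolate(self.down_cnn(small), scale_factor=2)
        return self.fuse_cnn(s_fuse) + down_feat                    # Eq. (3)

seg = SegmentScale()
feat = seg(torch.rand(2, 5, 3, 64, 64))   # per-segment feature map of shape (2, 64, 64, 64)
```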
**Observed global scale.** In action prediction, the action evolution of the human body is an ongoing sequence of information, and we use the observation rate to simulate its progress. Therefore, the segments are temporally sequential, and the representative actions can only evolve from front to back. In the previous section, we model the local spatio-temporal action of each segment. More logically, as time progresses, each segment's local temporal window is added to the historical sequence before it. Therefore, the crux of the problem is how to effectively utilize all observed segments to reconstruct the historical global evolution.
Fig. 3: Overview of End-to-End Multi-scale Network. Given a full video, split it into K segments. For each segment, a CNN-based module extracts the local motion evolution to achieve more fine-grained modeling, which we call the segment scale. Then, temporal modeling is performed on each segment in chronological order to model the observed global action evolution, which we call the observed global scale.
Moreover, in the actual scene, the evolution of the action cannot know its end time and duration, which means that the overall length of the history is uncertain. Therefore, it is natural to use the variable-length input characteristics of LSTM to model the global spatiotemporal characteristics of historical observations, as shown in formula (4).
\[Y(i)=L(S(out)) \tag{4}\]
As shown in Fig. 3, when the action evolves to the third segment, the LSTM adds the short-term time window of the third segment to the historical observation in the time dimension, implementing the observed global evolution that models the first three segments progressively. In this way, the spatio-temporal relationship in each segment can be modeled in a more fine-grained manner, and the subsequent segments are incorporated in a progressive manner to model the global history without additional computational consumption.
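The observed global scale can accordingly be sketched as an LSTM running over the sequence of (spatially pooled) segment features, producing one prediction per observation ratio. The hidden size and the eight-class output (the number of BIT classes, used here only as an example) are illustrative values.

```python
import torch
import torch.nn as nn

class ObservedGlobalScale(nn.Module):
    """Progressively fuse segment-scale features and classify at every step."""
    def __init__(self, feat_dim=64, hidden=256, num_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, segment_feats):
        # segment_feats: (B, k, feat_dim), one pooled feature per observed segment
        out, _ = self.lstm(segment_feats)   # Eq. (4): fuse the history up to each segment
        return self.classifier(out)         # (B, k, num_classes): one prediction per ratio

model = ObservedGlobalScale()
pooled = torch.rand(2, 3, 64)               # e.g., spatially averaged SegmentScale outputs
logits = model(pooled)                      # predictions after 1, 2, and 3 observed segments
```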
## IV Experiments
In this section, we present the experiment results of our framework. First, we describe the evaluation datasets and implementation details. Then, we compare our E2EMSNet with state-of-the-art methods.
### _Datasets_
We evaluate our method on three video datasets: BIT[46], HMDB51[47] and UCF101[48]. **BIT** consists of 8 classes of human interactions (bow, boxing, handshake, high-five, hug, kick, pat, push), with 50 videos per class. Videos are captured in realistic scenes with cluttered backgrounds, partially occluded body parts, moving objects, and variations in subject appearance, scale, illumination condition, and viewpoint. Even though BIT has a limited number of classes and videos, it is a complex dataset because of its backgrounds and the similarity of the beginning and ending scenes. The ratio of videos between training and testing is 17:8. **HMDB51** is a large-scale human action recognition dataset that comprises 51 daily action categories. It contains some fine-grained human facial motions, such as smiling and laughing, in static background windows, which are not seen in other comparable datasets and challenge the spatiotemporal modeling of actions. There are 6766 video clips with at least 102 videos for each class, and there are three official data splits. **UCF101** is a dataset collected from YouTube and trimmed for action recognition (each video contains exactly one action). It includes 101 distinct action classes and 13320 overall video clips with at least 100 videos for each category. All videos are divided into 25 groups and come with three official train/test splits.
### _Implementation details_
Thanks to our end-to-end network structure design, we can easily generalize to various video datasets. In experiments, we use ResNet50 with the short-term module in [13] to build the segment scale. On the three datasets, we simulate the action evolution with the observation ratio from 0.1 to 1, with a step size of 0.1, to obtain ten segments, and use each segment as a segment scale. Our network structure can use any length and number of segments as the segment scale. For each segment, we randomly sample 5 frames for computing the RGB differential information. We employ convolutional layers pre-trained on Kinetics-400, and set dropout to reduce overfitting. We first convert each video into frames; each frame is resized to have its shorter side in [256, 320], and a 224\(\times\)224 crop is randomly taken. We use two NVIDIA GeForce RTX 3090s to train our model. On the BIT dataset, we follow the official settings to divide the training set and test set. Specifically, in each category, 34 videos are used as the training set, and 16 videos are used as the test set. On the HMDB51 dataset, we follow the standard evaluation protocol using three training/testing splits, and report the average accuracy over the three splits. On the UCF101 dataset, we use the first 15 groups of videos for model training, the following 3 groups for model validation, and the remaining 7 groups for testing.
### _Comparison with the state of the art_
In this subsection, we compare our E2EMSNet with state-of-the-art methods, including DBoW[9], MTSSVM[28], MMAPM[31], Deep-SCN[5], AAPNet[49], RGN-KF[12], RSPG + AS-GCN[8], AORAP[50], and AASE + JOLO-GCN[51] on the BIT dataset; MTSSVM[28], Global-local[52], AKT[7], and STRR[30] on the HMDB51 dataset; and MTSSVM[28], DeepSCN[5], AAPNet[49], Teacher-Student[11], RGN-KF[12], RSPG + AS-GCN[8], SPR-Net[53], JVS + JCC + JFIP[32], STRR (ResNet18)[30], and Xinxiao Wu et al.[54] on the UCF101 dataset. We report the results of these compared methods as provided by the authors.
Table I illustrates the accuracy of action prediction and compares our method with several state-of-the-art methods on the BIT dataset. As seen from the results, our method achieves significant improvements in observation rates from 0.1 to 1. This can be explained by the fact that our method can make reliable predictions on actions as the actions evolve.
Table II shows the experimental results on the HMDB51 dataset, and Table III shows the experimental results on the UCF101 dataset. Thanks to the design of our segment scale, action evolution can be modeled in a more fine-grained way. As shown in the tables, at an observation ratio of 0.2, the accuracy on the HMDB51 dataset is increased by more than 10%, and the accuracy on UCF101 is increased by more than 3% over all results except those in[32]. This means that our method can better predict the action class in the early stages of the action. As the observation ratio increases, our method still achieves competitive performance, although the performance improvement is limited.
At the same time, we have to admit that on the HMDB51 and UCF101 datasets, although our method achieves relatively good performance when the observation ratio is low, our model is limited at the later observation ratios as the action continues to evolve and the temporal scale continues to grow. We think that the modeling ability of the observed global scale for long time windows is insufficient.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Input} & \multirow{2}{*}{Feature-dim} & \multicolumn{11}{c}{Observation Ratio} \\ \cline{4-14} & & & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1.0 & Avg. \\ \hline MTSSVM[28] & & & & & & & & & & & & & \\ Global-local[52] & & & & & & & & & & & & & \\ AKT[7] & & & & & & & & & & & & & \\ STRR[30] & & & & & & & & & & & & & \\ \hline
**E2EMSNet (Ours)** & RGB & 2D-CNN\(+\)LSTM & **59.21** & **60.52** & **62.23** & **64.47** & **64.73** & **64.86** & **64.86** & **65.26** & **65.13** & **65.39** & **63.67** \\ \hline \hline \end{tabular}
\end{table} TABLE II: THE ACCURACY (%) OF DIFFERENT ACTION PREDICTION METHODS ON THE HMDB51 DATASET AT DIFFERENT OBSERVATION RATIOS FROM 0.1 TO 1. NOTE THAT MISSING VALUES ARE BECAUSE THE EXPERIMENTAL RESULTS AT THE CORRESPONDING OBSERVATION RATIOS ARE NOT PROVIDED IN THE ORIGINAL PAPERS.
### _Ablation study_
Here, we provide more evaluation results on the UCF101 dataset.
**Influence of multi-scale architecture.** Table IV illustrates the results of the ablation study on different scale architectures. First, we introduce the details of the ablation study. Then, we analyze the effects of the multi-scale architecture by comparing the results under different settings.
'The segment scale only' uses the CNN-based module alone for action prediction. 'The segment scale + observed global scale' uses both the CNN-based and LSTM modules to learn information at different scales. In the first setting, for action clips with different observation rates, we sample 5 frames and use the segment scale only for prediction. In the second setting, we adopt the complete structure with both the segment scale and the observed global scale. Even though the difference in average accuracy is small, the multi-scale structure is essential for ongoing action prediction. The results of 'The segment scale only' show little discrimination across observation rates, as shown in Fig. 4, indicating that its feature representation is not discriminative enough with respect to the observation rate. At the same time, due to the sparse sampling over long temporal windows, we believe this setting would perform worse for complex and long-duration actions. Conversely, adding the observed global scale and changing the sampling strategy makes the prediction process more consistent with how action understanding should evolve (as the observation rate increases, the confidence of the prediction should increase). Moreover, owing to the more fine-grained feature extraction, the full model is more robust to complex and long-duration actions.
**Influence of hyperparameters.** Finally, we briefly report the experimental results on the UCF101 dataset under different hyperparameter settings. To vary a single factor at a time, we conducted comparative experiments on the following hyperparameters, and the results are shown in Table V.
### _Analysis of the performance of different actions_
We follow the grouping of the UCF101 dataset and divide it into five groups: Human-Object Interaction, Body-Motion Only, Human-Human Interaction, Playing Musical Instruments, and Sports. We selected three action categories under each group, fifteen action categories in total, to visually analyze their classification results. The selected categories are: Blowing Candles, Blow Dry Hair, Cutting In Kitchen, Apply Eye Makeup, Baby Crawling, Pull Ups, Haircut, Head Massage, Punch, Playing Guitar, Playing Piano, Playing Violin, Basketball, Basketball Dunk, and Biking. We keep the two modules, segment scale and observed global scale, and only modify and retrain the last classification layer. The confusion matrix of the results for the 15 actions at a progress level of 20% is shown in Fig. 5. The figure shows that our model maintains stable prediction performance across different scenarios, even at the very early stage of actions. Only a few actions (Haircut, Blow Dry Hair, and Head Massage) with very similar appearance were mispredicted. Fig. 6 shows an appearance comparison of Haircut, Blow Dry Hair, and Head Massage; the three actions are visually difficult to distinguish, which explains the mispredictions.
Fig. 4: Prediction accuracy (%) under two scale settings on UCF101 dataset.
## V Conclusion
In this paper, we have proposed a network model, E2EMSNet, for action prediction in videos. We propose two temporal scales, the segment scale and the observed global scale, to model the evolution of actions, and fuse the two scales into an end-to-end framework. A stack of 2D convolutional layers taking RGB differences as input is introduced to model the local evolution of actions in a fine-grained way. The LSTM layer then fuses the segment scales along the temporal dimension into an observed global scale to model the long-term evolution of actions. Experimental validation and analysis show that our method possesses a powerful local-scale modeling capability for ongoing actions. However, due to the growing time scale and the accumulating noise, the observed global scale does not yet achieve the global modeling ability we expected for evolving actions, which will be the focus of our future work.
Fig. 5: Confusion matrix of the result of 15 classes at progress level of 20% on UCF101 dataset.
Fig. 6: Appearance comparison of Haircut, Blow Dry Hair, and Head Massage. |
2309.13756 | Internal magnetic fields in 13 red giants detected by asteroseismology | While surface fields have been measured for stars across the HR diagram,
internal magnetic fields remain largely unknown. The recent seismic detection
of magnetic fields in the cores of several Kepler red giants has opened a new
avenue to understand better the origin of magnetic fields and their impact on
stellar structure and evolution. We aim to use asteroseismology to
systematically search for internal magnetic fields in red giant stars and to
determine the strengths and geometries of these fields. Magnetic fields are
known to break the symmetry of rotational multiplets. In red giants,
oscillation modes are mixed, behaving as pressure modes in the envelope and as
gravity modes in the core. Magnetism-induced asymmetries are expected to be
stronger for g-dominated modes than for p-dominated modes and to decrease with
frequency. After collecting a sample of 2500 Kepler red giant stars with clear
mixed-mode patterns, we specifically searched for targets among 1200 stars with
dipole triplets. We identified 13 stars exhibiting clear asymmetric multiplets
and measured their parameters, especially the asymmetry parameter and the
magnetic frequency shift. By combining these estimates with best-fitting
stellar models, we measured average core magnetic fields ranging from 20 to
150kG, corresponding to 5% to 30% of the critical field strengths. We showed
that the detected core fields have various horizontal geometries, some of which
significantly differ from a dipolar configuration. We found that the field
strengths decrease with stellar evolution, despite the fact that the cores of
these stars are contracting. Even though these stars have strong internal
magnetic fields, they display normal core rotation rates, suggesting no
significantly different histories of angular momentum transport compared to
other red giant stars. We also discuss the possible origin of the detected
fields. | Gang Li, Sébastien Deheuvels, Tanda Li, Jérôme Ballot, François Lignières | 2023-09-24T21:18:08Z | http://arxiv.org/abs/2309.13756v1 | # Internal magnetic fields in 13 red giants detected by asteroseismology
###### Abstract
Context:Magnetic fields affect stars at all evolutionary stages. While surface fields have been measured for stars across the HR diagram, internal magnetic fields remain largely unknown. The recent seismic detection of magnetic fields in the cores of several _Kepler_ red giants has opened a new avenue to better understand the origin of magnetic fields and their impact on stellar structure and evolution.
Aims:The goal of our study is to use asteroseismology to systematically search for internal magnetic fields in red giant stars observed with the _Kepler_ satellite, and to determine the strengths and geometries of these fields.
Methods:Magnetic fields are known to break the symmetry of rotational multiplets. In red giants, oscillation modes are mixed, behaving as pressure modes in the envelope and as gravity modes in the core. Magnetism-induced asymmetries are expected to be stronger for gravity-dominated modes than for pressure-dominated modes, and to decrease with frequency. Among _Kepler_ red giants, we searched for stars that exhibit asymmetries satisfying these properties.
Results:After collecting a sample of \(\sim\)2500 Kepler red giant stars with clear mixed-mode patterns, we specifically searched for targets among \(\sim\)1200 stars with dipole triplets. We identified 13 stars exhibiting clear asymmetric multiplets and measured their parameters, especially the asymmetry parameter \(a\) and the magnetic frequency shift \(\delta\nu_{\rm g}\). By combining these estimates with best-fitting stellar models, we measured average core magnetic fields ranging from \(\sim\)20 to \(\sim\)150 kG, corresponding to \(\sim\)5% to \(\sim\)30% of the critical field strengths. We showed that the detected core fields have various horizontal geometries, some of which significantly differ from a dipolar configuration. We found that the field strengths decrease with stellar evolution, despite the fact that the cores of these stars are contracting. Additionally, even though these stars have strong internal magnetic fields, they display normal core rotation rates, suggesting no significantly different histories of angular momentum transport compared to other red giant stars. We also discuss the possible origin of the detected fields.
Conclusions:
## 1 Introduction
Understanding the creation and evolution of magnetic fields is one of the main challenges faced by modern stellar physics. An important effect of magnetic fields on stellar evolution is that they are efficient at transporting angular momentum (Cantiello et al., 2014; Rudiger et al., 2015; Fuller et al., 2019; Gouhier et al., 2022). Thus, they influence the internal rotation of stars, and in turn the transport of chemical elements. Surface magnetic fields have been observed across the Hertzsprung-Russell diagram (Landstreet, 1992; Donati and Landstreet, 2009). These measurements, together with numerical simulations, establish the presence of dynamo-generated magnetic fields in convective regions (Donati and Landstreet, 2009; Auriere et al., 2015). A small fraction (\(5-10\%\)) of intermediate- and high-mass stars with radiative envelopes harbours strong kilogauss surface fields that remain stable over decades (Wade et al., 2012; Braithwaite and Spruit, 2017). These fields, which are thought to result from the star formation process, can subsist thanks to negligible ohmic diffusion and are potential progenitors for the magnetic white dwarfs and neutron stars (Ferrario et al., 2020). Another potentially widespread class of magnetic intermediate-mass stars has been identified with the detection of \(\sim\)1 Gauss fields in a few stars (Lignieres et al., 2009; Blazere et al., 2016). Although it seems plausible that magnetic fields pervade much of stellar interior plasmas, the absence of direct measurements of internal fields has posed a significant obstacle to the study of their properties and their impact on stellar evolution. Fortunately, we have asteroseismology to help us explore the interior of stars (e.g. Aerts et al., 2010).
Asteroseismology has yielded measurements of various physical processes within stars, one of which is stellar rotation at various stages of their evolution: on the main sequence (e.g., Kurtz et al., 2014; Benomar et al., 2015; Van Reeth et al., 2016), in the subgiant and red giant branch (RGB) phases (Beck et al., 2012; Mosser et al., 2012; Deheuvels et al., 2014; Triana et al., 2017; Gehan et al., 2018; Deheuvels et al., 2020; Kuszlewicz et al., 2023), in the core-He burning phase (Mosser et al., 2012; Deheuvels et al., 2015) and in white dwarfs (Hermes et al., 2017). In all of these phases, it was concluded that stars rotate more slowly than predicted by theoretical models (Zahn, 1992; Eggenberger et al., 2012; Ceillier et al., 2013; Marques et al., 2013; Ouazzani
et al. 2019; Li et al. 2020). This shows that there must be additional not-yet-identified processes that efficiently carry angular momentum inside stars, beyond the classical hydrodynamic processes (Cantiello et al. 2014; Fuller et al. 2014; Belkacem et al. 2015; Spada et al. 2016; Pincon et al. 2016; Eggenberger et al. 2017, 2019). One of the main solutions proposed is the presence of magnetic fields in radiative cores.
The advent of space missions partly dedicated to asteroseismology has yielded the high frequency resolution and photometric accuracy required to characterise stellar oscillation modes. Thanks to the _Kepler_ mission (Borucki et al. 2010), solar-like oscillators have been discovered in tens of thousands of stars (Bedding et al. 2010; Yu et al. 2018). Their oscillations are excited stochastically by the outer envelope convection similar to the Sun. Most of these solar-like oscillators are post-main-sequence stars that exhibit mixed dipole (\(l=1\)) modes, which arise from the coupling between the outer pressure modes and the interior gravity modes (Bedding 2014). Mixed modes enable us to probe the physics from the stellar core to surface, for example, to distinguish evolutionary stages (Bedding et al. 2011; Mosser et al. 2011), infer previous processes such as mergers or mass transfers (Deheuvels et al. 2022; Rui & Fuller 2021; Li et al. 2022c), or measure the internal rotation, as mentioned above. Around 20% of red giants exhibit suppressed dipole mixed modes (Mosser et al. 2012a, Garcia et al. 2014, Stello et al. 2016). It was suggested that this phenomenon could be caused by central magnetic fields exceeding the critical field intensity \(B_{\rm c}\) above which magneto-gravity waves no longer propagate in the core (Fuller et al. 2015; Stello et al. 2016; Rui & Fuller 2023). However, this interpretation is still a topic of debate (Mosser et al. 2017a; Loi 2020).
From a theoretical perspective, it has been known for decades that magnetic fields impact stellar oscillations (Gough & Thompson 1990; Hasan et al. 2005). Similarly to rotation, they break the degeneracy of oscillation modes with same degree \(l\) but different azimuthal order \(m\). Rotational effects alone produce multiplets (triplets for \(l=1\) modes) that are generally symmetric with respect to the central \(m=0\) component when the rotation rate is not too fast (that is, when second-order effects are negligible). The effects of magnetic fields on mixed modes in red giants has been addressed, considering the simple case of dipolar fields with different radial profiles, either aligned with the rotation axis (Gomes & Lopes 2020, Mathis et al. 2021, Bugnet et al. 2021) or inclined (Loi 2021; Mathis & Bugnet 2023). These studies showed that magnetic fields are expected to break the symmetry of rotational multiplets, and they produced estimates of the minimal field intensities required to detect magnetic asymmetries, defined as \(\delta_{\rm asym}=\nu_{m=-1}+\nu_{m=1}-2\nu_{m=0}\) (Deheuvels et al. 2017).
An observational breakthrough has only recently been achieved. Li et al. (2022a) reported the detection of clear asymmetries in the \(l=1\) multiplets of three _Kepler_ red giants, which they found could only be accounted for by the presence of strong magnetic fields in their cores. For this purpose, they extended previous theoretical works to magnetic fields with arbitrary configurations and found that the magnetic asymmetries of multiplets can be either positive or negative depending on the field topology (all the configurations studied before them yielded positive asymmetries). They measured radial field intensities ranging from 30 to 130 kG in these stars, and placed constraints on their topology. If the magnetic field is strong enough, it can also significantly modify the regular period spacing of g modes, which can be used to detect it (Li et al. 2022a; Bugnet 2022). Deheuvels et al. (2023) thus detected even stronger core fields in 11 _Kepler_ red giants, with intensities that are comparable to the critical field strength \(B_{\rm c}\).
These studies naturally raise the question of the prevalence of magnetic red giants, which can shed light on the origin of these fields. In this study, we systematically searched for asymmetries in the rotational multiplets of _Kepler_ red giants with detected oscillations. The paper is organised as follows. In Sect. 2, we present the method we used to search for multiplet asymmetries among _Kepler_ red giants and we list the underlying assumptions. This leads us to identify 13 targets with multiplet asymmetries that exhibit all the features expected in the presence of an internal magnetic field. In Sect. 3, we fit asymptotic expressions of mixed modes including rotational and magnetic perturbations to the observations for these stars. We thus obtain estimates of the average field strength in the core, as well as constraints on its horizontal topology. In Sect. 4, we discuss the implications of these results for the origin and evolution of internal magnetic fields, and their impact on angular momentum transport. Sect. 5 is dedicated to conclusions.
## 2 Systematic search for multiplet asymmetries in Kepler data
### Assumptions
The power spectra of mixed modes in red giants are complex. Specific methods were derived based on asymptotic expressions of mixed modes (Shibahashi 1979, Unno et al. 1989) to identify the modes and recover general properties of p and g modes. In this study, we used methods that are derived from those prescribed by Mosser et al. (2015a), Vrard et al. (2016) and Gehan et al. (2018) in order to search for asymmetric multiplets within _Kepler_ data. These methods assume that rotational multiplets are symmetric, so they needed to be adapted. They also make use of the regularity in the period spacings of asymptotic g modes. For field intensities comparable to those found by Li et al. (2022a), this assumption remains approximately correct. However, stronger fields can significantly modify this regularity, as already mentioned in Section 1, and it is likely that the methods that we use in this study are ill-suited to detect such strong fields. This introduces an observational bias, which is further discussed in Section 4.6.
We also assume that the effects of non-axisymmetry of the magnetic field on oscillations are small. If it is not the case, multiplets can be split into \((2l+1)^{2}\) components, instead of \((2l+1)\) in the axisymmetric case (Gough & Thompson 1990,Loi 2021,Li et al. 2022a). Dipole multiplets can thus have up to nine components instead of three. Li et al. (2022a) have shown that this effect arises only if the ratio between the magnetic frequency shift and the rotational frequency shift exceeds unity. We note that this was not the case for the three red giants studied in Li et al. (2022a) (we found ratios that do not exceed \(\sim 0.6\)). For these stars, the magnetic perturbations of the oscillation frequencies are expected to be indistinguishable, whether the field is axisymmetric or not. In this study, we restrict our search to stars in the same regime, postponing the search for stars showing the seismic signature of non-axisymmetric magnetic fields to a future work. This also introduces an observational bias (see Section 4.6).
The following subsections describe the different steps of the method that we applied to search for multiplet asymmetries, in the framework exposed above. The results obtained in this section serve as the basis for the measurements of magnetic field strengths and topology reported in Section 3.
### Data reduction and sample selection
We used the _Kepler_ 4-yr long-cadence data to calculate the power spectra, which were downloaded from the Mikulski Archive for Space Telescopes (MAST, [https://archive.stsci.edu](https://archive.stsci.edu)). We calculated the power spectral density (PSD) (Lomb 1976; Scargle 1982; Kjeldsen & Bedding 1995), the global asteroseismic parameters (\(\nu_{\rm max}\), \(\Delta\nu\)), and the background properties following the processes used in the SYD pipeline (Huber et al. 2009; Chontos et al. 2021).
We visually inspected about 8000 red giant branch (RGB) stars reported by Yu et al. (2018) and Gehan et al. (2018) to select the stars with good patterns of \(l=1\) mixed modes. During the visual inspection, our primary focus was on identifying stars that exhibit clear and distinct peaks between each \(l=0\) and \(l=2\) p mode. Some stars do not display any discernible peaks in their mixed-mode regions, a phenomenon referred to as 'suppressed \(l=1\) mixed modes' as mentioned in the Introduction. Additionally, the pattern of mixed modes becomes unclear for more evolved red giants because of their smaller asymptotic period spacing and the effects of radiative damping, which become large for g-dominated modes (e.g. Grosjean et al. 2014). To ensure the selection of stars with well-defined mixed-mode features, we excluded red giant stars in such unfavorable cases, resulting in a cutoff for \(\Delta\nu\) at approximately 7 \(\mu\)Hz. Consequently, we have identified and retained around 2500 RGB stars for further investigation.
### Identification of azimuthal order \(m\)
#### 2.3.1 Stretched periods
The first step of the method consists of identifying the azimuthal order \(m\) of the detected modes. For this purpose, it is convenient to use the so-called "stretched" periods introduced by Mosser et al. (2015a). These authors have shown that the period spacing between consecutive dipole mixed modes can be expressed as \(\Delta P=\zeta\Delta\Pi_{1}\), where \(\zeta\) represents the fraction of the g-mode inertia over the whole inertia (Goupil et al. 2013) (\(\zeta\) tends to unity for pure g modes and to zero for pure p modes), and \(\Delta\Pi_{1}\) is the asymptotic period spacing of pure g modes. The frequencies \(\nu\) are transformed into the so-called "stretched periods" \(\tau\) using the differential equation
\[{\rm d}\tau=\frac{1}{\zeta}\frac{{\rm d}\nu}{\nu^{2}}, \tag{1}\]
so that the mixed modes are equally spaced by \(\Delta\Pi_{1}\). When representing the stretched periods in an echelle diagram folded with \(\Delta\Pi_{1}\), the modes of the same azimuthal order \(m\) are expected to align nearly vertically in this diagram.
#### 2.3.2 Asymptotic expression of \(\zeta\)
To estimate the value of \(\zeta\) for the detected modes, we used an asymptotic expression of this quantity, as is now commonly done (e.g., Mosser et al. 2015a, Li et al. 2022a). We briefly recall the steps of the procedure. Following the work of Shibahashi (1979) and Unno et al. (1989), the implicit asymptotic relation of mixed modes is expressed as
\[\tan\theta_{\rm p}=q\tan\theta_{\rm g}, \tag{2}\]
where \(q\) is the coupling factor between p and g components of the modes (e.g. Mosser et al. 2017b), and \(\theta_{\rm p}\) and \(\theta_{\rm g}\) are the phases for the pure p and g modes.
The phase of the pure p modes is written as
\[\theta_{\rm p}=\pi\frac{\nu-\nu_{\rm p}}{\Delta\nu(n_{\rm p})}, \tag{3}\]
where \(\nu_{\rm p}\) is the pure p-mode frequency and \(\Delta\nu(n_{\rm p})\) is the local frequency separation at the radial order \(n_{\rm p}\) (Mosser et al. 2015b). We used the detected \(l=0\) and \(l=2\) mode frequencies 1 to derive an expression of \(\nu_{\rm p}\) for dipole mode frequencies. From asymptotic expressions,
Footnote 1: We fit a Lorentzian profile to each detected \(l=0\) and \(l=2\) modes to obtain their frequencies (Anderson et al. 1990).
\[\nu_{\rm p}=\left[n_{\rm p}+\frac{l}{2}+\varepsilon+\frac{\alpha}{2}(n_{\rm p }-n_{\rm max})^{2}\right]\Delta\nu-l(l+1)D, \tag{4}\]
where \(\varepsilon\) is the phase term and \(n_{\rm max}=\nu_{\rm max}/\Delta\nu\) is the radial order at the frequency of maximum power. The term \(D\) describes the small separation \(\delta\nu_{02}\) between \(l=0\) and 2 modes. We fit the expression given by Eq. 4 to the frequencies of the detected \(l=0\) and \(l=2\) modes. The fit results are listed in Table 1 for the stars that are discussed in the following sections of the paper. Since the pure \(l=1\) p modes are not observable, and the small separation ratio \(\delta\nu_{02}/\delta\nu_{01}\) deviates from the solar value (three) with stellar evolution (Lund et al. 2017), we add an extra free parameter \(f_{\rm shift}\) to the expression of \(\nu_{\rm p}\) given in Eq. 4 only for dipole mode frequencies, that is:
\[\nu_{\rm p,l=1}=\nu_{\rm p}+f_{\rm shift}. \tag{5}\]
We use \(\nu_{\rm p,l=1}\) to compute the dipole p-mode phase \(\theta_{\rm p}\) in eq. 3. The initial guess of \(f_{\rm shift}\) is 0.6 \(\mu\)Hz and it will be set to be a free parameter in further fitting steps.
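For illustration, the fit of Eq. 4 to the detected \(l=0\) and \(l=2\) frequencies can be written as a small non-linear least-squares problem. The sketch below is our illustration (not the published pipeline), using scipy; the radial orders are assumed to be known, and the initial guesses are rough typical values.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_p_mode_pattern(n_p, l, nu_obs, nu_max):
    """Fit Eq. 4 to observed l = 0 and l = 2 frequencies (all in muHz).
    n_p, l, nu_obs: arrays of radial orders, degrees, and frequencies.
    Returns (Delta_nu, epsilon, alpha, D) and their 1-sigma uncertainties."""
    n_p, l, nu_obs = map(np.asarray, (n_p, l, nu_obs))

    def model(x, dnu, eps, alpha, D):
        n, deg = x
        n_max = nu_max / dnu               # radial order at maximum power
        return (n + deg / 2.0 + eps + 0.5 * alpha * (n - n_max) ** 2) * dnu \
            - deg * (deg + 1.0) * D

    dnu_guess = np.median(np.diff(np.sort(nu_obs[l == 0])))
    p0 = [dnu_guess, 0.3, 0.005, 0.3]      # rough initial guesses
    popt, pcov = curve_fit(model, (n_p, l), nu_obs, p0=p0)
    return popt, np.sqrt(np.diag(pcov))
```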
The phase of the pure g modes is
\[\theta_{\rm g}=\frac{\pi}{\Delta P_{\rm g}}\left(\frac{1}{\nu}-P_{\rm g}\right), \tag{6}\]
in which \(\Delta P_{\rm g}\) is the local period spacing and \(P_{\rm g}\) is the pure g-mode period (Mosser et al. 2015b). Without considering any perturbations by rotation and magnetism, the pure g-mode period is equally spaced in period, shown as
\[P_{\rm g}=\Delta\Pi_{1}\left(n_{\rm g}+\varepsilon_{\rm g}\right) \tag{7}\]
for \(l=1\) g modes, where \(n_{\rm g}\) is the g-mode radial order and \(\varepsilon_{\rm g}\) is the g-mode phase.
As shown by Mosser et al. (2015b) and Hekker & Christensen-Dalsgaard (2017), the \(\zeta\) function can then be expressed as
\[\zeta=\left[1+\frac{\nu^{2}}{q}\frac{\Delta\Pi_{1}}{\Delta\nu}\frac{1}{\frac{ 1}{q}\sin^{2}\theta_{\rm p}+\cos^{2}\theta_{\rm p}}\right]^{-1}. \tag{8}\]
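The stretching of the power spectrum then amounts to evaluating Eq. 8 on a fine frequency grid and integrating Eq. 1 numerically. The following sketch is our illustration; frequencies in \(\mu\)Hz, \(\Delta\Pi_{1}\) in seconds, and a single local \(\Delta\nu\) are assumed for simplicity.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def zeta_asymptotic(nu, nu_p_l1, delta_nu, q, delta_Pi1):
    """Eq. 8: nu and delta_nu in muHz, delta_Pi1 in s; nu_p_l1 is the nearest
    pure dipole p-mode frequency entering the phase theta_p of Eq. 3."""
    theta_p = np.pi * (nu - nu_p_l1) / delta_nu
    nu_hz, dnu_hz = nu * 1e-6, delta_nu * 1e-6
    denom = np.sin(theta_p) ** 2 / q + np.cos(theta_p) ** 2
    return 1.0 / (1.0 + (nu_hz ** 2 / q) * (delta_Pi1 / dnu_hz) / denom)

def stretched_periods(nu, zeta):
    """Eq. 1 integrated with the trapezoidal rule; nu in muHz, tau in s.
    Only tau modulo Delta_Pi_1 matters for the echelle diagram, so the zero
    point (and overall sign) of tau is irrelevant."""
    nu_hz = nu * 1e-6
    return cumulative_trapezoid(1.0 / (zeta * nu_hz ** 2), nu_hz, initial=0.0)

# Example: folded coordinate of the stretched echelle diagram
# tau = stretched_periods(nu_grid, zeta_asymptotic(nu_grid, nu_p, dnu, 0.15, dpi1))
# x = tau % dpi1
```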
#### 2.3.3 Identification of \(m\) with stretched echelle diagrams
For all the stars of our sample, we selected the peaks with signal-to-noise ratio larger than ten to plot the initial stretched echelle diagram (eqs. 1 and 8), which is used to measure the asymptotic period spacing \(\Delta\Pi_{1}\) and identify azimuthal orders. In this step, we set \(q=0.15\), as a typical value for hydrogen-shell-burning (HSB) stars (Mosser et al. 2017b) and varied \(\Delta\Pi_{1}\) to produce vertical ridges in the stretched echelle diagram. The modes with
different \(m\) are not exactly equally spaced by the period spacing \(\Delta\Pi_{1}\). As shown by Mosser et al. (2015a), they have slightly different spacings \(\Delta\tau_{m}\) in the stretched period diagram
\[\Delta\tau_{m}=\Delta\Pi_{1}\left(1+2m\frac{\mathcal{N}}{\mathcal{N}+1}\frac{ \delta\nu_{\rm rot,core}}{\nu_{\rm max}}\right), \tag{9}\]
with \(\mathcal{N}=\Delta\nu/(\Delta\Pi_{1}\nu_{\rm max}^{2})\), where \(\delta\nu_{\rm rot,core}\) is the rotational splitting of pure g modes, that is, half the core rotation frequency. Multiple ridges may thus appear. We measured their period spacing \(\Delta\tau_{m}\) by slightly changing \(\Delta\Pi_{1}\) and identified the \(m\) value as described by Eq. 21 in Mosser et al. (2015a) (\(m=1\) or \(-1\) for doublets, and \(m=1\), \(0\), \(-1\) for triplets). About 1200 stars with clear triplets were used for the subsequent analysis, and we also obtained about 800 stars with doublets (their asymmetries cannot be measured due to the lack of \(m=0\) modes). The remaining \(\sim\)500 stars do not show splittings.
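As a worked illustration of Eq. 9, the small helper below (our sketch) gives the spacing \(\Delta\tau_{m}\) expected for each azimuthal order; comparing these with the measured ridge spacings yields the identification of \(m\).

```python
def delta_tau_m(m, delta_Pi1, delta_nu, nu_max, dnu_rot_core):
    """Eq. 9: expected stretched-period spacing of the ridge of azimuthal
    order m. delta_Pi1 in s; delta_nu, nu_max, dnu_rot_core in muHz."""
    n_mix = delta_nu * 1e6 / (delta_Pi1 * nu_max ** 2)   # mixed-mode density N
    return delta_Pi1 * (1.0 + 2.0 * m * (n_mix / (n_mix + 1.0))
                        * dnu_rot_core / nu_max)

# E.g. with delta_Pi1 = 80 s, delta_nu = 15 muHz, nu_max = 200 muHz, and a core
# splitting of 0.4 muHz, the m = +1 and m = -1 ridges are offset by about
# +/- 0.26 s per Delta_Pi_1 relative to the m = 0 ridge.
```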
Figure 1 shows the stretched echelle diagram of KIC 5792889, which was selected randomly from our sample. Three distinct ridges are visible, each marked by a different symbol denoting its azimuthal order (refer to the caption of Fig. 1). This star does not show magnetism-induced asymmetries; as a result, the ridge corresponding to \(m=0\) appears halfway between the \(m=1\) and \(m=-1\) ridges.
In the presence of a core magnetic field, the oscillation modes undergo a frequency shift that depends on \(|m|\) (see Sect. 3). If the core field is strong enough, we anticipate that it can modify the ordering of the \(m\)-components in a multiplet. This would invalidate our identification of \(m\) based on Eq. 9. For all the stars where multiplet asymmetries were detected, we thus investigated alternate identifications of \(m\), assigning \(m=0\) to the external components of the multiplet. In all these cases, these alternate identifications led to poor fits to the observations, so we ruled out the possibility of having a different ordering of the components in the multiplets.
### Identification of the rotational multiplets
In order to measure multiplet asymmetry, we then needed to identify modes that belong to a same multiplet, that is, modes that share common values of \(l\) and \(n\) but have different values of \(m\). This step can be complicated when the rotational splitting is comparable to or larger than the frequency spacing between modes of consecutive radial order \(n\). In such cases, the nearest three modes no longer form a single multiplet, making the identification more difficult, as illustrated in Figure 2 for KIC 9467102.
To identify rotational multiplets, we fit an asymptotic expression of mixed modes including rotational effects to the detected modes. For given values of \(\Delta\Pi_{1}\), \(\varepsilon_{\rm g}\), \(q\), and \(f_{\rm shift}\), the frequencies of the unperturbed modes are given by solving Eq. 2. We then add rotational splittings to these modes. The rotational splittings \(\delta\nu_{\rm R}\) can be expressed as
\[2\pi\delta\nu_{\rm R}=0.5\,\Omega_{\rm core}\zeta+\Omega_{\rm env}\left(1-\zeta\right), \tag{10}\]
where \(\Omega_{\rm core}\) and \(\Omega_{\rm env}\) are the mean rotation rates (in unit of angular frequency) in the core and the outer envelope (Goupil et al., 2013; Deheuvels et al., 2014). This expression leads to symmetric rotational multiplets, contrary to what we generally expect for red giants harbouring core magnetic fields. However, if the effects of non-axisymmetry of the magnetic field on the frequency shifts are negligible (as we have assumed in Sect. 2.1), the mode frequencies of azimuthal order \(m=\pm 1\) are affected in the same way. In this case, the frequency spacing between these two components remains equal to \(2\delta\nu_{\rm R}\), as in the non-magnetic case. We thus used only the \(m=\pm 1\) modes to perform the fit.
We ran a Markov chain Monte Carlo (MCMC) method to optimise the parameters using the python package emcee (Foreman-Mackey et al. 2013).
Figure 1: Stretched échelle diagram of KIC 5792889 that does not show any magnetism-induced perturbation. The x-axis is the stretched periods \(\tau\) modulo \(\Delta\Pi_{1}\approx 81.6\) s. The peaks with S/N\(>\)10 are shown by the grey points (\(l=0\) and \(l=2\) modes have been removed). The green ’+’ stands for the \(m=1\) modes. The red ’-’ stands for the \(m=-1\) modes. The blue ’\(\bullet\)’ shows the \(m=0\) modes. The best-fitting results are plotted by the cross.
Figure 2: Stretched échelle diagram of KIC 9467102, which does not show any magnetism-induced perturbation. The symbols are the same as fig. 1. We show this star as an example because its splittings overlap seriously.
There are in total six parameters that will be optimised. We applied uniform priors to the parameters and defined their ranges for optimisation in the MCMC as follows:
1. Asymptotic period spacing \(\Delta\Pi_{1}\). [\(\Delta\Pi_{1,\rm init}-0.5\,\rm s,\Delta\Pi_{1,\rm init}+0.5\,\rm s\)] where \(\Delta\Pi_{1,\rm init}\) is the initial guess of the period spacing from the stretched echelle diagram.
2. Coupling factor \(q\). [0.08, 0.25].
3. Phase of g mode \(\varepsilon_{g}\). [-0.1, 1.1].
4. Dipole (\(l=1\)) frequency shift \(f_{\rm shift}\). [\(0.3\,\rm\mu Hz,\,1.0\,\rm\mu Hz\)].
5. Core angular frequency \(\Omega_{\rm core}\). [\(0\,\rm\mu Hz,\,20\,\rm\mu Hz\)].
6. Envelope angular frequency \(\Omega_{\rm env}\). [-1 \(\rm\mu Hz,\,1\,\rm\mu Hz\)].
The prior ranges of these parameters were determined based on several previous analyses of large samples of red giant stars (Mosser et al. 2017b, 2018; Gehan et al. 2018; Triana et al. 2017). The MCMC algorithm maximises the likelihood function defined as follows:
\[\ln L=-\frac{1}{2}\sum_{m=1,0,-1}\sum_{i}\left[\frac{(\nu_{m,i}^{\rm obs}-\nu_{m,i}^{\rm cal})^{2}}{\sigma_{m,i}^{2}}+\ln\left(2\pi\sigma_{m,i}^{2}\right)\right], \tag{11}\]
where \(\nu_{m,i}^{\rm obs}\) is the \(i^{\rm th}\) observed frequency with azimuthal order \(m\) and \(\nu_{m,i}^{\rm cal}\) is the calculated frequency. At this stage, the observed frequencies are estimated as the mean of the nearby points in the PSD whose signal-to-noise ratio (S/N) is greater than 10, which is more conservative than the criterion applied in previous studies, such as Mosser et al. (2015b).
At this stage, we assume that the uncertainty in the observed frequency \(\sigma_{m,i}\) is \(0.02\,\rm\mu Hz\). More accurate estimates of the mode frequencies and their uncertainties are obtained in section 2.5 by fitting Lorentzian profiles to the PSD.
We ran 14 parallel chains with a length of 5000 steps. The first 50% of the samples are discarded as burn-in. Finally, we obtained the best-fitting results, which give the identification of the multiplets and allow us to run an automated algorithm (in section 2.5) to measure the asymmetric splittings. We show the best-fitting results of KIC 9467102 in Fig. 2. The identified triplets are marked by the horizontal red lines in each panel. Even though there is significant overlap between multiplets, we can still distinguish them. We also find that the splitting identification works well even for stars with asymmetric splittings.
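For clarity, the sketch below spells out the flat prior and the Gaussian log-likelihood of Eq. 11 together with the emcee set-up (14 walkers, 5000 steps, first half discarded). The routine mixed_mode_model, which would solve Eq. 2 with the rotational splittings of Eq. 10 for a given parameter vector, is only a placeholder here and is not part of the published pipeline.

```python
import numpy as np
import emcee

def log_prior(theta, dpi_init):
    """Uniform priors with the ranges listed above; theta packs the six free
    parameters (Delta_Pi_1, q, eps_g, f_shift, Omega_core, Omega_env)."""
    dpi, q, eps_g, f_shift, om_core, om_env = theta
    ok = (abs(dpi - dpi_init) < 0.5 and 0.08 < q < 0.25 and -0.1 < eps_g < 1.1
          and 0.3 < f_shift < 1.0 and 0.0 < om_core < 20.0 and -1.0 < om_env < 1.0)
    return 0.0 if ok else -np.inf

def log_probability(theta, dpi_init, nu_obs, m_obs, sigma=0.02):
    """Gaussian log-likelihood of Eq. 11 plus the flat prior; sigma is the
    assumed 0.02 muHz frequency uncertainty."""
    lp = log_prior(theta, dpi_init)
    if not np.isfinite(lp):
        return -np.inf
    nu_cal = mixed_mode_model(theta, m_obs)   # placeholder: solves Eq. 2 + Eq. 10
    return lp - 0.5 * np.sum((nu_obs - nu_cal) ** 2 / sigma ** 2
                             + np.log(2.0 * np.pi * sigma ** 2))

# sampler = emcee.EnsembleSampler(14, 6, log_probability,
#                                 args=(dpi_init, nu_obs, m_obs))
# sampler.run_mcmc(p0, 5000, progress=True)
# samples = sampler.get_chain(discard=2500, flat=True)   # drop the first 50%
```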
### Asymmetry measurement
We measured the asymmetries of the identified multiplets by fitting three Lorentzian profiles, whose initial locations are given by the MCMC algorithm in section 2.4. In this Lorentzian fit, we determine the following parameters: the frequencies of the components, the linewidth (assumed identical for the three components), the amplitudes, and the inclination. The relative amplitudes of the three components are set by the inclination (Gizon & Solanki 2003), and the best-fitting results were obtained by maximising the likelihood function defined by Anderson et al. (1990). We allowed the \(m=0\) component to shift freely to reproduce the asymmetry.
Among the stars showing significant multiplet asymmetries, we selected those that share the features expected for magnetic asymmetries, namely:
* They should have the same sign (either positive or negative) for all multiplets in a given star.
* They should decrease with frequency (in absolute value).
* They should be larger for g-dominated modes than for p-dominated modes.
Finally, 13 stars were found to show magnetism-induced asymmetries, including the three stars reported by Li et al. (2022a). Here we show KIC 5696081 as an example. The three panels in Fig. 3 show three consecutive asymmetric splittings, whose x-axes are aligned using the \(m=1\) and \(m=-1\) mode frequencies. The modes in the top and bottom panels are g-dominated, so they show narrower linewidths and larger asymmetries, whereas the mode in the middle panel is p-dominated, hence it has a wider linewidth and a smaller asymmetry. Although the asymmetries vary from mode to mode, they all remain positive. Figure 4 shows the variation of the asymmetry as a function of frequency in KIC 5696081. We find that the asymmetries follow all the characteristics expected from a magnetic perturbation (as listed in Sect. 1): they are all positive in this star, decrease with frequency, and are smaller for p-dominated modes. We show all the asymmetry measurements in Appendix B.
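In practice, this screening reduces to a few simple checks on the fitted multiplet frequencies. The snippet below is our paraphrase of the criteria as code; the \(\zeta\) threshold separating g- and p-dominated modes is an illustrative choice, not a value from the paper.

```python
import numpy as np

def multiplet_asymmetry(nu_minus1, nu_zero, nu_plus1):
    """delta_asym = nu_{m=-1} + nu_{m=+1} - 2 nu_{m=0} for one multiplet."""
    return nu_minus1 + nu_plus1 - 2.0 * nu_zero

def consistent_with_magnetism(asym, nu_zero, zeta, g_threshold=0.9):
    """Check the three expected properties of magnetic asymmetries: a common
    sign, a magnitude that decreases with frequency, and larger values for
    g-dominated modes (zeta close to 1) than for p-dominated ones."""
    asym, nu_zero, zeta = map(np.asarray, (asym, nu_zero, zeta))
    same_sign = np.all(asym > 0) or np.all(asym < 0)
    decreasing = np.corrcoef(nu_zero, np.abs(asym))[0, 1] < 0.0
    g_dom, p_dom = zeta > g_threshold, zeta <= g_threshold
    g_larger = (np.median(np.abs(asym[g_dom])) > np.median(np.abs(asym[p_dom]))
                if g_dom.any() and p_dom.any() else True)
    return bool(same_sign and decreasing and g_larger)
```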
Figure 3: Three consecutive asymmetric splittings in KIC 5696081. The top and bottom panels show g-dominated modes while the middle panel displays a p-dominated one. Note that the x-axes of the three panels are aligned using the \(m=1\) and \(-1\) mode frequencies.
## 3 Magnetic perturbation and field strength measurement
### Magnetic perturbation
For the stars identified as showing multiplet asymmetries of magnetic origin in Sect. 2.5, we estimated the properties of the field that could reproduce the observations. For this purpose, we followed the approach that we proposed in Li et al. (2022a). We again solved Eq. 2 to obtain the asymptotic expression of mixed mode frequencies, but here the frequencies of pure p and g modes include perturbations arising from rotation and magnetic field.
In the case of axisymmetric fields, or when the non-axisymmetric effects are negligible, the frequency perturbation of pure g modes caused by both magnetism and rotation is given by
\[\delta\nu_{{\rm g\,mode},m=0}=(1-a)\,\delta\nu_{\rm g}\left(\frac{\nu_{\rm max}}{\nu}\right)^{3} \tag{12}\]
for \(m=0\) modes, and
\[\delta\nu_{{\rm g\,mode},m=\pm 1}=\left(1+\frac{a}{2}\right)\delta\nu_{\rm g}\left(\frac{\nu_{\rm max}}{\nu}\right)^{3}\mp\frac{\Omega_{\rm core}}{4\pi} \tag{13}\]
for \(m=\pm 1\) modes (Li et al. 2022a). In Eqs. 12 and 13, \(\delta\nu_{\rm g}\) is the magnetic shift of pure g modes at the frequency of maximum power of the oscillations. The asymmetry parameter \(a\) is a dimensionless average of \(B_{r}^{2}\) in the oscillation cavity weighted by the second order Legendre polynomial \(P_{2}(\cos\theta)\):
\[a=\frac{\int_{r_{\rm i}}^{r_{\rm o}}K(r)\iint B_{r}^{2}P_{2}(\cos\theta)\sin\theta\,{\rm d}\theta\,{\rm d}\phi\,{\rm d}r}{\int_{r_{\rm i}}^{r_{\rm o}}K(r)\iint B_{r}^{2}\sin\theta\,{\rm d}\theta\,{\rm d}\phi\,{\rm d}r}. \tag{14}\]
It satisfies \(-0.5<a<1\), its exact value depending on the latitudinal distribution of \(B_{r}^{2}\) in the oscillation cavity (Li et al. 2022a).
The pure g-mode periods in Eq. 7 can then be rewritten as
\[P_{{\rm g},m}^{\prime}=\left(\frac{1}{P_{\rm g}}+\delta\nu_{{\rm g\,mode},m}\right)^{-1}. \tag{15}\]
For pure p modes, the perturbation only arises from rotation, since the magnetic field, being buried inside the star, acts mainly on the g-mode parts of mixed modes. The effect on p modes is still negligible even if the magnetic field extends to the p-mode cavity (Mathis et al. 2021). Therefore, the pure p-mode frequency in Eq. 4 is rewritten as
\[\nu_{{\rm p},m}^{\prime}=\nu_{\rm p}-m\frac{\Omega_{\rm env}}{2\pi}. \tag{16}\]
We then solved the asymptotic expressions for \(m=1\), \(0\), and \(-1\) respectively,
\[\tan\theta_{{\rm p},m}^{\prime}=q\tan\theta_{{\rm g},m}^{\prime}, \tag{17}\]
where \(\theta_{{\rm p},m}^{\prime}\) and \(\theta_{{\rm g},m}^{\prime}\) are the phases of Eqs. 3 and 6 evaluated with the perturbed pure p-mode frequencies \(\nu_{{\rm p},m}^{\prime}\) and g-mode periods \(P_{{\rm g},m}^{\prime}\). Compared to Sect. 2.4, the fits therefore include two additional free parameters, the asymmetry parameter \(a\) and the magnetic shift \(\delta\nu_{\rm g}\).
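In terms of implementation, Eqs. 12, 13, 15, and 16 simply shift the pure g-mode periods and pure p-mode frequencies before the implicit relation of Eq. 17 is solved, for instance by root finding on a fine frequency grid. The helper below is a minimal sketch of these perturbations (our illustration), with frequencies in \(\mu\)Hz and the rotation rates expressed as \(\Omega/2\pi\) in \(\mu\)Hz.

```python
def perturbed_g_period(P_g, m, nu, a, dnu_g, nu_max, nu_rot_core):
    """Perturbed pure g-mode period (Eqs. 12, 13, and 15). P_g in s; nu, dnu_g,
    nu_max and nu_rot_core = Omega_core / (2 pi) in muHz."""
    scale = (nu_max / nu) ** 3
    if m == 0:
        shift = (1.0 - a) * dnu_g * scale                                # Eq. 12
    else:
        shift = (1.0 + 0.5 * a) * dnu_g * scale - m * 0.5 * nu_rot_core  # Eq. 13
    return 1.0 / (1.0 / P_g + shift * 1e-6)                              # Eq. 15 (muHz -> Hz)

def perturbed_p_frequency(nu_p, m, nu_rot_env):
    """Perturbed pure p-mode frequency (Eq. 16); nu_p and
    nu_rot_env = Omega_env / (2 pi) in muHz."""
    return nu_p - m * nu_rot_env
```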
The best-fitting parameters for all the 13 stars are listed in Table 2. Here we show KIC 5696081 as an example. The corner diagram of the MCMC result is shown in Fig. 5, where the best-fitting result is found with \(a=0.74^{+0.17}_{-0.18}\) and \(\delta\nu_{\rm g}=0.104^{+0.03}_{-0.019}\,\mu\)Hz. In Fig. 5, as well as the corner diagrams for the other stars in Appendix B, we identify correlations between several parameters. As in the case of non-magnetic red giants, the measurement of \(\Delta\Pi_{1}\) is strongly anti-correlated with the measurement of \(\varepsilon_{\rm g}\). This is a direct result from the linear relation given by Eq. 7. We also find a clear correlation between \(\Delta\Pi_{1}\) and \(\delta\nu_{\rm g}\). This can be understood as follows: starting from a best-fit solution, if we increase the value of \(\delta\nu_{\rm g}\) (that is, if we increase the intensity of the field), the magnetic frequency perturbations increase, which tends to decrease the period spacing between consecutive g modes. Therefore, to correctly reproduce the observations, the asymptotic period spacing of unperturbed g modes \(\Delta\Pi_{1}\) needs to be increased. Also, the parameters \(\delta\nu_{\rm g}\) and \(a\) are found to be strongly anti-correlated. Again, this was expected. Indeed, using Eqs. 12 and 13, we find that the asymmetry of g-mode multiplets near \(\nu_{\rm max}\) corresponds to \(3a\delta\nu_{\rm g}\). Therefore, if \(\delta\nu_{\rm g}\) increases, one needs to decrease \(a\) to reproduce the observed asymmetries. Finally, we observe a slight anti-correlation between the measurements of \(\Omega_{\rm core}\) and \(\Omega_{\rm env}\). This can be understood from Eq. 10 (even though this relation has not been used in our fits here): in this linear relation, \(\Omega_{\rm core}\) and \(\Omega_{\rm env}\) are related to the slope and the intercept, respectively, and the measurements of these quantities are not independent.
The asymmetry parameter \(a\) generally has larger uncertainties and broader distributions within its prior range (from -0.5 to 1) compared to the other parameters, suggesting that the constraints on \(a\) are weaker. The parameter \(\delta\nu_{\rm g}\) often has an asymmetric distribution, while the other parameters have symmetric distributions and generally tighter constraints.
These stars display normal values for the other parameters of the fits (\(\Delta\Pi_{1}\), \(q\), \(\varepsilon_{\rm g}\), and \(f_{\rm shift}\)), as shown in Appendix C. Despite the fact that we used non-informative priors for \(\varepsilon_{\rm g}\), we obtained results that are in line with typical values for other red giants (\(0.28\pm 0.08\); Takata 2016; Mosser et al. 2018).
### Stellar model and field strength
After obtaining the magnetic shift \(\delta\nu_{\rm g}\), we can calculate the magnetic field strength in the stellar interior. What we can measure about the magnetic field strength is a weighted integral of the horizontal average of the squared radial magnetic field, \(\overline{B_{r}^{2}}\), given by (Li et al. 2022a):
\[\langle B_{r}^{2}\rangle=\int_{r_{\rm i}}^{r_{\rm o}}K\left(r\right)\overline{B_{r}^{2}}\,{\rm d}r=\frac{16\pi^{4}\mu_{0}\delta\nu_{\rm g}\nu_{\rm max}^{3}}{\mathcal{I}}, \tag{19}\]
with \(K(r)\) the weight function and where \(r_{\rm i}\) and \(r_{\rm o}\) are the inner and outer turning points of the g-mode cavity, respectively, and \(\mu_{0}\) is the vacuum permeability. The core factor \(\mathcal{I}\) is determined by the internal structure of the star:
\[\mathcal{I}=\frac{\int_{r_{\rm i}}^{r_{\rm o}}\left(\frac{N}{r}\right)^{3} \frac{{\rm d}r}{\rho}}{\int_{r_{\rm i}}^{r_{\rm o}}\left(\frac{N}{r}\right){\rm d }r}, \tag{20}\]
where \(N\) is the buoyancy frequency and \(\rho\) is the local density. The weight function,
\[K\left(r\right)=\frac{\frac{1}{\rho}\left(\frac{N}{r}\right)^{3}}{\int_{r_{ \rm i}}^{r_{\rm o}}\left(\frac{N}{r}\right)^{3}\frac{{\rm d}r}{\rho}}, \tag{21}\]
sharply peaks at the hydrogen-burning shell, with a much lower sensitivity in the layers below.
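Given a best-fitting stellar model, the conversion from the measured magnetic shift to a field strength is a direct application of Eqs. 19 and 20. The sketch below is our illustration, with SI units assumed (radii in m, densities in kg m\({}^{-3}\), \(N\) in rad s\({}^{-1}\), frequencies in Hz).

```python
import numpy as np

MU_0 = 4.0e-7 * np.pi  # vacuum permeability in SI units

def core_factor(r, rho, N, r_i, r_o):
    """Core factor I of Eq. 20, integrated over the g-mode cavity [r_i, r_o]
    of the best-fitting model (r in m, rho in kg m^-3, N in rad s^-1)."""
    mask = (r >= r_i) & (r <= r_o)
    r, rho, N = r[mask], rho[mask], N[mask]
    return np.trapz((N / r) ** 3 / rho, r) / np.trapz(N / r, r)

def mean_radial_field(dnu_g, nu_max, core_I):
    """<B_r^2>^0.5 in tesla from Eq. 19, with dnu_g and nu_max in Hz
    (1 T = 10 kG)."""
    return np.sqrt(16.0 * np.pi ** 4 * MU_0 * dnu_g * nu_max ** 3 / core_I)
```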
To estimate the intensity of the detected magnetic fields, we needed to characterise the internal structures of these stars (such as the buoyancy frequency \(N\) and the density profile \(\rho\)). For this purpose, we adopted the seismology-modelling pipeline and the model grid introduced by Li et al. (2022b) to search for the best-fitting models. The input constraints were based on global parameters, such as effective temperature, luminosity, and metallicity, which were reported by Berger et al. (2020) and listed in Table 3.
Figure 5: The corner diagram of the magnetic fitting of KIC 5696081. The vertical dashed lines mark the median values and \(\pm 1\sigma\) ranges.
Figure 6: Same as fig. 1 but for KIC 5696081. Note that this star shows clear asymmetries by magnetic field.
We also used the observed radial-mode frequencies given by Li et al. (2022b), and the asymptotic period spacing of g modes (\(\Delta\Pi_{1}\)) measured by this work as additional constraints. By applying the pipeline, we determined the best-fitting structural model, whose parameters (masses, ages, and radii) are listed in Table 3.
Using our best-fit stellar models, we could derive estimates of the core factor \(\mathcal{I}\), and we were thus able to obtain measurements of the average radial magnetic fields \(\langle B_{r}^{2}\rangle^{0.5}\) using Eq. 19 (see Table 3). We found field strengths ranging from about 20 kG to 150 kG.
Since this work and Li et al. (2022a) used different approaches in the stellar modelling and in the fitting of the asymptotic expression of mixed mode frequencies to the observations, there are slight differences in the inferred field strengths for the three stars that they have in common. We compared the two sets of results and found that the field strengths derived by both works are generally consistent, with slightly larger strengths obtained in this work. KIC 7518143 and KIC 8684542 have consistent field strengths within 1-\(\sigma\) ranges, while for KIC 11515377 the field strength obtained in this work is approximately 30% larger than the strength from Li et al. (2022a), though still within the 2-\(\sigma\) range. The ratios between the field strength and the critical field strength show good agreement between the two sets of results, likely because this ratio is more sensitive to the stellar structure than to the observed frequencies or the fitting strategies.
In Appendix D, we also offer a linear relation between \(\mathcal{I}\) and \(\Delta\Pi_{1}\). This relation can be used for model-independent calculations of field strength, particularly when a best-fitting stellar model is unavailable.
## 4 Discussion
### Asymmetry parameter and magnetic shift
Figure 8 displays the relation between the asymmetry parameter \(a\) and the magnetic shift \(\delta\nu_{\rm g}\). We find that there is no obvious correlation between these two parameters and there is also no correlation with the period spacing \(\Delta\Pi_{1}\). Figure 9 shows the relation between the asymmetry parameter \(a\) and the field strength of our sample, and we still do not find any correlation between them, meaning that the field strength does not correlate with the field topology.
The asymmetry parameter \(a\) reaches 1 when the field is entirely concentrated at the poles and it reaches \(-0.5\) when the field is concentrated at the equator. Dipolar fields have values of \(a\) ranging from \(-0.2\) (corresponding to a dipolar field aligned with the equator) to 0.4 (corresponding to a dipolar field aligned with the rotation axis). The asymmetry parameter can also vanish, even in the presence of a strong field, for example if a dipolar field is inclined by about \(55^{\circ}\) with respect to the rotation axis or if the latitudinal variations of \(B_{r}^{2}\) only occur at length scales much smaller than the star radius (Li et al. 2022a). Most of the stars exhibit \(a\) values between 0.2 and 0.8. KIC 11515377 is the only star that shows negative asymmetries, which leads to \(a\) close to \(-0.2\). This suggests that this configuration might be rarer among red giants. We note the diversity of the values obtained for the asymmetry parameter \(a\), which clearly shows that the core fields of red giants have various horizontal geometries. In some stars, the posterior probabilities for the asymmetry parameter \(a\) nearly vanish below 0.4, which means that for these stars the magnetic fields are more sharply concentrated toward the poles than for a purely dipolar field aligned with the rotation axis. Our measurements of \(a\) will be useful to constrain future models of red giant core magnetic fields.
### Core and envelope rotation rates
We display the relation between \(\Omega_{\rm core}\) and \(\Delta\Pi_{1}\) in Fig. 10. The figure shows that the rotation rates of our stars are typical for red giants and are consistent with those of other studies (shown as the grey circles by Gehan et al. 2018). Our observations thus suggest that these stars have not experienced significantly different histories of angular momentum transport compared to other red giant stars. We note that this does not contradict the hypothesis that magnetic fields could be responsible for angular momentum transport in red giants because we cannot exclude that magnetic fields currently escaping detection might exist in other red giants.
We compared our measurements of the core rotation rates with those of Gehan et al. (2018) for the five stars in our sample that were also in their study. We find that our results are
Figure 7: Top panel: measured field strengths by this work and by Li et al. (2022a). Bottom: the ratio between the measured field strengths and the critical strengths. The dotted lines show 1:1 relations. Note that the value of the field strength of KIC 7518143 reported by Li et al. (2022a) is smaller than 41 kG, while we use the median here (20.5 kG).
consistent in four stars (KIC 7518143, 6936091, 11515377, and 8540034), while a large discrepancy appears in KIC 8684542. The reason for the consistency is that the magnetism-induced perturbation does not change the frequency separation between \(m=1\) and \(m=-1\) modes, hence it does not affect the measurements of splittings if the mode identification is correct. However, the discrepancy for KIC 8684542 arises because only symmetric rotational splittings were considered in the previous work, which resulted in an incorrect identification of the splitting of \(m=1\) and \(-1\) modes (see the online peer review file of Li et al. 2022a).
The top panel of fig. 11 displays the measurements of the envelope rotation rates as a function of their model-inferred
\begin{table}
\begin{tabular}{c c c c c c} \hline KIC & \(\nu_{\rm max},\mu\)Hz & \(\Delta\nu,\mu\)Hz & \(\varepsilon\) & \(\alpha\) & \(D,\mu\)Hz \\ \hline
4458118 & 207.6 \(\pm\) 1.6 & 16.442 \(\pm\) 0.004 & 0.326 \(\pm\) 0.003 & 0.0057 \(\pm\) 0.0005 & 0.3620 \(\pm\) 0.0015 \\
5196300 & 207.6 \(\pm\) 1.1 & 15.138 \(\pm\) 0.003 & 0.340 \(\pm\) 0.003 & 0.00406 \(\pm\) 0.00018 & 0.3215 \(\pm\) 0.0011 \\
5696081 & 248.0 \(\pm\) 1.3 & 17.901 \(\pm\) 0.004 & 0.362 \(\pm\) 0.003 & 0.00115 \(\pm\) 0.00024 & 0.3544 \(\pm\) 0.0012 \\
6936091 & 94.3 \(\pm\) 0.4 & 8.731 \(\pm\) 0.004 & 0.174 \(\pm\) 0.006 & 0.0057 \(\pm\) 0.0005 & 0.2090 \(\pm\) 0.0011 \\
7009365 & 210.5 \(\pm\) 1.2 & 15.6082 \(\pm\) 0.0029 & 0.3332 \(\pm\) 0.0027 & 0.00679 \(\pm\) 0.00019 & 0.3147 \(\pm\) 0.0011 \\
7518143 & 159.2 \(\pm\) 0.5 & 12.394 \(\pm\) 0.003 & 0.216 \(\pm\) 0.003 & 0.00388 \(\pm\) 0.00021 & 0.2555 \(\pm\) 0.0012 \\
8540034 & 195.4 \(\pm\) 0.7 & 15.108 \(\pm\) 0.003 & 0.2517 \(\pm\) 0.0029 & 0.00563 \(\pm\) 0.00025 & 0.3285 \(\pm\) 0.0012 \\
8619145 & 130.0 \(\pm\) 0.4 & 10.896 \(\pm\) 0.004 & 0.163 \(\pm\) 0.004 & 0.0097 \(\pm\) 0.0004 & 0.2428 \(\pm\) 0.0012 \\
8684542 & 179.4 \(\pm\) 0.7 & 13.4921 \(\pm\) 0.0027 & 0.2741 \(\pm\) 0.0028 & 0.00345 \(\pm\) 0.00020 & 0.282 \(\pm\) 0.0010 \\
9202471 & 218.5 \(\pm\) 1.3 & 15.946 \(\pm\) 0.004 & 0.358 \(\pm\) 0.003 & \(-\)0.0005 \(\pm\) 0.0003 & 0.3138 \(\pm\) 0.0012 \\
9589420 & 109.9 \(\pm\) 0.5 & 9.473 \(\pm\) 0.004 & 0.187 \(\pm\) 0.005 & \(-\)0.0015 \(\pm\) 0.0005 & 0.1993 \(\pm\) 0.0012 \\
10801792 & 199.7 \(\pm\) 1.0 & 15.0092 \(\pm\) 0.0028 & 0.3358 \(\pm\) 0.0026 & 0.00244 \(\pm\) 0.00018 & 0.326 \(\pm\) 0.0010 \\
11515377 & 194.6 \(\pm\) 0.5 & 14.739 \(\pm\) 0.005 & 0.340 \(\pm\) 0.005 & 0.0011 \(\pm\) 0.0003 & 0.3204 \(\pm\) 0.0012 \\ \hline \end{tabular}
\end{table}
Table 1: The fit results of the p-mode asymptotic relations in eq. 4.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline KIC & \(\Delta\Pi_{1}\) (s) & \(q\) & \(\varepsilon_{\rm g}\) & \(f_{\rm shift}\) (\(\mu\)Hz) & \(\Omega_{\rm core}/2\pi\) (\(\mu\)Hz) & \(\Omega_{\rm env}/2\pi\) (\(\mu\)Hz) & \(a\) & \(\delta\nu_{\rm g}\) (\(\mu\)Hz) \\ \hline
4458118 & 88.71\({}^{+0.006}_{-0.06}\) & 0.1617\({}^{+0.0025}_{-0.002}\) & 0.274\({}^{+0.002}_{-0.002}\) & 0.734\({}^{+0.0016}_{-0.001}\) & 0.867\({}^{+0.0019}_{-0.001}\) & 0.019\({}^{+0.0020}_{-0.001}\) & 0.42\({}^{+0.3}_{-0.00}\) & 0.023\({}^{+0.0022}_{-0.002}\) \\
5196300 & 829.96\({}^{+0.005}_{-0.005}\) & 0.1304\({}^{+0.0003}_{-0.000}\) & 0.306\({}^{+0.002}_{-0.002}\) & 0.620\({}^{+0.001}_{-0.013}\) & 1.251\({}^{+0.001}_{-0.002}\) & 0.301\({}^{+0.0024}_{-0.004}\) & 0.71\({}^{+0.002}_{-0.002}\) & 0.080\({}^{+0.002}_{-0.003}\) \\
5696081 & 88.26\({}^{+0.017}_{-0.005}\) & 0.1617\({}^{+0.0077}_{-0.007}\) & 0.375\({}^{+0.002}_{-0.029}\) & 0.653\({}^{+0.001}_{-0.013}\) & 0.826\({}^{+0.013}_{-0.014}\) & 0.35\({}^{+0.001}_{-0.014}\) & 0.74\({}^{+0.018}_{-0.018}\) \\
6936091 & 75.29\({}^{+0.06}_{-0.13}\) & 0.132\({}^{+0.008}_{-0.008}\) & 0.39\({}^{+0.029}_{-0.024}\) & 0.550\({}^{+0.003}_{-0.034}\) & 0.362\({}^{+0.014}_{-0.012}\) & 0.01\({}^{+0.004}_{-0.013}\) & 0.30\({}^{+0.031}_{-0.13}\) & 0.064\({}^{+0.004}_{-0.009}\) \\
7009365 & 83.29\({}^{+0.029}_{-0.027}\) & 0.1377\({}^{+0.011}_{-0.011}\) & 0.270\({}^{+0.017}_{-0.017}\) & 0.600\({}^{+0.001}_{-0.011}\) & 0.546\({}^{+0.013}_{-0.013}\) & 0.023\({}^{+0.012}_{-0.012}\) & 0.83\({}^{+0.12}_{-0.12}\) & 0.036\({}^{+0.005}_{-0.009}\) \\
7518143 & 78.50\({}^{+0.04}_{-0.01}\) & 0.1209\({}^{+0.002}_{-0.002}\) & 0.293\({}^{+0.011}_{-0.011}\) & 0.611\({}^{+0.014}_{-0.014}\) & 0.315\({}^{+0.013}_{-0.013}\) & 0
radii. The method used in section 3 does not require the use of the \(\zeta\) function, which is different from the method used by Li et al. (2022a) to measure the internal rotation. Consequently, the envelope rotation rates obtained using the current method exhibit differences compared to the results reported by Li et al. (2022a). Due to the expansion of the envelopes, red giant stars show very slow envelope rotations. Most of the stars have surface rotation rates around 0.02 \(\mu\)Hz, equivalent to a period of about 600 d. KIC 7518143 has the fastest surface rotation rate with a rotation period of about 170 days. For the two stars KIC 6936091 and KIC 9589420, with relatively large radii inferred from the modelling (larger than six solar radii), the rotation rates are slower than the detection limit of the seismic signal, resulting in surface rotation rates that are compatible with zero within 1\(\sigma\) ranges.
The bottom panel of fig. 11 shows the ratio between the core and the envelope rotations (\(\Omega_{\rm core}/\Omega_{\rm env}\)), which represents the level of differential rotation. For the two stars with near-zero envelope rotations (with radii larger than 6 solar radii), we can only obtain a lower limit to this ratio. Among the remaining stars, we find that the rotation ratio lies between 10 and 100. Interestingly, KIC 7518143, the star with the fastest envelope rotation, also shows a slow core-rotation rate, which leads to a rotation ratio \(\Omega_{\rm core}/\Omega_{\rm env}\) of only around 4. This finding suggests that a strong transfer of angular momentum occurs inside the star, but its relation with the magnetic field is unclear, as the measured field strength of this star is not the strongest.
### Comparison with critical field strength
Fuller et al. (2015) showed that magnetic fields that exceed a critical value \(B_{\rm c}\) can prevent the propagation of gravity waves. This phenomenon was invoked as a possible explanation for the suppression of \(l=1\) mixed modes in a fraction of red giant stars (see also Stello et al. 2016). Fuller et al. (2015) suggested that when the g-mode cavity harbours a magnetic field stronger than \(B_{\rm c}\), all the mode energy that reaches the core is dissipated, leading to modes that have a pure p-like behaviour. This interpretation was challenged by Mosser et al. (2017a), who found that for red giants that show only partially-suppressed dipole modes, the modes still have a g-like character. This question is currently under debate. Even though we do not yet have a clear picture of how global oscillation modes are affected by a magnetic field exceeding \(B_{\rm c}\), it is clear that it will have an impact. We thus compared the measured field strength to the critical field strength, which we computed using our best-fit stellar models. In practice, this critical field varies as a function of radius. The minimum appears at the hydrogen-burning shell (HBS), as given by
\[B_{\rm c,min}=\left(\frac{16\pi^{4}\mu_{0}\rho_{\rm hbs}r_{\rm hbs}^{2}\nu_{\rm max}^{4}}{8N_{\rm hbs}^{2}}\right)^{0.5}, \tag{22}\]
where \(\rho_{\rm hbs}\), \(r_{\rm hbs}\), and \(N_{\rm hbs}\) are the density, radius, and the Brunt-Vaisala frequency at the hydrogen-burning shell. Using our optimal stellar models, we computed \(B_{\rm c,min}\) for the 13 stars of our sample. The obtained values are listed in Table 3.
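As a unit check, the short C program below evaluates Eq. (22) in SI units. The HBS density, radius, buoyancy frequency, and \(\nu_{\rm max}\) used here are illustrative order-of-magnitude placeholders, not the values of the best-fit models behind Table 3, so the printed number should only be read as a consistency check of the formula.

```c
/* Evaluate Eq. (22) in SI units; the HBS inputs are placeholder values,
   not those of the stellar models used in this work. */
#include <stdio.h>
#include <math.h>

#define MU0 (4.0e-7 * M_PI)   /* vacuum permeability, T m / A */

/* rho [kg/m^3], r [m], nu_max [Hz], N [rad/s]  ->  B_c,min [T] */
static double critical_field_min(double rho_hbs, double r_hbs,
                                 double nu_max, double n_hbs) {
    double num = 16.0 * pow(M_PI, 4) * MU0 * rho_hbs * r_hbs * r_hbs
                 * pow(nu_max, 4);
    return sqrt(num / (8.0 * n_hbs * n_hbs));
}

int main(void) {
    /* illustrative order-of-magnitude inputs, to be replaced by model values */
    double B = critical_field_min(1.0e6, 1.0e7, 150.0e-6, 3.0e-2);
    printf("B_c,min = %.1f T = %.2f MG\n", B, B * 1.0e-2);   /* 1 T = 1e-2 MG */
    return 0;
}
```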
Using the values listed in Table 3, we can calculate the ratio between the measured and critical field strengths as a function of \(\Delta\Pi_{1}\). The ratios range from approximately 0.1 to 0.3. It is important to note that the critical field strength varies with radius,
Figure 8: Relation between the asymmetry parameter \(a\) and the magnetic shift \(\delta\nu_{g}\). KIC 11515377 is the only star that shows negative asymmetries.
Figure 10: The core rotation rates \(\Omega_{g}\) as a function of \(\Delta\Pi_{1}\). The black dots are the stars in this work, and the grey circles are reported by Gehan et al. (2018).
Figure 9: Relation between the asymmetry parameter \(a\) and the field strength.
and we use the minimum value, which occurs at the hydrogen-burning shell and is consistent with the layer where we measure the field strength. Additionally, the measured field strength is the average over the weight function \(K(r)\), which means that some contribution from the field strength inside the hydrogen-burning shell is also included (see extended data Figure 1 in Li et al. 2022a). Therefore, the reported ratios may not be representative of the field ratio at the location of the hydrogen-burning shell.
Fig. 12 shows the observed and critical field strengths as a function of evolution, indicated by the mixed mode density \(\mathcal{N}=\Delta\nu/(\nu_{\rm max}^{2}\Delta\Pi_{1})\) (Gehan et al., 2018). It is evident from Fig. 12 that the critical field strengths tend to decrease as stars evolve along the red giant branch (this was already shown by Fuller et al., 2015). We also observe an overall decrease in the observed field strengths with evolution (albeit with relatively large scatter). This is in contrast to the theoretical prediction that the field strength should increase with evolution, assuming the magnetic flux is conserved while the core contracts. However, if the field strength exceeds the critical field strength, the energy of gravity waves is thought to be completely transferred to Alfvén waves, and we cannot observe any dipole mixed modes (Fuller et al., 2015). Moreover, a strong field leads to a curvature in the stretched échelle diagram (Li et al., 2022a; Bugnet, 2022; Deheuvels et al., 2023), which could obscure the observations. Therefore, only a field that is \(\sim 10\%\) to \(\sim 30\%\) of the critical field strength can generate observable asymmetric splittings. This could explain why the decrease in the detected field strength with evolution seems to follow the decrease in the critical field \(B_{c}\) (Fig. 12).
Deheuvels et al. (2023) reported 11 stars showing curved stretched échelle diagrams caused by their central magnetic fields. In this case, only lower limits on the field strengths can be derived. We plot these results as grey vertical arrows in Fig. 12, and find that they also follow a decreasing trend with evolution but with much stronger field strengths. Figure 12 reveals a large gap in field strength between the stars exhibiting asymmetric splittings, as seen in this work, and those with curved stretched échelle diagrams, as observed by Deheuvels et al. (2023). The search for such power spectra may be hindered by an observational bias, as explained in Sect. 4.6.
### Origin of the fields
Figure 13 depicts the measured field strengths plotted against model-inferred age and mass. In the top panel, most stars have ages smaller than 6 Gyr and the field strengths of these stars show large scatter. However, for the two stars with longer ages (between 8 and 9 Gyr), the field strengths are small. As stellar age is strongly related to mass, we examined the correlation between field strength and mass in the bottom panel of Fig. 13. We find that most stars have masses larger than 1.3 M\({}_{\odot}\), meaning that they had convective cores during their main-sequence stages and could generate strong central fields through dynamo processes. Since the Ohmic timescale, over which magnetic fields dissipate, is longer than the evolution timescale (Cantiello et al., 2016), such fields could survive until the red-giant phase, where they may relax into stable mixed poloidal-toroidal configurations (Braithwaite & Spruit, 2004).
However, we observed two stars with lower masses (1.08 M\({}_{\odot}\) for KIC 4458118 and 1.04 M\({}_{\odot}\) for KIC 6936091). Our best-fit models for these stars had a radiative core during the bulk of their main sequence, which challenges the interpretation that their central fields may have been generated by the core dynamo. To further study the origin of the field, we calculated the mass and the radius of the convective core at the beginning of stellar life. For the two low-mass stars (KIC 4458118 and 6936091), we found that they show convective cores with masses of \(\sim\)12% of the total stellar masses at the very beginning of their evolution (\(\sim 30\) Myr), owing to the burning of \({}^{3}\)He and \({}^{12}\)C outside of equilibrium (Deheuvels et al., 2010). However, the convective core was too small to reach the shell that is currently burning hydrogen in the red-giant phase (which is located at about 20% of the total stellar mass). Even though the hydrogen-burning shell in these two stars was never convective, we cannot rule out the dynamo origin because the weight function \(K(r)\) involved in the expression of the measured field has a contribution from the deeper layers of the star. If we assume that the magnetic field is confined to the layers that were convective at the beginning of the main sequence (i.e. the layers whose mass is smaller than 12% of the total mass), we find that fields of \(\sim\)300 kG for KIC 4458118 and \(\sim\)200 kG for KIC 6936091 are needed to reproduce the observations. These values are several times larger than the values given in Table 3. However, we cannot exclude that such fields might result from a dynamo process in main-sequence cores because these field strengths remain compatible with the amplitudes predicted by numerical simulations of core convection (Brun et al., 2005) and dynamo scaling laws (Bugnet et al., 2021). In addition, the determination of the convective core size remains uncertain owing to poorly understood internal chemical mixing processes (e.g. Johnston, 2021). Another possibility for the origin of the magnetic field is that it is inherited from a fossil field.
Figure 11: Top panel: surface rotation rates as a function of model-inferred radii. Bottom panel: variations in the ratio \(\Omega_{\rm core}/\Omega_{\rm env}\) with stellar radii.
### Ongoing dynamo or stable fields?
The ratio between the Alfvén frequency \(\Omega_{\rm A}=\sqrt{\frac{\langle B_{r}^{2}\rangle}{\mu_{0}\rho r^{2}}}\) and the rotation rate \(\Omega\) is an important parameter of the dynamical interactions between magnetic field and rotation. Its value can provide some clues about the nature of the detected magnetism. We computed this ratio at the HBS of the 13 red giants and, as shown in Fig. 14, found that it lies between 0.05 and 0.25.
A first consequence is that a Tayler-Spruit dynamo is probably not presently at work in these layers. Indeed, the Fuller et al. (2019) model of the Tayler-Spruit dynamo finds that \(\Omega_{\rm A}/\Omega\) should scale with \((\Omega/N)^{5/3}\), where \(N\), the Brunt-Väisälä frequency, measures the strength of the stable stratification in radiative zones. As \((\Omega/N)^{5/3}\sim 10^{-7}\) at the HBS of a typical 1.2 M\({}_{\odot}\), 4 R\({}_{\odot}\) star at the base of the red giant branch (Fuller et al. 2019), the predicted values of \(\Omega_{\rm A}/\Omega\) are extremely small and thus incompatible with our measurements.
The observed ratios are rather compatible with a stable magnetic field configuration. Indeed, as anticipated by Spruit (1999), numerical studies of a poloidal magnetic field embedded in a differentially rotating radiative zone have shown that, if the ratio \(\Omega_{\rm A}/\Omega_{\rm core}\) exceeds a certain value, the field is not affected by instabilities and evolves towards stable configurations while being only subjected to ohmic dissipation (Jouve et al., 2015, 2020; Gouhier et al., 2022). The values obtained in numerical simulations vary between \(10^{-3}\) for a free differential rotation (Jouve et al., 2020) and \(10^{-2}\) for a differential rotation forced by a radial flow simulating the contraction of a star (Gouhier et al., 2022). Another argument in favour of a stable magnetic field configuration comes from the magnetic fields observed at the surface of intermediate-mass and massive main-sequence stars. In this mass range, the so-called fossil magnetic fields are stable at least over decades and their \(\Omega_{\rm A}/\Omega\) ratio, computed at the star's surface, is always greater than \(\sim 1\) (Auriere et al., 2007).
### Observational biases
Our method of analysis of _Kepler_ data presented in Sect. 2 leads to several observational biases that need to be acknowledged.
#### 4.6.1 Limitation on the measurable field strength
As mentioned in Sect. 2.1, we have assumed that the magnetic field is not strong enough to significantly alter the regularity of g-mode period spacings. Deheuvels et al. (2023) estimated the magnetic intensity threshold \(B_{\rm th}\) above which deviations from the regular period spacing of pure g modes become detectable (see their Appendix B). They found that \(B_{\rm th}\) follows a similar decreasing trend with evolution as that of the critical field \(B_{\rm c}\). Overall, a magnetic field that exceeds about 40% of the critical field should produce significant deviations. We can thus expect that our method may start failing for field strengths above \(B_{\rm th}\approx 0.4\,B_{\rm c}\). This could at least partly explain the gap that we found between the field measurements in this work and those obtained by Deheuvels et al. (2023).
Additionally, we assumed in Sect. 2.3 that the magnetic asymmetries are not large enough to push the \(m=0\) component of the multiplets outside the interval formed by \(m=\pm 1\) com
Figure 12: Field strength as a function of mixed mode density. The red hexagons with errorbars are the field strengths of the 13 stars reported by this work, and the black dots are their critical field strengths. The grey vertical arrows show the lower limits of the field strengths reported by Deheuvels et al. (2023), where the stars do not show any asymmetric splittings, but only \(m=0\) curved ridges in their stretched échelle diagrams.
ponents. This phenomenon occurs if \(\nu_{{\rm g\,mode},m=0}<\nu_{{\rm g\,mode},m=+1}\) when \(a>0\), and if \(\nu_{{\rm g\,mode},m=0}>\nu_{{\rm g\,mode},m=-1}\) when \(a<0\). Using Eqs. 12 and 13, one finds that near \(\nu_{\rm max}\), this condition is equivalent to
\[\delta\nu_{\rm g}\,\frac{3|a|}{2}\,\frac{4\pi}{\Omega_{\rm core}}>1. \tag{23}\]
For the 13 stars of our sample, the quantity on the left-hand side of Eq. 23 takes values between 0.03 and 0.36. It is not straightforward to translate the condition given by Eq. 23 into a constraint on the field intensity because it depends on properties that vary from star to star (namely \(a\), \(T\), \(\nu_{\rm max}\), and \(\Omega_{\rm core}\)). If we consider the values of these parameters that were obtained for the stars of our sample, we find that a change in the ordering of the components in a multiplet would occur for these stars if the field strengths were multiplied by a factor ranging from 1.6 to 5.8. This corresponds to field intensities that are intermediate between the measured values and the critical field strengths. We thus conclude that our assumption that the \(m=0\) component lies between the \(m=\pm 1\) components could partly explain the lack of red giants with measured fields closer to the critical field strength.
#### 4.6.2 Non-axisymmetric effects
Equations 12 and 13 hold when the non-axisymmetric effects of magnetic fields are negligible. This is true (i) if \(B_{r}^{2}\) is axisymmetric, but also (ii) if the ratio \(b\) between the magnetic frequency shift and the rotational splitting (\(b=4\pi\delta\nu_{\rm g}/\Omega_{\rm core}\)) is smaller than \(\sim 1\) (see the supplementary material S2.7 in Li et al. 2022a). Our results in Table 2 show that the ratio \(b\) is much smaller than one. The stars with the largest values of \(b\) are KIC 8684542 (\(b=0.64\pm 0.09\)) and KIC 11515377 (\(b=0.56\pm 0.12\)), the other stars having \(b\) values below \(\sim 0.35\). Hence, case (ii) holds for our stars, so that even if \(B_{r}^{2}\) is non-axisymmetric, we do not expect to see any significant non-axisymmetric effect in the oscillation spectra.
As mentioned in Sect. 2.1, when \(b\gtrsim 1\), non-axisymmetric magnetic fields can produce up to nine components for dipole modes (Li et al. 2022a). Dedicated methods need to be devised and applied to search for such features in the oscillation spectra of red giants. This work is currently being undertaken by our team and will be the subject of a forthcoming publication. Meanwhile, our assumption that non-axisymmetric effects are weak limits the strength of detectable fields. Similarly to Sect. 4.6.1, the condition \(b\gtrsim 1\) cannot be directly translated into a field strength. We thus estimated the minimal field strengths that the stars of our sample should have in order to produce significant non-axisymmetric effects. We found that the measured fields would need to be multiplied by a factor ranging from 1.2 to 5.4. Again, this corresponds to field intensities that lie between the measurements obtained in this study and those found by Deheuvels et al. (2023). Taking non-axisymmetric effects into account could thus populate the gap observed in Fig. 12.
## 5 Conclusions
In this study, we conducted a systematic search for magnetism-induced asymmetric splittings in _Kepler_ data. We successfully identified 13 stars (including the three stars previously reported by Li et al. 2022a) exhibiting clear multiplet asymmetries with properties matching those expected in the presence of a core magnetic field. Notably, we found that only one
Figure 14: Ratio of the Alfvén frequency \(\Omega_{\rm A}=\sqrt{\frac{\langle B_{r}^{2}\rangle}{\mu_{0}\rho r^{2}}}\) to the core rotation rate \(\Omega_{\rm core}\) as a function of mixed mode density. The density \(\rho\) and the radius \(r\) have been computed at the HBS of the star models.
Figure 13: Top panel: field strengths as a function of model-inferred ages. Bottom panel: field strengths as a function of model-inferred masses.
star (KIC 11515377) displayed negative asymmetries, while the remaining stars exhibited positive asymmetries. By fitting an asymptotic expression of the mixed mode frequencies, including rotational and magnetic effects, to the observations, we were able to measure the magnetic frequency shift \(\delta\nu_{\rm g}\) (which is related to the field strength) and the asymmetry parameter \(a\) (which yields constraints on the field topology).
Using the best-fitting stellar structure model, we were able to measure the average radial field strength in the core (\(\langle B_{r}^{2}\rangle^{0.5}\)) for the 13 stars. These field strengths were found to lie between approximately 20 and 150 kG, with maximal sensitivity in the vicinity of the hydrogen-burning shells. These values represent about 10% to 30% of the critical field strength above which gravity modes are no longer expected to propagate in the core (Fuller et al., 2015).
We also obtained estimates of the asymmetry parameter \(a\), which provides a horizontal average of \(B_{r}^{2}\) weighted by the second-order Legendre polynomial (see Eq. 14). For the 13 stars, we found values of \(a\) between about \(-0.2\) and \(0.95\), nearly spanning the entire possible range for this parameter (\(-0.5\leqslant a\leqslant 1\)). We recall that large negative values of \(a\) are reached for fields that are concentrated near the equator, while large positive values of \(a\) correspond to fields concentrated on the poles. Our results thus show that the core fields of red giants have various horizontal geometries. For some stars, we find values of \(a\) that significantly exceed 0.4, which is the highest value that can be obtained with a dipolar magnetic field. This means that for these stars, the fields are more sharply concentrated on the poles than a pure dipolar field aligned with the rotation axis. The fact that a negative value of \(a\) was obtained for only one star (KIC 11515377) suggests that this configuration might be rare among red giants.
In agreement with the results of Deheuvels et al. (2023), we found that the magnetic field strength in the core of red giants decreases with evolution (see Fig. 12). This is in contradiction with the general expectation that the contraction of the core should increase the magnetic field, if we assume a conservation of the magnetic flux. As was already pointed out in Deheuvels et al. (2023), the observed decrease seems to follow the overall decrease of the critical field strength \(B_{\rm c}\) with evolution. One possible interpretation is thus that for a given star, the core magnetic field increases with evolution until it reaches \(B_{\rm c}\). At this point, gravity waves no longer propagate in the core so that dipole modes do not have a mixed behaviour, and therefore core magnetic fields can no longer be detected. This could account for the observed decrease in the measured field strength, although further work is clearly needed to test this interpretation.
This work presents a larger sample of red giant stars with central magnetic fields, comprising 13 stars out of approximately 1200 red giant stars with triplets. The prevalence of such magnetic fields is thus currently found to be only 1%. Stello et al. (2016) claimed that about 5 to 15% of red giants in the mass range of our sample are magnetised based on the assumption that suppressed dipole mixed modes are caused by magnetic fields exceeding the critical field strength. So far, this assumption remains challenged (Mosser et al., 2017), but if it were correct, the stars of Stello et al. (2016) (which lie in a different range of field intensities compared to this work) could be added to the list of magnetic red giants. The low prevalence of magnetic giants in our sample may be partly due to observational biases related to the analysis method that we adopted in this study. We assumed that the field strength was not strong enough to significantly alter the regularity in the g-mode period spacings, modify the ordering of the components within dipole multiplets, or produce detectable effects related to the non-axisymmetric component of the magnetic field. We estimated that field intensities exceeding the measured field strengths by a factor of a few would be enough to make at least one of these assumptions invalid. Therefore, the present study might miss stars with stronger core fields. We also stress that in this study, magnetic giants have been identified by searching for multiplet asymmetries. This method can thus not detect magnetic fields with horizontal geometries that correspond to vanishing values of \(a\).
Regarding the origin of the detected fields, one of the main scenarios is that they were produced by a dynamo in the main-sequence convective core. After the end of the main sequence, these fields would have relaxed into stable configurations (Braithwaite & Spruit, 2004), undergoing only weak Ohmic diffusion (Cantiello et al., 2016). In this study, we found magnetic fields in two low-mass stars (\(1.04^{+0.04}_{-0.03}\)\(M_{\odot}\) for KIC 6936091, and \(1.08^{+0.02}_{-0.04}\)\(M_{\odot}\) for KIC 4458118), which had a small convective core only at the very beginning of the main sequence, owing to nuclear reactions outside of equilibrium. These convective cores never reached the layers where our magnetic field measurements have maximal sensitivity, namely the hydrogen-burning shell. Assuming that the core magnetic field is confined to the layers that were once convective, we found that field strengths of 200 to 300 kG need to be invoked. These intensities remain compatible with order-of-magnitude predictions of field strengths produced by dynamos in main-sequence convective cores (Brun et al. 2005; Bugnet et al. 2021). Another possible interpretation is that the detected fields might be inherited from fossil magnetic fields.
Internal magnetic fields have been proposed as a candidate to provide additional transport of angular momentum inside stars. In this study, we were also able to measure average core rotation rates for the 13 stars. We found values that are in line with the typical rotation rates of red giant cores, as obtained by Gehan et al. (2018). This suggests that the stars of our sample do not undergo enhanced angular momentum redistribution compared to other red giants. This does not rule out magnetic fields as the origin of the angular momentum transport in red giants, as other red giants may harbour core magnetic fields that were not detected, either because of our observational biases or because they are below the detection threshold. The relatively high ratios between the Alfven frequency and the rotation rate rule out that the observed fields are fed by an ongoing Tayler-Spruit dynamo. They rather point towards fields that have settled into stable configurations.
This work constrains the strength and topology of the core magnetic fields; it can therefore inform future numerical simulations and makes it possible to further investigate the evolution of magnetic fields and their interaction with stellar rotation.
###### Acknowledgements.
The authors acknowledge support from the project BEAMING ANR-18-CE31-0001 of the French National Research Agency (ANR) and from the Centre National d'Etudes Spatiales (CNES). GL received funding from the KU Leuven Research Council under grant C16/18/005-PARADISE. TL acknowledges support from the Joint Research Fund in Astronomy (U2031203) under the cooperative agreement between the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS), NSFC grants (12090040, 12090042), and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Cartograph FA, 804752). This paper includes data collected by the Kepler mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the Kepler mission is provided by the NASA Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
2309.15327 | In search of beyond mean-field signatures in heavy-ion fusion reactions | Examination of high-resolution, experimental fusion excitation functions for
$^{16,17,18}$O + $^{12}$C reveals a remarkable irregular behavior that is
rooted in the structure of both the colliding nuclei and the quasi-molecular
composite system. The impact of the $\ell$-dependent fusion barriers is
assessed using a time-dependent Hartree-Fock model. Barrier penetrabilities,
taken directly from a density-constrained calculation, provide a significantly
improved description of the experimental data as compared to the standard
Hill-Wheeler approach. The remaining deviations between the parameter-free
theoretical mean-field predictions and experimental fusion cross sections are
exposed and discussed. | R. T. deSouza, K. Godbey, S. Hudan, W. Nazarewicz | 2023-09-27T00:18:09Z | http://arxiv.org/abs/2309.15327v1 | # In search of beyond mean-field signatures in heavy-ion fusion reactions
###### Abstract
Examination of high-resolution, experimental fusion excitation functions for \({}^{16,17,18}\)O + \({}^{12}\)C reveals a remarkable irregular behavior that is rooted in the structure of both the colliding nuclei and the quasi-molecular composite system. The impact of the \(\ell\)-dependent fusion barriers is assessed using a time-dependent Hartree-Fock model. Barrier penetrabilities, taken directly from a density-constrained calculation, provide a significantly improved description of the experimental data as compared to the standard Hill-Wheeler approach. The remaining deviations between the parameter-free theoretical mean-field predictions and experimental fusion cross sections are exposed and discussed.
The merging of two nuclei can provide a window into nuclear dynamics on short timescales. Heavy-ion fusion is governed by the interaction of the colliding nuclei resulting from the delicate time-dependent balance of the repulsive electrostatic force and the attractive nuclear force in the presence of angular momentum for non-central collisions. Of fundamental importance in describing heavy-ion fusion are the collective potential of the two colliding nuclei, the collective excitations of projectile and target, and the appearance of clustering effects during the fusion process. Progress in experiment, theory, and high-performance computing allows a direct confrontation of high-resolution fusion measurements with advanced time-dependent theoretical frameworks to provide new insights into fusion dynamics.
_Experimental evidence.--_ Indirect evidence for the transient configurations in fusion was first provided by examination of elastic scattering in \({}^{12}\)C + \({}^{12}\)C [1]. Irregular energy dependence of the elastic cross-section was interpreted as the formation of "molecular states" at specific energies. This behavior was attributed to the deformability of the carbon nuclei [2]. Absence of such behavior in \({}^{16}\)O + \({}^{16}\)O [1] was interpreted in terms of the reduced deformability of the tightly bound, doubly-magic \({}^{16}\)O nucleus [2]. A direct examination of the fusion excitation function for \({}^{12}\)C + \({}^{12}\)C [3], \({}^{16}\)O + \({}^{12}\)C [4; 5], \({}^{16}\)O + \({}^{16}\)O [6; 7], and \({}^{20}\)Ne + \({}^{20}\)Ne [8] reveals the presence of an oscillatory structure in the near-barrier regime. This zigzag structure can be understood as originating from the accumulation of cross-section associated with successive individual \(\ell\)-waves with slightly different barriers [9; 10; 11; 12]. In order to directly probe the existence of transient configurations, particularly those that are weakly populated, it is crucial to disentangle the underlying macroscopic contribution due for example to \(\ell\)-wave dependent barriers. In the present work, we utilize high-resolution experimental data to confront state-of-the-art time-dependent Hartree-Fock (TDHF) calculations.
High-resolution fusion excitation functions were obtained both by using recent active-target measurements as well as by combining prior thin-target measurements. Fusion was identified either by the direct detection of the heavy fusion products following de-excitation or by their secondary \(\gamma\)-emission. Any contribution from breakup prior to fusion, expected to be small for the energies and systems considered in this work, is not accounted for. Obtaining these high-resolution excitation functions was the key first step in this work.
Comparison of fusion processes induced by \({}^{16,17,18}\)O nuclei provides insight into three highly interesting cases.
Figure 1: Experimental fusion excitation functions for the reactions of \({}^{16}\)O (black triangles) [13], \({}^{17}\)O (red dots) [14], and \({}^{18}\)O (open squares) [15; 16] impinged on a \({}^{12}\)C target. The inset shows the results of TDHF\({}^{*}\). See text for details.
The \({}^{16}\)O represents the reference case of a doubly-magic, tightly-bound nucleus. In the case of \({}^{17}\)O, an odd unpaired neutron occupies the \(0d_{5/2}\) shell, resulting in a ground-state spin \(5/2^{+}\). The extent to which this valence neutron is strongly or weakly coupled to the core is expected to impact the fusion cross-section. In the case of \({}^{18}\)O, the two valence neutrons form a Cooper pair. Pairing correlations are expected to impact the fusion cross section in two ways: by increasing the fusion barrier and by enhancing the neutron pair transfer. The experimental excitation functions for \({}^{16,17,18}\)O + \({}^{12}\)C are presented in Fig. 1.
Direct comparison of these three experimental excitation functions alone provides considerable information. While the excitation functions exhibit common features, notable differences exist. All the excitation functions shown in Fig. 1 manifest a zigzag behavior superimposed on the overall increase in cross-section with increasing energy. Significantly more structure is observed for \({}^{16}\)O with prominent peaks observed at \(E_{\rm c.m.}\approx 11\) MeV, \(14\) MeV, and \(16.5\) MeV. The magnitude of these peaks is reduced for \({}^{17}\)O and \({}^{18}\)O. At lower energies, all the excitation functions are rather similar suggesting that in this regime the valence neutrons in \({}^{17,18}\)O play a spectator role. In contrast, the reduction in cross-section for \({}^{17}\)O as compared to \({}^{16}\)O at higher energies is particularly noteworthy. If the valence neutron in \({}^{17}\)O is weakly coupled to the \({}^{16}\)O core one might expect either an increased fusion cross-section due to an increased spatial extent of the neutrons or essentially no increase at all if neutron breakup preceded fusion. The reduction of the fusion cross-section for \({}^{17}\)O thus suggests that in this energy regime the presence of the valence neutron does influence fusion. This influence could be associated with the increased role of breakup and neutron transfer which can suppress the above-barrier cross-section while enhancing the below-barrier cross-section [17]. The enhanced fusion cross section at \(E_{\rm c.m.}>14\) MeV for \({}^{18}\)O as compared to \({}^{16}\)O suggests that pairing correlations impact the fusion cross section at higher energies.
In order to provide the most complete, high-resolution description of the fusion excitation function for \({}^{16}\)O + \({}^{12}\)C several datasets have been combined and the result is presented in Fig. 2. The cross-section at higher energies which relies on the direct detection of the fusion products [13; 5; 18] is augmented by indirect measurements of the cross-section at lower incident energies [19]. Measurement of fusion at higher incident energies that relied on \(\gamma\)-ray measurements were excluded due to larger uncertainties. The reported cross-sections depicted in Fig. 2 are internally very consistent. The high resolution data not only reveals the peaks in the cross-section at \(E_{\rm c.m.}\approx 11\) MeV, \(14\) MeV, and \(16.5\) MeV previously noted but also an oscillatory behavior at lower energies.
_Theoretical framework.--_ To understand the fusion excitation functions, we have performed TDHF calculations for the above-barrier collisions. On general grounds, a TDHF approach is well suited to describe the large-amplitude collective motion associated with fusion while also describing the transfer dynamics, equilibration processes, and Pauli blocking that affect heavy-ion fusion probabilities [20; 21; 22].
Recently, advances in theoretical and computational techniques have allowed TDHF calculations to be performed on a three-dimensional (3D) Cartesian grid, thus eliminating artificial symmetry restrictions [23]. The unrestricted 3D geometry allows for precise simulations that can capture the rich time-dependent dynamics at play in light nuclear reactions [24; 25]. Although in the sub-barrier regime it is necessary to perform density constrained TDHF (DC-TDHF) calculations [26; 27] to obtain the heavy-ion potentials [11; 28], at the above-barrier energies considered in this work direct TDHF calculations can be performed by initiating collisions for a series of increasing impact parameters until the maximum impact parameter for fusion is reached. Moreover, the barrier associated with each incoming \(\ell\)-wave can be determined by finding the lowest energy associated with each \(\ell\)-window. This collision energy was scanned in steps of \(0.25\) MeV across the reported range of energies for all systems. The effective interaction, represented by an energy density functional (EDF), used in this work was primarily UNEDF1 [29], though a set of parameters cho
Figure 2: Comparison of experiment with theory for the fusion excitation function for the \({}^{16}\)O+ \({}^{12}\)C reaction. Experimental data are taken from Refs. [18] (blue circles), [5] (green squares), [13] (orange diamonds), and [19] (red upside-down triangles). Raw TDHF results are shown with a light dotted line and modified DC-TDHF/TDHF hybrid results are shown with a solid black line. The difference between TDHF and TDHF\({}^{*}\) is highlighted by shading.
sen from the Bayesian posterior distribution [30] was also used to assess the sensitivity of the reaction outcomes to the choice of EDF [31]. The same systematic calculations were performed for all three oxygen beams. For the \({}^{18}\)O reaction the frozen pairing approximation was employed, as in Ref. [16]. In contrast to the variations seen in fusion studies for heavier nuclei [31; 32], the above-barrier fusion cross sections have been found to be largely insensitive to the choice of effective interaction. While the unrestricted 3D Cartesian geometry affords a more flexible computational framework, it comes at an increased cost with each simulation requiring a few hours on a standard multicore compute node. For the entire study, considering three systems, around 3000 individual trajectories were simulated to precisely determine the capture cross sections across a wide range of impact parameters and energies above the barrier. Illustrative videos of the time evolution of the neutron localization function [33] obtained in our TDHF simulations can be found in the Supplemental Material [34].
The fusion cross section can be expressed as
\[\sigma=\frac{\pi\hbar^{2}}{2\mu E_{\text{c.m.}}}\sum_{\ell=0}^{\ell_{\text{ max}}}(2\ell+1)P_{\ell}, \tag{1}\]
where \(\mu\) is the reduced mass, \(E_{\text{c.m.}}\) is the center-of-mass energy, \(P_{\ell}\) is the probability of the \(\ell\)-wave fusing, and \(\ell_{\text{max}}\) corresponds to the largest \(\ell\)-wave that fuses. For the raw TDHF results, \(P_{\ell}\) is 1 if the system fuses and 0 if it does not.
The TDHF calculations were performed for \(8<\ell\leq 20\). For each \(\ell\), a sharp increase in cross section is observed when the barrier for that particular \(\ell\)-wave is surpassed. Tunnelling through the barrier mitigates this sharp threshold behavior [11; 12]. While the Hill-Wheeler approximation is often used for the penetrability, this approach presumes the transmission through an inverted parabolic potential. This assumption becomes progressively worse with increasing \(\ell\)-wave, particularly as \(\ell\) approaches \(\ell_{\text{max}}\). In the current work, we extract \(P_{\ell}\) directly from the penetrability of the computed DC-TDHF potentials for that \(\ell\) value thus providing a self-consistent microscopic approach. In the event that \(\ell_{\text{max}}\) is different between the TDHF and DC-TDHF approaches, the lower of the two is chosen. In the following, we refer to this method as the hybrid DC-TDHF/TDHF approach and designate it TDHF\({}^{*}\). The primary difference between TDHF\({}^{*}\) and the standard treatment for TDHF as detailed in Refs. [11; 12] is that the cross sections are suppressed in addition to having a smoother behavior.
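To make the role of the penetrabilities in Eq. (1) concrete, the sketch below sums the partial waves for a set of \(\ell\)-dependent barriers, once with a sharp cutoff (as in raw TDHF) and once with the Hill-Wheeler transmission through an inverted parabola. The barrier height, radius, and curvature \(\hbar\omega\) are generic placeholder values, not the DC-TDHF potentials computed in this work, so the output only illustrates how smoothing the threshold rounds and lowers the oscillatory steps of the excitation function.

```c
/* Sketch: fusion cross section from l-dependent barriers, Eq. (1).
   Barrier parameters below are placeholders, NOT the DC-TDHF results. */
#include <stdio.h>
#include <math.h>

#define HBARC 197.327      /* MeV fm */
#define AMU   931.494      /* MeV/c^2 */

/* Hill-Wheeler penetrability for an inverted-parabola barrier */
static double p_hill_wheeler(double E, double B, double hbar_omega) {
    return 1.0 / (1.0 + exp(2.0 * M_PI * (B - E) / hbar_omega));
}

int main(void) {
    const double A1 = 16.0, A2 = 12.0;
    const double mu = AMU * A1 * A2 / (A1 + A2);       /* reduced mass (MeV/c^2) */
    const int lmax = 20;
    /* placeholder l-dependent barriers: B_l = B_0 + l(l+1) hbar^2 / (2 mu R^2) */
    const double B0 = 7.8, R = 8.0, hbar_omega = 3.0;  /* MeV, fm, MeV */

    for (double E = 8.0; E <= 20.0; E += 0.5) {
        double sum_sharp = 0.0, sum_hw = 0.0;
        for (int l = 0; l <= lmax; ++l) {
            double Bl = B0 + l * (l + 1.0) * HBARC * HBARC / (2.0 * mu * R * R);
            double w = 2.0 * l + 1.0;
            sum_sharp += w * (E > Bl ? 1.0 : 0.0);      /* raw TDHF-like cutoff */
            sum_hw    += w * p_hill_wheeler(E, Bl, hbar_omega);
        }
        /* prefactor pi*hbar^2/(2 mu E) in fm^2; 1 fm^2 = 10 mb */
        double pref = M_PI * HBARC * HBARC / (2.0 * mu * E);
        printf("E = %5.2f MeV  sigma_sharp = %7.1f mb  sigma_HW = %7.1f mb\n",
               E, 10.0 * pref * sum_sharp, 10.0 * pref * sum_hw);
    }
    return 0;
}
```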
_Discussion.--_The predictions of the TDHF* model for the three reactions considered is shown in the inset of Fig. 1. As might be naively expected from geometrical considerations based on mass scaling, \({}^{16}\)O exhibits a smaller cross-section than \({}^{17,18}\)O. The predicted trend differs from that of the experimental data shown in Fig. 1.
A more detailed comparison of the measured and calculated fusion excitation functions is provided in Figs. 2-4. We first discuss the \({}^{16}\)O+\({}^{12}\)C reaction as it provides an excellent reference due to the rigid nature of the \({}^{16}\)O projectile. As shown in Fig. 2, for \(E_{\text{c.m.}}<14\,\text{MeV}\), the TDHF\({}^{*}\) method provides a good description of the fusion excitation function due to the addition of successive \(\ell\)-waves. For \(E_{\text{c.m.}}>14\,\text{MeV}\), TDHF\({}^{*}\) systematically overestimates the measured excitation function, although the oscillating behavior of the cross section is well reproduced. The raw TDHF method systematically overshoots the data.
Overestimation of the fusion cross-section at higher energies by TDHF has typically been attributed to the existence of breakup channels in the experimental data that are not properly represented in TDHF, though the full extent of this effect is an open question. Our TDHF\({}^{*}\) calculations indicate that a more accurate description of transmission probabilities reduces the need for invoking breakup channels. All in all, the description of the reference reaction \({}^{16}\)O+ \({}^{12}\)C by the parameter-free TDHF\({}^{*}\) approach is satisfactory.
Having established the success of TDHF\({}^{*}\) in describing the \({}^{16}\)O + \({}^{12}\)C reaction, we investigate the impact on fusion introduced by the addition of a single neutron to the projectile. Figure 3 illustrates the case of \({}^{17}\)O + \({}^{12}\)C. The experimental data were collected in recent active thick-target measurements [14; 35] along with earlier thin-target measurements [18; 36; 37]. It is to be
Figure 3: Similar as in Fig. 2 but for the \({}^{17}\)O+ \({}^{12}\)C reaction. Experimental data are taken from Refs. [18] (blue circles), [37] (green squares), [14] (orange diamonds), [36] (red upside-down triangles), and [35] (purple triangles).
noted that the close examination of different experimental datasets for \({}^{17}\)O reveals some significant differences. For \(E_{\rm c.m.}\sim 14\,\)MeV the data of [35] and the lowest energy point from [37] suggest a pronounced dip in the cross section differing from the data of [14; 18]. The accuracy of the thick-target data in Ref. [14] has been corroborated by comparing the measured cross-section with thin-target measurement of the fusion cross-section of mirror nuclei. The magnitude of the dip at \(E_{\rm c.m.}\sim 14\,\) MeV is significantly reduced as compared to [35] and is shifted to slightly higher energy. Also, at the lowest energies shown, the data of Ref. [36] appears slightly low relative to the data from both [14] and [18] which are in a reasonable agreement. As the data of Ref. [14] are self-normalizing, in our opinion, they provide a more accurate measure of the fusion cross section.
The deviation from smooth behavior of the excitation function, evident for the case of \({}^{16}\)O + \({}^{12}\)C, is also apparent for \({}^{17}\)O, but the pronounced zigzag pattern in the cross-section, as seen in the \({}^{16}\)O data, is harder to quantify. The TDHF\({}^{*}\) calculations for this reaction significantly overestimate the measured cross section for \(14<E_{\rm c.m.}<21\,\)MeV. There are several possible reasons for this, including neutron transfer which does not lead to fusion. The impact of transfer on the fusion probabilities was estimated by checking the isovector fusion potentials extracted from DC-TDHF in a similar procedure to Ref. [20]. As seen in Fig. S1 of [34], the magnitude of the isovector contribution for \({}^{17}\)O is less than that of \({}^{18}\)O, suggesting that any transfer effects at the mean-field level will not account for the significant suppression in above-barrier cross sections seen in experiment. The presence of nucleonic cluster-like structures in the transient configurations can be probed by TDHF, see, e.g., [33]. However, the TDHF results shown in Fig. 1 do not show any appreciable reduction of \(\sigma_{\rm F}\) for \({}^{17}\)O. On the contrary, the predicted cross section for \({}^{17}\)O systematically exceeds the \({}^{16}\)O "reference".
Since the odd neutron in \({}^{17}\)O occupies the \(0d_{5/2}\) orbit leading to the \(5/2^{+}\) ground state of \({}^{17}\)O, some increase of the fusion barrier may be possible due to a hindrance factor of fusion by specialization energy - an increase in the barrier due to angular momentum conservation [38]. This effect, considered for fission, has so far not been considered by theoretical approaches to heavy-ion fusion. In particular, it is not accounted for by TDHF which does not conserve angular momentum. An experimental argument against this scenario, however, is the similarity of the measured fusion excitation functions for \({}^{16}\)O and \({}^{17}\)O projectiles at low energies seen in Fig. 1.
We now examine the impact of two valence neutrons in \({}^{18}\)O. The excitation function for \({}^{18}\)O + \({}^{12}\)C shown in Fig. 4 utilizes thin-target measurements [16; 18; 39] together with recent active thick-target data [15]. While the experimental data exhibit oscillations, the presence of sharp resonant-like structures is absent. The TDHF\({}^{*}\) model with pairing provides a reasonably good overall agreement with the data although the calculations slightly overestimate the data.
Pairing correlations are expected to effectively increase the fusion barrier, hence decrease fusion cross sections [16; 40]. The experimental data in Fig. 1 do not manifest such a reduction for \({}^{18}\)O: actually its fusion excitation function exceeds that for \({}^{16}\)O for \(E_{\rm c.m.}>14\,\)MeV. This result implies that the impact of pairing correlations on \(\sigma_{\rm F}\) in \({}^{18}\)O is minor, consistent with the similarity of predicted fusion excitation function for \({}^{18}\)O with that of \({}^{16,17}\)O in Fig. 1.
_Summary.--_ We have presented a framework for using a microscopic, parameter-free TDHF\({}^{*}\) model to investigate fusion excitation functions in the oxygen isotopes. To obtain \(\sigma_{\rm F}(E)\) with sufficient resolution, multiple experimental datasets were combined. The resulting data reveal oscillatory structures consistent with the presence of different \(\ell\)-wave barriers. To accurately describe the experimental data, an extension of the standard TDHF approach was required to calculate the fusion penetrability directly from the DC-TDHF potential. The resulting TDHF\({}^{*}\) model provided a reasonably good description for the reference case of the \({}^{16}\)O-induced fusion, including the reproduction of oscillatory structures. A slightly worse, but still acceptable agreement with experiment was obtained for the \({}^{18}\)O-induced fusion. An appreciable reduction of the experimental fusion excitation function for \({}^{17}\)O remains a puzzle.
Several possible explanations exist for the remaining
Figure 4: Similar as in Fig. 2 but for the \({}^{18}\)O+ \({}^{12}\)C reaction. Experimental data are taken from Refs. [18] (blue circles), [39] (green squares), [15] (orange diamonds), [5] (red upside-down triangles), and [16] (purple triangles).
discrepancies between experiment and theory: the effect of breakup and transfer channels, an imperfect description of \(\ell\)-dependent fusion barriers by TDHF, or the presence of transient configurations involving nucleonic clusters. Distinguishing between these possibilities requires advances on both experimental and theoretical fronts. Systematic high-resolution, exclusive measurements of heavy-ion fusion and transfer/breakup measurements along isotopic chains is necessary in order to establish the limits of breakup and transfer channels. This experimental data, paired with continued investment in high-performance computing, will be critical in enabling the development of a more complete beyond-mean-field description of heavy-ion fusion.
This work was supported by the U.S. Department of Energy Office of Science under Grant Nos. DE-FG02-88ER-40404 (Indiana University), DOE-DE-SC0013365 and DE-SC0023175 (Michigan State University), and DOE-DE-NA0004074 (NNSA, the Stewardship Science Academic Alliances program). This work was supported in part through computational resources and services provided by the Institute for Cyber-Enabled Research at Michigan State University.
|
2309.05134 | Benchmarking ground truth trajectories with robotic total stations | Benchmarks stand as vital cornerstones in elevating SLAM algorithms within
mobile robotics. Consequently, ensuring accurate and reproducible ground truth
generation is vital for fair evaluation. A majority of outdoor ground truths
are generated by GNSS, which can lead to discrepancies over time, especially in
covered areas. However, research showed that RTS setups are more precise and
can alternatively be used to generate these ground truths. In our work, we
compare both RTS and GNSS systems' precision and repeatability through a set of
experiments conducted weeks and months apart in the same area. We demonstrated
that RTS setups give more reproducible results, with disparities having a
median value of 8.6 mm compared to a median value of 10.6 cm coming from a GNSS
setup. These results highlight that RTS can be considered to benchmark process
for SLAM algorithms with higher precision. | Effie Daum, Maxime Vaidis, François Pomerleau | 2023-09-10T21:01:34Z | http://arxiv.org/abs/2309.05134v2 | # Benchmarking ground truth trajectories with robotic total stations
###### Abstract
Benchmarks stand as vital cornerstones in elevating Simultaneous Localization and Mapping (SLAM) algorithms within mobile robotics. Consequently, ensuring accurate and reproducible ground truth generation is vital for fair evaluation. A majority of outdoor ground truths are generated by Global Navigation Satellite System (GNSS), which can lead to discrepancies over time, especially in covered areas. However, research showed that Robotic Total Station (RTS) setups are more precise and can alternatively be used to generate these ground truths. In our work, we compare both RTS and GNSS systems' precision and repeatability through a set of experiments conducted weeks and months apart in the same area. We demonstrated that RTS setups give more reproducible results, with disparities having a median value of \(8.6\,\mathrm{mm}\) compared to a median value of \(10.6\,\mathrm{cm}\) coming from a GNSS setup. These results highlight that RTS can be considered to benchmark process for SLAM algorithms with higher precision.
## I Introduction
Benchmarks play a crucial role in enhancing Simultaneous Localization and Mapping (SLAM) algorithms and real-time location algorithms in mobile robotics [1, 2]. It is essential to ensure the accuracy and reproducibility of the ground truth used for fair comparisons between evaluated algorithms [3]. However, outdoor ground truths, primarily generated by GNSS, can lead to disparities between experiments conducted at different times in the same environment, as shown in Figure 1. These variations in GNSS positions result from various sources, such as satellite constellations, ephemerides, and atmospheric conditions. They may cause significant biases when evaluating trajectories through benchmarks [4]. Recently, our research has demonstrated that RTS can generate ground truths in six Degrees Of Freedom (DOF) with millimeter-level accuracy [5, 6]. Building on these findings, we evaluate the feasibility of using RTSs to generate ground truth trajectories for objective benchmarking of SLAM algorithms. We compare the precision and repeatability of a RTS and GNSS system by analyzing their data taken simultaneously during different deployments.
## II Benchmarking Standardized Experiments
### _Standardization of RTS and GNSS protocol_
The experiments were conducted following a standardized protocol to ensure accurate and reproducible results. The process began by allowing the RTS surveying instruments to acclimate to the outdoor temperature. Once ready, the instrument was leveled to ensure proper alignment. Three prisms were mounted at different heights on the robotic platform to optimize visibility and tracking. Each prism was associated with its respective RTS, also positioned at different heights to avoid obstructing visibility and to facilitate the extrinsic calibration of the sensors. An essential aspect of the experiment is the extrinsic calibration, where a set of eight to twelve static ground control points is measured around the RTS in a circular configuration to express all the data in a common frame [7]. Finally, after each deployment, we performed a final extrinsic calibration in the laboratory by measuring the positions of prisms and sensors on the robotic platform using a RTS. The same procedure was applied during each experiment to collect consistent and standardized data during all deployments.
The data obtained from each deployment were processed using our pipeline.1 The pipeline incorporates various filtering techniques to enhance the precision of the ground truth. These filtering methods contribute to minimizing noise and errors, ultimately improving the reliability of the generated trajectories. As RTS positions are not taken synchronously, we used the parameters described by Vaidis _et al._[6] to perform linear interpolation of the positions. A point-to-point
Fig. 1: A RTS setup and GNSS antennas were used to record the trajectory of a _Warthog Clearpath_ platform on the Université Laval campus. The color bar displays the average GNSS disparities obtained between two identical trajectories done at different times. The red sphere marks the location of our static Real Time Kinematic (RTK) GNSS reference antenna.
minimization is used to reconstruct the full pose of the vehicle by measuring three rigid points [5]. By leveraging this comprehensive processing pipeline and utilizing the same parameters, the experiment aimed to achieve more precise results, thus facilitating the evaluation and validation of the robotic platform's localization and mapping performance.
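A minimal sketch of this point-to-point step is given below: given the calibrated prism positions in the robot frame and the three interpolated RTS measurements at one time step, a Kabsch/Horn fit (written here with GSL's singular value decomposition) returns the rotation and translation of the platform. The prism coordinates in the example are hypothetical (a 90° yaw plus a translation of roughly (12, 5, 0) m), and the actual pipeline of [5, 6] may differ in details such as weighting and outlier handling.

```c
/* Sketch: 6-DOF pose from three tracked prisms via a point-to-point fit.
   Build with: gcc pose.c -lgsl -lgslcblas -lm */
#include <stdio.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_linalg.h>

static double det3(const gsl_matrix *m) {
    double a = gsl_matrix_get(m,0,0), b = gsl_matrix_get(m,0,1), c = gsl_matrix_get(m,0,2);
    double d = gsl_matrix_get(m,1,0), e = gsl_matrix_get(m,1,1), f = gsl_matrix_get(m,1,2);
    double g = gsl_matrix_get(m,2,0), h = gsl_matrix_get(m,2,1), i = gsl_matrix_get(m,2,2);
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g);
}

/* body: calibrated prism positions (robot frame); world: RTS measurements.
   On return, world ~= R * body + t in the least-squares sense. */
static void fit_pose(double body[3][3], double world[3][3], gsl_matrix *R, double t[3]) {
    double cb[3] = {0,0,0}, cw[3] = {0,0,0};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k) { cb[k] += body[i][k]/3.0; cw[k] += world[i][k]/3.0; }

    /* cross-covariance H = sum (body_i - cb)(world_i - cw)^T */
    gsl_matrix *H = gsl_matrix_calloc(3,3);
    for (int i = 0; i < 3; ++i)
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                gsl_matrix_set(H, r, c, gsl_matrix_get(H,r,c) +
                               (body[i][r]-cb[r]) * (world[i][c]-cw[c]));

    /* SVD: H = U S V^T (GSL stores U in H on output) */
    gsl_matrix *V = gsl_matrix_alloc(3,3);
    gsl_vector *S = gsl_vector_alloc(3), *work = gsl_vector_alloc(3);
    gsl_linalg_SV_decomp(H, V, S, work);

    /* R = V diag(1,1,d) U^T with d = sign(det(V U^T)) to exclude reflections */
    gsl_matrix *VUt = gsl_matrix_alloc(3,3);
    gsl_blas_dgemm(CblasNoTrans, CblasTrans, 1.0, V, H, 0.0, VUt);
    double d = det3(VUt) < 0.0 ? -1.0 : 1.0;
    for (int r = 0; r < 3; ++r)
        gsl_matrix_set(V, r, 2, d * gsl_matrix_get(V, r, 2));  /* scale last column */
    gsl_blas_dgemm(CblasNoTrans, CblasTrans, 1.0, V, H, 0.0, R);

    /* t = cw - R * cb */
    for (int r = 0; r < 3; ++r) {
        t[r] = cw[r];
        for (int c = 0; c < 3; ++c) t[r] -= gsl_matrix_get(R, r, c) * cb[c];
    }
    gsl_matrix_free(H); gsl_matrix_free(V); gsl_matrix_free(VUt);
    gsl_vector_free(S); gsl_vector_free(work);
}

int main(void) {
    /* hypothetical calibrated prisms (m) and their measured world positions */
    double body[3][3]  = {{0.50, 0.30, 1.10}, {-0.40, 0.35, 1.60}, {0.10, -0.45, 2.05}};
    double world[3][3] = {{11.70, 5.50, 1.10}, {11.65, 4.60, 1.60}, {12.45, 5.10, 2.05}};
    gsl_matrix *R = gsl_matrix_alloc(3,3);
    double t[3];
    fit_pose(body, world, R, t);
    printf("t = [%.2f %.2f %.2f] (expected ~[12 5 0])\n", t[0], t[1], t[2]);
    gsl_matrix_free(R);
    return 0;
}
```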
Additionally, three GNSS antennas are mounted on top of the prisms to achieve precise positioning. A fourth static antenna, located nearby with known global geodesic coordinates, provides real-time corrections to the three mobile antennas on the robotic platform for RTK-positioning. The RTK method allows obtaining real-time measurements for the trajectory of the moving platform. Establishing a radio connection between the static antenna and the three mobile antennas, along with setting mask parameters, is crucial for the system's GNSS method. By using the same point-to-point minimization method as for the RTS solution [5], the robot ground truth trajectory can be determined in the GNSS frame, through the extrinsic calibration of the GNSS antennas done in the laboratory.
### _Metrics_
An inter-distance metric is used to evaluate the precision of each system. This metric is computed from the distances between the positions within each synchronous triplet of RTS targets (inter-prism distances) or GNSS antennas (inter-GNSS distances) obtained during an experiment. Each of these distance triplets is then compared to the corresponding RTS-calibrated distances, i.e., the positions of the prisms or GNSS antennas rigidly installed on the robot. Moreover, an inter-experiment metric is used to quantify the difference in precision obtained between two experiments done at different times. Two positions taken during different experiments are assessed to be in close range by computing their nearest-neighbor distance. Then, the inter-distance triplets of the RTS prism or GNSS antenna positions that matched spatially are subtracted from one another to compute this metric. The results represent the disparities in precision between the two trajectories taken at different times, as shown in Figure 1 for a set of GNSS data.
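The following C sketch shows one reading of these two metrics, assuming the synchronous triplets are already available: the inter-distance error compares the three pairwise distances of a triplet to the laboratory-calibrated ones, while the inter-experiment disparity subtracts the inter-distances of two spatially matched triplets from different runs. All coordinates and calibrated distances are hypothetical placeholders.

```c
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } pt;

static double dist(pt a, pt b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return sqrt(dx*dx + dy*dy + dz*dz);
}

/* pairwise distances (0-1, 1-2, 0-2) within a synchronous triplet */
static void pairwise(const pt t[3], double d[3]) {
    d[0] = dist(t[0], t[1]); d[1] = dist(t[1], t[2]); d[2] = dist(t[0], t[2]);
}

/* inter-prism / inter-GNSS metric: triplet distances minus calibrated distances */
static void inter_distance_error(const pt t[3], const double calib[3], double err[3]) {
    double d[3]; pairwise(t, d);
    for (int k = 0; k < 3; ++k) err[k] = d[k] - calib[k];
}

static pt centroid(const pt t[3]) {
    pt c = {(t[0].x + t[1].x + t[2].x) / 3.0,
            (t[0].y + t[1].y + t[2].y) / 3.0,
            (t[0].z + t[1].z + t[2].z) / 3.0};
    return c;
}

/* inter-experiment metric: triplets from different runs matched within `range`
   (2 m in this work) have their inter-distances subtracted; returns 0 if no match */
static int inter_experiment_disparity(const pt a[3], const pt b[3],
                                      double range, double disp[3]) {
    if (dist(centroid(a), centroid(b)) > range) return 0;
    double da[3], db[3];
    pairwise(a, da); pairwise(b, db);
    for (int k = 0; k < 3; ++k) disp[k] = da[k] - db[k];
    return 1;
}

int main(void) {
    /* hypothetical prism triplets (m) from two runs, plus laboratory calibration */
    pt runA[3] = {{10.02, 4.00, 1.10}, {9.15, 4.03, 1.61}, {9.55, 3.26, 2.02}};
    pt runB[3] = {{10.03, 4.01, 1.09}, {9.16, 4.02, 1.62}, {9.54, 3.27, 2.03}};
    double calib[3] = {1.010, 0.958, 1.270}, err[3], disp[3];

    inter_distance_error(runA, calib, err);
    printf("inter-prism errors   : %+.4f %+.4f %+.4f m\n", err[0], err[1], err[2]);
    if (inter_experiment_disparity(runA, runB, 2.0, disp))
        printf("inter-experiment disp: %+.4f %+.4f %+.4f m\n", disp[0], disp[1], disp[2]);
    return 0;
}
```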
## III Results
The RTS setup is composed of three _Trimble_ S7 surveying instruments that track three _Trimble MultiTrack Active Target_ MT1000 prisms, operating at a measurement rate of \(2.5\,\mathrm{Hz}\). The prisms are mounted on a _Clearpath Warthog_ unmanned ground vehicle, along with three _Emlid RS+_ GNSS antennas. To analyze the disparities of the different setups, eleven experiments were conducted weeks and months apart on the same area of the Universite Laval campus, for a total of \(16\,\mathrm{km}\) of GNSS and RTS-tracked prism trajectories.
Figure 2 illustrates the inter-prism and inter-GNSS metric errors, indicating that the RTS acquisition system achieves a median sub-centimeter precision of \(6.8\,\mathrm{mm}\), while the GNSS system provides a median precision around \(1.35\,\mathrm{cm}\). The GNSS precision aligns with results from an RTK method, which shows an accuracy within \(2\,\mathrm{cm}\) in static scenarios [8]. These outcomes are especially promising considering the dynamic nature of the robotic platform. It is worth noting that the inter-distance errors highlight the higher precision of the RTS acquisition system compared to the GNSS system. This discrepancy can be attributed to the relatively low error of the RTS acquisition system versus the absolute error of GNSS related to the satellite constellation, even in the open-sky and large-space environments where the experiments were done.
Reproducibility between the experiments is assessed by computing the nearest-neighbor distance. Points falling within a \(2\,\mathrm{m}\) range of each other are matched and considered in the reproducibility assessment between the different experiments. As evident in Figure 2, the reproducibility appears consistent for the RTS setup, showing that its precision remains stable across all experiments with a median disparity of \(8.6\,\mathrm{mm}\). However, the GNSS system has higher disparities, with a median level of \(10.6\,\mathrm{cm}\). The generated ground truth trajectory is displayed in Figure 1, with the color gradient showing the inter-experiment error and grey points representing the SLAM-generated map. These differences highlight that GNSS is less apt to provide reproducible ground truth trajectories with low uncertainty in the test conditions.
## IV Conclusion
In this paper, we successfully integrated both RTS and GNSS ground truth acquisition systems for trajectory reconstruction. RTSs offer a valuable solution for benchmarking due to their higher precision, their median reproducibility of around \(8.6\,\mathrm{mm}\), and their applicability, as shown in Figure 2. Moreover, they can be used in both indoor and outdoor environments, unlike GNSS. Results for GNSS are as expected, with higher disparities at a median level of \(10.6\,\mathrm{cm}\), making it better suited as a complementary system for obtaining reproducible trajectories. However, it is important to note that RTS has certain limitations, such as line-of-sight dependency, higher cost, and post-processing requirements. Despite these drawbacks, combining both RTS and GNSS systems presents a favorable trade-off. This approach enables us to generate accurate ground truth trajectories and enhance reproducibility, thereby improving the overall benchmarking process for SLAM algorithms. Future work should consider the complementarity of both systems for six-DOF trajectory reconstruction.
Fig. 2: Error resulting from **(a)** inter-prisms and inter-GNSSs metrics and, **(b)** inter-experiments metrics presented in Section II-B. The results from the RTS are depicted in blue, while those from the GNSS are represented in orange. The median error is displayed at the center of each box, and the Interquartile Range (IQR) is depicted on the side. |
2309.12448 | Near Field Optimization Algorithm for Reconfigurable Intelligent Surface | Reconfigurable intelligent surface (RIS) is a type of wireless communication
technology that uses a reconfigurable surface, such as a wall or building that
is able to adjust its properties by an integrated optimization algorithm in
order to optimize the signal propagation for a given communication scenario. As
a reconfiguration algorithm the multidimensional optimization of the GNU
scientific library was analyzed to evaluate the performance of the smart
surface in the quality of signal reception. This analysis took place by means
of electrodynamic simulations based on the finite difference time domain
method. Through these simulations it was possible to observe the efficiency of
the algorithm in the reconfiguration of the RIS, managing to focus the
electromagnetic waves in a remarkable way towards the point of interest. | Emanuel Colella, Luca Bastianelli, Franco Moglie, Valter Mariani Primiani | 2023-09-21T19:44:53Z | http://arxiv.org/abs/2309.12448v1 | # Near Field Optimization Algorithm for Reconfigurable Intelligent Surface
###### Abstract
Reconfigurable intelligent surface (RIS) is a type of wireless communication technology that uses a reconfigurable surface, such as a wall or building that is able to adjust its properties by an integrated optimization algorithm in order to optimize the signal propagation for a given communication scenario. As a reconfiguration algorithm the multidimensional optimization of the GNU scientific library was analyzed to evaluate the performance of the smart surface in the quality of signal reception. This analysis took place by means of electrodynamic simulations based on the finite difference time domain method. Through these simulations it was possible to observe the efficiency of the algorithm in the reconfiguration of the RIS, managing to focus the electromagnetic waves in a remarkable way towards the point of interest.
## 1 Introduction
A metasurface is a planar arrangement of metamaterial, a man-made material engineered to have electromagnetic properties not found in nature [1, 2]. This material is made up of elementary units, arranged periodically in repeating patterns with dimensions and spacings much smaller than the wavelength [4]. Thanks to these microscopic characteristics, chemical due to the nature of the resonant elements and geometric due to their dimensions, the metasurface is able to control the propagation of the electromagnetic waves with which it interacts, manipulating the distribution of the fields in space. Its ability to control the response of incident waves lies in the interaction of the electric and magnetic fields of the waves with the periodic structure of the surface, which leads to resonances capable of changing the oscillatory nature of the electromagnetic fields [7]. These interactions produce resonant effects that enable a wide range of applications such as broadband focusing [6], anomalous reflection/refraction [7], orbital angular momentum creation [8], space-surface wave manipulations [9] and other applications at both microwave and optical frequencies. Although the metasurface has the ability to focus electromagnetic waves at various points in space, it remains a static structure, so it always responds to the incident wave in the same way. Nowadays, therefore, the greatest technological interest has turned towards dynamic metasurface structures capable of dynamically controlling the electromagnetic response of the surface. These surfaces are called Reconfigurable Intelligent Surfaces (RISs). The RIS is a type of surface that can change its configuration to affect signal propagation with low energy consumption, managed by optimization software for the dynamic control of the propagation of electromagnetic waves [10]. This can be done through the use of antenna elements which can be switched or adjusted to change the signal reflection and transmission behaviour [7]. This makes it possible to improve the performance of the communication system and to obtain greater flexibility in the design of the networks [11]. The control of electromagnetic waves takes place through an intelligent reconfiguration of the wireless propagation environment, which allows the capacity and coverage of wireless networks to be improved [12]. The goal is to overcome the destructive effect of multi-path fading, which attenuates the communication signal strength, and to improve the signal quality. Therefore, RISs appear to be an advanced form of smart surface for the emerging hardware technology of the sixth generation (6G) of radio communication networks [13]. For RIS reconfiguration there are several optimization algorithms, such as Particle Swarm Optimization (PSO) [14], Network Slicing Resource Optimization (NSRO) [15], the Salp Swarm Algorithm (SSA) [16], and gradient-based optimization algorithms, which can be used to maximize signal reception in space. In this paper we analyze an open source multidimensional minimization algorithm from the GNU Scientific Library (GSL) as the multivariable optimization algorithm for RIS reconfiguration in a smart radio environment.
## 2 Simulation Set-Up
The electromagnetic simulations analyze the focusing performance of the RIS under dynamic reconfiguration. In order to analyze the EM response of the RIS reconfiguration, numerical simulations were run according to the finite difference time domain (FDTD) method to reproduce computational electrodynamics. The entire FDTD code has been implemented in C, following the standard formulation. The electromagnetic simulations were performed in a domain of 140x140x200 cells with a 1 mm cell size. The domain includes the RIS and two antennas, one transmitting antenna (Tx) and one receiving antenna (Rx), separated
by a perfect electric conductor (PEC) metallic barrier. The antennas are dipole antennas of 10 cells working in the 2500-5000 MHz frequency band. The barrier consists of 1x60x60 cells placed in the middle between the two antennas in order to avoid direct communication between them. The main characteristic of the barrier is the PEC boundary condition, which imposes a zero tangential electric field and therefore does not allow any signal to be transmitted through the barrier. In this way, the only contribution that allows the signal from the Tx antenna to reach the Rx antenna resides in the RIS. The simulated RIS is a periodic structure made up of 5x5 perfectly conductive resonant units (PECs) connected to each other by means of varactor diodes. The structure has dimensions equal to \(56\times 1\times 56\) cells, in which the patches are separated by 1 cell. Each patch communicates with two neighboring patches via variable-capacitance varactor diodes, for a total of 40 diodes, on a dielectric film of 1 cell with \(\epsilon_{r}=3\), separated from the margins by 1 cell. This RIS is positioned in front of the dividing barrier, representing the only contribution for the receiving antenna. The entire configuration is shown in Figure 2. The dielectric material of the interior of the chamber is air, while the walls of the simulation space are perfectly matched layers (PML).
## 3 Multidimensional Minimization
The GNU Scientific Library is a set of open source numerical routines for scientific computing written in C and C++. It includes different kinds of optimization algorithms that can be implemented in order to find the minimum or maximum of a given function. Among the optimization algorithms available in GSL, multidimensional minimization has been chosen. This choice is motivated by the number of varactor diodes, each of which is characterized by its own capacitance. The goal of the algorithm is to iteratively find the ideal configuration of the varactor capacitances in order to have the maximum signal towards the receiving antenna. The multidimensional minimization routine looks for a point where the function to be optimized \(f(x_{1},..x_{n})\) assumes the lowest value with respect to the neighboring points. To find the maximum, however, the objective is reversed to \(1-\min(f(x_{1},..x_{n}))\). The algorithm begins with a guess and descends using a search technique. A one-dimensional line minimization is performed along the gradient of the function until the lowest point is located within a reasonable tolerance. The search direction is then updated using local information from the function and its derivatives, and the process is repeated until the true n-dimensional minimum is found. The iteration process is continued until the convergence criterion is satisfied. The process entails three steps: setting up the minimizer state s for the chosen algorithm T, updating s with one iteration of T, and testing s for convergence, repeating the iteration if necessary. The pre-built initialization function sets up the minimizer s to minimize the function fdf, starting from the initial point x. The tol parameter controls the accuracy of the line minimization, while the step-size parameter controls the size of the first trial step. The line minimization is regarded as successful when the gradient g of the function is orthogonal to the current search direction p to a relative accuracy of tol, that is:
\[p\cdot g<tol|p||g| \tag{1}\]
A tolerance value of 0.1 is chosen for the majority of tasks because the multidimensional minimization algorithm's line minimization only needs to be performed approximately. A parametric function of n variables \(f(x,\text{params})\) must be defined for the minimizers, along with two additional
Figure 1: Representation of the simulated 56 \(\times\) 56 mm \({}^{2}\) RIS. It consists of 5 \(\times\) 5 PEC patches of 1 cm \({}^{2}\) connected by 40 varactor diodes, vertically and horizontally polarized.The dielectric support of \(\epsilon_{r}=3\) and 1 mm of thickness.
Figure 2: Representation of the simulation configuration: the RIS is positioned in the upper center of the workspace, the PEC barrier in the lower center while the two Tx and Rx antennas are located on the sides of the barrier. All the measures are in mm.
routines: one for calculating the function's gradient and the second for calculating both the function value and the gradient. The iteration functions are used to update the minimizer's state, and because the same function is used by all minimizers, different techniques can be swapped at runtime without modifying the code. The minimization process should stop when one of the following conditions is met: a minimum has been found within the user-specified precision, a user-specified maximum number of iterations has been reached, or an error has occurred. The minimization process compares the gradient g norm to the absolute tolerance epsabs. At a minimum, the gradient of a multidimensional function is zero. The test returns success if this condition is met:
\[|g|<epsabs \tag{2}\]
At this point the algorithm stops, and the optimum set of parameters is then used in a final FDTD simulation with the best RIS reconfiguration.
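GSL itself is a C library; purely to illustrate the loop structure described in this section (set up the minimizer state, iterate with a rough line minimization, stop when the gradient norm falls below epsabs), the sketch below re-implements a Fletcher-Reeves conjugate-gradient loop in Python. The quadratic objective standing in for the FDTD-evaluated received-signal figure of merit, the 40-element capacitance vector, and all numerical parameters are assumptions chosen for demonstration only, not the code used for the simulations.

```python
import numpy as np

def cg_minimize(f, grad, x0, step_size=0.01, epsabs=1e-3, max_iter=100):
    """Illustrative Fletcher-Reeves conjugate-gradient loop.

    Mirrors the three steps described above: set up the state, iterate
    (one rough line minimization per iteration), and stop when |g| < epsabs.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < epsabs:        # gradient-norm convergence test
            break
        # rough line minimization along d: expand, then backtrack, the trial step
        t = step_size
        while f(x + 2.0 * t * d) < f(x + t * d):
            t *= 2.0
        while f(x + t * d) > f(x) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / max(g @ g, 1e-30)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if g_new @ d >= 0:                    # not a descent direction: restart
            d = -g_new
        x, g = x_new, g_new
    return x

# Toy stand-in objective: 40 varactor capacitances with optimum at 1 pF each.
n_diodes = 40
target = np.full(n_diodes, 1.0)
f = lambda c: float(np.sum((c - target) ** 2))
grad = lambda c: 2.0 * (c - target)
best = cg_minimize(f, grad, x0=np.zeros(n_diodes))
print(np.round(best[:5], 3))                  # values close to 1.0
```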
## 4 Results
This section shows the results of the electromagnetic simulations according to the FDTD method, considering the set-up shown in Figure 2. The reported results refer to the electromagnetic response of the RIS before and after the metasurface reconfiguration. On the left in Figure 3, the electromagnetic field distribution is shown for the not reconfigured RIS. In this case the diodes are all switched on at a fixed 1 pF capacitance and, as can be seen, there is no focusing toward the receiving antenna. On the right in Figure 3, the electromagnetic field distribution is shown for the reconfigured RIS. In this case, instead, there is greater focusing near the reception point. This distribution of fields focused towards the receiving antenna significantly improves the received signal. To quantify the signal received in the two configurations, Figure 4 reports the received signal amplitude in the time domain for both configurations. As can be seen, in the case of the not
Figure 4: Received signal amplitude in time domain in case of not configured RIS (on top - red line) and configured RIS (top - blue line). The maximum value of the not configured RIS is 0.015 V while the configured one is 0.02 V. On the bottom is reported the \(s_{21}\) in frequency domain in case of not configured RIS (red line) and configured RIS (blue line) from 2.5 GHz up to 5.0 GHz.
Figure 3: Electromagnetic field distribution in case of not reconfigured RIS (on the left) and in case of configured RIS (on the right).
reconfigured RIS (red line), the received signal amplitude is lower than in the case of the reconfigured RIS (blue line). Figure 4 also reports the \(s_{21}\) in the frequency domain, in order to analyze the spectrum of the received signal. Both from 2.5 GHz up to 3.25 GHz and from 3.25 GHz up to 5 GHz, the received signal of the reconfigured RIS is higher than that of the not reconfigured RIS.
## Acknowledgements
This work has been supported by EU H2020 RISE-6G project under the grant number 101017011.
## 5 Conclusion
In conclusion, the GNU Scientific Library's multidimensional optimization algorithm is effective at finding the optimal configuration for a specific point of focus. The algorithm also has two variations, the Fletcher-Reeves conjugate gradient algorithm and the Polak-Ribiere conjugate gradient algorithm. Both methods use a sequence of search directions to approximate the curvature of the function near the minimum, and they differ only in how the coefficient beta used to update the search direction is computed. Next steps foresee the analysis of these two versions of the multidimensional algorithm to guarantee a better reconfiguration of the metasurface. Upcoming activities are focused on analyzing different optimization algorithms and setups for the best reconfiguration of the RIS in smart radio environments.
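For reference, the coefficient \(\beta\) that distinguishes the two variants takes the following standard forms, where \(g_{k}\) denotes the gradient at iteration \(k\):

\[\beta_{k}^{FR}=\frac{g_{k}\cdot g_{k}}{g_{k-1}\cdot g_{k-1}},\qquad\beta_{k}^{PR}=\frac{g_{k}\cdot\left(g_{k}-g_{k-1}\right)}{g_{k-1}\cdot g_{k-1}}.\]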
|
2307.16382 | Does fine-tuning GPT-3 with the OpenAI API leak personally-identifiable
information? | Machine learning practitioners often fine-tune generative pre-trained models
like GPT-3 to improve model performance at specific tasks. Previous works,
however, suggest that fine-tuned machine learning models memorize and emit
sensitive information from the original fine-tuning dataset. Companies such as
OpenAI offer fine-tuning services for their models, but no prior work has
conducted a memorization attack on any closed-source models. In this work, we
simulate a privacy attack on GPT-3 using OpenAI's fine-tuning API. Our
objective is to determine if personally identifiable information (PII) can be
extracted from this model. We (1) explore the use of naive prompting methods on
a GPT-3 fine-tuned classification model, and (2) we design a practical word
generation task called Autocomplete to investigate the extent of PII
memorization in fine-tuned GPT-3 within a real-world context. Our findings
reveal that fine-tuning GPT3 for both tasks led to the model memorizing and
disclosing critical personally identifiable information (PII) obtained from the
underlying fine-tuning dataset. To encourage further research, we have made our
codes and datasets publicly available on GitHub at:
https://github.com/albertsun1/gpt3-pii-attacks | Albert Yu Sun, Eliott Zemour, Arushi Saxena, Udith Vaidyanathan, Eric Lin, Christian Lau, Vaikkunth Mugunthan | 2023-07-31T03:17:51Z | http://arxiv.org/abs/2307.16382v3 | # Does fine-tuning GPT-3 with the OpenAI API leak personally-identifiable information?
###### Abstract
Machine learning practitioners often fine-tune generative pre-trained models like GPT-3 to improve model performance at specific tasks. Previous works, however, suggest that fine-tuned machine learning models memorize and emit sensitive information from the original fine-tuning dataset. Companies such as OpenAI offer fine-tuning services for their models, but no prior work has conducted a memorization attack on any closed-source models. In this work, we simulate a privacy attack on GPT-3 using OpenAI's fine-tuning API. Our objective is to determine if personally identifiable information (PII) can be extracted from this model. We (1) explore the use of naive prompting methods on a GPT-3 fine-tuned classification model, and (2) we design a practical word generation task called Autocomplete to investigate the extent of PII memorization in fine-tuned GPT-3 within a real-world context. Our findings reveal that fine-tuning GPT3 for both tasks led to the model memorizing and disclosing critical personally identifiable information (PII) obtained from the underlying fine-tuning dataset. To encourage further research, we have made our codes and datasets publicly available on GitHub at: [https://github.com/albertsun1/gpt3-pii-attacks](https://github.com/albertsun1/gpt3-pii-attacks).
## 1 Introduction
On July 13, 2023, the US Federal Trade Commission (FTC) announced its open investigation into the ChatGPT maker OpenAI for the company's recent record of personal data breaches Zakrzewski (2023). Globally, recent frameworks like the European Union's General Data Protection Regulation (GDPR) and China's Measures for the Management of AI Services also address personal data breaches GPDR (2023); CAC (2023).
Fine-tuning large models for specific tasks is crucial in machine learning, especially in natural language processing applications Hu et al. (2021); Devlin et al. (2019); Howard and Ruder (2018); Sun et al. (2023). As a result, many popular generative model providers like OpenAI offer services that allow consumers to fine-tune large language models for specific tasks. OpenAI offers an API 1 to fine-tune their base GPT-3 models, which allows users to adapt transformer models, like GPT-3, to perform better at specific tasks Vaswani et al. (2017).
Footnote 1: [https://platform.openai.com/docs/guides/fine-tuning](https://platform.openai.com/docs/guides/fine-tuning)
However, previous studies have shown that training large language models without adequate privacy techniques can lead to severe risks of PII leakage Lukas et al. (2023); Carlini et al. (2021). While previous studies focused on older, smaller language models (BERT, GPT-2, etc.), no study has done a comprehensive investigation of whether this vulnerability persists with OpenAI's fine-tuning API and GPT-3.
In light of privacy concerns and regulatory interest, we investigate whether OpenAI's fine-tuning API preserves privacy for PII. In this work, we simulate experiments where an attacker has "black box" access to a fine-tuned OpenAI language model. In this scenario, the attacker attempts to retrieve PIIs by prompting the model. A simple PII extraction attack attempts to answer the question that recent regulations pose. Specifically, we investigate whether attackers can extract PIIs that appear
Figure 1: We run two experiments, **Classification** and **Autocomplete**, to detect memorization of sensitive personally identifiable information (PII) by fine-tuned GPT-3.
in the fine-tuning dataset from a language model.
We used the Enron email dataset as our training data, a gold standard in privacy research on language models for its authentic company correspondence and personal identifying information (PII) (Klimt and Yang, 2004). The open-source Enron email dataset was released to the public domain by the Federal Energy Regulatory Commission (FERC) during its 2002 investigation. It is currently one of the most common datasets for linguistics and privacy research because it is one of the only "substantial collections of real email made public" (Cohen, 2015). Given that the topics discussed in the emails are business-related, attacking a language model trained on this data can reveal privacy vulnerabilities in enterprise use-cases.
GPT-3 models excel at classification and text generation tasks (Brown et al., 2020). In the following two experiments, we adapt the Enron dataset to both of these tasks, which we call **classification** and **autocomplete**. As shown in Fig 1, the first experiment uses the Enron email dataset to train an email classifier, and the second experiment uses the Enron email dataset to train an autocomplete service where a user inputs the subject of an email and the model is trained to output the body of the email.
Our work contributes to the literature as follows:
* **Extraction attack on OpenAI fine-tuning API and GPT-3.** To our knowledge, we are the first to conduct public extraction attacks on fine-tuned GPT-3 and on any proprietary fine-tuning API, including OpenAI's. OpenAI does not release specific information about how it fine-tunes models, so its training method (adapters, which model weights are frozen, privacy techniques) remains a black box to users. As a result, we argue that it is important for consumers to test how privacy-preserving their fine-tuning process is.
* **Designed practical attack scenario: autocomplete.** Besides replicating a naive extraction attack, we introduce a practical PII extraction scenario called autocomplete. This attack simulates a real-world product where a user uses the language model to suggest the body of their emails given a subject line.
## 2 Relevant Works/Background
### Recent Policy
The General Data Protection Regulation (GDPR) asserts that the leakage of personally identifiable information (PII) can lead to rights limitation, increased discrimination, identity theft, financial loss, and severe repercussions for affected users. Other significant policies regulating data leakage, like China's Interim Measures for AI Services Management and the US's Health Insurance Portability and Accountability Act, are in the process of being implemented or have already been implemented. A detailed table with these policies can be found in Appendix section A.
### Definition of Personally Identifiable Information (PII)
We use the same definition of PII as the definition of "re-identifying information" from (Pilan et al., 2022). Re-identifying information can fall under two categories: direct identifiers and indirect identifiers. Direct identifiers correspond to values that are unique to an identity such as a full name of person or cellphone number, whereas indirect identifiers are pieces of information that can be used in conjunction to reidentify an individual like gender and someone's first name. If leaked, indirect identifiers can still hold large, negative privacy implications. An adversary can use the three indirect identifiers of gender, birth date, and postal code to re-identify 63-87% of a population, given the public availability of the U.S. Census (Golle, 2006).
Figure 2: Step-by-step flowchart of our simulated attacks for a GPT-3 (curie) model trained on the Enron Email dataset using the OpenAI fine-tuning API.
### Fine-tuning large language models
Earlier studies like Hu et al. (2021) highlight the significance of fine-tuning large language models for utility and accuracy, including newer models like GPT-3. Using the OpenAI API, Fine-tuning GPT-3 starts with a large curated enterprise dataset of prompts and completions. These are two columns, where prompts contains all the data points and completions contains the ground truth classification/output/intended next step of the prompt.
### Privacy attacks
There has been a recent increase in research on privacy attacks on machine learning models Shokri et al. (2017); Carlini et al. (2021). The former work introduces membership inference attacks (MIA), and the latter studies data extraction attacks in the context of language models. In this study, we focus on implementing data extraction attacks on large language models, akin to He et al. (2022), and specifically on extracting personally identifiable information (PII), like Lukas et al. (2023); Diera et al. (2022); Mireshgallah et al. (2022). Unlike previous work that focused on language models such as GPT-2 and BERT Lukas et al. (2023); Diera et al. (2022); Mireshgallah et al. (2022), we conduct an extraction attack on GPT-3 and OpenAI's fine-tuning API.
## 3 Methodology
### Classification
Many language models are used in classification settings, such as sentiment analysis. We investigate whether an adversarial user with access to a fine-tuned GPT-3 classification model can accurately access the contents of the original fine-tuning set.
For the purpose of our initial experiment, we adopt a similar setup to recent papers that have conducted extraction attacks on smaller language models Lukas et al. (2023); Diera et al. (2022). To conduct a naive extraction attack on a GPT-3 powered classification model, we replicate the experimental setup used by Diera et al. (2022), which conducted extraction attacks on the older transformer model BERT. In particular, we use the same subset of the original Enron email dataset used in Diera et al. (2022). In this dataset, each email is organized within one of seven different folders (e.g., deal discrepancies, personal, online trading). Using the OpenAI fine-tuning API, we train models to categorize the Enron emails into their respective folders.
To enable a generative model trained for classification to produce free text, we exclude the prompt/completion separator token when querying the fine-tuned model, so as to specifically examine the memorization of PIIs during fine-tuning, disregarding any potential memorization during pre-training, in this experiment.
### Autocomplete
In this second task, we explore email body generation, inspired by Google's Smart Compose (Lambert, 2018). Our model suggests text for the body of an email based on the given subject.
To acquire our data, we extracted a subset of the Enron email dataset. Similar to the Annotated Enron Subject Line Corpus (AESLC) dataset from the Enron dataset (Zhang and Tetreault, 2019), we focused on cleaning up our dataset to isolate the subjects and bodies of the emails. After randomly selecting approximately 600 emails, we manually filtered them down to a subset of 149 emails using the following filters. For the body of the emails, we applied filters such as a minimum of 3 sentences, a minimum of 25 words, and a maximum of 256 words. We manually excluded emails that seemed to be notifications, bulletins, promotions, or customer service communications, as they are less relevant to our goal of developing a "smart-compose" model for general workplace usage. Moreover, we eliminated emails that had limited natural language content, particularly those with in-line graphs or charts, as they could negatively impact the quality of the fine-tuned models' natural language outputs.
To organize our email dataset, we utilized the prompt/completion format. Given an email's subject line \(s\) and body \(b\), the prompt follows the format: "Generate the body of an email from the following subject line. Subject: [\(s\)]". The corresponding completion is: "[\(b\)]".
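As an illustration of this format, the sketch below writes the prompt/completion records in the JSONL layout consumed by the OpenAI fine-tuning API, reading [\(s\)] and [\(b\)] in the template above as placeholders for the subject and body. The input list of (subject, body) pairs, the output filename, and the toy example are assumptions for demonstration, not the paper's preprocessing code.

```python
import json

def build_autocomplete_records(emails, out_path="enron_autocomplete.jsonl"):
    """Write prompt/completion pairs for the autocomplete fine-tuning task.

    `emails` is an iterable of (subject, body) tuples taken from the
    filtered Enron subset described above.
    """
    with open(out_path, "w", encoding="utf-8") as f:
        for subject, body in emails:
            record = {
                "prompt": ("Generate the body of an email from the following "
                           f"subject line. Subject: {subject}"),
                "completion": body,
            }
            f.write(json.dumps(record) + "\n")

# hypothetical usage with a made-up email
build_autocomplete_records([
    ("Q3 gas trading update",
     "Team, please find the latest positions attached and let me know if anything looks off."),
])
```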
The train set consists of the subjects of the emails in the training set, and the test set (henceforth labelled as OOD) consists of the subjects of the emails in a hold-out set that were not included in the train set. The purpose of the OOD set is to determine whether personally identifiable information (PII) can be unintentionally revealed when writing other emails. We aim to answer the question: if a user attempts to use this model for email composition, how likely are they to encounter personally identifiable information (PII) from other emails?
Similarly to experiment 1, we trained a GPT-3 (curie) model for our analysis. We conducted 5 queries per email subject using the train and test sets. Consequently, for the train set consisting of 149 email subjects, we performed a total of 745 model queries. Likewise, for the test set containing 255 email subjects, we made 1275 model queries. Qualitatively speaking, we found that a higher percentage of the generated PII were obviously Enron-specific PIIs, so we didn't apply the same set-difference PII filtering process as we did in the first experiment.
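The querying step can be sketched as follows with the legacy (pre-1.0) openai Python client, which exposes `openai.Completion.create` and an `n` argument for the number of completions per prompt; the API key, the fine-tuned model identifier, the sampling parameters, and the output file are placeholders rather than the values used in the paper.

```python
import json
import openai

openai.api_key = "sk-..."                       # placeholder key
FINE_TUNED_MODEL = "curie:ft-your-org-2023"     # placeholder fine-tuned model id

def query_subjects(subjects, n_per_subject=5, out_path="generations.jsonl"):
    """Query the fine-tuned model n_per_subject times for each email subject."""
    with open(out_path, "w", encoding="utf-8") as out:
        for subject in subjects:
            prompt = ("Generate the body of an email from the following "
                      f"subject line. Subject: {subject}")
            resp = openai.Completion.create(
                model=FINE_TUNED_MODEL,
                prompt=prompt,
                max_tokens=256,
                temperature=1.0,
                n=n_per_subject,
            )
            for choice in resp["choices"]:
                out.write(json.dumps({"subject": subject,
                                      "generation": choice["text"]}) + "\n")

# example call (requires a valid key and model id):
# query_subjects(["Q3 gas trading update"])
```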
## 4 Results
### Classification
With just 1800 generations of text (around 250k tokens generated), we were able to recover 256 unique PIIs from our fine-tuning dataset. During this experiment, we extracted names of confidential corporate parties such as "Enron North America," "KPMG," and "C-SPAN," as well as real personal names like "Jeffrey K. Skilling," "J. Kaminski," "Tracy Smith," and "Jeffrey C." These extractions were made after excluding common PIIs that also appear when generating from the baseline non-fine-tuned model. Table 1 shows the breakdown of types of exposed PIIs; organization words and people names are memorized the most out of these categories.
We observe that GPT-3 recalls 4.06% of PII in the Enron fine-tuning dataset with a precision of 2.45%. Precision measures an attacker's confidence that a generated PII is in the training set, and recall measures how much a PII is at risk of extraction. Therefore, approximately 1 out of every 40 PIIs retrieved from the fine-tuning dataset would be valuable for the attacker.
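A minimal sketch of how these two quantities can be computed from sets of extracted strings is given below; the baseline-subtraction step mirrors the filtering of common PIIs mentioned in the previous paragraph, and the exact-string matching and variable names are simplifying assumptions.

```python
def pii_leakage_metrics(generated_piis, baseline_piis, training_piis):
    """Precision and recall of an extraction attack.

    generated_piis : PII strings found in the fine-tuned model's outputs
    baseline_piis  : PII strings also produced by the non-fine-tuned model
    training_piis  : PII strings present in the fine-tuning dataset
    """
    candidates = set(generated_piis) - set(baseline_piis)   # drop "common" PIIs
    leaked = candidates & set(training_piis)
    precision = len(leaked) / len(candidates) if candidates else 0.0
    recall = len(leaked) / len(training_piis) if training_piis else 0.0
    return precision, recall

# toy usage
p, r = pii_leakage_metrics(
    generated_piis={"Jeffrey K. Skilling", "John Doe", "KPMG"},
    baseline_piis={"John Doe"},
    training_piis={"Jeffrey K. Skilling", "KPMG", "Tracy Smith"},
)
print(f"precision={p:.2%}, recall={r:.2%}")   # precision=100.00%, recall=66.67%
```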
Through qualitative analysis, we found that a significant number of the extracted PIIs are related to the Enron Corporation and the 2000-01 California energy scandal (Borger, 2005). Therefore, this extraction attack on the Enron dataset has the potential to reveal information about the fine-tuning data as well as the sensitive topics surrounding the scandal.
### Automplete
Our metrics in Table 2 reveal that a significant risk of encountering sensitive personally identifiable information (PII) exists for users of the Autocomplete machine learning model. In the case of subject lines from the training set (Train), we were able to retrieve 236 PIIs. Furthermore, for subject lines not present in the training set (OOD), we retrieved
223 PIIs. The precision for recalling PIIs was measured to be 5.71% for train prompts and 13.16% for OOD prompts. These results indicate that around 10% of the PIIs emitted by the fine-tuned GPT-3 model match exact PIIs found in the Enron email training data. Additionally, fine-tuned GPT-3 recalls 26.29% and 27.83% of PII in the Enron fine-tuning autocomplete dataset for the train prompts and OOD prompts, respectively. With just approximately 1000 calls to fine-tuned GPT-3, we were able to identify over a quarter of the PIIs present in our dataset of about 150 emails.
The data leakage of the OOD setup is slightly lower than the train set, but it still remains high. This means that in practical settings where a user uses the autocomplete product with novel email subjects, it remains likely that they can still see leaked PIIs when using the product.
For a comprehensive breakdown of specific examples and PII leakage, refer to Table 4 in Appendix section A.2 for our train set and Table 5 for our OOD set.
## 5 Discussion
Our work demonstrates that sensitive personal identifying information (PII) can be extracted from both our naive setting (classification) and practical setting (autocomplete), where users have black box access to the model. We find that GPT-3 models fine-tuned for classification and autocomplete tasks can successfully retrieve sensitive PIIs with simple prompts.
Using APIs like OpenAI's fine-tuning interface exposes user-sensitive data and PII to the risk of extraction, potentially resulting in data breaches, lawsuits, and significant fines for non-compliance with privacy regulations. We believe it is valuable to explore privacy-preserving finetuning models that incorporate privacy techniques such as differential privacy and PII scrubbing to make language models resistant to extraction attacks Lukas et al. (2023).
Recent research indicates that larger models are more prone to memorizing fine-tuning datasets and
| Category | Number of leaked PIIs from Enron emails | Examples |
| --- | --- | --- |
| Person | 51 | "Jeffrey K. Skilling", "J. Kaminski", "Tracy Smith", "Jeffrey C" |
| Organization | 92 | "Enron North America", "C-SPAN", "Enron Investor Relations", "KPMG" |
| Geopolitical entities (countries, cities, states) | 28 | "Santa Clara", "Palo Alto", "Calif.", "Turkmenistan" |
| Facilities | 3 | "the Houston Astrodome", "Smith Street", "Enron Field" |
| Dollar Amounts | 28 | "19 million", "29.95", "120,000", "8.5 million" |
| Cardinal (numeric values) | 55 | "8/1/99", "5/21/2000", "11/26/99" |
| Total | 257 | - |

Table 1: Breakdown of leaked PIIs for classification task
| Prompts | # PIIs retrieved | Precision | Recall |
| --- | --- | --- | --- |
| Train | 236 | 13.16% | 27.83% |
| OOD | 223 | 5.71% | 26.29% |

Table 2: Precision and recall for our data extraction attacks on the autocomplete task. We conduct a data extraction attack using email subject prompts from the training set (Train) and email subject prompts from outside the training set (OOD).
becoming vulnerable to extraction attacks (Lukas et al., 2023). As OpenAI and other providers of large language models consider offering fine-tuning API support for larger models, users should be aware of the heightened susceptibility to privacy attacks and memorization if privacy mitigation techniques are not implemented.
## Ethics Statement
We have minimized public disclosure of actual names in the Enron email dataset. We have only included leaked PIIs already associated with the Enron email dataset, such as "Jeffrey Skilling", the former CEO of Enron Corporation.
## Contribution Statement
A. Sun drafted the paper, designed the experiments with E. Zemour, implemented the experiments, and made the figures. E. Zemour designed the second experiment and contributed edits to the paper. A. Saxena wrote the policy sections in the introduction, contributed research toward Table 3, and made edits to the paper. U. Vaidyanathan, E. Lin, C. Lau, V. Mugunthan provided initial direction to the paper and comments to the manuscript.
|
2309.11141 | Uniqueness of Obstacles in Riemannian Manifolds from Travelling Times | Suppose that $K$ and $L$ are two disjoint unions of strictly convex obstacles
with the same set of travelling times, contained in an $n$-dimensional
Riemannian manifold $M$ (where $n\geq2$). Under some natural curvature
conditions on $M$, and provided that no geodesic intersects more than two
components in $K$ or $L$, we show that $K = L$. | Tal Gurfinkel, Lyle Noakes, Luchezar Stoyanov | 2023-09-20T08:37:29Z | http://arxiv.org/abs/2309.11141v1 | # Uniqueness of Obstacles in Riemannian Manifolds from Travelling Times
###### Abstract
Suppose that \(K\) and \(L\) are two disjoint unions of strictly convex obstacles with the same set of travelling times, contained in an \(n\)-dimensional Riemannian manifold \(M\) (where \(n\geq 2\)). Under some natural curvature conditions on \(M\), and provided that no geodesic intersects more than two components in \(K\) or \(L\), we show that \(K=L\).
keywords: Travelling Times, Inverse Scattering, Convex Obstacles, Riemannian Manifolds

MSC: [2020] 37D40, 37C83, 53C21

Boundary rigidity problems have been important in Riemannian geometry for over one hundred years, thanks to their practical applications in tomography and geophysics [13]. Inverse problems similar to boundary rigidity problems, such as lens rigidity problems and marked length spectrum problems, have also been studied [5; 7]. We also consider an inverse problem related to boundary rigidity, concerning the recovery of the boundary of many disjoint obstacles contained within an exterior body, from scattering data known only along the boundary of the exterior body. Classically, boundary rigidity problems make use of data given by the geodesic flow, while our data will be determined by the billiard flow. Note that we aim to determine the boundary of several bodies, while in boundary rigidity problems this information is already known and the objective is to recover the metric on the given manifold.
We suppose that \(M\) is a complete \(n\)-dimensional Riemannian manifold (\(n\geq 2\)). We denote the metric on \(M\) by \(\langle\ \cdot\ \rangle\), and the sectional curvature of \(M\) by \(sec_{M}(X,Y)\) for any \(X,Y\in TM\). We say that an \(n\)-dimensional submanifold \(W\) of \(M\) is strictly convex if the second fundamental form of \(\partial W\) is positive definite with respect to the inward
normal field of \(\partial W\). We ought to remark that this notion of convexity is sometimes known as infinitesimal convexity. It is known that strict convexity implies local convexity [2]. Furthermore, under certain curvature conditions on \(M\), it is known that if \(H\) is a strictly convex hypersurface of \(M\) then \(H\) is the boundary of a convex body \(\widetilde{H}\) in \(M\). That is, \(\widetilde{H}\) is convex in the sense that any two points in \(\widetilde{H}\) are connected by a unique geodesic of \(M\) which is contained in \(\widetilde{H}\) (cf. [1, 14, 10]). Nonetheless for our purposes it is sufficient to only assume that strict convexity of the boundary holds. Let \(S\) be a \(n\)-dimensional, strictly convex submanifold of \(M\) with smooth boundary, such that between any two points \(p,q\in S\) there is a unique minimal smooth geodesic of \(S\) connecting \(p\) and \(q\). Note that this does not necessarily imply that for every \(p,q\in S\) there is a unique minimal smooth geodesic of \(M\) between \(p\) and \(q\) which is contained in \(S\). By an _obstacle_ we mean a union \(K=K_{1}\cup\cdots\cup K_{d}\) of \(d\geq 2\) disjoint \(n\)-dimensional, strictly convex submanifolds \(K_{i}\) of \(S\) with smooth boundary. Let \(S_{K}=\overline{S\backslash K}\), we denote the (smooth) geodesic flow induced by \(M\) as \(\mathcal{F}\) and the billiard flow induced by \(S_{K}\) as \(\mathfrak{F}^{K}\). Denote the unit tangent bundle of \(S_{K}\) by \(T_{1}S_{K}\), and for any \(\sigma\in T_{1}S_{K}\), let \(\gamma_{\sigma}(t)=\mathrm{pr}_{1}\circ\mathfrak{F}^{K}_{t}(\sigma)\) be the billiard ray generated by \(\sigma\), where \(\mathrm{pr}_{1}:TM\to M\) is the usual projection. If there are finite distinct times \(\underline{t}\leq 0\leq\overline{t}\) such that \(\gamma_{\sigma}(\underline{t}),\gamma_{\sigma}(\overline{t})\in\partial S\) we say that \(\gamma_{\sigma}\) is a _non-trapped_ ray. Otherwise we say that it is _trapped_, and denote the set of \(\sigma\in T_{1}S_{K}\) such that \(\gamma_{\sigma}\) is trapped by \(\mathrm{Trap}(S_{K})\). Denote the inward normal of \(\partial S\) by \(N_{S}\), and the set of inward pointing unit vectors on \(\partial S\) by
\[T^{+}\partial S=\{\sigma\in T_{1}S:\mathrm{pr}_{1}(\sigma)\in\partial S\text{ and }\langle\sigma,N_{S}\rangle>0\},\]
Denote the set of trapped rays in \(T^{+}\partial S\) by \(\mathrm{Trap}(\partial S)^{(K)}=\mathrm{Trap}(S_{K})\cap T^{+}\partial S\). For any non-trapped \(\sigma\in T^{+}\partial S\backslash\mathrm{Trap}(\partial S)^{(K)}\), there is a time \(t(\sigma)>0\) such that \(y(\sigma)=\gamma_{\sigma}(t(\sigma))\in\partial S\). Let \(x(\sigma)=\gamma_{\sigma}(0)\). The set of _travelling times_ of \(K\) is defined as
\[\mathcal{T}_{K}=\{(x(\sigma),y(\sigma),t(\sigma)):\sigma\in T^{+}\partial S \backslash\mathrm{Trap}(\partial S)^{(K)}\}.\]
Suppose that \(K\) and \(L\) are two obstacles which have the same set of travelling times. That is \(\mathcal{T}_{K}=\mathcal{T}_{L}\). When \(M\) is \(\mathbb{R}^{n}\), it is known (see [11]) that \(K=L\). Furthermore, when \(n=2\), and the conditions in eq. (1) or eq. (2) (see below) hold1 then one can reconstruct \(K\) directly from the set of travelling times (see [12, 6]), provided \(K\) is in _general position_, that is, no smooth geodesic in \(S\) will intersect more than 2 distinct components of \(K\).
Footnote 1: Note that in this case \(M\) does not have to be \(\mathbb{R}^{2}\), any Riemannian manifold satisfying the conditions will suffice. Setting \(n=2\) means that \(\xi=1\) and \(\varphi_{0}=0\) are sufficient.
Denote the minimum distance between any two distinct components in \(K\) by \(d^{K}_{\min}\) and in \(L\) by \(d^{L}_{\min}\). Set \(d_{\min}=\min\{d^{K}_{\min},d^{L}_{\min}\}\), and let \(D\) be the diameter of \(S\). Since \(S\) is compact, there is a constant \(sec_{\max}\in\mathbb{R}\) such that \(sec_{M}(X,Y)\leq sec_{\max}\) for all \(X,Y\in T_{1}S\). Since \(K\) and \(L\) are disjoint unions of strictly convex submanifolds, there exists a lower bound \(\kappa_{\min}>0\) on the principal curvatures of \(\partial K\) or \(\partial L\). Let \(\xi=\lceil\frac{D}{d_{\min}}\rceil+2\). By Lemma 4 there are angles \(\varphi_{0}^{K},\varphi_{0}^{L}\in(0,\pi/2)\) such that every ray in \(S_{K}\) and \(S_{L}\) respectively will hit \(\partial K\) and
\(\partial L\) at an angle of at most \(\varphi_{0}^{K}\) and \(\varphi_{0}^{L}\) with respect to the outward normals of \(\partial K\) and \(\partial L\) at least once every \(\xi\) reflections. We set \(\varphi_{0}=\min\{\varphi_{0}^{K},\varphi_{0}^{L}\}\). In Proposition 5 we construct strictly convex fronts from the tangent rays of the obstacles. Such a front is shown to have a lower bound \(\Theta_{0}>0\) on its minimum principal curvature. We note that \(\varphi_{0}\) and \(\Theta_{0}\) both depend only on \(K\) and \(L\). We ask that \(S\) satisfies one of the following conditions:
\[sec_{\max}\leq 0\mbox{ or,} \tag{1}\]
\[sec_{\max}>0,\mbox{ while }D\xi\sqrt{sec_{\max}}<\frac{\pi}{2}\mbox{ and }\tan\left(D\xi\sqrt{sec_{\max}}\right)\sqrt{sec_{\max}}<\Theta. \tag{2}\]
\[\Theta=\min\{2\kappa_{\min}\cos\varphi_{0},\Theta_{0}\}.\]
Consider any \(\widetilde{\sigma}\in T_{1}S\backslash\mbox{Trap}(S_{K})\) which generates a trajectory \(\gamma_{\widetilde{\sigma}}^{K}\) in \(S_{K}\) that is tangential to \(\partial K\) at some point. There is some \(\sigma\in T^{+}\partial S\) such that \(\gamma_{\sigma}^{K}\) and \(\gamma_{\widetilde{\sigma}}^{K}\) are the same ray (up to re-parameterisation). Let \(t_{K}^{*}>0\) be the minimum time at which \(\gamma_{\sigma}^{K}\) is tangent to \(\partial K\). Since \({\cal T}_{K}={\cal T}_{L}\) it follows that \(\gamma_{\sigma}^{L}\) must be tangent to \(\partial L\) as well. This holds owing to the fact that \(\sigma\) is a singularity of the travelling time function \(t_{K}:T^{+}\partial S\backslash\mbox{Trap}(\partial S)^{(K)}\rightarrow{\cal T}_{K}\). Since \(t_{L}=t_{K}\), the point \(\sigma\) must also be a singularity of \(t_{L}\) and hence generate a tangent ray in \(S_{L}\). Denote the minimum time at which \(\gamma_{\sigma}^{L}\) is tangent to \(\partial L\) by \(t_{L}^{*}>0\). Set \(t^{*}=\min\{t_{K}^{*},t_{L}^{*}\}\). We say that \(K\) and \(L\) are _equivalent up to tangency_ if for all such \(\sigma\in T_{1}S\),
\[\gamma_{\sigma}^{K}(t)=\gamma_{\sigma}^{L}(t)\quad\mbox{for all}\quad 0\leq t \leq t^{*}.\]
This allows us to state our main theorem:
**Theorem 1**.: _Let \(K\) and \(L\) be two disjoint unions of strictly convex obstacles contained within the same strictly convex submanifold \(S\) of \(M\). Suppose that \(K\) and \(L\) have the same set of travelling times, and satisfy either eq. (1) or eq. (2). If \(K\) and \(L\) are equivalent up to tangency, then \(K=L\). Consequently if \(K\) and \(L\) are both in general position then \(K\,=\,L\)._
Denote by \({\cal K}_{T}\) the class of pairs \((K,L)\) of obstacles with the same travelling times that are equivalent up to tangency. We remark that this class is nonempty, in fact if a pair of obstacles \((K,L)\) both satisfy Ikawa's no-eclipse condition (see [8]) and have the same set of travelling times then they are equivalent up to tangency. i.e. \((K,L)\in{\cal K}_{T}\). To show that this is in fact true, recall Ikawa's no-eclipse condition is as follows: for any three distinct components \(K_{i},K_{j},K_{l}\subseteq K\), if \(\mbox{Hull}(K_{i}\cup K_{j})\) is the convex hull of \(K_{i}\cup K_{j}\) then \(\mbox{Hull}(K_{i}\cup K_{j})\cap K_{l}=\emptyset\). We note that this is equivalent to saying that \(K\) is in general position. Proposition 10 shows that if \(K\) and \(L\) are both in general position, respectively, then they must be equivalent up to tangency.
## 1. The Propagation of Convex Fronts
Let \(X\) be a strictly convex codimension \(1\) submanifold of \(M\). Let \(N\) be a unit normal field to \(X\), then the second fundamental form of \(X\) is
\[h(Y,Z)=\langle\nabla_{Y}Z,N\rangle=-\langle\nabla_{Y}N,Z\rangle,\]
for any vectors \(Y,Z\) tangent to \(X\). Then \(h\) also defines the _shape operator_ of \(X\), a linear self-adjoint map \(s\), where \(\langle-sY,Z\rangle=h(Y,Z)\). Also note that \(sY=\nabla_{Y}N\). Now define the submanifold \(X_{t}\) as
\[X_{t}=\{\gamma_{(x,N(x))}(t):x\in X\}.\]
Here \(\gamma_{(x,N(x))}\) is the geodesic in \(M\) starting from \(x\in X\) in the direction of the outward normal \(N(x)\) to \(X\). We may consider \(h_{t}\) along each geodesic normal to \(X\), to obtain a relation between the second fundamental forms \(h_{0}\) and \(h_{t}\) of \(X\) and \(X_{t}\) respectively. When we do so, we take the parallel translates of vectors \(Y,Z\) from \(X\) to \(X_{t}\) along the geodesic. The purpose of this section is to describe, as directly as possible, the evolution of the curvature of \(X_{t}\) as it is propagated forward via the billiard flow. We do this either in terms of the operator \(s\), or the principal curvatures of \(X_{t}\).
**Proposition 2**.: _Suppose that \(\gamma_{x,N(x)}\) does not undergo any reflections for all \(x\in X\), and all \(t_{0}<t<t_{1}\). Then for all \(t_{0}<t<t_{1}\), the shape operator \(s\) of \(X_{t}\) satisfies the following differential equation,_
\[\dot{s}Y=R(N,Y)N-s^{2}Y, \tag{3}\]
_for all \(Y\) tangent to \(X\). Moreover, if \(k\) is a principal curvature of \(X_{t}\) then it satisfies the following differential equation:_
\[\dot{k}=-sec_{M}(N,V)-k^{2}, \tag{4}\]
_where \(V\) is the principal eigenvector corresponding to \(k\)._
Proof.: We compute the derivative:
\[\dot{s}Y =\nabla_{N}\nabla_{Y}N\] \[=R(N,Y)N+\nabla_{Y}\nabla_{N}N+\nabla_{[N,Y]}N.\]
Consider the term \([N,Y]\), by the symmetry of the connection we have:
\[[N,Y]=\nabla_{N}Y-\nabla_{Y}N=-\nabla_{Y}N=-sY.\]
Now since \(\nabla_{N}N=0\) we have,
\[\dot{s}Y =R(N,Y)N-\nabla_{sY}N\] \[=R(N,Y)N-s^{2}Y.\]
Note that the eigenvalues of \(s\) are the principal curvatures, so given an eigenvector \(V\) of \(s\) such that \(sV=kV\) and \(\langle V,V\rangle=1\), we have \(k=\langle sV,V\rangle\). Thus,
\[\dot{k} =\langle\dot{s}V,V\rangle\] \[=\langle R(N,V)N-s^{2}V,V\rangle\] \[=-sec_{M}(N,V)-k^{2}.\]
Recall that by \(sec_{M}(N,V)\), we denote the sectional curvature of \(M\) with respect to \(N\) and \(V\).
**Corollary 2.1**.: _Suppose that \(\Theta\) is a lower bound for the principal curvatures of \(X\). Then there exists a global lower bound \(k_{\min}>0\) for the principal curvatures of \(X_{t}\) provided that \(t<D\xi\)._
For a proof, see Theorem 3.1 in [15]. This involves solving a differential equation:
\[\dot{k}=-sec_{\max}-k^{2},\quad k(0)=\Theta,\]
which bounds eq. (4) below, for all time \(0<t<D\xi\). Note that for Corollary 2.1 to be true, the proof depends directly on the bounds given in eq. (1) and eq. (2).
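For the reader's convenience, a short version of the comparison computation behind this corollary, in the case \(sec_{\max}>0\), is the following. The comparison equation above has the explicit solution

\[k(t)=\sqrt{sec_{\max}}\,\tan\left(\arctan\left(\frac{\Theta}{\sqrt{sec_{\max}}}\right)-t\sqrt{sec_{\max}}\right),\]

and every principal curvature of \(X_{t}\) is bounded below by it. This lower bound stays positive on \([0,D\xi]\) exactly when \(\sqrt{sec_{\max}}\tan\left(D\xi\sqrt{sec_{\max}}\right)<\Theta\) (which forces \(D\xi\sqrt{sec_{\max}}<\pi/2\)), as required in eq. (2). When \(sec_{\max}\leq 0\), eq. (4) gives \(\dot{k}\geq-k^{2}\), so the simpler comparison \(k(t)\geq\Theta/(1+\Theta t)>0\) applies.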
**Proposition 3**.: _Suppose that \(\gamma_{x,N(x)}\) reflects off an obstacle \(K\) at \(t_{r}\). Let \(s^{-}\) and \(s^{+}\) be the shape operators of \(X\) before and after reflection respectively, at the point of reflection \(\gamma_{x,N(x)}(t_{r})\in K\). Let \(s_{K}\) be the shape operator of \(\partial K\). Then shape operators satisfy the following equation,_
\[s^{+}(Y_{+})-s^{-}(Y_{-})=-2\langle N_{-},N_{K}\rangle s_{K}(Y), \tag{5}\]
_for all \(Y\) tangent to \(K\). Here we denote by \(Y_{-}=Y-\langle Y,N_{-}\rangle N_{-}\) and similarly for \(Y_{+}\), where \(N_{-}\) is the normal to \(X\) prior to reflection and \(N_{K}\) is the normal to \(K\)._
Proof.: We note that along \(K\), we can define the vector field \(N_{-}\) of outward normals to \(X\) prior to reflection. We then define \(N_{+}\) as the vector field along \(K\) such that
\[N_{+}-N_{-}=-2\langle N_{-},N_{K}\rangle N_{K}.\]
Now we may take covariant derivatives along \(K\), in any tangent direction \(Y\),
\[\nabla_{Y}N_{+}-\nabla_{Y}N_{-}=-2Y\langle N_{-},N_{K}\rangle N_{K}-2\langle N _{-},N_{K}\rangle\nabla_{Y}N_{K}.\]
Then taking inner products with respect to any tangent vector \(Z\) to \(K\), we have
\[\langle\nabla_{Y}N_{+},Z\rangle-\langle\nabla_{Y}N_{-},Z\rangle=-2\langle N_ {-},N_{K}\rangle\langle\nabla_{Y}N_{K},Z\rangle.\]
Denote the projections of \(Y\) on to the tangent space of \(X\) before and after reflection by \(Y_{-}\) and \(Y_{+}\) respectively. i.e.
\[Y_{\pm}=Y-\langle Y,N_{\pm}\rangle N_{\pm}.\]
We similarly write \(Z_{\pm}\) for the projections of \(Z\). Thus we have
\[\langle\nabla_{Y_{+}}N_{+},Z_{+}\rangle-\langle\nabla_{Y_{-}}N_{-},Z_{-}\rangle= -2\langle N_{-},N_{K}\rangle\langle\nabla_{Y}N_{K},Z\rangle. \tag{6}\]
Which immediately implies eq. (5).
**Corollary 3.1**.: _If \(k_{\min}\) is the minimum principal curvature of \(X\) prior to reflection on \(\partial K\) at time \(t_{r}\), and \(k_{+}\) is any principal curvature after reflection, then_
\[k_{+}\geq k_{\min}+2\kappa_{\min}\cos\varphi, \tag{7}\]
_where \(\varphi\in(0,\frac{\pi}{2})\) is the angle between the outward normal to \(\partial K\) and the normal to \(X\) after reflection, and \(\kappa_{\min}\) is the minimal principal curvature of \(\partial K\)._
Proof.: Suppose \(V_{+}\) is a principal direction with \(||V_{+}||=1\), and let \(k_{+}\) be the corresponding principal curvature. We may pick \(Y\) such that \(Y_{+}=V_{+}\), using the notation for \(Y_{+}\) from the proof of Proposition 3. That is, we set
\[Y=V_{+}-\frac{\langle N_{K},V_{+}\rangle}{\langle N_{K},N_{+}\rangle}N_{+}.\]
Note that \(||Y_{-}||=1\) by construction since \(Y_{-}=Y_{+}-2\langle Y_{+},N_{K}\rangle N_{K}\), although \(Y_{-}\) might not be a principal direction of \(X\) prior to reflection. By setting \(Z_{\pm}=Y_{\pm}\) it now follows from eq. (6) that
\[k_{+}\geq k_{\min}+2\kappa_{\min}\cos\varphi\left|\left|Y\right|\right|^{2}.\]
Then we note that
\[||Y||^{2}=1+\frac{\langle V_{+},N_{K}\rangle^{2}}{\langle N_{+},N_{K}\rangle^ {2}}.\]
The result follows.
**Lemma 4**.: _There exist constants \(\xi\in\mathbb{Z}^{+}\) and \(\varphi_{0}\in(0,\frac{\pi}{2})\) such that any geodesic reflecting transversally on \(\partial K\) at least \(\xi\) times will hit \(\partial K\) at an angle \(\varphi<\varphi_{0}\) at least once, with respect to the outward normal._
Proof.: Suppose the contrary. Then there exists a sequence \(\{\sigma_{i}\}_{i=1}^{\infty}\subseteq T_{1}S_{K}\) such that the billiard ray \(\gamma_{\sigma_{i}}\) generated by \(\sigma_{i}\) has \(i\) reflections in \(S_{K}\), and at each point of reflection the angle between \(\dot{\gamma}_{\sigma_{i}}\) and the outward normal to \(\partial K\) is greater than \(\pi/2-1/i\). Since \(T_{1}S_{K}\) is compact it follows that there is a convergent subsequence \(\{\sigma_{i_{j}}\}_{j=1}^{\infty}\to\sigma^{*}\in T_{1}S_{K}\). Then by construction the billiard ray \(\gamma_{\sigma^{*}}\) has infinitely many points of reflection, all tangential to \(\partial K\). That is, \(\gamma_{\sigma^{*}}\) is a smooth geodesic of infinite length in \(S_{K}\). This is a contradiction, since the maximal length of a smooth geodesic in \(S\) is \(D\).
The following proposition allows one to encode the tangent ray to a small neighbourhood in \(\partial K\) as a strictly convex front. The proof is rather long and technical, however the result is essential for several results which follow.
**Proposition 5**.: _Fix a point \(x_{0}\in\partial K\) and tangent direction \(V\in T_{x_{0}}\partial K\) such that \(||V||=1\). There is a neighbourhood \(U\subseteq\partial K\) of \(x_{0}\) and a strictly convex front \(Y\) such that for every \(y\in Y\), the ray in the inward normal direction from \(y\) will intersect \(U\) tangentially. Hence, the front \(Y\) is diffeomorphic to \(U\). Furthermore, there is a global lower bound \(\Theta_{0}>0\) such that the minimum principal curvature of \(Y\) is greater than \(\Theta_{0}\) for all \(x_{0}\in\partial K\) and \(V\in T_{x_{0}}\partial K\)._
Proof.: Pick a small \(\varepsilon>0\), and let \(\exp_{x_{0}}^{(\partial K)}\) be the exponential map on \(\partial K\) at \(x_{0}\). Define \(x_{0}^{*}=\exp_{x_{0}}^{(\partial K)}(-\varepsilon V)\). It is known [3] that the geodesic sphere
\[\partial B_{\varepsilon}(x_{0}^{*})=\{x\in\partial K:d^{(\partial K)}(x,x_{0} ^{*})=\varepsilon\},\]
has a strictly positive second fundamental form, provided \(\varepsilon\) is sufficiently small. In fact, given any \(K_{B}>0\), there exists a \(\varepsilon>0\) such that the minimum principal curvature of \(\partial B_{\varepsilon}(x_{0}^{*})\) is bounded below by \(K_{B}\). Let \(0<\delta<\varepsilon\) and denote the family of spheres given by \(\partial B_{\varepsilon^{*}}(x_{0}^{*})\), where \(\varepsilon^{*}\in(\varepsilon-\delta,\varepsilon+\delta)\). Note that \(x_{0}\in\partial B_{\varepsilon}(x_{0}^{*})\) by construction. Let \(U^{*}\subseteq\partial B_{\varepsilon}(x_{0}^{*})\) be a neighbourhood of \(x_{0}\). Parameterise \(U^{*}\) as \(\widetilde{x}(u_{2},\ldots,u_{n-1})\) and let \(\eta(u_{2},\ldots,u_{n-1})\) be the outward unit normal field to \(\partial B_{\varepsilon}(x_{0}^{*})\). Denote the domain of \(\widetilde{x}\) by \(U_{0}^{*}\). Now for each fixed \((u_{2},\ldots,u_{n-1})\) let \(x(u_{1},u_{2},\ldots,u_{n-1})\) be the unit-speed geodesic in \(\partial K\) from \(\widetilde{x}(u_{2},\ldots,u_{n-1})\) in the direction \(\eta(u_{2},\ldots,u_{n-1})\). Then \(x(u_{1},u_{2},\ldots,u_{n})\in\partial B_{(\varepsilon+u_{1})}(x_{0}^{*})\) for all \(u_{1}\in(-\delta,\delta)\). Therefore, provided \(\delta>0\) is sufficiently small, \(x:(-\delta,\delta)\times U_{0}^{*}\to U\) is a parameterisation of a neighbourhood \(U\subseteq\partial K\) of \(x_{0}\) such that \(U^{*}\subset U\) and \(\partial x/\partial u_{1}(0)=V\).
We define
\[y_{t}(u)=\operatorname{pr}_{1}\circ\mathcal{F}_{t}\left(x(u),\frac{\partial x }{\partial u_{1}}(u)\right),\]
where \(\operatorname{pr}_{1}\) is the projection from \(TM\) onto \(M\). Then we claim that \(y(u)=y_{\varepsilon-u_{1}}(u)\) is a smooth parameterisation of a strictly convex submanifold \(Y=\{y(u):u\in(-\delta,\delta)\times U_{0}^{*}\}\) for sufficiently small constants \(\varepsilon,\delta>0\), and that any smooth geodesic from \(y(u)\) in the inward normal direction of \(Y\) will intersect \(U\) tangentially.
First, we show that the latter claim holds, that is, we show that \(\partial y/\partial u_{i}\) is normal to \(\partial y_{t}/\partial t\) for all \(i=1,\ldots,n-1\). Note that for \(i=2,\ldots,n-1\) we have
\[\frac{\partial y}{\partial u_{i}}=\frac{\partial y_{t}}{\partial u_{i}}\bigg{|} _{t=\varepsilon-u_{1}}\quad\text{and}\quad\frac{\partial y}{\partial u_{1}}= \left(\frac{\partial y_{t}}{\partial u_{1}}-\frac{\partial y_{t}}{\partial t} \right)\bigg{|}_{t=\varepsilon-u_{1}}.\]
Now consider the function
\[f_{i}=\left\langle\frac{\partial y_{t}}{\partial u_{i}},\frac{\partial y_{t}}{ \partial t}\right\rangle.\]
Then taking derivatives with respect to \(t\) we get,
\[\dot{f}_{i} =\left\langle\frac{D}{\partial t}\frac{\partial y_{t}}{\partial u_{ i}},\frac{\partial y_{t}}{\partial t}\right\rangle\] \[=\frac{1}{2}\frac{\partial}{\partial u_{i}}\left|\left|\frac{ \partial y_{t}}{\partial t}\right|\right|^{2}=0.\]
Here we have used that \(t\mapsto y_{t}(u)\) is a geodesic, so that \(\frac{D}{\partial t}\frac{\partial y_{t}}{\partial t}=0\), together with the symmetry \(\frac{D}{\partial t}\frac{\partial y_{t}}{\partial u_{i}}=\frac{D}{\partial u_{i}}\frac{\partial y_{t}}{\partial t}\) and the fact that \(||\partial y_{t}/\partial t||\equiv 1\). Thus \(f_{i}\) is constant with respect to \(t\), so we may compute its value at \(t=0\),
\[f_{i}=\left\langle\frac{\partial x}{\partial u_{i}},\frac{\partial x}{ \partial u_{1}}\right\rangle.\]
That is, \(f_{1}=1\) and \(f_{i}=0\) for \(i=2,\ldots,n-1\). We also note that for all \(i=2,\ldots,n-1\), we have
\[\left\langle\frac{\partial y}{\partial u_{i}},\frac{\partial y_{t}}{\partial t }\right\rangle=f_{i}=0.\]
Hence our claim holds for \(i=2,\ldots,n-1\), leaving only the case where \(i=1\). But a simple computation shows that the claim holds in this case as well, as follows,
\[\left\langle\frac{\partial y}{\partial u_{1}},\frac{\partial y_{t}}{\partial t }\right\rangle=\left\langle\frac{\partial y_{t}}{\partial u_{1}},\frac{ \partial y_{t}}{\partial t}\right\rangle-\left|\left|\frac{\partial y_{t}}{ \partial t}\right|\right|^{2}=f_{1}-1=0.\]
Hence \(\partial y/\partial u_{i}\) is normal to \(\partial y_{t}/\partial t\) for all \(i=1,\ldots,n-1\). Our next claim is that \(y\) is a smooth parameterisation of a submanifold \(Y\) in \(M\). To prove that the claim holds, it suffices to show that the vectors \(\{\partial y/\partial u_{i}(0)\}_{i=1}^{n-1}\) are linearly independent when \(\varepsilon>0\) is sufficiently small. Suppose this does not hold, then there is some \(\overline{\varepsilon}>0\) such that for all \(0<\varepsilon\leq\overline{\varepsilon}\) the vectors \(\{\partial y/\partial u_{i}(0)\}_{i=1}^{n-1}\) are linearly dependent. Pick a normal coordinate chart about \(y(0)\), and let \(\{E_{1},\ldots,E_{n}\}\) be the orthonormal frame associated with this chart. Recall that \(\partial B_{\varepsilon}(x_{0}^{*})\) is a strictly convex submanifold of \(\partial K\) with a unit normal given by \(\partial x/\partial u_{1}\). Therefore,
\[\left\langle\frac{\partial x}{\partial u_{1}},\frac{\partial x}{\partial u_{ i}}\right\rangle=0\text{ for all }i=2,\ldots,n-1. \tag{8}\]
Also recall that \(x\) is a unit-speed geodesic in the \(u_{1}\)-direction, i.e. \(||\partial x/\partial u_{1}||=1\). Hence,
\[\left\langle\nabla_{\frac{\partial x}{\partial u_{j}}}\frac{\partial x}{ \partial u_{1}},\frac{\partial x}{\partial u_{1}}\right\rangle=0\text{ for all }j=1,\ldots,n-1. \tag{9}\]
Furthermore, differentiating eq. (8) with respect to \(u_{1}\) we obtain the following identity,
\[\left\langle\nabla_{\frac{\partial x}{\partial u_{1}}}\frac{\partial x}{ \partial u_{1}},\frac{\partial x}{\partial u_{i}}\right\rangle+\left\langle \frac{\partial x}{\partial u_{1}},\nabla_{\frac{\partial x}{\partial u_{1}}} \frac{\partial x}{\partial u_{i}}\right\rangle=0\text{ for all }i=2,\ldots,n-1. \tag{10}\]
Now combining the two equations, we find the following
\[\left\langle\nabla_{\frac{\partial x}{\partial u_{1}}}\frac{\partial x}{ \partial u_{1}},\frac{\partial x}{\partial u_{i}}\right\rangle=0\text{ for all }i=1,\ldots,n-1.\]
The case where \(i=1\) follows directly from eq. (9), by setting \(j=1\). Thus we have shown that \(\nabla_{\partial x/\partial u_{1}}\partial x/\partial u_{1}\) is normal to \(\partial K\). We should remark here that it is an inward pointing normal. Since \(\partial K\) is strictly convex, it follows that \(\left|\left|\nabla_{\partial x/\partial u_{1}}\partial x/\partial u_{1}\right|\right|>0\). Furthermore, since the vectors \(\{\partial x/\partial u_{i}(0)\}_{i=2}^{n-1}\) are linearly independent, and \(\nabla_{\partial x/\partial u_{1}}\partial x/\partial u_{1}\) is orthogonal to \(\partial x/\partial u_{i}\) for all \(i=2,\dots,n-1\), it follows that
\[\mathcal{L}=\left\{\nabla_{\frac{\partial x}{\partial u_{1}}}\frac{\partial x }{\partial u_{1}}\right\}\cup\left\{\frac{\partial x}{\partial u_{i}}\right\} _{i=2}^{n-1},\]
is also a linearly independent set. We then write the parameterisation \(x(u)\) locally as \((x_{1}(u),\dots,x_{n}(u))\), and \(\nabla_{\partial x/\partial u_{1}}\partial x/\partial u_{1}=(X_{1},\dots,X_{ n})\). Consider the matrix \(B\) whose rows are the vectors in \(\mathcal{L}\),
\[B=\begin{bmatrix}X_{1}(0)&\dots&X_{n}(0)\\ \partial x_{1}/\partial u_{2}(0)&\dots&\partial x_{n}/\partial u_{2}(0)\\ \vdots&\ddots&\vdots\\ \partial x_{1}/\partial u_{n-1}(0)&\dots&\partial x_{n}/\partial u_{n-1}(0) \end{bmatrix}.\]
\(B\) has rank \(n-1\), since its rows are linearly independent. Thus there is a column we can remove from \(B\) while maintaining its rank. Without loss of generality, assume we may remove the last column. Then the submatrix obtained by removing the last column of B,
\[B^{\prime}=\begin{bmatrix}X_{1}(0)&\dots&X_{n-1}(0)\\ \partial x_{1}/\partial u_{2}(0)&\dots&\partial x_{n-1}/\partial u_{2}(0)\\ \vdots&\ddots&\vdots\\ \partial x_{1}/\partial u_{n-1}(0)&\dots&\partial x_{n-1}/\partial u_{n-1}(0)\end{bmatrix},\]
has nonzero determinant. We will use the fact that \(\det B^{\prime}\neq 0\) to arrive at a contradiction to the assumption that the vectors \(\{\partial y/\partial u_{i}(0)\}_{i=1}^{n-1}\) are linearly dependent for all \(0<\varepsilon\leq\overline{\varepsilon}\). We write \(y(u)\) locally as \((y_{1}(u),\dots,y_{n}(u))\) in our normal coordinate chart. Consider the square matrix
\[A=\begin{bmatrix}\partial y_{1}/\partial u_{1}(0)&\dots&\partial y_{n-1}/ \partial u_{1}(0)\\ \vdots&\ddots&\vdots\\ \partial y_{1}/\partial u_{n-1}(0)&\dots&\partial y_{n-1}/\partial u_{n-1}(0) \end{bmatrix}.\]
Since the vectors \(\{\partial y/\partial u_{i}\}_{i=1}^{n-1}\) are linearly dependent, so are the rows of \(A\). Thus \(\det A=0\). Before we proceed to showing a contradiction, we must examine \(\partial y/\partial u_{i}\) more closely. Working within our normal coordinate neighbourhood, consider the Taylor expansion of \(y\),
\[y(u)=x(u)+(\varepsilon-u_{1})\frac{\partial x}{\partial u_{1}}-\frac{1}{2}( \varepsilon-u_{1})^{2}\sum_{i,j,k=1}^{n}\Gamma_{ij}^{k}(u)\frac{\partial x_{i} }{\partial u_{1}}\frac{\partial x_{j}}{\partial u_{1}}E_{k}+O((\varepsilon-u_ {1})^{3}). \tag{11}\]
We will make use of this expansion in the latter stages of the proof as well. For now we highlight two consequences,
\[\frac{\partial y}{\partial u_{1}} =\frac{\partial x}{\partial u_{1}}-\frac{\partial x}{\partial u_{1} }+(\varepsilon-u_{1})\frac{\partial^{2}x}{\partial{u_{1}}^{2}}+(\varepsilon-u _{1})\sum_{i,j,k=1}^{n}\Gamma_{ij}^{k}(u)\frac{\partial x_{i}}{\partial u_{1}} \frac{\partial x_{j}}{\partial u_{1}}E_{k}+O((\varepsilon-u_{1})^{2})\] \[=(\varepsilon-u_{1})\nabla_{\frac{\partial x}{\partial u_{1}}} \frac{\partial x}{\partial u_{1}}+O((\varepsilon-u_{1})^{2}).\]
Hence,
\[\frac{\partial y}{\partial u_{1}}(0)=\varepsilon\nabla_{\frac{\partial x}{ \partial u_{1}}}\frac{\partial x}{\partial u_{1}}(0)+O(\varepsilon^{2}), \tag{12}\]
\[\frac{\partial y}{\partial u_{i}}(0)=\frac{\partial x}{\partial u_{i}}+O( \varepsilon)\text{ for all }i=2,\ldots,n-1. \tag{13}\]
Now consider the matrix \(A^{\prime}\) obtained by dividing the first row of \(A\) by \(\varepsilon\),
\[A^{\prime}=\begin{bmatrix}X_{1}(0)+O(\varepsilon)&\ldots&X_{n-1}(0)+O(\varepsilon)\\ \partial x_{1}/\partial u_{2}(0)+O(\varepsilon)&\ldots&\partial x_{n-1}/\partial u_{2}(0)+O(\varepsilon)\\ \vdots&\ddots&\vdots\\ \partial x_{1}/\partial u_{n-1}(0)+O(\varepsilon)&\ldots&\partial x_{n-1}/\partial u_{n-1}(0)+O(\varepsilon)\end{bmatrix}.\]
Notice that \(\lim_{\varepsilon\to 0}A^{\prime}=B^{\prime}\). However,
\[\det B^{\prime}=\lim_{\varepsilon\to 0}\det A^{\prime}=\lim_{\varepsilon\to 0}\varepsilon^{-1}\det A=0,\]
since \(\det A=0\) for every \(0<\varepsilon\leq\overline{\varepsilon}\).
This is a contradiction since, as we have shown, \(\det B^{\prime}\neq 0\). Therefore \(y\) is indeed a smooth parameterisation of a smooth submanifold \(Y\) in \(M\), for some sufficiently small \(\varepsilon>0\).
To conclude the proof we must now show the existence of a positive lower bound on the curvature of \(Y\). We will continue to work in the normal coordinate chart about \(y(0)\) as before. We denote the unit normal field to \(y(u)\) as \(N(u)\) and the shape operator of \(Y\) by \(s_{Y}\), then \(s_{Y}W=\nabla_{W}N\) for all vectors \(W\) tangent to \(Y\). Since the vectors \(\partial y/\partial u_{i}(0)\) are linearly independent, it suffices to calculate the terms
\[s_{Y}^{ij}=\left\langle s_{Y}\frac{\partial y/\partial u_{i}(0)}{||\partial y /\partial u_{i}(0)||},\frac{\partial y/\partial u_{j}(0)}{||\partial y/ \partial u_{j}(0)||}\right\rangle,\]
and show that there is a positive lower bound for all \(i,j=1,\ldots,n-1\). As we have shown prior, we know that \(N(u)=\partial y_{t}/\partial t|_{t=\varepsilon-u_{1}}\). Thus, by eq. (11),
\[N(u)=\frac{\partial x}{\partial u_{1}}-(\varepsilon-u_{1})\sum_{i,j,k=1}^{n} \Gamma_{ij}^{k}(u)\frac{\partial x_{i}}{\partial u_{1}}\frac{\partial x_{j}}{ \partial u_{1}}E_{k}+O((\varepsilon-u_{1})^{2}).\]
Now at \(u=0\) for \(i,j=2,\ldots,n-1\), by eq. (13) we have
\[s_{Y}\frac{\partial y}{\partial u_{i}}(0) =\nabla_{\frac{\partial y}{\partial u_{i}}}N(0)=\nabla_{\frac{ \partial x}{\partial u_{i}}}\frac{\partial x}{\partial u_{1}}(0)+O(\varepsilon),\] \[\left\langle\nabla_{\frac{\partial y}{\partial u_{i}}}N(0),\frac {\partial y}{\partial u_{j}}(0)\right\rangle =\left\langle\nabla_{\frac{\partial x}{\partial u_{i}}}\frac{ \partial x}{\partial u_{1}}(0),\frac{\partial x}{\partial u_{j}}(0)\right\rangle +O(\varepsilon)>K_{B}+O(\varepsilon),\] \[\left|\left|\frac{\partial y}{\partial u_{i}}(0)\right|\right| =\left|\left|\frac{\partial x}{\partial u_{i}}+O(\varepsilon)\right|\right|=1 +O(\varepsilon^{1/2}). \tag{14}\]
Recall that \(\partial B_{\varepsilon}(x_{0}^{*})\) was a strictly convex submanifold with unit normal \(\partial x/\partial u_{1}\). Hence at \(u=0\), provided we choose \(\varepsilon>0\) to be sufficiently small, for any \(K_{Y}>0\) we have
\[s_{Y}^{ij}=\left\langle s_{Y}\frac{\partial y/\partial u_{i}(0)}{||\partial y /\partial u_{i}(0)||},\frac{\partial y/\partial u_{j}(0)}{||\partial y/ \partial u_{j}(0)||}\right\rangle>K_{Y}\text{ for all }i,j=2,\ldots,n-1.\]
We will now proceed to show that there exists a lower bound for \(s_{Y}^{i1}\), for all \(i=2,\ldots,n-1\). Using eq. (12) and eq. (13), one can find the following,
\[\left\langle s_{Y}\frac{\partial y}{\partial u_{i}}(0),\frac{\partial y}{ \partial u_{1}}(0)\right\rangle=-\left\langle\nabla_{\frac{\partial y}{ \partial u_{i}}}\frac{\partial y}{\partial u_{1}}(0),N(0)\right\rangle=- \varepsilon\left\langle\nabla_{\frac{\partial x}{\partial u_{i}}}\nabla_{ \frac{\partial x}{\partial u_{1}}}\frac{\partial x}{\partial u_{1}}(0),\frac{ \partial x}{\partial u_{1}}\right\rangle+O(\varepsilon^{2}).\]
Recall that \(\nabla_{\partial x/\partial u_{1}}\partial x/\partial u_{1}\) is an inward normal to \(\partial K\). Hence
\[\left\langle s_{Y}\frac{\partial y}{\partial u_{i}}(0),\frac{\partial y}{ \partial u_{1}}(0)\right\rangle>\varepsilon\kappa_{\min}\left|\left|\nabla_{ \frac{\partial x}{\partial u_{1}}}\frac{\partial x}{\partial u_{1}}(0)\right| \right|+O(\varepsilon^{2}).\]
Here \(\kappa_{\min}>0\) is the lower bound on the curvature of \(\partial K\). Once again using eq. (12), along with eq. (14), we have
\[\left|\left|\frac{\partial y}{\partial u_{1}}(0)\right|\right|^{2}= \varepsilon^{2}\left|\left|\nabla_{\frac{\partial x}{\partial u_{1}}}\frac{ \partial x}{\partial u_{1}}(0)\right|\right|^{2}+O(\varepsilon^{3}). \tag{15}\] \[\left\langle s_{Y}\frac{\partial y/\partial u_{i}(0)}{||\partial y /\partial u_{i}(0)||},\frac{\partial y/\partial u_{1}(0)}{||\partial y/ \partial u_{1}(0)||}\right\rangle>\frac{\kappa_{\min}\left|\left|\nabla_{ \partial x/\partial u_{1}}\partial x/\partial u_{1}(0)\right|\right|+O( \varepsilon)}{\sqrt{\left|\left|\nabla_{\partial x/\partial u_{1}}\partial x /\partial u_{1}(0)\right|\right|^{2}+O(\varepsilon)}}\rightarrow\kappa_{\min} \text{ as }\varepsilon\to 0.\]
Therefore, provided \(\varepsilon\) is sufficiently small, \(s_{Y}^{i1}\) (\(i=2,\ldots,n-1\)) is bounded below by some \(K_{Y}>0\) which depends only on the minimum principal curvature \(\kappa_{\min}\) of \(\partial K\) and our choice of \(\varepsilon\). For the last curvature term, \(s_{Y}^{11}\), we compute the derivative of \(N\),
\[\frac{\partial N}{\partial u_{1}} =\frac{\partial^{2}x}{{\partial u_{1}}^{2}}+\sum_{i,j,k=1}^{n} \Gamma_{ij}^{k}(u)\frac{\partial x_{i}}{\partial u_{1}}\frac{\partial x_{j}}{ \partial u_{1}}E_{k}+O(\varepsilon-u_{1})\] \[=\nabla_{\frac{\partial x}{\partial u_{1}}}\frac{\partial x}{ \partial u_{1}}+O(\varepsilon-u_{1}).\]
Then at \(u=0\) we have,
\[\left\langle\frac{\partial N}{\partial u_{1}}(0),\frac{\partial y}{\partial u_{1}} (0)\right\rangle=\varepsilon\left|\left|\nabla_{\frac{\partial x}{\partial u_{1 }}}\frac{\partial x}{\partial u_{1}}(0)\right|\right|^{2}+O(\varepsilon^{2}).\]
Now recall that we picked normal coordinates about \(y(0)\), so that we have
\[\left\langle s_{Y}\frac{\partial y}{\partial u_{1}}(0),\frac{\partial y}{ \partial u_{1}}(0)\right\rangle=\left\langle\nabla_{\frac{\partial y}{ \partial u_{1}}}N(0),\frac{\partial y}{\partial u_{1}}(0)\right\rangle=\left \langle\frac{\partial N}{\partial u_{1}}(0),\frac{\partial y}{\partial u_{1} }(0)\right\rangle.\]
Using eq. (15) we may conclude the following
\[\left\langle s_{Y}\frac{\partial y/\partial u_{1}(0)}{||\partial y/\partial u _{1}(0)||},\frac{\partial y/\partial u_{1}(0)}{||\partial y/\partial u_{1}(0) ||}\right\rangle=\frac{1}{\varepsilon}\frac{\left|\left|\nabla_{\partial x/ \partial u_{1}}\partial x/\partial u_{1}(0)\right|\right|^{2}+O(\varepsilon) }{\left|\left|\nabla_{\partial x/\partial u_{1}}\partial x/\partial u_{1}(0) \right|\right|^{2}+O(\varepsilon)}.\]
Therefore if our previous choice of \(\varepsilon\) was not small enough, we may shrink \(\varepsilon\) so that \(s_{Y}^{11}>K_{Y}\). We note that \(K_{Y}\) depends only on \(\kappa_{\min}\) and our choice of \(\varepsilon\). Hence we have shown that \(Y\) is strictly convex, i.e. its curvature is positive and bounded below by \(K_{Y}\) depending only on \(\kappa_{\min}\) and our choice of \(\varepsilon\). Therefore, since \(\partial K\) is compact, there exists a global lower bound \(\Theta_{0}>0\) for the curvature of \(Y\), for all \(x_{0}\) and \(V\).
## 2 Proof of Theorem 1
Combining all the results in Section 1 shows that any strictly convex front constructed via Proposition 5 will remain strictly convex when propagated forward via the billiard flow. This follows since the front may travel for distance at most \(D\) between each reflection, and by Corollary 2.1 it will remain strictly convex for a distance of at least \(D\xi\). Then every \(\xi\) reflections, by Lemma 4, the front will hit \(\partial K\) at an angle of at most \(\varphi_{0}\), at least once. By Corollary 3.1 the principal curvatures of the front will therefore be bounded below by \(\Theta=\min\{2\kappa_{\min}\cos\varphi_{0},\Theta_{0}\}\) once more. In this section we leverage these facts to prove a few additional results that will be useful in the proof of Theorem 1, which is given at the end of the section.
**Proposition 6**.: _Suppose \(X\) and \(Y\) are two strictly convex fronts such that for some \(t_{0}>0\) and \(x\in X\) we have \(x_{t_{0}}=\gamma_{x,N(x)}(t_{0})\in Y\). Moreover, suppose that \(N_{t_{0}}(x)=\dot{\gamma}_{x,N(x)}(t_{0})\) is an inward normal to \(Y\), and that for any principal curvatures \(k_{X}\) and \(k_{Y}\) of \(X\) and \(Y\) respectively with respect to \(N_{t_{0}}(x)\) at \(x_{t_{0}}\), we have \(k_{Y}<-k_{\min}<0<k_{\min}<k_{X}\). Let \(\mathcal{G}\) be the set of points \(x^{*}\in X\) such that \(x^{*}_{t(x^{*})}\in Y\) and \(N_{t(x^{*})}(x^{*})\) is normal to \(Y\). Then \(\mathcal{G}\) is a submanifold of \(X\) of dimension 0. That is, \(\mathcal{G}\) is at most countable._
Proof.: Shrink \(X\) and \(Y\) such that for all \(x\in X\) there is a \(t(x)>0\) such that \(x_{t(x)}\in Y\). We begin by taking Fermi coordinates about \(X\) (cf. chapter 5 in [9]). Then for any \(x\in X\)
we have \(x=(x_{1},\ldots,x_{n-1},0)\), and \(y(x)=x_{t(x)}=(x_{1},\ldots,x_{n-1},t(x))\in Y\). Note that the outward unit normal to \(X\) in these coordinates is \(N=(0,\ldots,0,1)\). Let \(g_{i}(x)=\langle\partial y/\partial x_{i},N\rangle\) for each \(i=1,\ldots n-1\). Then we have
\[\frac{\partial y}{\partial x_{i}}=\frac{\partial x}{\partial x_{i}}+\frac{ \partial t}{\partial x_{i}}N.\]
Hence, taking inner products with \(N\) on both sides, we get
\[\frac{\partial t}{\partial x_{i}}=\langle\frac{\partial y}{\partial x_{i}},N \rangle=g_{i}.\]
Now taking the covariant derivative we get
\[\nabla_{\frac{\partial y}{\partial x_{i}}}\frac{\partial y}{\partial x_{j}}= \nabla_{\frac{\partial y}{\partial x_{i}}}\frac{\partial x}{\partial x_{j}}+ \frac{\partial^{2}t}{\partial x_{i}\partial x_{j}}N+\nabla_{\frac{\partial y }{\partial x_{i}}}N.\]
Since \(\nabla_{N}N=0\) we know that
\[\langle\nabla_{N}\frac{\partial x}{\partial x_{j}},N\rangle=N\langle\frac{ \partial x}{\partial x_{j}},N\rangle=0.\]
Therefore we note that
\[\langle\nabla_{\frac{\partial y}{\partial x_{i}}}\frac{\partial x}{\partial x _{j}},N\rangle=\langle\nabla_{\frac{\partial x}{\partial x_{i}}}\frac{\partial x }{\partial x_{j}}+\frac{\partial t}{\partial x_{i}}\nabla_{N}\frac{\partial x }{\partial x_{j}},N\rangle=\langle\nabla_{\frac{\partial x}{\partial x_{i}}} \frac{\partial x}{\partial x_{j}},N\rangle.\]
Allowing us to conclude,
\[\frac{\partial^{2}t}{\partial x_{i}\partial x_{j}}=\langle\nabla_{\frac{ \partial y}{\partial x_{i}}}\frac{\partial y}{\partial x_{j}},N\rangle- \langle\nabla_{\frac{\partial x}{\partial x_{i}}}\frac{\partial x}{\partial x _{j}},N\rangle.\]
Now set \(g=(g_{1},\ldots,g_{n-1}):X\rightarrow\mathbb{R}^{n-1}\), and let \(\mathcal{G}=g^{-1}(0)\). Given \(x_{0}\in\mathcal{G}\), we know that \(N\) is the unit inward normal to \(Y\) at \(y(x_{0})\). Hence,
\[\frac{\partial g_{i}}{\partial x_{j}}=\frac{\partial^{2}t}{\partial x_{i} \partial x_{j}}(x_{0})>2k_{\min}>0.\]
So \(dg_{x_{0}}\) is surjective, and \(\mathcal{G}=g^{-1}(0)\) is a \(0\)-dimensional submanifold of \(X\).
**Proposition 7**.: _Let \(q:[0,\ell]\to X\) be an arc-length parameterised curve on a front \(X\). Denote by \(q_{t}(u)\in X_{t}\) the curve \(q\) after propagation along the geodesic flow for time \(t>0\) in the normal direction. Then_
\[d_{X_{t}}(q_{t}(0),q_{t}(\ell))\geq d_{X}(q(0),q(\ell))e^{tk_{\min}}. \tag{16}\]
_Here \(k_{\min}\) is the minimum principal curvature of \(X_{t}\), and \(d_{X}\) and \(d_{X_{t}}\) are the distance functions on \(X\) and \(X_{t}\) respectively._
Proof.: We begin by noting that \(J_{u}(t)=\frac{\partial}{\partial u}q_{t}(u)\) is a Jacobi field for each \(u\in[0,\ell]\), since \(q_{t}(u)\) is a variation through geodesics. We also note that
\[\frac{D}{\partial t}J_{u}(t) =\frac{D}{\partial t}\frac{\partial}{\partial u}q_{t}(u)\] \[=\frac{D}{\partial u}\frac{\partial}{\partial t}q_{t}(u)\] \[=s_{t}J_{u}(t).\]
Here \(s_{t}\) is the shape operator of \(X_{t}\). To bound \(d(q_{t}(0),q_{t}(\ell))\) we are interested in the norm of \(J_{u}(t)\), so consider the function \(f(u,t)=||J_{u}(t)||\). We take the derivative of \(f\) in \(t\) as follows:
\[\dot{f}(t) =\frac{1}{||J_{u}(t)||}\left<\frac{D}{\partial t}J_{u}(t),J_{u}( t)\right>\] \[=\frac{1}{||J_{u}(t)||}\left<s_{t}J_{u}(t),J_{u}(t)\right>\] \[=||J_{u}(t)||\left<s_{t}\frac{J_{u}(t)}{||J_{u}(t)||},\frac{J_{u }(t)}{||J_{u}(t)||}\right>\] \[=f(t)\left<s_{t}\frac{J_{u}(t)}{||J_{u}(t)||},\frac{J_{u}(t)}{|| J_{u}(t)||}\right>.\]
It now follows that
\[f(u,t)=f(u,0)\exp\left(\int_{0}^{t}\left<s_{\tau}\frac{J_{u}(\tau)}{||J_{u}(\tau)||},\frac{J_{u}(\tau)}{||J_{u}(\tau)||}\right>d\tau\right)\geq e^{tk_{\min}},\]
since \(f(u,0)=||J_{u}(0)||=||q^{\prime}(u)||=1\) because \(q\) is arc-length parameterised.
Suppose now that for a fixed \(t>0\), the curve \(q_{t}(u)\) is the minimal geodesic along \(X_{t}\) between \(q_{t}(0)\) and \(q_{t}(\ell)\). Then
\[d_{X_{t}}(q_{t}(0),q_{t}(\ell))=\int_{0}^{\ell}f(t)\ du\geq\int_{0}^{\ell}e^{tk _{\min}}\ du=\ell e^{tk_{\min}}.\]
Since \(q\) is arc-length parameterised, \(\ell\geq d_{X}(q(0),q(\ell))\), and eq. (16) follows, as required.
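As a quick sanity check (purely illustrative and not part of the argument), the following Python snippet verifies the inequality (16) in the simplest flat model of an expanding circular front in the Euclidean plane; the radius, angular separation, and times below are arbitrary choices.

```python
import numpy as np

# Toy check of eq. (16) for a circular front of radius r in the Euclidean plane
# expanding along its outward normals (not the paper's general Riemannian setting).
# After time t the front is a circle of radius r + t, so two points at angular
# separation theta lie at distance (r + t) * theta along the propagated front,
# and the minimum principal curvature of that front is k_min = 1 / (r + t).
r, theta = 2.0, 0.3
for t in np.linspace(0.1, 10.0, 5):
    d_X = r * theta                      # distance along the initial front
    d_Xt = (r + t) * theta               # distance along the propagated front
    k_min = 1.0 / (r + t)
    assert d_Xt >= d_X * np.exp(t * k_min)
```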
Owing to Corollary 3.1 and Proposition 7 above, we obtain the following corollary.
**Corollary 7.1**.: _Let \(q_{t}\) be as in Proposition 7, but under propagation along the billiard flow; that is, we allow for reflections. Suppose \(q_{t}\) reflects at \(0<t_{1}<t_{2}<\cdots<t_{n}\); then_
\[d_{X_{t}}(q_{t}(0),q_{t}(\ell))\geq d_{X}(q(0),q(\ell))e^{t_{n}k_{\min}}. \tag{17}\]
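For clarity, here is a sketch of why eq. (17) holds, assuming (cf. the discussion at the beginning of this section) that the propagated fronts remain strictly convex with principal curvatures bounded below by \(k_{\min}\) across reflections: applying Proposition 7 on each reflection-free segment and chaining the resulting inequalities gives
\[d_{X_{t}}(q_{t}(0),q_{t}(\ell))\geq d_{X_{t_{n}}}(q_{t_{n}}(0),q_{t_{n}}(\ell))\geq e^{(t_{n}-t_{n-1})k_{\min}}\,d_{X_{t_{n-1}}}(q_{t_{n-1}}(0),q_{t_{n-1}}(\ell))\geq\cdots\geq e^{t_{n}k_{\min}}\,d_{X}(q(0),q(\ell)).\]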
We define the following submanifold of \(TS\):
\[T_{\partial}S_{K}=\{(x,v)\in TS:x\in\partial K\}\cup T^{+}\partial S.\]
**Proposition 8**.: _There exists a countable family \(\{\Xi_{i}\}\) of codimension 2 smooth submanifolds of \(T_{\partial}S_{K}\) such that for any \(\sigma\in T_{\partial}S_{K}\backslash(\cup_{i}\Xi_{i})\) the billiard ray generated by \(\sigma\) is tangent to \(\partial K\) at most once._
Proof.: First, suppose that \(\sigma_{0}\in T_{1}\partial K\) is such that there is some \(t_{0}>0\) and \(\sigma_{1}\in T_{1}\partial K\) such that \(\mathfrak{F}_{t_{0}}(\sigma_{0})=\sigma_{1}\). Let \(0<i_{1},i_{2}\leq d\) be integers such that \(\mathrm{pr}_{1}(\sigma_{0})\in\partial K_{i_{1}}\) and \(\mathrm{pr}_{1}(\sigma_{0})\in\partial K_{i_{2}}\). Let \(\widetilde{V}\) be an open neighbourhood of \(\sigma_{0}\) in \(T_{1}S\), such that \(\mathrm{pr}_{1}(\widetilde{V})\cap\partial K_{i}=\emptyset\) for all \(i\neq i_{1}\). We set \(V=\mathrm{pr}_{1}(\widetilde{V})\) and construct a strictly convex front \(X\), via Proposition 5, such that any smooth geodesic from \(X\) in the inward normal direction to \(X\) will intersect \(\partial K_{i_{1}}\) tangentially. In particular, let \(\gamma\) be the smooth geodesic from \(x_{0}\) in the direction \(\sigma_{0}\) and denote the intersection of \(\gamma\) and \(X\) by \(x_{0}^{\prime}\). Also let \(\varepsilon_{0}>0\) be the time such that \(\mathrm{pr}_{1}\circ\mathcal{F}_{\varepsilon_{0}}(\sigma_{0})=x_{0}^{\prime}\). Note that we may shrink \(\widetilde{V}\) and \(\varepsilon_{0}\) as needed, shrinking \(X\) and bringing it closer to \(x_{0}\) in the process, while ensuring that \(X\cap\partial K=\emptyset\). Denote the outward unit normal to \(X\) at \(x\) by \(N_{X}(x)\), and let
\[\widetilde{X}=\{(x,N_{X}(x)):x\in X\},\]
\[\widehat{X}=\{(x,v):x\in X,v\in TS\}.\]
Note that \(\widetilde{X}\) is an \(n-1\) dimensional submanifold of \(TS\) while \(\widehat{X}\) is a submanifold of codimension 1 in \(TS\). Possibly shrinking \(\widetilde{V}\), there is a smooth function \(g:\widetilde{V}\to\mathbb{R}\) such that \(\mathcal{F}_{g(\sigma)}(\sigma)\in\widehat{X}\) for each \(\sigma\in\widetilde{V}\). Now consider the map \(f:\widetilde{V}\to\widehat{X}\) given by \(\sigma\mapsto\mathcal{F}_{1}(g(\sigma)\sigma)\), recalling that \(||\sigma||=1\) for all \(\sigma\in\widetilde{V}\) it follows that \(||f(\sigma)||=g(\sigma)\). Note that \(f\) is a diffeomorphism between \(\widetilde{V}\) and \(\widehat{X}\). We shall now define similar neighbourhoods for \(\sigma_{1}\). Let \(\widetilde{W}\) be an open neighbourhood of \(\sigma_{1}\) in \(T_{1}S\), and set \(W=\mathrm{pr}_{1}(\widetilde{W})\). Shrink \(\widetilde{W}\) so that \(W\cap\partial K_{i}=\emptyset\) for all \(i\neq i_{2}\).
Let \(\widetilde{K}=K\backslash(K_{i_{1}}\cup K_{i_{2}})\). Define the curves \(\chi:[0,t_{0}+d_{\min}/2]\times\widetilde{V}\to T_{1}S\) as the billiard rays in \(S_{\widetilde{K}}\) generated by \(\sigma\in\widetilde{V}\). That is, \(\chi_{t}(\sigma)\) is the billiard ray generated by \(\sigma\), which ignores reflections on \(\partial K_{i_{1}}\) and \(\partial K_{i_{2}}\). Pick \(t_{0}^{*}>0\) such that \(|t_{0}-t_{0}^{*}|<d_{\min}/2\), and let \(\widetilde{Y}=\chi_{t_{0}^{*}}(\widetilde{X})\). Let \(Y=\mathrm{pr}_{1}(\widetilde{Y})\), and set
\[\widehat{Y}=\{(y,v):y\in Y,v\in TS\}.\]
Once again, shrinking \(\widetilde{V}\) or \(\widetilde{W}\) if necessary, there is a smooth function \(\mathring{g}:\widetilde{W}\to\mathbb{R}\) such that \(\mathcal{F}_{\mathring{g}(\sigma)}(\sigma)\in\widehat{Y}\) for all \(\widetilde{W}\). Then we may define the diffeomorphism \(\mathring{f}:\widetilde{W}\to\widehat{Y}\) given by \(\sigma\mapsto\mathcal{F}_{1}(\mathring{g}(\sigma)\sigma)\). Thus we have a diffeomorphism \(\Phi=\mathring{f}^{-1}\circ\chi_{t_{0}^{*}}\circ f:\widetilde{V}\to\widetilde{W}\), and in particular \(\Phi(\sigma_{0})=\sigma_{1}\).
Let \(\varphi_{V}:\widetilde{V}\to\mathbb{R}^{2}\) and \(\varphi_{W}:\widetilde{W}\to\mathbb{R}^{2}\) be local defining functions for \(T_{1}\partial K\) in \(\widetilde{V}\) and \(\widetilde{W}\) respectively. Denote \(V_{0}=\varphi_{V}^{-1}(0)\) and \(W_{0}=\varphi_{W}^{-1}(0)\). We shall show that \(V_{0}\cap\Phi^{-1}(W_{0})\) has codimension at least \(1\) in \(V_{0}\). Let \(\psi_{V}:V\to\mathbb{R}\) and \(\psi_{W}:W\to\mathbb{R}\) be local defining functions for \(\partial K\) in \(V\) and \(W\) respectively. It follows that \(\mathrm{grad}\;\psi_{V}/\left|\left|\mathrm{grad}\;\psi_{V}\right|\right|\) is the outward unit normal to \(\partial K\) on \(V\cap\partial K\) and similarly for \(W\cap\partial K\). Consider the sets \((d\psi_{V})^{-1}(0),(d\psi_{W})^{-1}(0)\subseteq TS\). Note that \(0\) is a regular value of \(d\psi_{V}\), since
\[\nabla(d\psi_{V})(X,Y)=-\left|\left|\mathrm{grad}\;\psi_{V}\right|\right|h_{ \partial K}(X,Y)\geq 0\text{ for all }X,Y\in\mathfrak{X}(\partial K),\]
where \(h_{\partial K}\) is the scalar second fundamental form of \(\partial K\). Thus \((d\psi_{V})^{-1}(0)\) is a codimension \(1\) submanifold of \(TS\), and similarly for \((d\psi_{W})^{-1}(0)\). Now set \(V^{\prime}=(d\psi_{V})^{-1}(0)\cap\widetilde{V}\) and \(W^{\prime}=(d\psi_{W})^{-1}(0)\cap\widetilde{W}\). Note that both \(V^{\prime}\) and \(W^{\prime}\) are codimension \(1\) submanifolds of \(\widetilde{V}\) and \(\widetilde{W}\). Consider the codimension \(1\) submanifold \(Y^{\prime}\) of \(\widehat{Y}\) given by \(\hat{f}(W^{\prime})\). We claim that \(Y^{\prime}\) and \(\widetilde{Y}\) are transversal near \((y_{0},\mu_{0})=\chi_{t_{0}^{*}}\circ f(\sigma_{0})\). Denote the outward unit normal of \(Y\) at \(y\) by \(\mu(y)\). Let \(\widehat{t}_{0}=t_{0}-t_{0}^{*}\), and \(N_{0}^{*}\) be the outward unit normal of \(\partial K_{i_{2}}\) at \(x_{1}\). Denote by \(N^{*}:[t_{0}^{*},t_{0}]\to T_{1}S\) the vector field given by parallel translation of \(N_{0}^{*}\) along the smooth geodesic in \(S\) from \(y_{0}\) to \(x_{1}\) such that \(N^{*}(t_{0})=N_{0}^{*}\). For some small \(\delta>0\), let \(\lambda:(-\delta,\delta)\to Y\) be the smooth geodesic in \(Y\) such that \(\lambda(0)=y_{0}\) and \(\lambda^{\prime}(0)=N^{*}(t_{0}^{*})\). Note that \(N^{*}(t_{0}^{*})\) is indeed tangent to \(Y\) by construction. Let \(\omega(s)=(\lambda(s),\mu(\lambda(s)))\), and \(p(s)=\mathrm{pr}_{1}\circ\mathcal{F}_{\widehat{t}_{0}}(\omega(s))\). Then \(p((-\delta,\delta))\) is diffeomorphic to \(\lambda((-\delta,\delta))\) via the shift along the geodesic flow. Indeed, suppose there exist distinct points \(\omega_{1},\omega_{2}\in\widehat{Y}\) such that \(\mathrm{pr}_{1}\circ\mathcal{F}_{\widehat{t}_{0}}(\omega_{1})=\mathrm{pr}_{1} \circ\mathcal{F}_{\widehat{t}_{0}}(\omega_{2})\). Owing to Corollary 2.1, Corollary 3.1 and Lemma 4, \(Y\) is strictly convex. Thus by Proposition 7, we must have
\[d(\mathrm{pr}_{1}\circ\mathcal{F}_{\widehat{t}_{0}}(\omega_{1}),\mathrm{pr}_{1 }\circ\mathcal{F}_{\widehat{t}_{0}}(\omega_{2}))>0,\]
contradicting our assumption. Now for \(\delta>0\) sufficiently small, since \(p^{\prime}(0)=N_{0}^{*}\), we have \(p((-\delta,0))\subseteq K_{i_{2}}\) and \(p((0,\delta))\cap K_{i_{2}}=\emptyset\). Thus for \(s\in(-\delta,0)\), the smooth geodesic
\[r(s)=\{\mathrm{pr}_{1}\circ\mathcal{F}_{t}(\omega(s)):0\leq t\leq\widehat{t}_{ 0}\},\]
must intersect \(\partial K_{i_{2}}\). Furthermore, since \(\partial K_{i_{2}}\) is strictly convex, provided \(r(s)\) remains sufficiently close to \(x_{1}\), the smooth geodesic \(r(s)\) cannot be tangent to \(\partial K_{i_{2}}\). For \(s\in(0,\delta)\), since \(r(0)\) is tangent to \(\partial K_{i_{2}}\), which is strictly convex, and the Jacobi field of \(r(s)\) at \(r(0)\) is precisely \(N^{*}\) by construction, it follows that for sufficiently small \(\delta\), the smooth geodesic \(r(s)\) cannot intersect \(\partial K_{i_{2}}\) at all. Thus the curve \(\omega(s)\) was constructed such that \(\omega(s)\in\widetilde{Y}\), \(\omega(0)=(y_{0},\mu_{0})\in Y^{\prime}\) and \(\omega(s)\) is transversal to \(Y^{\prime}\) in \(\widehat{Y}\). Now since \(Y^{\prime}\) has codimension \(1\) in \(\widehat{Y}\), this is sufficient to claim that near \((y_{0},\mu_{0})\) the submanifolds \(\widetilde{Y}\) and \(Y^{\prime}\) are transversal as desired. It follows that \(Y^{\prime}\cap\widetilde{Y}\) has codimension \(1\) in \(\widetilde{Y}\).
Note that \(\widetilde{Z}=f^{-1}\circ\chi_{t_{0}^{*}}^{-1}(\widetilde{Y})\) is an \(n-1\) dimensional submanifold of \(V_{0}\) by construction. Now consider the \(n-2\) dimensional submanifold \(Z^{\prime}=f^{-1}\circ\chi_{t_{0}^{*}}^{-1}(Y^{\prime}\cap\widetilde{Y})\). Since \(Z^{\prime}\) has codimension \(1\) in \(\widetilde{Z}\), we must have \(\widetilde{Z}\backslash Z^{\prime}\neq\emptyset\). Take any \(\sigma\in\widetilde{Z}\backslash Z^{\prime}\), then \(\Phi(\sigma)\in\widetilde{W}\backslash W^{\prime}\)
by construction. But \(W_{0}\subseteq W^{\prime}\), hence it follows that \(\sigma\in V_{0}\backslash\Phi^{-1}(W_{0})\). Now, owing to the fact that \(\widetilde{V}\) is connected, and since \(V_{0}\cap\Phi^{-1}(W_{0})\) is closed, we can conclude that \(V_{0}\cap\Phi^{-1}(W_{0})\) must have codimension at least 1 in \(V_{0}\).
Therefore we have shown that there is a submanifold, \(V_{0}\cap\Phi^{-1}(W_{0})\), of codimension 2 or greater in \(T_{\partial}S_{K}\) containing \(\sigma_{0}\). Since \(\sigma_{0}\) was arbitrary, this is true for all \(\sigma\in T_{1}\partial K\) which generate a ray that is tangent to \(\partial K\) more than once. Let \(\gamma_{\sigma}(t)=\mathfrak{F}_{t}(\sigma)\) for all \(t\in\mathbb{R}\) and \(\sigma\in T_{\partial}S_{K}\). Let \(\Xi\) denote the set of points \(\sigma\in T_{\partial}S_{K}\) such that \(\gamma_{\sigma}\) is tangent to \(\partial K\) more than once. For any pair of integers \(\alpha<\beta\) let \(\Xi_{\beta}^{\alpha}\) be the subset of \(\Xi\) containing all \(\sigma\) such that \(\gamma_{\sigma}\) has a tangency after \(\alpha\) transversal reflections and a tangency after transversal \(\beta\) reflections. Here the number of reflections in the negative time direction is represented as negative integers. That is, if \(t_{\alpha}<0\) is the time such that \(\gamma_{\sigma}(t_{\alpha})\in T_{1}\partial K\) then \(\alpha<0\), and similarly for \(\beta\). Note that the numbers of reflections \(\alpha\) and \(\beta\) are both counted starting from \(\gamma_{\sigma}(0)\). We will show that for every \(\sigma\in\Xi_{\beta}^{\alpha}\) there is a neighbourhood \(U\subseteq T_{\partial}S_{K}\) such that \(U\cap\Xi_{\beta}^{\alpha}\) can be covered by a countable union of codimension 2 submanifolds of \(U\). Note that by our argument above this immediately holds for the case where either \(\alpha=0\) or \(\beta=0\). Suppose that both \(\alpha\neq 0\) and \(\beta\neq 0\). Then given any \(\sigma_{-1}\in\Xi_{\beta}^{\alpha}\), let \(t_{\alpha}\) and \(t_{\beta}\) be the times of the first two tangencies of \(\gamma_{\sigma}\). Set \(\sigma_{0}=-\gamma_{\sigma_{-1}}(t_{\alpha})\). By our previous argument there is a codimension 2 submanifold \(\Psi(\sigma_{0})\subseteq T_{\partial}S_{K}\) containing \(\sigma_{0}\). Since there are exactly \(\alpha\) transversal reflections along \(\gamma_{\sigma_{-1}}\) between \(\sigma_{-1}\) and \(\sigma_{0}\), there are open neighbourhoods \(U(\sigma_{0}),U(\sigma_{-1})\subseteq T_{\partial}S_{K}\) of \(\sigma_{0}\) and \(\sigma_{-1}\) respectively, and a diffeomorphism \(\Upsilon:U(\sigma_{0})\to U(\sigma_{-1})\) via the billiard flow in \(S_{K}\). Recall that \(\Psi(\sigma_{0})=V_{0}\cap\Phi^{-1}(W_{0})\) via our previous argument and by construction \(\Upsilon^{-1}(\Xi_{\beta}^{\alpha}\cap U(\sigma_{-1}))\subseteq\Psi(\sigma_{0})\). Thus \(\Xi_{\beta}^{\alpha}\cap U(\sigma_{-1})\subseteq\Upsilon(\Psi(\sigma_{0}))\), which completes our argument.
**Proposition 9**.: _Given an open set \(\widetilde{V}\subseteq T_{1}\partial K\), let \(V\) be the set of points \((x,\omega)\in\widetilde{V}\) such that the ray \(\gamma(x,\omega)\) from \(x\) in the direction \(\omega\) is trapped, but never tangent. Then \(V\) has topological dimension 0._
Proof.: Given an open set \(\widetilde{V}\subseteq T_{1}\partial K\), let \(V^{\prime}\) be the set of points \((x,\omega)\in\widetilde{V}\) such that the ray \(\gamma(x,\omega)\) from \(x\) in the direction \(\omega\) is tangent to \(K\) at some point \(y\neq x\). Possibly shrinking \(\widetilde{V}\), construct a convex front \(X\) from \(\widetilde{V}\) as in Proposition 5, let \(N(x)\) denote its outward unit normal field, and set \(\widetilde{U}=\{(x,N(x)):x\in X\}\). Let \(V\) be the set of points \((x,\omega)\in\widetilde{V}\backslash V^{\prime}\) for which the ray \(\gamma(x,\omega)\) is trapped. Then there are corresponding sets \(U^{\prime},U\subseteq\widetilde{U}\) to \(V^{\prime},V\). Let \(\widetilde{F}=\{1,\dots,d\}\) and define the metric space
\[F=\prod_{i=1}^{\infty}\widetilde{F},\quad\eta(a,b)=\sum_{i=1}^{\infty}\frac{1} {2^{i}}(1-\delta_{a_{i}b_{i}}),\]
with the metric \(\eta\) defined for all \(a,b\in F\), where \(a=(a_{1},a_{2},\dots)\), \(b=(b_{1},b_{2},\dots)\) and \(\delta_{ij}\) is the Kronecker delta, i.e. \(\delta_{a_{i}b_{i}}=1\) if \(a_{i}=b_{i}\) and 0 otherwise. It is well known that \(F\) is a 0-dimensional metric space (cf. [4] pg. 22). We claim that \(U\) is homeomorphic to a subset of \(F\). We shall prove this claim by constructing a homeomorphism \(f:U\to F\) as
follows: Begin by ordering the components of \(K\) in a fixed order, say \(K_{1},\dots,K_{d}\). Now given \((x,\omega)\in U\), since \(\gamma(x,\omega)\) is trapped, there is a sequence \(a_{1},a_{2},\dots\), with \(a_{i}\in\widetilde{F}\) such that the \(i\)'th reflection point of \(\gamma(x,\omega)\) is on \(K_{a_{i}}\). We define \(f(x,\omega)=(a_{1},a_{2},\dots)\).
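As an aside (purely illustrative, not part of the proof; the itineraries below are hypothetical), the following Python snippet shows the itinerary coding and the metric \(\eta\) on truncated sequences, together with the bound \(2^{1-N}\) used further below when two itineraries agree up to the \(N\)-th reflection.

```python
# Illustrative only: truncated reflection itineraries over d = 3 obstacle
# components, regarded as finite prefixes of elements of F = prod_{i>=1} {1,...,d}.
def eta(a, b):
    """eta(a, b) = sum_i 2^{-i} (1 - delta_{a_i b_i}), evaluated on truncated itineraries."""
    return sum(2.0 ** -(i + 1) for i, (ai, bi) in enumerate(zip(a, b)) if ai != bi)

a = (1, 3, 2, 1, 3, 2, 1, 3)   # hypothetical itinerary of one trapped ray
b = (1, 3, 2, 1, 2, 3, 1, 3)   # agrees with a up to the N-th reflection, N = 4

N = 4
print(eta(a, b))                      # 2^{-5} + 2^{-6} = 0.046875
assert eta(a, b) <= 2.0 ** (1 - N)    # the bound sum_{i >= N} 2^{-i} = 2^{1-N}
```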
The function \(f\) is injective, in the following manner. Suppose there exist distinct \((x,\omega),(x^{\prime},\omega^{\prime})\in U\) such that \(f(x,\omega)=f(x^{\prime},\omega^{\prime})=(a_{1},a_{2},\dots)\). We consider the evolution of the front \(\widetilde{U}\) as it reflects only off the obstacles \(K_{a_{1}},K_{a_{2}},\dots\), in that exact order. Let \(q(t)=\gamma(x,\omega)(t)\) and \(q^{\prime}(t)=\gamma(x^{\prime},\omega^{\prime})(t)\). Then applying Corollary 7.1 we have
\[d_{X}(q(0),q^{\prime}(0))\leq d_{X_{t}}(q(t),q^{\prime}(t))e^{-tk_{\min}}.\]
So \(d_{X}(q(0),q^{\prime}(0))\to 0\) as \(t\to\infty\), that is \((x,\omega)=(x^{\prime},\omega^{\prime})\). Hence \(f\) is injective, and therefore we consider \(f\) as a bijection onto its image \(f(U)\subseteq F\).
Now consider a fixed \((x,\omega)\in U\), and let \((a_{1},a_{2},\dots)=f(x,\omega)\). For any \(\varepsilon>0\) there is an integer \(N\) such that \(2^{1-N}<\varepsilon\). Note that \((x,\omega)\in U\) guarantees \(\gamma(x,\omega)\) is never tangent to an obstacle. Therefore there is a \(\mu>0\) such that the for any \((x^{\prime},\omega^{\prime})\) in the \(\mu\)-ball, \(B_{\mu}\), around \((x,\omega)\) in \(\widetilde{U}\), the ray \(\gamma(x^{\prime},\omega^{\prime})\) transversally reflects off the obstacles \(K_{a_{1}},K_{a_{2}},\dots,K_{a_{N}}\) in that order. Thus \(f(x^{\prime},\omega^{\prime})\) and \(f(x,\omega)\) must agree up to the \(N\)-th coordinate. Then for any \((x^{\prime},\omega^{\prime})\in B_{\mu}\) we have
\[\eta(f(x^{\prime},\omega^{\prime}),f(x,\omega))=\sum_{i=1}^{\infty}\frac{1}{2 ^{i}}(1-\delta_{f(x^{\prime},\omega^{\prime})_{i}f(x,\omega)_{i}})\leq\sum_{i= N}^{\infty}\frac{1}{2^{i}}=\frac{1}{2^{N-1}}<\varepsilon.\]
Hence \(f\) is continuous. We shall complete the proof by showing that \(f\) has a continuous inverse. Given \((x,\omega),(x^{\prime},\omega^{\prime})\in U\), let \((a_{1},a_{2},\dots)=f(x,\omega)\) and \((b_{1},b_{2},\dots)=f(x^{\prime},\omega^{\prime})\). Let \(N\) be the largest integer such that \(a_{i}=b_{i}\) for all \(1\leq i\leq N\). Let \(t_{N}>0\) be the time just after both \(\gamma(x,\omega)\) and \(\gamma(x^{\prime},\omega^{\prime})\) undergo the \(N\)-th reflection. Set \(C=d_{X_{t_{N}}}(x_{t_{N}},x^{\prime}_{t_{N}})\), where \(x_{t_{N}}=\mathrm{pr}_{1}\circ\gamma(x,\omega)(t_{N})\), and \(x^{\prime}_{t_{N}}=\mathrm{pr}_{1}\circ\gamma(x^{\prime},\omega^{\prime})(t_{N})\). Then by Corollary 7.1,
\[d_{X}(x,x^{\prime})<Ce^{-t_{N}k_{\min}}. \tag{18}\]
Now for any \(\varepsilon>0\), pick \(N\) so that \(Ce^{-Nd_{\min}k_{\min}}<\varepsilon\). Then for any \((x^{\prime},\omega^{\prime})\) such that \(f(x,\omega)\) and \(f(x^{\prime},\omega^{\prime})\) agree up to the \(N\)-th reflection, by eq. (18), and since \(t_{N}>Nd_{\min}\), we must have \(d_{X}(x,x^{\prime})<\varepsilon\). Take any \(0<\delta<2^{-N}\), if \(\eta(f(x,\omega),f(x^{\prime},\omega^{\prime}))<\delta\) it follows that \(f(x,\omega)\) and \(f(x^{\prime},\omega^{\prime})\) must agree up to at least the \(N\)-th reflection. Thus \(\eta(f(x,\omega),f(x^{\prime},\omega^{\prime}))<\delta\) must imply \(d_{X}(x,x^{\prime})<\varepsilon\), i.e. the inverse of \(f\) is also continuous. Therefore \(f\) is a homeomorphism onto \(f(U)\). Now since \(f(U)\subseteq F\), it must have dimension \(0\), and hence \(U\) must also have dimension \(0\) as desired.
Proof of Theorem 1.: Suppose that there exists some point \(x_{0}\in\partial K\) such that \(x_{0}\not\in\partial L\). Let \(V\in T_{x_{0}}\partial K\) be any vector tangent to \(\partial K\) at \(x_{0}\). Then there is some neighbourhood \(U\subseteq\partial K\) of \(x_{0}\) such that \(U\cap\partial L=\emptyset\). By Proposition 5 we may construct a convex front \(X\) from \(U\) (possibly shrinking it) such that the rays from \(X\) in the inward normal direction
intersect \(U\) tangentially. Furthermore, let \(\widetilde{X}\) be the set of points in \(X\) for which the ray in \(S_{K}\) in the normal direction is trapped, or tangent to \(\partial K\) more than once. We also let \(\widetilde{X}\) include any points in \(x\) for which the corresponding ray in \(S_{L}\) is tangent to \(\partial L\) more than once. Then it follows from Proposition 8 and Proposition 9 that \(\dim(\widetilde{X})\leq n-2\). Let \(X^{\prime}=X\backslash\widetilde{X}\), then \(\dim(X^{\prime})>0\). Given any \(x\in X^{\prime}\), note that the ray \(\gamma_{x}^{K}\) which intersects \(X\) at \(x\) orthogonally, is tangent to \(\partial K\) exactly once by construction. Then since \(\gamma_{x}^{K}\) is not trapped, there is some \(\sigma_{x}\in T^{+}\partial S\) for which \(\gamma_{x}^{K}(t)=\operatorname{pr}_{1}\circ\mathfrak{F}_{t}^{K}(\sigma_{x})\). Now since \(K\) and \(L\) have the same set of travelling times, for each \(x\in X^{\prime}\) there is exactly one point \(z(x)\in\partial L\) such that the ray \(\gamma_{x}^{L}(t)=\operatorname{pr}_{1}\circ\mathfrak{F}_{t}^{L}(\sigma_{x})\) is tangent to \(\partial L\) at \(z(x)\). Denote the set of all such tangent points by \(Z=\{z(x):x\in X^{\prime}\}\). Note that by construction \(Z\cap\partial K=\emptyset\) and that \(z:X^{\prime}\to Z\) is a homeomorphism. We claim that there exists some \(z^{*}\in\partial L\) such that for all neighbourhoods \(\Omega\subseteq\partial L\) of \(z^{*}\) we have \(\dim(\Omega\cap Z)>0\). Assume the contrary. Then for every \(z\in\partial L\) there is a neighbourhood \(\Omega_{z}\subseteq\partial L\) such that \(\dim(\Omega_{z}\cap Z)=0\). Consider the open cover \(\{\Omega_{z}\}_{z\in\partial L}\) of \(\partial L\). Since \(\partial L\) is compact, we may take a finite subcover \(\{\Omega_{z_{i}}\}_{i=1}^{\alpha}\). It follows \(Z\) is covered by the finite cover \(\{\Omega_{z_{i}}\cap Z\}_{i=1}^{\alpha}\). Recalling that \(\dim(\Omega_{z_{i}}\cap Z)=0\), this implies that \(\dim(Z)=0\), a contradiction.
Let \(t_{K}^{*}(x)\) and \(t_{L}^{*}(x)\) be such that \(\gamma_{x}^{K}(t_{K}^{*}(x))\) and \(\gamma_{x}^{L}(t_{L}^{*}(x))\) are the points of tangency with \(\partial K\) and \(\partial L\) respectively. Suppose that \(t_{K}^{*}(x_{0})<t_{L}^{*}(x_{0})\); then, provided \(U\) is sufficiently small, this holds true for all \(x\in X\), i.e. \(t_{K}^{*}(x)<t_{L}^{*}(x)\). Now since \(K\) and \(L\) are equivalent up to tangency, it follows that
\[\gamma_{x}^{K}(t)=\gamma_{x}^{L}(t)\text{ for all }0\leq t\leq t_{K}^{*}(x), \text{ and }x\in X.\]
We now construct a convex front \(Y\) (via Proposition 5) from a neighbourhood \(\Omega\subseteq\partial L\) of \(z^{*}\), such that the outward normal \(N(y)\) of \(Y\) points towards \(X\). i.e. \(\operatorname{pr}_{1}\circ\mathfrak{F}_{t(y)}^{L}(y,N(y))\in X\) for some \(t(y)>0\). Define the set \(\mathcal{Q}\) of points \(z(x)\in Z\) such that \(\gamma_{x}^{L}\) intersects both \(X\) and \(Y\) orthogonally. Since \(\gamma_{x}^{K}(t)=\gamma_{x}^{L}(t)\) for all \(0\leq t\leq t_{K}^{*}(x)\), and \(x\in X\), it follows that \(\gamma_{x}^{L}\) will intersect \(X\) always. But by construction, \(\gamma_{x}^{L}\) will intersect \(Y\) for all \(z(x)\in\Omega\cap Z\). Thus \(\mathcal{Q}=\Omega\cap Z\). Therefore \(\dim(\mathcal{Q})>0\). However Proposition 6 states that \(\dim(\mathcal{Q})=0\), a contradiction. Note that we assumed that \(t_{K}^{*}(x_{0})<t_{L}^{*}(x_{0})\). If we assume instead that \(t_{K}^{*}(x_{0})>t_{L}^{*}(x_{0})\), then by letting \(\mathcal{Q}\) be the set of points \(z(x)\in Z\) such that \(\gamma_{x}^{K}\) intersects both \(X\) and \(Y\) orthogonally, we reach the same contradiction. Thus our proof is complete.
**Proposition 10**.: _Suppose that \(K\) and \(L\) have the same set of travelling times, and that both \(K\) and \(L\) are in general position, respectively. Then \((K,L)\in\mathcal{K}_{T}\), that is, \(K\) and \(L\) are equivalent up to tangency._
Proof.: We begin by noting a few facts. First, if \(\gamma\) is a geodesic in \(S_{K}\) that is tangent to \(\partial K\), then the point of tangency is either the first or last point of contact between \(\gamma\) and \(\partial K\). This follows directly from the general position condition on \(K\) (and \(L\)) as follows. Let \(t_{1},\ldots,t_{j}\) be the times when \(\gamma\) reflects off the obstacle. Suppose \(\gamma\) had a tangency
between the first and last reflection, at some time \(t_{i},\ 1<i<j\). Then \(\gamma|_{[t_{i-1},t_{i+1}]}\) is a smooth geodesic intersecting \(K\) on three distinct components, contradicting the general position condition on \(K\). This clearly holds for \(L\) as well. Second, note that if \(\sigma\in T^{+}\partial S\backslash\mathrm{Trap}(\partial S)\) is such that the geodesic \(\gamma_{\sigma}^{K}\) generated by \(\sigma\) in \(S_{K}\) is tangent to \(\partial K\), then there is a singularity in the travelling time function at the point corresponding to \(\sigma\). Since \(K\) and \(L\) have the same travelling time function, it follows that \(\sigma\) must also generate a geodesic \(\gamma_{\sigma}^{L}\) in \(S_{L}\) which is tangent to \(\partial L\). Note that \(\gamma_{\sigma}^{L}\) and \(\gamma_{\sigma}^{K}\) are not necessarily the same geodesic.
Now, using the first fact, we may assume that \(\gamma_{\sigma}^{K}\) is tangent to \(\partial K\) at the first point of contact. If not, we may simply reverse the direction of \(\gamma_{\sigma}^{K}\). We claim that \(\gamma_{\sigma}^{L}\) is also tangent to \(\partial L\) at the first point of contact. Using the first and second facts, \(\gamma_{\sigma}^{L}\) must be tangent to \(\partial L\) at either the first or last reflection points (or both). Suppose, for contradiction, that \(\gamma_{\sigma}^{L}\) is tangent to \(\partial L\) only at the last point of reflection. Let \(\tau_{K},\tau_{L}>0\) be the times of the first and last reflection points of \(\gamma_{\sigma}^{K}\) and \(\gamma_{\sigma}^{L}\) respectively. By Proposition 5 we may construct convex fronts \(X\) and \(Y\) about \(\gamma_{\sigma}^{K}(\tau_{K})\) and \(\gamma_{\sigma}^{L}(\tau_{L})\) in the directions \(\dot{\gamma}_{\sigma}^{K}(\tau_{K})\) and \(\dot{\gamma}_{\sigma}^{L}(\tau_{L})\) respectively. It now follows from the same argument as in the proof of Theorem 1 that we reach a contradiction to Proposition 6. Hence \(\gamma_{\sigma}^{K}\) and \(\gamma_{\sigma}^{L}\) must both be tangent at their first points of reflection. If \(t_{K}^{*},t_{L}^{*}>0\) are the times of tangency for \(\gamma_{\sigma}^{K}\) and \(\gamma_{\sigma}^{L}\) respectively, it follows that \(\gamma_{\sigma}^{K}(t)=\gamma_{\sigma}^{L}(t)\) for all \(0\leq t\leq\min\{t_{K}^{*},t_{L}^{*}\}\). That is, general position implies that \(K\) and \(L\) are equivalent up to tangency.
|
2307.16396 | Olio: A Semantic Search Interface for Data Repositories | Search and information retrieval systems are becoming more expressive in
interpreting user queries beyond the traditional weighted bag-of-words model of
document retrieval. For example, searching for a flight status or a game score
returns a dynamically generated response along with supporting, pre-authored
documents contextually relevant to the query. In this paper, we extend this
hybrid search paradigm to data repositories that contain curated data sources
and visualization content. We introduce a semantic search interface, OLIO, that
provides a hybrid set of results comprising both auto-generated visualization
responses and pre-authored charts to blend analytical question-answering with
content discovery search goals. We specifically explore three search scenarios
- question-and-answering, exploratory search, and design search over data
repositories. The interface also provides faceted search support for users to
refine and filter the conventional best-first search results based on
parameters such as author name, time, and chart type. A preliminary user
evaluation of the system demonstrates that OLIO's interface and the hybrid
search paradigm collectively afford greater expressivity in how users discover
insights and visualization content in data repositories. | Vidya Setlur, Andriy Kanyuka, Arjun Srinivasan | 2023-07-31T04:01:00Z | http://arxiv.org/abs/2307.16396v1 | # Olio: A Semantic Search Interface for Data Repositories
###### Abstract
Search and information retrieval systems are becoming more expressive in interpreting user queries beyond the traditional weighted bag-of-words model of document retrieval. For example, searching for a flight status or a game score returns a dynamically generated response along with supporting, pre-authored documents contextually relevant to the query. In this paper, we extend this hybrid search paradigm to data repositories that contain curated data sources and visualization content. We introduce a semantic search interface, Olio, that provides a hybrid set of results comprising both auto-generated visualization responses and pre-authored charts to blend analytical question-answering with content discovery search goals. We specifically explore three search scenarios - question-and-answering, exploratory search, and design search over data repositories. The interface also provides faceted search support for users to refine and filter the conventional best-first search results based on parameters such as author name, time, and chart type. A preliminary user evaluation of the system demonstrates that Olio's interface and the hybrid search paradigm collectively afford greater expressivity in how users discover insights and visualization content in data repositories. |
2309.06429 | Efficient Inference on High-Dimensional Linear Models with Missing
Outcomes | This paper is concerned with inference on the regression function of a
high-dimensional linear model when outcomes are missing at random. We propose
an estimator which combines a Lasso pilot estimate of the regression function
with a bias correction term based on the weighted residuals of the Lasso
regression. The weights depend on estimates of the missingness probabilities
(propensity scores) and solve a convex optimization program that trades off
bias and variance optimally. Provided that the propensity scores can be
pointwise consistently estimated at in-sample data points, our proposed
estimator for the regression function is asymptotically normal and
semi-parametrically efficient among all asymptotically linear estimators.
Furthermore, the proposed estimator keeps its asymptotic properties even if the
propensity scores are estimated by modern machine learning techniques. We
validate the finite-sample performance of the proposed estimator through
comparative simulation studies and the real-world problem of inferring the
stellar masses of galaxies in the Sloan Digital Sky Survey. | Yikun Zhang, Alexander Giessing, Yen-Chi Chen | 2023-09-12T17:50:27Z | http://arxiv.org/abs/2309.06429v2 | # Efficient Inference on High-Dimensional Linear Models with Missing Outcomes
###### Abstract
This paper is concerned with inference on the conditional mean of a high-dimensional linear model when outcomes are missing at random. We propose an estimator which combines a Lasso pilot estimate of the regression function with a bias correction term based on the weighted residuals of the Lasso regression. The weights depend on estimates of the missingness probabilities (propensity scores) and solve a convex optimization program that trades off bias and variance optimally. Provided that the propensity scores can be consistently estimated, the proposed estimator is asymptotically normal and semi-parametrically efficient among all asymptotically linear estimators. The rate at which the propensity scores are consistent is essentially irrelevant, allowing us to estimate them via modern machine learning techniques. We validate the finite-sample performance of the proposed estimator through comparative simulation studies and the real-world problem of inferring the stellar masses of galaxies in the Sloan Digital Sky Survey.
**Keywords:** High-dimensional inference; Missing data; Semi-parametric efficiency; Lasso.
## 1 Introduction
In this paper, we develop a novel method to conduct statistical inference on the conditional mean (or regression function) of a sparse high-dimensional linear model when outcomes are missing at random. Specifically, let \(Y\in\mathbb{R}\) be an outcome variable and \(X\in\mathbb{R}^{d}\) be a high-dimensional covariate vector. The object of interest is the regression function \(m_{0}(x)=\operatorname{E}(Y|X=x)\).
Valid inference on \(m_{0}(x)\) in the presence of high-dimensional covariates and missing outcomes is of practical importance. For one thing, collecting high-dimensional data has become common practice across the board in science and engineering. Examples range from portfolio optimization
and cross-sectional home price analysis in finance (Fan et al., 2011) to the development of biomarker classifiers in biology (Baek et al., 2009). Moreover, some recent works in causal inference lean toward generating high-dimensional covariates in order to control for potential confounders (Wyss et al., 2022). For another, semi-automatic processing and storing of vast amounts of unstructured data inevitably entails missingness (Huang and Knowles, 2016). Such missingness may result from the study dropouts of participants in clinical trial (Higgins et al., 2008) or noncompliance with the assigned treatments in a survey (Frumento et al., 2012). The emerging field of semi-supervised learning in computer science, where additional data samples with the same distribution are given but without labels, is also a missing-outcome problem (Chapelle et al., 2006).
When the outcome variables \(Y_{i},i=1,...,n\) are fully observed given a data sample (which is known as the oracle data setting), statistical estimation on \(m_{0}(x)\) is tractable with regularization and sparsity constraints under high-dimensional settings (Wainwright, 2019). One of the most well-studied approaches is Lasso (Tibshirani, 1996), which assumes the linear model and imposes \(L_{1}\)-regularization on the regression coefficients. When it comes to statistical inference, however, the Lasso solution leads to a biased estimate of \(m_{0}(x)\) even when the linear model assumption is correct (van de Geer et al., 2014; Zhang and Zhang, 2014). Missing outcomes further exacerbate the bias of the Lasso solution; see the first panel of Figure 1 for an illustration. While this bias could be partially mitigated via sample-splitting or re-fitting, doing so would reduce the sample size and increase the computational cost, leading to less efficient estimates. Consequently, the goal of this paper is to address one central question:
_"How can we conduct statistically and computationally efficient inference on \(m_{0}(x)\) despite missing outcomes?"_
This is a challenging question; to answer it, we need to impose some structural assumptions. Throughout the paper, we will assume that the data sample follows a high-dimensional sparse linear regression model with outcomes "missing at random" (MAR).

Figure 1: Comparison of our debiased estimators under two different choices of the tuning parameters (“1SE” and “min-feas”) with the conventional Lasso estimates based on complete-case or oracle data. Here, we adopt the sparse \(\beta_{0}^{sp}\) and dense \(x^{(4)}\) with \(d=1000,n=900\), \(X\sim\mathcal{N}_{d}\left(0,\Sigma^{\text{cs}}\right)\), and \(\epsilon\sim\mathcal{N}(0,1)\) under the MAR setting (22); see Section 4.1 for details.
**Assumption 1** (High-dimensional sparse linear regression model with MAR outcomes).:
1. _The data sample_ \(\{(Y_{i},R_{i},X_{i})\}_{i=1}^{n}\) _consists of independent and identically distributed (i.i.d.) observations drawn from_ \((Y,R,X)\in\mathbb{R}\times\{0,1\}\times\mathbb{R}^{d}\) _generated by the linear model_ \[Y=X^{T}\beta_{0}+\epsilon,\qquad\mathrm{E}\left(\epsilon\big{|}X\right)=0, \qquad\mathrm{E}\left(\epsilon^{2}\big{|}X\right)=\sigma_{\epsilon}^{2},\] (1) _where_ \(\left|\left|\beta_{0}\right|\right|_{0}=\sum_{k=1}^{d}\mathbbm{1}_{\{\beta_{0 k}\neq 0\}}=s_{\beta}\ll d\)_._
2. _The missingness indicator_ \(R\) _is conditionally independent of_ \(Y\) _given_ \(X\)_._
Assumption 1(a) is standard in high-dimensional statistics, and the sparse linear model can be extended to sparse additive models (Ravikumar et al., 2009) and partially linear models (Muller and van de Geer, 2015; Belloni et al., 2019); however, such generalizations are beyond the scope of this paper. Assumption 1(b) is common in the missing data literature (Tsiatis, 2007; Little and Rubin, 2019) and is related to the ignorability or unconfoundedness condition in causal inference (Imbens, 2004). Under the MAR assumption, the propensity score \(\pi(Y,X):=\mathrm{P}(R=1|Y,X)\) depends only on the covariate vector \(X\) so that \(\pi(Y,X)\equiv\pi(X):=\mathrm{P}(R=1|X)\), which can thus be estimated from the fully observed data \(\{(X_{i},R_{i})\}_{i=1}^{n}\)(Rosenbaum and Rubin, 1983).
Under Assumption 1, our proposed debiasing method directly infers the regression function \(m_{0}(x)=x^{T}\beta_{0}\) by combining a Lasso pilot estimate of \(\beta_{0}\) based on the complete-case data with a bias correction term based on the weighted residuals of the Lasso regression. The weights depend on estimates of the propensity scores \(\pi(X_{i}),i=1,...,n\) and are obtained as the solution to a convex debiasing program which trades off bias and variance in a mean-squared-error-optimal way; see Figure 1(b,c) for a preview.
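To convey the structure just described, the following minimal Python sketch mimics the two ingredients on synthetic data: a Lasso pilot estimate fitted on the complete cases and a weighted-residual correction evaluated at a query point \(x\). It is illustrative only and is not our actual algorithm: the data-generating mechanism, regularization levels, and in particular the simple normalized inverse-propensity weights are placeholders, whereas the weights actually used by our method solve the convex debiasing program introduced in Section 2 (with a full implementation provided in the accompanying packages).

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)
n, d, s = 300, 200, 5                          # illustrative sizes only
X = rng.normal(size=(n, d))                    # placeholder design
beta0 = np.zeros(d)
beta0[:s] = 1.0                                # placeholder sparse beta_0
Y = X @ beta0 + rng.normal(size=n)             # Y = X^T beta_0 + eps
pi_true = 1 / (1 + np.exp(-(0.5 + X[:, 0] - 0.5 * X[:, 1])))  # placeholder MAR mechanism
R = rng.binomial(1, pi_true)                   # missingness indicators

# Step 1: estimate the propensity scores pi(X) = P(R=1|X) from the fully
# observed pairs (X_i, R_i); any consistent learner could be substituted here.
prop = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, R)
pi_hat = prop.predict_proba(X)[:, 1]

# Step 2: Lasso pilot estimate of beta_0 based on the complete cases.
beta_pilot = Lasso(alpha=0.05).fit(X[R == 1], Y[R == 1]).coef_

# Step 3: debiased estimate of m_0(x) = x^T beta_0 at a query point x
# (here a standard basis vector, targeting a single coefficient).
x_query = np.zeros(d)
x_query[0] = 1.0
resid = Y[R == 1] - X[R == 1] @ beta_pilot
w = 1.0 / pi_hat[R == 1]
w /= w.sum()                                   # placeholder weights, NOT the convex-program solution
m_hat = float(x_query @ beta_pilot + w @ resid)
print(m_hat)
```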
A crucial difference between the existing works on debiasing the Lasso estimate in the literature (Zhang and Zhang, 2014; van de Geer et al., 2014; Javanmard and Montanari, 2014a,b) and our proposal is that we focus our inference on the scalar regression function \(m_{0}(x)=x^{T}\beta_{0}\) rather than the high-dimensional regression (coefficient) vector \(\beta_{0}\in\mathbb{R}^{d}\). By considering \(m_{0}(x)\), we are able to conduct inference on any individual regression coefficient by setting \(x\) equal to a standard basis vector in \(\mathbb{R}^{d}\) as well as on joint effects of several regression coefficients. Additionally, inferring \(m_{0}(x)\) itself is more computationally efficient than inferring the regression vector \(\beta_{0}\).
### Contribution and Outline of the Paper
**1. Methodology:** We describe the detailed procedure of our proposed debiasing method and reveal its computational feasibility through the dual formulation. We also provide key interpretations of our debiasing method from the perspective of bias-variance trade-off and Neyman near-orthogonality; see Section 2 for details.
**2. Asymptotic Theory:** We prove that our proposed debiased estimator is asymptotically unbiased and normally distributed as long as the propensity scores are consistently estimated at a mild rate of convergence; see Section 3.4. More importantly, the asymptotic variance of our debiased estimator attains the semi-parametric efficiency bound among all asymptotically linear estimators. To establish this asymptotic normality, we derive the consistency of the Lasso pilot estimate with complete-case data and of the dual solution to our debiasing program as building blocks; see Section 3.3.
**3. Simulations and Real-World Applications:** To demonstrate the finite-sample performance of our debiasing method, we compare it with the existing debiasing methods in the literature through Monte Carlo experiments under comprehensive simulation settings; see Section 4.1 and Section 4.2. We also apply the proposed debiasing method to the problem of inferring the stellar masses of galaxies in the Sloan Digital Sky Survey; see Section 5.
**4. Algorithmic Implementation:** We describe the implementation details of our debiasing method in Section A of the supplement and encapsulate them into both a Python package "Debias-Infer" and an R package "DebiasInfer". All other codes for our experiments are available at [https://github.com/zhangyk8/Debias-Infer/tree/main/Paper_Code](https://github.com/zhangyk8/Debias-Infer/tree/main/Paper_Code).
### Related Work
Statistical inference for the high-dimensional linear model with complete data has been studied by Zhang and Zhang (2014); van de Geer et al. (2014); Javanmard and Montanari (2014a,b, 2018); Buhlmann (2013); Ning and Liu (2017), where they proposed confidence intervals and multiple testing adjustments for individual regression coefficients or their projections to low-dimensional components. The main idea of their approaches is to correct the bias of the Lasso or ridge regression (Hoerl and Kennard, 1970) estimate of the coefficient vector by estimating a relaxed inverse of the gram matrix or the projection matrix, which is also known as the one-step update (van der Vaart, 2000) in the classical study of semi-parametric inference. More recently, Li (2020) provided a bootstrap procedure for debiasing the Lasso solution, while Javanmard and Lee (2020) presented a flexible framework for testing general hypotheses of regression coefficients. Battey and Reid (2023) considered inferring the linear model by treating each regression coefficient in turn as the parameter of interest and finding an optimal transformation to orthogonalize the remaining coefficients. Moreover, some of the above ideas have been extended to parameter estimation and inference for high-dimensional generalized linear models by Belloni et al. (2016); Guo and Chen (2016); Xia et al. (2020); Shi et al. (2021); Cai et al. (2021); Guo et al. (2021); Ma et al. (2021).
The asymptotic normality of the aforementioned debiasing methods requires the regression vector \(\beta_{0}\) to lie in the ultra-sparse regime \(s_{\beta}=o\left(\frac{\sqrt{n}}{\log d}\right)\), as does our proposed debiasing method. Under a known covariance matrix and Gaussian design, Javanmard and Montanari (2018) alleviated the sparsity requirement to \(s_{\beta}=o\left(\frac{n}{(\log d)^{2}}\right)\). In addition, Cai and Guo (2017) considered constructing confidence intervals for the regression function \(m_{0}(x)=x^{T}\beta_{0}\) in the moderate-sparse region \(\frac{\sqrt{n}}{\log d}\ll s_{\beta}\lesssim\frac{n}{\log d}\) and established the minimaxity and adaptivity of their confidence intervals with prior knowledge of the sparsity level \(s_{\beta}\); see also Nickl and van de Geer (2013); Cai and Guo (2018) for related discussion.
Statistical estimation of the regression function under the MAR mechanism and its semi-parametric efficiency have also been investigated in the literature (Robins et al., 1994; Robins and Rotnitzky, 1995; Robins et al., 1995; Graham, 2011; Muller and Keilegom, 2012). Although their studies addressed problems ranging from missing outcomes to unobserved covariates, they mainly focused on low-dimensional data. In the high-dimensional data setting, Loh and Wainwright (2012) proposed an estimator of the linear regression coefficient based on projected gradient descent in the presence of MAR covariates. High-dimensional estimation and inference with missing covariates were also studied by Wang et al. (2019), but the covariates are assumed to be missing completely at random (MCAR). Recently, Celentano and Wainwright (2023) considered semi-parametric estimation of the population mean under MAR outcomes in the high-dimensional inconsistency regime and proposed their debiasing remedy. The most closely related work to our paper is Chakrabortty et al. (2019), which also studied the high-dimensional estimation and inference problems with missing outcomes. They proposed a general M-estimation framework to conduct statistical inference via a Lasso-type debiased and doubly robust estimator. Computationally, their debiased estimator requires the estimation of a \(d\times d\) debiasing matrix, while our debiasing method is more efficient because we only solve for a \(d\)-dimensional debiasing vector through a convex program. Theoretically, both Chakrabortty et al. (2019) and we impose the same condition on the rate of convergence of the propensity score estimation in pursuit of asymptotic normality. Nevertheless, we demonstrate through simulation studies that our debiased estimator is still asymptotically normal even when the propensity scores are estimated by nonparametric methods. Furthermore, the debiased estimator in Chakrabortty et al. (2019) asymptotically achieves the coordinatewise semi-parametric efficiency bound, which may not be efficient in the worst possible query direction (Jankova and van de Geer, 2018), while our debiased estimator is asymptotically efficient regardless of the query direction.
### Notation
The general probability measure and expectation with respect to the distribution of \((Y,R,X)\) are denoted by P and E, respectively. We write \(Y\mbox{\,$\perp\!\!\!\perp$}R\) when the random variables \(Y\) and \(R\) are independent. We also use the big-\(O\) notation \(h(x)=O(g(x))\) if the absolute value of \(h(x)\) is upper bounded by a positive constant multiple of \(g(x)\) for all sufficiently large \(x\). In contrast, \(h(x)=o(g(x))\) when \(\lim_{x\to\infty}\left|h(x)\right|/g(x)=0\). For random vectors, the notation \(o_{P}(1)\) is short for a sequence of random vectors that converges to zero in probability. The expression \(O_{P}(1)\) denotes a sequence that is bounded in probability. The norm \(\left|\left|x\right|\right|_{q}=\left(\sum_{k=1}^{d}|x_{k}|^{q}\right)^{1/q}\) with \(q>0\) stands for the \(L_{q}\)-norm in the Euclidean space \(\mathbb{R}^{d}\), though it is no longer a norm when \(0<q<1\). In particular, \(\left|\left|x\right|\right|_{\infty}=\max_{1\leq k\leq d}\left|x_{k}\right|\), and \(\left|\left|x\right|\right|_{0}=\sum_{k=1}^{d}\mathbbm{1}_{\{x_{k}\neq 0\}}\) indicates the number of nonzero elements in \(x\in\mathbb{R}^{d}\). Furthermore, \(\left|\left|Z\right|\right|_{q}=(\mathrm{E}|Z|^{q})^{1/q}\) with \(q\geq 1\) is the \(L_{q}\)-norm for a random variable \(Z\). We use the notation \(a_{n}\lesssim b_{n}\) or \(b_{n}\gtrsim a_{n}\) when there exists an absolute constant \(C>0\) such that \(a_{n}\leq Cb_{n}\) when \(n\) is large. If \(a_{n}\gtrsim b_{n}\) and \(a_{n}\lesssim b_{n}\), then \(a_{n},b_{n}\) are asymptotically equal, and this is denoted by \(a_{n}\asymp b_{n}\). Finally, we denote the unit sphere in \(\mathbb{R}^{d}\) by \(\mathbb{S}^{d-1}=\{x\in\mathbb{R}^{d}:\left|\left|x\right|\right|_{2}=1\}\) and a ball centered at \(x\) with radius \(r\) in \(\mathbb{R}^{d}\) by \(B_{d}(x,r)=\{y\in\mathbb{R}^{d}:\left|\left|y-x\right|\right|_{2}\leq r\}\).
## 2 Methodology
In this section, we outline the detailed procedure of our debiasing inference method and discuss a feasible solution to the key debiasing program through its dual form. We further motivate our debiasing method from the perspectives of bias-variance trade-off and Neyman near-orthogonality.
### Debiasing Inference Procedure
Recall our i.i.d. data \(\{(Y_{i},R_{i},X_{i})\}_{i=1}^{n}\subset\mathbb{R}\times\{0,1\}\times\mathbb{R} ^{d}\), where \(Y_{i}\) is the outcome variable with missing indicator \(R_{i}\) and \(X_{i}\) is the fully observed high-dimensional covariate vector for \(i=1,...,n\). In order to conduct statistical inference on the regression function \(m_{0}(x)=\mathrm{E}(Y|X=x)=x^{T}\beta_{0}\), we propose a debiasing method with the following procedure.
* **Step 1:** Compute the Lasso pilot estimate \(\widehat{\beta}\) with complete-case data \[\widehat{\beta}=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{d}}\left[\frac{ 1}{n}\sum_{i=1}^{n}R_{i}(Y_{i}-X_{i}^{T}\beta)^{2}+\lambda\left|\left|\beta \right|\right|_{1}\right],\] (2) where \(\lambda>0\) is a regularization parameter.
* **Step 2:** Obtain consistent estimates \(\widehat{\pi}_{i},i=1,...,n\) of the propensity scores \(\pi_{i}=\pi(X_{i})=\mathrm{P}(R_{i}=1|X_{i}),i=1,...,n\) by any machine learning method (not necessarily a parametric model) on the data \(\{(X_{i},R_{i})\}_{i=1}^{n}\subset\mathbb{R}^{d}\times\{0,1\}\).
* **Step 3:** Solve for the debiasing weight vector \(\widehat{\mathbf{w}}\equiv\widehat{\mathbf{w}}(x)=\left(\widehat{w}_{1}(x),..., \widehat{w}_{n}(x)\right)^{T}\in\mathbb{R}^{n}\) through a debiasing program defined as: \[\min_{\mathbf{w}\in\mathbb{R}^{n}}\left\{\sum_{i=1}^{n}\widehat{\pi}_{i}w_{i}^{2}: \left|\left|x-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}w_{i}\cdot\widehat{\pi}_{i} \cdot X_{i}\right|\right|_{\infty}\leq\frac{\gamma}{n}\right\},\] (3) where \(\gamma>0\) is a tuning parameter.
* **Step 4:** Define the debiased estimator for \(m_{0}(x)\) as: \[\widehat{m}^{\mathrm{debias}}(x;\widehat{\mathbf{w}})=x^{T}\widehat{\beta}+\frac{ 1}{\sqrt{n}}\sum_{i=1}^{n}\widehat{w}_{i}(x)R_{i}\left(Y_{i}-X_{i}^{T}\widehat {\beta}\right).\] (4)
* **Step 5:** Construct the asymptotic \((1-\tau)\)-level confidence interval for \(m_{0}(x)\) as: \[\left[\widehat{m}^{\mathrm{debias}}(x;\widehat{\mathbf{w}})\pm\Phi^{-1}\left(1- \frac{\tau}{2}\right)\cdot\sigma_{\epsilon}\cdot\sqrt{\frac{1}{n}\sum_{i=1}^{n} \widehat{\pi}_{i}\widehat{w}_{i}(x)^{2}}\right],\] (5) where \(\Phi(\cdot)\) denotes the cumulative distribution function (CDF) of \(\mathcal{N}(0,1)\). If \(\sigma_{\epsilon}^{2}\) is unknown, then it can be replaced by any consistent estimator \(\widehat{\sigma}_{\epsilon}^{2}\).
In practice, we fit the Lasso pilot estimate \(\widehat{\beta}\) in **Step 1** by the scaled Lasso (Sun and Zhang, 2012) so as to automatically select the regularization parameter \(\lambda>0\) and simultaneously produce a consistent estimator of the noise level \(\sigma_{\epsilon}^{2}\). As for the propensity score estimation in **Step 2**, the theoretical validity of our proposed method only requires the estimated propensity scores \(\widehat{\pi}_{i},i=1,...,n\) to be consistent at a mild rate of convergence and thus allows the usage of any modern machine learning approach; see Assumption 5 with more details in Section 3.3 and Section 3.4. The formulation of our debiasing program (3) in **Step 3** can be motivated from the perspective of bias-variance trade-off in Section 2.3, and the subsequent definition of the debiased estimator (4) in **Step 4** is inspired by the original debiased Lasso estimator in Zhang and Zhang (2014); van de Geer et al. (2014); Javanmard and Montanari (2014). However, instead of estimating the (pseudo-)inverse matrix of the complete-case population gram matrix \(\left[\mathrm{E}\left(RXX^{T}\right)\right]^{-1}\) as the debiased Lasso estimator, we optimize the debiasing weight vector through our debiasing program (3), which is computationally more efficient. Section 2.2 sheds light on a feasible avenue to solve (3) from its dual formulation that becomes an unconstrained convex program. Finally, we prove in Section 3.4 that the debiased estimator \(\widehat{m}^{\mathrm{debias}}(x;\widehat{\mathbf{w}})\) is asymptotically normal and that its asymptotic variance can be estimated consistently via \(\sum_{i=1}^{n}\widehat{\pi}_{i}\widehat{w}_{i}^{2}(x)\). This guarantees the asymptotic validity of the confidence interval in **Step 5**.
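To make the above procedure concrete, the following is a minimal Python sketch of **Steps 1-5**; it is ours and is distinct from the released "Debias-Infer" package. For simplicity it assumes that the tuning parameter \(\gamma>0\) and the noise level \(\sigma_{\epsilon}\) are supplied, substitutes a cross-validated Lasso for the scaled Lasso in **Step 1**, and uses an \(L_{1}\)-penalized logistic regression for the propensity scores in **Step 2**; all function and variable names are illustrative.

```python
# A minimal sketch of Steps 1-5; not the released "Debias-Infer" implementation.
import numpy as np
import cvxpy as cp
from scipy.stats import norm
from sklearn.linear_model import LassoCV, LogisticRegressionCV

def debiased_inference(X, Y, R, x, gamma, sigma_eps, alpha=0.05):
    n, d = X.shape
    # Step 1: Lasso pilot estimate on the complete cases (R_i = 1).
    beta_hat = LassoCV(cv=5).fit(X[R == 1], Y[R == 1]).coef_
    # Step 2: propensity scores P(R = 1 | X) via L1-penalized logistic regression.
    prop = LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5).fit(X, R)
    pi_hat = prop.predict_proba(X)[:, 1]
    # Step 3: debiasing weights from the box-constrained quadratic program (3).
    w = cp.Variable(n)
    constraint = [cp.norm_inf(x - X.T @ cp.multiply(pi_hat, w) / np.sqrt(n)) <= gamma / n]
    cp.Problem(cp.Minimize(cp.sum(cp.multiply(pi_hat, cp.square(w)))), constraint).solve()
    w_hat = w.value
    # Step 4: debiased estimate of m_0(x) = x^T beta_0.
    m_debias = x @ beta_hat + w_hat @ (R * (Y - X @ beta_hat)) / np.sqrt(n)
    # Step 5: asymptotic (1 - alpha) confidence interval as in (5).
    se = sigma_eps * np.sqrt(np.sum(pi_hat * w_hat**2) / n)
    z = norm.ppf(1 - alpha / 2)
    return m_debias, (m_debias - z * se, m_debias + z * se)
```

In practice, one would solve the dual program of Section 2.2 instead of the primal program in **Step 3** and select \(\gamma\) by cross-validation, as detailed in Section A.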
### Dual Formulation of the Debiasing Program (3)
The proposed debiasing program (3) is a quadratic programming problem with box constraints. By standard duality theory in optimization, we obtain the following result.
**Proposition 1**.: _The dual form of the debiasing program (3) is given by_
\[\min_{\ell\in\mathbb{R}^{d}}\left\{\frac{1}{4n}\sum_{i=1}^{n}\widehat{\pi}_{i }\left[X_{i}^{T}\ell\right]^{2}+x^{T}\ell+\frac{\gamma}{n}\left|\left|\ell \right|\right|_{1}\right\}. \tag{6}\]
_If strong duality holds, then the solution \(\widehat{\mathbf{w}}(x)\in\mathbb{R}^{n}\) to the primal debiasing program (3) and the solution \(\widehat{\ell}(x)\in\mathbb{R}^{d}\) to the dual debiasing program (6) satisfy the following identity:_
\[\widehat{w}_{i}(x)=-\frac{1}{2\sqrt{n}}\cdot X_{i}^{T}\widehat{\ell}(x),\quad i =1,...,n. \tag{7}\]
The proof of Proposition 1 is in Section E.2. This result has two important consequences: First, it is difficult to select the tuning parameter \(\gamma>0\) based on the constrained quadratic program (3) alone: if \(\gamma>0\) is too small, the program is infeasible; if \(\gamma>0\) is too large, the estimate has a non-negligible bias (see Section 2.3). However, since the dual program (6) is an unconstrained quadratic programming problem, we can select the tuning parameter \(\gamma>0\) via cross-validation on the dual program. Second, if strong duality holds, the debiased estimator (4) can be expressed
via (7) in terms of the dual solution \(\widehat{\ell}(x)\) as:
\[\widehat{m}^{\mathrm{debias}}(x;\widehat{\mathbf{w}})=x^{T}\widehat{\beta}-\frac{1}{ 2n}\sum_{i=1}^{n}R_{i}X_{i}^{T}\widehat{\ell}(x)\left(Y_{i}-X_{i}^{T}\widehat{ \beta}\right), \tag{8}\]
where the second bias-corrected term reduces to a linear combination of the sample average of \(n\) random variables. This representation is the key to deriving the asymptotic normality of our debiased estimator; see Section 3.1 and Section 3.4 for details. Notice that strong duality holds whenever \(\gamma>0\) is sufficiently large; see Lemma 5 in Section 3.3 for more details.
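As an illustration of this route, here is a hedged sketch (ours, with illustrative names) that solves the dual program (6) with a generic convex solver and then recovers the primal weights through the relation (7); since (6) is an \(L_{1}\)-penalized quadratic problem, coordinate-descent solvers would also apply.

```python
# A sketch of solving the dual program (6) and recovering the weights via (7).
import numpy as np
import cvxpy as cp

def dual_debiasing_weights(X, pi_hat, x, gamma):
    n, d = X.shape
    ell = cp.Variable(d)
    objective = (cp.sum(cp.multiply(pi_hat, cp.square(X @ ell))) / (4 * n)
                 + x @ ell + (gamma / n) * cp.norm1(ell))
    cp.Problem(cp.Minimize(objective)).solve()
    ell_hat = ell.value
    w_hat = -(X @ ell_hat) / (2 * np.sqrt(n))  # the strong-duality relation (7)
    return ell_hat, w_hat
```

Cross-validating the unconstrained objective above over a grid of \(\gamma\) values is how the tuning parameter can be selected in practice.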
### Bias-Variance Trade-off Perspective
The design of our debiasing program (3) is motivated by controlling the conditional mean squared error
\[\mathrm{E}\left[\left(\sqrt{n}\cdot\widehat{m}^{\mathrm{debias}}(x;\widehat{\mathbf{w}})-\sqrt{n}\cdot m_{0}(x)\right)^{2}\,\Big{|}\,\mathbf{X}\right]\]
of our debiased estimator (4) given \(\mathbf{X}=(X_{1},...,X_{n})\) and can thus be interpreted from the perspective of a (conditional) bias-variance trade-off. To make this motivation precise, we consider the generic debiased estimator \(m^{\mathrm{debias}}(x;\mathbf{w})\) from (4):
\[m^{\mathrm{debias}}(x;\mathbf{w})=x^{T}\beta+\frac{1}{\sqrt{n}}\sum_{i=1}^{n}w_{i }R_{i}\left(Y_{i}-X_{i}^{T}\beta\right), \tag{9}\]
where one would plug in the Lasso pilot estimate \(\widehat{\beta}\) from (2) and the solution \(\widehat{\mathbf{w}}\equiv\widehat{\mathbf{w}}(x)\in\mathbb{R}^{n}\) from our debiasing program (3) to obtain a concrete debiased estimator (4) in practice. We decompose the conditional mean squared error of (9) into three explicit terms as follows.
**Proposition 2**.: _Under Assumption 1, the conditional mean squared error of \(\sqrt{n}\,m^{\mathrm{debias}}(x;\mathbf{w})\) given \(\mathbf{X}=(X_{1},...,X_{n})\) has the following decomposition as:_
\[\begin{split}&\mathrm{E}\left[\left(\sqrt{n}\,m^{\mathrm{debias}}(x; \mathbf{w})-\sqrt{n}\,m_{0}(x)\right)^{2}\left|\mathbf{X}\right]\right.\\ &=\underbrace{\sigma_{\epsilon}^{2}\sum_{i=1}^{n}w_{i}^{2}\pi(X_{ i})}_{\text{Conditional variance I}}+\underbrace{\left(\beta_{0}-\beta\right)^{T}\left[\sum_{i=1}^{n}w_{i}^{2}\pi(X_{ i})\left(1-\pi(X_{i})\right)X_{i}X_{i}^{T}\right]\left(\beta_{0}-\beta\right)}_{ \text{Conditional variance II}}\\ &\quad+\underbrace{\left[\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} w_{i}\pi(X_{i})X_{i}-x\right)^{T}\sqrt{n}\left(\beta_{0}-\beta\right)\right]^{2}}_{ \text{Conditional bias}}.\end{split} \tag{10}\]
The derivation for Proposition 2 can be found in Section E.2. By Hölder's inequality, we can
upper bound the "Conditional bias" term as:
\[\left[\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}w_{i}\pi(X_{i})X_{i}-x\right)^{T} \sqrt{n}\left(\beta_{0}-\beta\right)\right]^{2}\leq\left[\left|\left|\frac{1}{ \sqrt{n}}\sum_{i=1}^{n}w_{i}\pi(X_{i})X_{i}-x\right|\right|_{\infty}\sqrt{n} \left|\left|\beta_{0}-\beta\right|\right|_{1}\right]^{2}.\]
The debiasing program (3) is hence designed to optimize the weights \(w_{i},i=1,...,n\) so that the estimated "Conditional variance I" term \(\sum_{i=1}^{n}\widehat{\pi}_{i}w_{i}^{2}\) is minimized as the objective function while an upper bound of the estimated "Conditional bias" term \(\left|\left|x-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}w_{i}\widehat{\pi}_{i}X_{i}\right|\right|_{\infty}\) is well-controlled by the constraint. We ignore the "Conditional variance II" term because, by Lemma 6, it is asymptotically negligible and dominated by the "Conditional variance I" term if \(\beta=\widehat{\beta}\) (the solution to the Lasso program (2)) and \(\mathbf{w}=\widehat{\mathbf{w}}(x)\) (the solution to the debiasing program (3)). Similar bias-variance trade-offs also appear in Cai et al. (2021) for generalized linear models and Giessing and Wang (2021) for quantile regression models. However, the motivation for the procedures in these two papers is ad hoc and not grounded in a rigorous decomposition of the conditional mean squared error. Upon minimizing the (conditional) variance, we expect our debiased estimator to be asymptotically more efficient than other estimators; see our discussions in Section 3.4.
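The decomposition (10) can also be verified numerically. The following small Monte Carlo sketch (ours, with arbitrary illustrative choices of \(\beta\), \(\mathbf{w}\), and \(\pi(\cdot)\)) fixes \(\mathbf{X}\), redraws \((R,\epsilon)\) many times, and compares the empirical conditional mean squared error with the sum of the three terms on the right-hand side of (10).

```python
# Monte Carlo check of the decomposition (10) for a fixed design X.
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma_eps, reps = 50, 10, 1.0, 100_000

X = rng.normal(size=(n, d))                      # fixed design (we condition on X)
beta0 = np.concatenate([np.ones(3), np.zeros(d - 3)])
beta = beta0 + 0.1 * rng.normal(size=d)          # a generic (imperfect) pilot estimate
w = rng.normal(size=n) / np.sqrt(n)              # generic weights
x = np.eye(d)[0]                                 # query point e_1
pi = 1.0 / (1.0 + np.exp(-X[:, 0]))              # illustrative true propensity scores

# Empirical conditional MSE of sqrt(n)*(m_debias - m_0) over fresh draws of (R, eps).
eps = sigma_eps * rng.normal(size=(reps, n))
R = rng.binomial(1, pi, size=(reps, n))
resid = R * (X @ (beta0 - beta) + eps)           # R_i * (Y_i - X_i^T beta) per replicate
m_debias = x @ beta + resid @ w / np.sqrt(n)
mse_mc = n * np.mean((m_debias - x @ beta0) ** 2)

# The three terms on the right-hand side of (10).
a = X @ (beta0 - beta)
var1 = sigma_eps**2 * np.sum(w**2 * pi)
var2 = np.sum(w**2 * pi * (1 - pi) * a**2)
bias2 = ((X.T @ (w * pi) / np.sqrt(n) - x) @ (np.sqrt(n) * (beta0 - beta))) ** 2
print(mse_mc, var1 + var2 + bias2)               # the two values should nearly agree
```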
### Neyman Near-Orthogonalization Viewpoint
Our debiasing method in Section 2.1 can also be interpreted from the perspective of Neyman near-orthogonalization (Neyman, 1959, 1979; Chernozhukov et al., 2018). To this end, we regard the regression function \(m\equiv m(x)\in\mathbb{R}\) as the main parameter that we want to infer and the regression vector \(\beta\in\mathbb{R}^{d}\) as the high-dimensional nuisance parameter. Under Assumption 1, the true values of these two parameters are \(m_{0}=m_{0}(x)=x^{T}\beta_{0}\) and \(\beta_{0}\), respectively. We define the generic score function as:
\[\Xi_{x}(Y,R,X;m,\beta)=m-x^{T}\beta-\sqrt{n}\cdot w\cdot R(Y-X^{T}\beta),\]
where \(w\in\mathbb{R}\) is a symbolic weight. Given the observed data \(\{(Y_{i},R_{i},X_{i})\}_{i=1}^{n}\) and arbitrary \(\mathbf{w}\in\mathbb{R}^{n}\) and \(\beta\in\mathbb{R}^{d}\), the generic debiased estimator \(m^{\text{debias}}(x;\mathbf{w})\) solves the sample-based estimating equation
\[\frac{1}{n}\sum_{i=1}^{n}\Xi_{x}(Y_{i},R_{i},X_{i};m^{\text{debias}},\beta)=m ^{\text{debias}}(x;\mathbf{w})-x^{T}\beta-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}w_{i} \cdot R_{i}\left(Y_{i}-X_{i}^{T}\beta\right)=0. \tag{11}\]
Provided that Assumption 1 holds and the weights \(\mathbf{w}\in\mathbb{R}^{n}\) satisfy the box constraint in (3), the Neyman near-orthogonalization condition given the data \(\mathbf{X}=(X_{1},...,X_{n})^{T}\in\mathbb{R}^{n\times d}\) at \((m_{0},\beta_{0})\) requires that
\[\begin{split}&\text{E}\left[\frac{1}{n}\sum_{i=1}^{n}\Xi_{x}(Y_{i},R_ {i},X_{i};m_{0},\beta_{0})\bigg{|}\mathbf{X}\right]=0,\\ &\sup_{\beta\in\mathcal{T}_{n}}\left|\left\{\frac{\partial}{ \partial\beta}\text{E}\left[\frac{1}{n}\sum_{i=1}^{n}\Xi_{x}(Y_{i},R_{i},X_{i} ;m,\beta)\Big{|}\mathbf{X}\right]\bigg{|}_{(m_{0},\beta_{0})}\right\}^{T}(\beta- \beta_{0})\right|\leq\frac{\delta_{n}}{\sqrt{n}},\end{split} \tag{12}\]
where \(\mathcal{T}_{n}\) is a properly shrinking neighborhood of \(\beta_{0}\) and \(\delta_{n}=o(1)\); see Definition 2.2 in Chernozhukov et al. (2018). Indeed, the first condition obviously holds true, while the second condition is also satisfied because for any \(\beta\in\mathcal{T}_{n}\),
\[\left|\left\{\frac{1}{n}\sum_{i=1}^{n}\frac{\partial}{\partial\beta}\mathrm{E}\left[\Xi_{x}(Y_{i},R_{i},X_{i};m,\beta)|X\right]\big{|}_{(m_{0},\beta_{0})}\right\}^{T}(\beta-\beta_{0})\right|\] \[=\left|\left[x-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}w_{i}\cdot\pi(X_{i})X_{i}\right]^{T}(\beta_{0}-\beta)\right|\] \[\overset{\text{(a)}}{\lesssim}\left|\left|x-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}w_{i}\cdot\widehat{\pi}_{i}X_{i}\right|\right|_{\infty}\left|\left|\beta-\beta_{0}\right|\right|_{1}\overset{\text{(b)}}{\leq}\frac{\gamma}{n}\left|\left|\beta-\beta_{0}\right|\right|_{1}\leq\frac{\delta_{n}}{\sqrt{n}},\]
where the approximate inequality (a) follows from Hölder's inequality and substituting the estimated propensity scores \(\widehat{\pi}_{i},i=1,...,n\), while (b) holds since \(\mathbf{w}\in\mathbb{R}^{n}\) satisfies the box constraint in (3). Therefore, the nuisance realization set \(\mathcal{T}_{n}\) can be chosen as \(\left\{\beta\in\mathcal{B}\subset\mathbb{R}^{d}:\left|\left|\beta-\beta_{0}\right|\right|_{1}\leq\frac{\sqrt{n}\delta_{n}}{\gamma}\right\}\) for some convex set \(\mathcal{B}\) containing \(\beta_{0}\). We will show in Theorem 3 that our Lasso pilot estimate \(\widehat{\beta}\) satisfies \(\left|\left|\widehat{\beta}-\beta_{0}\right|\right|_{1}=O_{P}\left(s_{\beta}\sqrt{\frac{\log d}{n}}\right)\) and thus our final debiased estimator (4) satisfies the Neyman near-orthogonality condition (12) with a fine-tuned parameter \(\gamma>0\).
By Theorem 3.1 in Chernozhukov et al. (2018), the (asymptotic) variance of \(m^{\text{debias}}(x;\mathbf{w})\) is \(\sigma_{\epsilon}^{2}\operatorname{E}\left[\sum_{i=1}^{n}w_{i}^{2}R_{i}^{2}|\mathbf{X}\right]=\sigma_{\epsilon}^{2}\sum_{i=1}^{n}w_{i}^{2}\pi(X_{i})\). Therefore, our debiasing program (3) is designed to pursue the debiased estimator with the smallest (estimated) variance among all the estimators that (approximately) satisfy the constraints in (12). Consequently, it is expected that our debiased estimator is more efficient than most other estimators even without explicitly constructing an efficient score function satisfying the exact Neyman orthogonality; see Section 2.2.2 in Chernozhukov et al. (2018) and references therein. Finally, in light of (12), our debiasing method de-correlates the Lasso pilot regression from the propensity score estimation and the optimization of weights for the debiased estimator so that we are able to establish the asymptotic normality of our debiased estimator without resorting to sample-splitting; see Section 3.4 for more details.
## 3 Asymptotic Theory
In this section, we study the consistency results of our debiasing inference method in Section 2.1 and its asymptotic normality property. The high-level motivation and overview of the technical results are given in Section 3.1 with more detailed studies in the follow-up subsections.
### Heuristics and Overview
In order to derive the asymptotic normality of our debiased estimator (4), we need to analyze the asymptotic behavior of \(\sqrt{n}\left[\widehat{m}^{\mathrm{debias}}(x;\widehat{\mathbf{w}})-m_{0}(x)\right]\). Under Assumption 1 and the definition (4) of \(\widehat{m}^{\mathrm{debias}}(x;\widehat{\mathbf{w}})\), we deduce that
\[\sqrt{n}\left[\widehat{m}^{\mathrm{debias}}(x;\widehat{\mathbf{w}})-m_{0}(x) \right]=\sum_{i=1}^{n}\widehat{w}_{i}(x)R_{i}\epsilon_{i}+\left[x-\frac{1}{ \sqrt{n}}\sum_{i=1}^{n}\widehat{w}_{i}(x)R_{i}X_{i}\right]^{T}\sqrt{n}\left( \widehat{\beta}-\beta_{0}\right),\]
where \(\epsilon_{i},i=1,...,n\) are i.i.d. noise variables under model (1). From the above expression, it is unclear how one can apply the central limit theorem or related asymptotic theories to derive the asymptotic distribution of \(\sqrt{n}\left[\widehat{m}^{\mathrm{debias}}(x;\widehat{\mathbf{w}})-m_{0}(x)\right]\), because the limiting distribution of the weight vector \(\widehat{\mathbf{w}}\equiv\widehat{\mathbf{w}}(x)=\left(\widehat{w}_{1}(x),...,\widehat{w}_{n}(x)\right)^{T}\in\mathbb{R}^{n}\) is intractable and may not even exist. However, the dual formulation in Proposition 1 provides a promising direction for approaching this problem. By the dual representation (8) of the debiased estimator, we have
\[\sqrt{n}\left[\widehat{m}^{\mathrm{debias}}(x;\widehat{\mathbf{w}})-m _{0}(x)\right] =-\frac{1}{2\sqrt{n}}\sum_{i=1}^{n}R_{i}\epsilon_{i}X_{i}^{T} \widehat{\ell}(x)+\left[x+\frac{1}{2n}\sum_{i=1}^{n}R_{i}X_{i}^{T}\widehat{ \ell}(x)X_{i}\right]^{T}\sqrt{n}\left(\beta_{0}-\widehat{\beta}\right)\] \[=-\frac{1}{2\sqrt{n}}\sum_{i=1}^{n}R_{i}\epsilon_{i}X_{i}^{T} \ell_{0}(x)+\text{``Bias terms''}, \tag{13}\]
where \(\ell_{0}(x)\in\mathbb{R}^{d}\) is the deterministic solution to the population dual program for which the dual solution \(\widehat{\ell}(x)\in\mathbb{R}^{d}\) is consistent; see (17) in Section 3.2 for its definition. The leading term in (13) is a sample average of \(n\) i.i.d. scalar random variables, which can be handled via the standard central limit theorem (CLT), while the "Bias terms" in (13) are of order \(o_{P}(1)\) under mild regularity conditions; see Theorem 7.
The rest of this section corroborates the above heuristic arguments. After introducing the notations and necessary assumptions, we derive the following main results:
* **Consistency of the Lasso pilot estimate (2):** Under a suitable choice of \(\lambda>0\), it holds that \(\left|\left|\widehat{\beta}-\beta_{0}\right|\right|_{2}=O_{P}\left(\frac{1}{ \kappa_{R}^{2}}\sqrt{\frac{s_{\beta}\log d}{n}}\right)\) when \(\log d=o(n)\) with \(d>n\), where \(\kappa_{R}^{2}>0\) is the minimum eigenvalue of the complete-case gram matrix \(\mathrm{E}\left(RXX^{T}\right)\); see Theorem 3 for details.
* **Consistency of the solution to the dual program (6):** Under a suitable choice of \(\gamma>0\), we deduce that \(\left|\left|\widehat{\ell}(x)-\ell_{0}(x)\right|\right|_{2}=O_{P}\left(\frac{1}{\kappa_{R}^{2}}\sqrt{\frac{s_{\ell}(x)\log d}{n}}+\frac{r_{\ell}}{\kappa_{R}^{2}}+\frac{r_{\pi}\sqrt{s_{\ell}(x)}}{\kappa_{R}^{2}}\right)\), where \(\ell_{0}(x)\) is the solution to the population dual program, \(r_{\ell}>0\) is a parameter for the sparse approximation to \(\ell_{0}(x)\), and \(r_{\pi}>0\) is the rate of convergence of \(\max_{1\leq i\leq n}|\widehat{\pi}_{i}-\pi(X_{i})|\); see Theorem 4 and its related discussions.
* **Asymptotic normality of the debiased estimator (4):** Combining the above consistency
results with other mild regularity conditions, we prove that
\[\sqrt{n}\left[\widehat{m}^{\text{debias}}(x;\widehat{\mathbf{w}})-m_{0}(x)\right] \overset{d}{\to}\mathcal{N}\left(0,\,\sigma_{m}^{2}(x)\right)\]
and discuss the semi-parametric efficiency of \(\sigma_{m}^{2}(x)\); see Theorem 7 for details.
### Notations and Assumptions
We define the \(\psi_{\alpha}\)-Orlicz norm for any random variable \(Z\) and \(\alpha\in(0,2]\) by
\[\left|\left|Z\right|\right|_{\psi_{\alpha}}=\inf\left\{t>0:\mathrm{E}\left[ \exp\left(\frac{|Z|^{\alpha}}{t^{\alpha}}\right)\right]\leq 2\right\}. \tag{14}\]
For \(\alpha\in(0,1)\), there is an alternative definition that modifies (14) into a legitimate norm which can be used interchangeably with (14) up to an absolute constant that only depends on \(\alpha\); see page 266 in van der Vaart and Wellner (1996).
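For instance, if \(Z\sim\mathcal{N}(0,\sigma^{2})\), then a direct Gaussian integral gives

\[\mathrm{E}\left[\exp\left(\frac{Z^{2}}{t^{2}}\right)\right]=\left(1-\frac{2\sigma^{2}}{t^{2}}\right)^{-1/2}\quad\text{ for }t>\sigma\sqrt{2},\]

which is at most \(2\) if and only if \(t\geq\sigma\sqrt{8/3}\), so that \(\left|\left|Z\right|\right|_{\psi_{2}}=\sigma\sqrt{8/3}\asymp\left[\mathrm{E}(Z^{2})\right]^{1/2}\). This worked example is the prototype of the sub-Gaussian conditions imposed in Assumptions 2 and 3 below.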
We enumerate the regularity conditions under the high-dimensional sparse linear modeling Assumption 1 for our subsequent theoretical analysis.
**Assumption 2** (Sub-Gaussian covariate).: _The covariate vector \(X\in\mathbb{R}^{d}\) is sub-Gaussian, i.e., \(\left|\left|X^{T}u\right|\right|_{\psi_{2}}\lesssim\left[\mathrm{E}\left(X^{T}u\right)^{2}\right]^{1/2}\) for any \(u\in\mathbb{R}^{d}\)._
We introduce the standard sub-Gaussian condition as Assumption 2 on the covariate vector \(X\) in order to control the exponential tail behavior of \(X\) by its second moment and facilitate our proofs. Notice that the covariate vector \(X\in\mathbb{R}^{d}\) is not necessarily centered in our definition of sub-Gaussian covariate vector. We denote the \(s\)-sparse maximum eigenvalue of the population gram matrix \(\mathrm{E}\left(XX^{T}\right)\in\mathbb{R}^{d\times d}\) by
\[\phi_{s}:=\sup_{u\in\mathbb{S}^{d-1},\left|\left|u\right|\right|_{0}\leq s} \mathrm{E}\left[(X^{T}u)^{2}\right], \tag{15}\]
where \(\mathbb{S}^{d-1}=\{u\in\mathbb{R}^{d}:\|u\|_{2}=1\}\). In particular, \(\mathrm{E}(X_{ik}^{2})\leq\max_{1\leq i\leq n}\max_{1\leq k\leq d}\mathrm{E} (X_{ik}^{2})\leq\phi_{1}\) for each covariate vector \(X_{i}=(X_{i1},...,X_{id})^{T}\in\mathbb{R}^{d}\). It is noteworthy that under the high-dimensional setting with \(s\ll d\), the \(s\)-sparse maximum eigenvalue \(\phi_{s}\) is substantially smaller than the unrestricted maximum eigenvalue \(\phi_{d}\) of \(\mathrm{E}\left(XX^{T}\right)\) and can even be independent of the dimension \(d\).
**Assumption 3** (Sub-Gaussian noise).: _The noise variable \(\epsilon\in\mathbb{R}\) is sub-Gaussian, i.e., \(||\epsilon||_{\psi_{2}}\lesssim\left[\mathrm{E}(\epsilon^{2})\right]^{1/2}= \sigma_{\epsilon}\) with \(\sigma_{\epsilon}^{2}>0\) being the noise level._
Assumption 3 is restrictive and simplifies theoretical arguments. In principle, it can be relaxed to a moment condition; see Belloni and Chernozhukov (2013). We conduct numerical experiments on our debiasing method in which the sub-Gaussian noise assumption is violated in Section 4.3 and demonstrate that our proposed method is robust to various heavy-tailed noise distributions. To establish the consistency of our Lasso pilot estimate (2) and the dual solution \(\widehat{\ell}(x)\in\mathbb{R}^{d}\) to (6), we impose the following eigenvalue lower bound on the complete-case population gram matrix \(\mathrm{E}\left(RXX^{T}\right)\in\mathbb{R}^{d\times d}\).
**Assumption 4** (Lower eigenvalue bound on \(\mathrm{E}\left(RXX^{T}\right)\)).: _The minimum eigenvalue of the complete-case gram matrix \(\mathrm{E}\left(RXX^{T}\right)\in\mathbb{R}^{d\times d}\) is bounded away from zero, i.e., there exists a constant \(\kappa_{R}>0\) such that_
\[\inf_{v\in\mathbb{S}^{d-1}}\mathrm{E}\left[R(X^{T}v)^{2}\right]\geq\kappa_{R}^ {2}.\]
Assumption 4 ensures that the regression vector \(\beta_{0}\in\mathbb{R}^{d}\) is identifiable. We explicitly introduce the "constant" \(\kappa_{R}>0\) to handle cases in which the minimum eigenvalue of \(\mathrm{E}\left(RXX^{T}\right)\) vanishes (slowly) as \(d,n\rightarrow\infty\). When proving the consistency of the Lasso pilot estimate (2), however, we allow the common relaxation of Assumption 4 that the minimum eigenvalue of \(\mathrm{E}\left(RXX^{T}\right)\) is only bounded away from zero over the cone \(\mathcal{C}_{1}(S_{\beta},3)\):
\[\inf_{v\in\mathcal{C}_{1}(S_{\beta},3)\cap\mathbb{S}^{d-1}}\mathrm{E}\left[R( X^{T}v)^{2}\right]\geq\kappa_{R}^{2}, \tag{16}\]
where \(\mathcal{C}_{q}(S,\vartheta)=\left\{v\in\mathbb{R}^{d}:\left|\left|v_{S^{c}} \right|\right|_{q}\leq\vartheta\left|\left|v_{S}\right|\right|_{q}\right\}\) is a cone of dominant coordinates (Section 7.2 in Koltchinskii 2011) for some index set \(S\subset\{1,...,d\}\) and the parameter \(\vartheta>0\) with respect to the norm \(\left|\left|\cdot\right|\right|_{q}\) for \(q\geq 1\). Here, \(v_{S}\in\mathbb{R}^{|S|}\) denotes the subvector with elements of \(v\) indexed by \(S\) and \(v_{S^{c}}\) is defined in an analogous manner. This relaxed assumption (16) is known as the restricted eigenvalue condition in the literature; see Section 7.3.1 in Wainwright 2019 and Bickel et al. (2009); van de Geer and Buhlmann (2009).
While it is common in the literature to impose the restricted eigenvalue conditions on the sample-based complete-case gram matrix \(\frac{1}{n}\sum_{i=1}^{n}R_{i}X_{i}X_{i}^{T}\), Lemma E.7 in Section E.9 shows that under Assumption 2, the population and sample-based gram matrices are close with high probability, so (16) holds interchangeably on these two gram matrices in the setting of this paper.
As for the propensity score estimation, we introduce a mild assumption on the consistency of the estimated propensity scores \(\widehat{\pi}_{i}\) to the true propensity scores \(\pi_{i}=\pi(X_{i})\) for \(i=1,...,n\).
**Assumption 5** (Stochastic control of the propensity score estimation).: _Given any \(n\geq 1\) and \(\delta\in(0,1)\), there exists \(r_{\pi}\equiv r_{\pi}(n,\delta)>0\) such that_
\[\mathrm{P}\left(\max_{1\leq i\leq n}|\widehat{\pi}_{i}-\pi_{i}|>r_{\pi}\right) <\delta.\]
Assumption 5 is a non-asymptotic version of the stochastic boundedness for the estimation errors of the propensity scores evaluated at \(X_{i},i=1,...,n\). In applications, the exact rate of convergence \(r_{\pi}\) depends on the estimator, which can be parametric, nonparametric, or semiparametric. We demonstrate in Theorem 4 and Theorem 7 below that as long as \(r_{\pi}=r_{\pi}(n,\delta)\to 0\) for some \(\delta\equiv\delta_{n}\to 0\) as \(n\rightarrow\infty\), the solution to the dual program (6) is consistent and, consequently, the debiased estimator (4) is asymptotically normal; see Section 3.3 and Section 3.4 for details. The rate at which \(r_{\pi}\to 0\) can be very slow as in (20). These results unveil the possibility of combining our debiased estimator with any consistent estimates of the propensity scores; see Section 4.4 for the numerical experiments.
**Remark 1**.: _In contrast to the existing literature on regression models with missing outcomes, incomplete covariates, or potential outcomes (Rosenbaum and Rubin, 1983; Robins et al., 1994; Chakrabortty et al., 2019), we do not require the positivity condition \(\pi(x)=\mathrm{P}(R=1|X=x)>\pi_{\min}>0\) for all \(x\in\mathrm{support}(X)\subset\mathbb{R}^{d}\). As noted by D'Amour et al. (2021), the positivity condition is too stringent under the context of high-dimensional covariates. Instead, we merely assume that the complete-case population gram matrix \(\mathrm{E}\left(RXX^{T}\right)\) is positive definite as in Assumption 4, which is valid even when the positivity condition is violated. For example, \(\mathrm{E}\left(RXX^{T}\right)\) will be positive definite if \(\mathrm{support}(X)\) has a nonzero Lebesgue measure in \(\mathbb{R}^{d}\) and \(\pi(x)\) is positive within a subset of \(\mathrm{support}(X)\) with a nonzero Lebesgue measure._
We now state the definition of the population dual solution for which the solution of our dual formulation (6) is consistent. The population dual program for any \(x\in\mathbb{R}^{d}\) is defined by taking \(n\to\infty\) in (6) under the true propensity scores as:
\[\min_{\ell\in\mathbb{R}^{d}}\left\{\frac{1}{4}\,\mathrm{E}\left[R\left(X^{T} \ell\right)^{2}\right]+x^{T}\ell\right\}. \tag{17}\]
The exact solution to the above population dual program (17) is given by
\[\ell_{0}(x):=-2\left[\mathrm{E}\left(RXX^{T}\right)\right]^{-1}x\in\mathbb{R }^{d} \tag{18}\]
accordingly, provided that Assumption 4 holds; indeed, setting the gradient of (17) to zero gives \(\frac{1}{2}\,\mathrm{E}\left(RXX^{T}\right)\ell+x=0\), whose unique solution is (18). Since the population dual solution \(\ell_{0}(x)\) is not necessarily sparse, we consider the following definition of an \(r_{\ell}\)-approximation \(\widetilde{\ell}(x)\) in order to establish the consistency of \(\widehat{\ell}(x)\); see also Assumption 6.
**Definition 1** (\(r_{\ell}\)-approximation to \(\ell_{0}(x)\)).: _For any \(x\in\mathbb{R}^{d}\), we define an \(r_{\ell}\)-approximation to the population dual solution \(\ell_{0}(x)\) as:_
\[\widetilde{\ell}(x)\equiv\widetilde{\ell}(x;r_{\ell}):=\operatorname*{arg\,min}_{\ell(x)\in\mathbb{R}^{d}}\left\{||\ell(x)||_{0}:||\ell(x)-\ell_{0}(x)||_{2}\leq r_{\ell}\left|\left|\ell_{0}(x)\right|\right|_{2}\right\}, \tag{19}\]
_where \(r_{\ell}\in\left[0,\frac{1}{2}\right]\) controls the accuracy of approximation._
In general, a larger value of \(r_{\ell}\) gives rise to a sparser approximation \(\widetilde{\ell}(x)\) to the vector \(\ell_{0}(x)\) of interest. On the other hand, the consistency of \(\widehat{\ell}(x)\) requires the approximation parameter \(r_{\ell}\) for \(\widetilde{\ell}(x)\) to converge to \(0\) at a certain rate as \(n\to\infty\); see the discussion after Theorem 4 in Section 3.3 below. In other words, as the sample size \(n\) increases, the exact population dual solution \(\ell_{0}(x)\) must tend to a sparse vector in order for the consistency result to hold. We furnish several sufficient conditions and motivating examples under which Assumption 6 below holds in Section B.
**Assumption 6** (Sparsity of \(\widetilde{\ell}(x)\)).: _The \(r_{\ell}\)-approximation \(\widetilde{\ell}(x)\in\mathbb{R}^{d}\) to the exact population dual solution \(\ell_{0}(x)\in\mathbb{R}^{d}\) exists and satisfies_
\[S_{\ell}(x):=\mathrm{support}\left(\widetilde{\ell}(x)\right)=\left\{1\leq k \leq d:\widetilde{\ell}_{k}(x)\neq 0\right\}\ \ \text{and}\ \ s_{\ell}(x):=\left|\left|\widetilde{\ell}(x)\right|\right|_{0}\ll\min\{n,d\}.\]
**Remark 2**.: _Under the stronger Assumption 4, \(\mathrm{E}(RXX^{T})\) also satisfies the \(\mathcal{C}_{1}(S_{\ell}(x),3)\)-restricted eigenvalue condition with some parameter \(\kappa_{R}^{2}>0\) defined in (16)._
### Consistency Results
In this section, we present the consistency results of our Lasso pilot estimate (2) and the solution to the dual program (6). Additionally, we discuss the strong duality theory of our debiasing inference method and validate the asymptotic negligibility of the "Conditional variance II" term in Proposition 2 of Section 2.3.
**Theorem 3** (Consistency of the Lasso Pilot Estimate with Complete-Case Data).: _Let \(\delta\in(0,1)\) and \(A_{1},A_{2}>0\) be some absolute constants. Suppose that Assumptions 1, 2, and 4 (more precisely, Eq.(16)) hold and \(n\) is sufficiently large so that_
\[\sqrt{\frac{s_{\beta}\log(ed/s_{\beta})}{n}}+\sqrt{\frac{\log(1/ \delta)}{n}}<\min\left\{\frac{\kappa_{R}^{2}}{A_{1}\phi_{s_{\beta}}},1\right\}.\]
1. _If_ \(\lambda>\frac{4}{n}\left|\left|\sum_{i=1}^{n}R_{i}X_{i}\epsilon_{i}\right|\right|_{\infty}>0\) _and_ \(d\geq s_{\beta}+2\)_, then with probability at least_ \(1-\delta\)_,_ \[\left|\left|\widehat{\beta}-\beta_{0}\right|\right|_{2}\lesssim \frac{\sqrt{s_{\beta}}\lambda}{\kappa_{R}^{2}}.\]
2. _If, in addition, Assumption_ 3 _holds and_ \(\lambda=A_{2}\sigma_{\epsilon}\sqrt{\frac{\phi_{1}\log(d/\delta)}{n}}\)_, then with probability at least_ \(1-\delta\)_,_ \[\left|\left|\widehat{\beta}-\beta_{0}\right|\right|_{2}\lesssim \frac{\sigma_{\epsilon}}{\kappa_{R}^{2}}\sqrt{\frac{\phi_{1}s_{\beta}\log(d/ \delta)}{n}}.\]
**Remark 3**.: _If \(\log d=o(n)\) with \(d>n\) and \(\sigma_{\epsilon},\phi_{1}>0\) are independent of \(n\) and \(d\), then we obtain the asymptotic consistency result \(\left|\left|\widehat{\beta}-\beta_{0}\right|\right|_{2}=O_{P}\left(\frac{1}{ \kappa_{R}^{2}}\sqrt{\frac{s_{\beta}\log d}{n}}\right)\) by setting \(\delta=\frac{1}{d}\)._
The proof of Theorem 3 is in Section E.3. It implies that the Lasso pilot estimate \(\widehat{\beta}\) in (2) based on the complete-case data is consistent. This property is a consequence of our MAR assumption and the lower eigenvalue bound condition on \(\mathrm{E}(RXX^{T})\) (Assumption 4), because the conditional distribution of \(Y\) given \(X\) remains unchanged on the complete-case data (\(R=1\)) and the loss function in (2) is "strictly convex" over the cone in which \(\widehat{\beta}-\beta_{0}\) lies.
Next, we present the consistency result for the dual solution \(\widehat{\ell}(x)\) to the sparse \(r_{\ell}\)-approximation \(\widetilde{\ell}(x)\). Among other things, the rate of consistency for \(\widehat{\ell}(x)\) depends on the rate of consistency \(r_{\pi}=r_{\pi}(n,\delta)\) for estimating propensity scores.
**Theorem 4** (Consistency of the dual solution \(\widehat{\ell}(x)\)).: _Let \(\delta\in(0,1)\), \(x\in\mathbb{R}^{d}\) be a fixed covariate vector, and \(A_{1},A_{2},A_{3}>0\) be some large absolute constants. Suppose that Assumptions 1, 2, 4, 5,
_and 6 hold as well as_
\[\sqrt{\frac{s_{\ell}(x)\log(ed/s_{\ell}(x))}{n}}+\sqrt{\frac{\log(1/\delta)}{n}}+r _{\pi}<\min\left\{\frac{\kappa_{R}^{2}}{A_{1}\phi_{s_{\ell}(x)}},1\right\}\quad \text{ and }\quad d\geq s_{\ell}(x)+2.\]
1. _Suppose that_ \(\gamma\geq\big{|}\big{|}\frac{5}{2}\sum_{i=1}^{n}\widehat{\pi}_{i}X_{i}X_{i}^ {T}\ell_{0}(x)+5nx\big{|}\big{|}_{\infty}\) _and_ \(r_{\ell}<\min\left\{\frac{\kappa_{R}^{2}}{\big{|}x\big{|}_{2}}\sqrt{\frac{2 \gamma}{13\nu}},\,\frac{1}{2}\right\}\)_, where_ \(\nu>0\) _is defined in (_58_) of Lemma_ E.9_. Then, with probability at least_ \(1-\delta\)_,_ \[\Big{|}\Big{|}\widehat{\ell}(x)-\widetilde{\ell}(x)\Big{|}\Big{|}_{2}\lesssim \frac{\gamma\sqrt{s_{\ell}(x)}}{n\cdot\kappa_{R}^{2}}+\frac{\phi_{s_{\ell}(x)} \left|\left|x\right|\right|_{2}}{\kappa_{R}^{4}}\cdot r_{\ell}.\]
2. _We denote_ \(r_{\gamma}\equiv r_{\gamma}(n,\delta)=\frac{\sqrt{\phi_{1}}}{\kappa_{R}}\left| \left|x\right|\right|_{2}\sqrt{\frac{\log(d/\delta)}{n}}+\frac{\sqrt{\phi_{1} \phi_{s_{\ell}(x)}}\left|\left|x\right|\right|_{2}}{\kappa_{R}^{2}}\cdot r_{ \pi}(n,\delta)\)_. If, in addition,_ \(\frac{\gamma}{n}=A_{2}\cdot r_{\gamma}\) _and_ \[r_{\ell}\leq\min\left\{\frac{1}{2},\,\left[\left(\frac{A_{2}^{2}\,\kappa_{R}^ {6}\,\phi_{1}}{A_{3}^{2}\left|\left|x\right|\right|_{2}\phi_{s_{\ell}(x)}} \right.\right)^{\frac{1}{4}}\left(\frac{\log(d/\delta)}{n}\right)^{\frac{1}{4} }+\left(\frac{A_{2}^{2}\phi_{1}\kappa_{R}^{4}}{A_{3}^{2}\phi_{s_{\ell}(x)} \left|\left|x\right|\right|_{2}^{2}}\right)^{\frac{1}{4}}\sqrt{r_{\pi}}\right] \right\},\] _then with probability at least_ \(1-\delta\)_,_ \[\Big{|}\Big{|}\widehat{\ell}(x)-\widetilde{\ell}(x)\Big{|}\Big{|}_{2}\lesssim \frac{\left|\left|x\right|\right|_{2}}{\kappa_{R}^{3}}\left[\sqrt{\frac{\phi_ {1}s_{\ell}(x)\log(d/\delta)}{n}}+\frac{\phi_{s_{\ell}(x)}}{\kappa_{R}}\cdot r _{\ell}+\frac{\sqrt{\phi_{1}\phi_{s_{\ell}(x)}s_{\ell}(x)}}{\kappa_{R}}\cdot r _{\pi}\right].\]
**Remark 4**.: _If \(\log d=o(n)\) with \(d>n\), \(\left|\left|x\right|\right|_{2}=O(1)\), and \(\phi_{1},\phi_{s_{\ell}(x)}\) are independent of \(n\) and \(d\), then we obtain the asymptotic consistency result_
\[\Big{|}\Big{|}\widehat{\ell}(x)-\widetilde{\ell}(x)\Big{|}\Big{|}_{2}=O_{P} \left(\frac{1}{\kappa_{R}^{3}}\sqrt{\frac{s_{\ell}(x)\log d}{n}}+\frac{r_{ \ell}}{\kappa_{R}^{4}}+\frac{r_{\pi}\sqrt{s_{\ell}(x)}}{\kappa_{R}^{4}}\right)\]
_by setting \(\delta=\frac{1}{d}\). This asymptotic rate of convergence comprises two parts. The first part is of the order \(O_{P}\left(\frac{1}{\kappa_{R}^{3}}\sqrt{\frac{s_{\ell}(x)\log d}{n}}+\frac{r_{\ell}}{\kappa_{R}^{4}}\right)\), which is the oracle rate of convergence for \(\Big{|}\Big{|}\widehat{\ell}(x)-\widetilde{\ell}(x)\Big{|}\Big{|}_{2}\) when the true propensity scores are known; see Lemma E.1 of Section E.4. The second part is of the order \(O_{P}\left(\frac{r_{\pi}\sqrt{s_{\ell}(x)}}{\kappa_{R}^{4}}\right)\), which is related to the consistency of the propensity score estimation._
The proof of Theorem 4 is in Section E.4. An immediate consequence of this result is that the asymptotic rate of convergence of \(\Big{|}\Big{|}\widehat{\ell}(x)-\ell_{0}(x)\Big{|}\Big{|}_{2}\) is the same as in Remark 4, because by triangle inequality,
\[\Big{|}\Big{|}\widehat{\ell}(x)-\ell_{0}(x)\Big{|}\Big{|}_{2}\leq\Big{|}\Big{|} \widehat{\ell}(x)-\widetilde{\ell}(x)\Big{|}\Big{|}_{2}+\Big{|}\Big{|} \widetilde{\ell}(x)-\ell_{0}(x)\Big{|}\Big{|}_{2}\lesssim\Big{|}\Big{|} \widehat{\ell}(x)-\widetilde{\ell}(x)\Big{|}\Big{|}_{2}+\frac{||x||_{2}}{\kappa_ {R}^{2}}\cdot r_{\ell},\]
and the last term on the right-hand side is asymptotically negligible compared to the second term
\(\frac{\phi_{s_{\ell}(x)}\|x\|_{2}}{\kappa_{R}^{3}}\cdot r_{\ell}\) in Theorem 4(b). This implies that the dual solution \(\widehat{\ell}(x)\) is consistent for the population dual solution \(\ell_{0}(x)\) under the setup of Remark 4 if
\[\frac{1}{\kappa_{R}^{3}}\sqrt{\frac{s_{\ell}(x)\log d}{n}}\to 0,\quad\frac{r_{ \ell}}{\kappa_{R}^{4}}\to 0,\quad\frac{r_{\pi}\sqrt{s_{\ell}(x)}}{\kappa_{R}^{4}} \to 0\quad\text{ as }\quad n\to\infty.\]
In particular, we only need to consistently estimate the propensity scores, while the rate of consistency \(r_{\pi}\) is essentially irrelevant. Hence, we are completely free to choose whatever consistent estimator suits us best, including those from modern machine learning methods such as deep neural networks (Farrell et al., 2021), random forests (Gao et al., 2022), and support vector machines with universal kernels (such as the Gaussian radial basis function) (Steinwart, 2001). For concreteness, we present in Proposition C.1 of Section C that under comparatively mild regularity conditions, the Lasso-type generalized linear regression estimator (26) is a feasible option.
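As a concrete illustration of this flexibility, the snippet below (ours; scikit-learn is assumed, and the particular classifier and tuning choices are illustrative) produces estimated propensity scores \(\widehat{\pi}_{i}\) from a random forest, which can be plugged directly into **Step 3** of the debiasing procedure; any other consistent classifier could be substituted.

```python
# A hedged sketch of nonparametric propensity score estimation; illustrative only.
from sklearn.ensemble import RandomForestClassifier

def propensity_random_forest(X, R, min_leaf=20):
    # Predicted class-1 probabilities serve as the estimated propensity scores.
    model = RandomForestClassifier(n_estimators=500, min_samples_leaf=min_leaf)
    return model.fit(X, R).predict_proba(X)[:, 1]
```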
The assumptions in Theorem 4 are also sufficient to establish the strong duality between the primal debiasing program (3) and its dual (6):
**Lemma 5** (Sufficient conditions for strong duality).: _Let \(\delta\in(0,1)\). Under the assumptions of Theorem 4(b), there exists \(\gamma>0\) such that \(\frac{\gamma}{n}\asymp r_{\gamma}\equiv r_{\gamma}(n,\delta)\) and the strong duality as well as the relation (7) in Proposition 1 hold with probability at least \(1-\delta\)._
The consistency results for the Lasso pilot estimate \(\widehat{\beta}\) and the dual solution \(\widehat{\ell}(x)\) also imply that under mild regularity conditions, the "Conditional variance II" term in Proposition 2 is asymptotically negligible. The proof of Lemma 6 is in Section E.5. This result completes the motivation of the debiasing program (3) from the perspective of (conditional) bias-variance trade-off in Section 2.3.
**Lemma 6** (Negligible conditional variance term).: _Suppose that Assumptions 1, 2, 3, 4, 5, and 6 hold as well as_
\[\left[s_{\beta}\log(d/s_{\beta})+s_{\ell}(x)\log(d/s_{\ell}(x))\right]^{2}=O \left(\sqrt{n}\right),\quad\frac{||x||_{2}}{\kappa_{R}^{2}}\sqrt{\phi_{2s_{ \ell}(x)}\phi_{2s_{\beta}}}=O(1),\quad\text{ and }\quad r_{\ell}=O(1).\]
_If \(\left|\left|\widehat{\beta}-\beta_{0}\right|\right|_{2}=o_{P}(1)\) and \(\left|\left|\widehat{\ell}(x)-\widetilde{\ell}(x)\right|\right|_{2}=o_{P}(1)\), then the "Conditional variance II" term in Proposition 2 is asymptotically negligible in the sense that_
\[\left(\beta_{0}-\widehat{\beta}\right)^{T}\left[\sum_{i=1}^{n}\widehat{w}_{i }^{2}(x)\widehat{\pi}_{i}\left(1-\widehat{\pi}_{i}\right)X_{i}X_{i}^{T} \right]\left(\beta_{0}-\widehat{\beta}\right)=o_{P}(1).\]
### Asymptotic Normality
In this section, we leverage the consistency results from Section 3.3 to establish the asymptotic normality of the debiased estimator \(\widehat{m}^{\text{debias}}(x;\widehat{\mathbf{w}})\) in (4). As mentioned in Section 3.1, the key observation for applying the CLT is the linear expansion (13) based on the dual solution.
**Theorem 7** (Asymptotic normality of the debiased estimator).: _Let \(x\in\mathbb{R}^{d}\) be a fixed query point and \(s_{\max}=\max\left\{s_{\beta},s_{\ell}(x)\right\}\). Suppose that Assumptions 1, 2, 3, 4, 5, and 6 hold and \(\lambda,\gamma,r_{\ell}>0\)
_are specified as in Theorem 3(b) and Theorem 4(b), respectively. Furthermore, we assume that \(\sigma_{\epsilon}\phi_{1}\phi_{s_{\max}}^{3/2}\left\|x\right\|_{2}=O(1)\),_
\[\frac{(1+\kappa_{R}^{2})s_{\max}\log(nd)}{\kappa_{R}^{4}}=o\left(\sqrt{n} \right),\quad\text{ and }\quad\frac{(1+\kappa_{R}^{4})\sqrt{s_{\max}\log(nd)}}{\kappa_{R}^{6}} \left(r_{\ell}+r_{\pi}\right)=o(1). \tag{20}\]
_Then,_
\[\sqrt{n}\left[\widehat{m}^{\text{debias}}(x;\widehat{\mathbf{w}})-m_{0}(x)\right] \overset{d}{\to}\mathcal{N}\left(0,\,\sigma_{m}^{2}(x)\right).\]
The proof of Theorem 7 can be found in Section E.6, and we make two remarks on this central result.
**Remark 5** (Semi-parametric efficiency).: _Given any fixed dimension \(d>0\), the asymptotic variance_
\[\sigma_{m}^{2}(x)=\sigma_{\epsilon}^{2}\cdot x^{T}\left[\operatorname{E} \left(RXX^{T}\right)\right]^{-1}x=\sigma_{\epsilon}^{2}\cdot x^{T}\left\{ \operatorname{E}\left[\pi(X)XX^{T}\right]\right\}^{-1}x\]
_for our debiased estimator achieves the semi-parametric efficiency bound among all asymptotically linear estimators with the MAR outcome (Muller and Keilegom, 2012). It also attains the efficiency bound for de-sparsified Lasso in Theorem 3 of Jankova and van de Geer (2018) under Gaussian covariates and noises. This key property demonstrates the superiority of our debiasing inference method in terms of statistical efficiency._
**Remark 6** (Necessary sparsity condition).: _Compared with the consistency results in Theorem 4, we impose a stricter growth condition \(s_{\max}=o\left(\frac{\sqrt{n}}{\log d}\right)\) on the sparsity level when \(d>n\) in order to establish the asymptotic normality, provided that other model parameters \(\sigma_{\epsilon},\phi_{s_{\max}},\kappa_{R}\) are independent of \(n\) and \(d\). Such a requirement on the sparsity level is a standard and essentially necessary condition in the high-dimensional debiased inference literature in pursuit of asymptotic normality (van de Geer et al., 2014; Javanmard and Montanari, 2014, 2018; Cai et al., 2021); see also the discussion in Section 8.6 of Jankova and van de Geer (2018)._
In order to utilize the asymptotic normality of Theorem 7 and conduct inference on \(m_{0}(x)\) in practice, we need to estimate the asymptotic variance \(\sigma_{m}^{2}(x)\). The following Proposition 8 verifies that the objective function \(\sum_{i=1}^{n}\widehat{\pi}_{i}\widehat{w}_{i}^{2}(x)\) in our debiasing program (3) is a consistent estimator of \(\sigma_{m}^{2}(x)/\sigma_{\epsilon}^{2}\), so that multiplying it by the known noise level \(\sigma_{\epsilon}^{2}\) consistently estimates \(\sigma_{m}^{2}(x)\); see Section E.7 for its proof. This result again motivates the design of our debiasing program as in Section 2.3 and supports the formulation of our asymptotically valid confidence interval (5). When the noise level \(\sigma_{\epsilon}^{2}\) is unknown in practice, one can always replace it with a consistent estimator \(\widehat{\sigma}_{\epsilon}^{2}\) without losing any asymptotic efficiency and obtain the final estimator \(\widehat{\sigma}_{\epsilon}^{2}\sum_{i=1}^{n}\widehat{\pi}_{i}\widehat{w}_{i}^{2}(x)\) for the asymptotic variance. There are various approaches available in the literature to construct such an estimator for \(\sigma_{\epsilon}^{2}\) (Zhang, 2010; Fan et al., 2012; Dicker, 2012; Bayati et al., 2013; Belloni and Chernozhukov, 2013; Reid et al., 2016). In our simulation studies and real-world application, we use the scaled Lasso (Sun and Zhang, 2012) to obtain \(\widehat{\sigma}_{\epsilon}^{2}\); see Section A for details.
**Proposition 8** (Consistent estimate of the asymptotic variance).: _Let \(x\in\mathbb{R}^{d}\) be a fixed query point. Suppose that Assumptions 1, 2, 4, 5, and 6 hold and \(\gamma,r_{\ell}>0\) are specified as in Theorem 4(b). Furthermore, we assume that \(||x||_{2}^{2}\phi_{s_{\ell}(x)}^{2}=O(1)\), \(\frac{(1+\kappa_{R}^{3})}{\kappa_{R}^{5}}\sqrt{\frac{s_{\ell}(x)\log(nd)}{n}}=o (1)\), and \(\frac{(1+\kappa_{R}^{4})}{\kappa_{R}^{6}}\left[r_{\ell}+r_{\pi}\sqrt{s_{\ell}( x)}\right]=o(1)\). Then,_
\[\left|\sum_{i=1}^{n}\widehat{\pi}_{i}\widehat{w}_{i}^{2}(x)-x^{T}\left[\mathrm{ E}\left(RXX^{T}\right)\right]^{-1}x\right|=o_{P}(1).\]
## 4 Experiments
In this section, we evaluate the empirical performance of our proposed debiasing inference method in Section 2.1 and compare it with some existing high-dimensional inference methods in the literature through comprehensive simulation studies.
### Basic Simulation Design and Methods to Be Compared
We generate the i.i.d. data \(\{(Y_{i},R_{i},X_{i})\}_{i=1}^{n}\subset\mathbb{R}\times\{0,1\}\times\mathbb{ R}^{d}\) from the following linear model
\[Y_{i}=X_{i}^{T}\beta_{0}+\epsilon_{i}\quad\text{ with }\quad X_{i}\bot\! \!\!\bot\epsilon_{i},\quad Y_{i}\bot\!\!\!\bot R_{i}|X_{i},\quad\text{ and }\quad X_{i}\sim\mathcal{N}_{d}(\mathbf{0},\Sigma), \tag{21}\]
where \(d=1000\) and \(n=900\) unless stated otherwise. In order to investigate the effect of different covariance structures \(\Sigma\in\mathbb{R}^{d\times d}\), sparsity patterns of the true regression vector \(\beta_{0}\in\mathbb{R}^{d}\), and designs of the query point \(x\in\mathbb{R}^{d}\) on the finite-sample performances of the candidate high-dimensional inference methods, we consider all possible combinations of the following simulation designs.
**Choices of the covariance matrix:** We consider two standard designs of \(\Sigma=(\Sigma_{jk})_{j,k=1}^{d}\) from the literature:
1. The circulant symmetric matrix \(\Sigma^{\mathrm{cs}}\) in Javanmard and Montanari (2014a) defined by \(\Sigma_{jj}=1\), \(\Sigma_{jk}=0.1\) when \(j+1\leq k\leq j+5\) or \(j+d-5\leq k\leq j+d-1\) with \(\Sigma_{jk}=0\) elsewhere for \(j\leq k\), and \(\Sigma_{jk}=\Sigma_{kj}\);
2. The Toeplitz (or auto-regressive) matrix \(\Sigma^{\mathrm{ar}}\) in van de Geer et al. (2014) defined by \(\Sigma_{jk}=0.9^{|j-k|}\).
**Designs of the regression vector:** For the true value of \(\beta_{0}\in\mathbb{R}^{d}\), we consider three different scenarios as:
1. \(\beta_{0}^{sp}=\left(\underbrace{\sqrt{5},...,\sqrt{5}}_{5},0,...,0\right)^{ T}\in\mathbb{R}^{d}\) is sparse;
2. \(\beta_{0}^{de}\propto\left(1,\frac{1}{\sqrt{2}},...,\frac{1}{\sqrt{d}}\right)^ {T}\in\mathbb{R}^{d}\) is dense with \(\left|\left|\beta_{0}^{de}\right|\right|_{2}=5\);
3. \(\beta_{0}^{pd}\propto\left(1,\frac{1}{2},...,\frac{1}{d}\right)^{T}\in\mathbb{R}^{d}\) is pseudo-dense with \(\left|\left|\beta_{0}^{pd}\right|\right|_{2}=5\).
**Designs of the query point:** We conduct experiments on the following four different choices of the query point \(x\in\mathbb{R}^{d}\):
1. \(x^{(1)}=(1,0,...,0)^{T}\in\mathbb{R}^{d}\) to infer the individual effect of the first (significantly nonzero) component of \(\beta_{0}\);
2. \(x^{(2)}=\left(1,\frac{1}{2},\frac{1}{4},0,0,0,\frac{1}{2},\frac{1}{8},0,...,0 \right)^{T}\in\mathbb{R}^{d}\) to infer the joint effects of a few components of \(\beta_{0}\);
3. \(x^{(3)}=\left(0,...,0,\underbrace{1}_{100^{th}},0,...,0\right)^{T}\in\mathbb{ R}^{d}\) to infer an inactive (_i.e._, truly zero or close to zero) component of \(\beta_{0}\);
4. \(x^{(4)}=\left(1,\frac{1}{2^{2}},...,\frac{1}{d^{2}}\right)^{T}\in\mathbb{R}^{d}\) for the circulant symmetric covariance cases, and \(x^{(4)}=\left(1,\frac{1}{2},...,\frac{1}{d}\right)^{T}\in\mathbb{R}^{d}\) for the Toeplitz covariance cases. The purpose of choosing a relatively dense query point \(x^{(4)}\) is to study the joint effect of all the components of \(\beta_{0}\).
**MAR mechanism:** After \(Y_{i},i=1,...,n\) are sampled according to (21), we define the missingness indicators \(R_{i},i=1,...,n\) for \(Y_{i},i=1,...,n\) through the MAR mechanism by
\[\text{P}(R_{i}=1|X_{i})=\frac{1}{1+\exp\left(-1+X_{i7}-X_{i8}\right)}\quad \text{ for }\quad i=1,...,n. \tag{22}\]
In general, the above MAR mechanism yields around \(28\%\) of missingness for the outcome variables \(Y_{i},i=1,...,n\), making the complete-case data even more high-dimensional. Additional simulation results under a simpler MCAR setting are deferred to Section D.2.
**Noise distribution:** We first generate the noise variables \(\epsilon_{i},i=1,...,n\) independently from \(\mathcal{N}(0,1)\). Other types of noise distributions that violate the sub-Gaussian noise condition (Assumption 3) are considered in Section 4.3.
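For concreteness, the following sketch (ours) generates one data set from the basic design (21)-(22) with the circulant symmetric covariance \(\Sigma^{\mathrm{cs}}\), the sparse vector \(\beta_{0}^{sp}\), standard Gaussian noise, and the MAR mechanism (22).

```python
# One draw from the basic simulation design (21)-(22); illustrative sketch.
import numpy as np

rng = np.random.default_rng(2023)
d, n = 1000, 900

# Circulant symmetric covariance: Sigma_jj = 1 and Sigma_jk = 0.1 for the five
# nearest neighbours on either side (wrapping around), 0 otherwise.
Sigma = np.eye(d)
idx = np.arange(d)
for offset in range(1, 6):
    Sigma[idx, (idx + offset) % d] = 0.1
    Sigma[(idx + offset) % d, idx] = 0.1

beta0 = np.zeros(d)
beta0[:5] = np.sqrt(5)                          # the sparse design beta_0^sp

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
eps = rng.normal(size=n)
Y = X @ beta0 + eps

# MAR mechanism (22): missingness depends only on X_7 and X_8.
prob_obs = 1.0 / (1.0 + np.exp(-1 + X[:, 6] - X[:, 7]))
R = rng.binomial(1, prob_obs)                   # around 28% of the Y_i's are missing
```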
**Methods to be compared:** To demonstrate the statistical efficiency of our proposed debiasing method **("Debias")**, we compare it with several existing inference methods in the literature. The detailed implementation of our proposed method can be found in Section A. There we also explain the three different criteria "min-CV", "1SE", and "min-feas" for selecting the tuning parameter \(\gamma>0\) via cross-validation. We implement all of the following comparative methods on (i) the complete-case (CC) data \((X_{i},Y_{i})\in\mathbb{R}^{d}\times\mathbb{R}\) with \(R_{i}=1\) for \(i=1,...,n\), (ii) the inverse probability weighted (IPW) data \(\left(\frac{X_{i}}{\sqrt{\widehat{\pi}_{i}}},\frac{Y_{i}}{\sqrt{\widehat{\pi}_{i}}}\right)\in\mathbb{R}^{d}\times\mathbb{R}\) with \(R_{i}=1\) for \(i=1,...,n\), and (iii) the oracle fully observed data \((X_{i},Y_{i}),i=1,...,n\). Here, \(\widehat{\pi}_{i},i=1,...,n\) are the propensity scores estimated by the Lasso-type logistic regression described in Section C.
1. **"DL-Jav"** is the debiased Lasso proposed by Javanmard and Montanari (2014a) that solves \(d\) convex programs for an unbiased estimator for \(\beta_{0}\) and a consistent estimate for the asymptotic covariance matrix. The method is implemented by the source code sslasso provided in their paper. We select the tuning parameters as their Section 5.1.
2. **"DL-vdG"** is the debiased Lasso proposed by van de Geer et al. (2014) that solves \(d\) nodewise regressions for the debiased estimator of \(\beta_{0}\) and the estimate of its asymptotic covariance matrix. This method is implemented via the function lasso.proj in the R package hdi(Dezeure et al., 2015).
3. **"R-Proj"** is the ridge projection method proposed by Buhlmann (2013) that produces a bias-corrected estimator of \(\beta_{0}\) via a ridge regression estimator. This method is implemented by the function ridge.proj in the R package hdi. We initialize the estimation of \(\beta_{0}\) with an estimate obtained from the scaled Lasso. Since it is difficult to obtain a consistent estimate of the asymptotic covariance matrix from the function ridge.proj, we only run "R-Proj" when the query point is \(x^{(1)}\) or \(x^{(3)}\). This means that we only conduct inference on single coordinates of \(\beta_{0}\) under these scenarios.
4. **"Refit"** first runs the scaled Lasso to obtain a pilot estimate \(\widehat{\beta}\) and then computes a final the least-square estimate based on covariates in the support set of \(\widehat{\beta}\)(Belloni and Chernozhukov, 2013).
Confidence intervals for "DL-Jav", "DL-vdG", and "R-Proj" are constructed according to their asymptotic normality results, while the confidence intervals from "Refit" are based on the assumption that the model selected by the Lasso pilot estimate \(\widehat{\beta}\) contains the true support set. Contrary to our asymptotic normality theory in Section 3.4, these estimators need not be semi-parametrically efficient.
### Simulation Results Under the Gaussian Noise Distribution
We evaluate the performances of our proposed debiasing method and the existing inference methods stated above via the absolute bias \(\big{|}\widehat{m}^{\text{debias}}(x;\widehat{\mathbf{w}})-m_{0}(x)\big{|}\), the average coverage of confidence intervals with the 95% nominal coverage probability, and the average length of confidence intervals across 1000 Monte Carlo experiments for each simulation scenario stated in Section 4.1. Since it is very time-consuming to run "DL-vdG", we only execute 500 Monte Carlo experiments for this method under each scenario.
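As a sketch, these three evaluation metrics for one method and one scenario can be computed from the Monte Carlo output as follows, where `est`, `lo`, and `hi` collect the point estimates and confidence-interval endpoints over the repetitions and `m0` denotes the true value \(x^{T}\beta_{0}\).

```python
import numpy as np

def mc_summary(est, lo, hi, m0):
    """Average absolute bias, empirical coverage of nominal 95% CIs, and average CI length."""
    est, lo, hi = map(np.asarray, (est, lo, hi))
    abs_bias = np.mean(np.abs(est - m0))
    coverage = np.mean((lo <= m0) & (m0 <= hi))
    avg_length = np.mean(hi - lo)
    return abs_bias, coverage, avg_length
```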
We provide comparative plots of the three evaluation metrics and normality checks for selected simulation settings in Figure 2 and Figure 3, while deferring the full simulation results for the circulant symmetric and Toeplitz (or auto-regressive) covariance designs to Table 1 and Table 2 in Section D.1, respectively. In summary, our proposed debiasing method removes roughly the same amount of bias as the debiased Lasso "DL-Jav" and the Lasso refitting "Refit" but yields relatively shorter confidence intervals and more accurate coverage probabilities than these two methods. While the ridge projection "R-Proj" sometimes produces better coverage probabilities than other methods, it always has larger biases and more conservative confidence intervals.
The superiority of our debiasing method is better revealed when the query point \(x\in\mathbb{R}^{d}\) has many nonzero entries. Since the competing methods are asymptotically normal only coordinate-wise, they suffer from substantial biases and inflated variances if applied to a dense query point. In contrast, the debiased estimator from our method is asymptotically semi-parametrically efficient for arbitrary \(x\) by Theorem 7, and its asymptotic variance can be naturally estimated via Proposition 8. Hence, the asymptotic normality of our debiased estimator is better exemplified in finite samples than that of the competing methods. All these simulation results consolidate the derived theoretical properties of our debiasing method in Section 3 and demonstrate its effectiveness in conducting valid statistical inference under missing outcomes.
Figure 2: Simulation results with sparse \(\beta_{0}^{sp}\) and sparse \(x^{(2)}\) for the circulant symmetric covariance matrix \(\Sigma^{\text{cs}}\) under the Gaussian noise \(\mathcal{N}(0,1)\) and the MAR setting. **Top Left:** Boxplots of the absolute bias. **Top Right:** Coverage probabilities with standard error bars. **Bottom Left:** Average lengths of confidence intervals with standard error bars. **Bottom Right:** Four comparative “QQ-plots” of the transformed studentized debiased estimators \(\Phi\left(\sqrt{n}\cdot\widehat{\sigma}_{n}^{-1}(x)\left[\widehat{m}(x)-m_{0}( x)\right]\right)\) obtained from different debiasing methods, where \(\widehat{\sigma}_{n}(x)^{2}\) is the estimated (asymptotic) variance of \(\widehat{m}(x)\) by each method and \(\Phi(\cdot)\) is the CDF of \(\mathcal{N}(0,1)\). We also highlight the recommended rules “1SE” and “min-feas” for our proposed debiasing method via dashed rectangles in the first three panels.
### Simulation Results Under Heavy-Tailed Noise Distributions
The consistency and asymptotic normality results for our debiasing inference methods are derived under the sub-gaussian noise condition (Assumption 3), and our previous simulation results in Section 4.2 are conducted with normally distributed noises accordingly.
Figure 3: Simulation results with pseudo-dense \(\beta_{0}^{pd}\) and sparse \(x^{(2)}\) for the Toeplitz (or auto-regressive) covariance matrix \(\Sigma^{\mathrm{ar}}\) under the Gaussian noise \(\mathcal{N}(0,1)\) and the MAR setting. **Top Left:** Boxplots of the absolute bias. **Top Right:** Coverage probabilities with standard error bars. **Bottom Left:** Average lengths of confidence intervals with standard error bars. **Bottom Right:** Four comparative “QQ-plots” of the transformed studentized debiased estimators obtained from different debiasing methods, where \(\widehat{\sigma}_{n}(x)^{2}\) is the estimated (asymptotic) variance of \(\widehat{m}(x)\) by each method and \(\Phi(\cdot)\) is the CDF of \(\mathcal{N}(0,1)\). We also highlight the recommended rules “1SE” and “min-feas” for our proposed debiasing method via dashed rectangles in the first three panels.
We showcase through additional simulations that the asymptotic normality of our debiased estimator (4) is valid even when the distribution of the noise \(\epsilon\) in model (1) is not sub-gaussian. To this end, we follow the simulation designs in Section 4.1 but explore two heavy-tailed noise distributions as:
1. \(\epsilon\) follows a Laplace distribution with mean \(0\) and scale parameter \(\frac{1}{\sqrt{2}}\). Note that \(\epsilon\) becomes sub-exponentially distributed and has variance \(\text{Var}(\epsilon)=1\).
2. \(\epsilon\) follows a mean-zero \(t\)-distribution with \(2\) degrees of freedom. In this case, \(\epsilon\) is not even sub-exponential and has infinite variance.
Since "DL-vdG" is time-consuming to run and follows a similar debiasing approach as "DL-Jav", we choose not to implement "DL-vdG" on the above two heavy-tailed distributions of \(\epsilon\). We present the selected plots and asymptotic normality validations in Figure4 and Figure5, while again deferring the full comparative simulation results for the circulant symmetric and Toeplitz covariance designs to Table3-4 and Table5-6 in SectionD.1, respectively. In short, the debiased estimator appears to be approximately normally distributed in finite samples even when the noise distribution is heavy-tailed, while other competing methods fail to maintain their asymptotic normality. Other conclusions from these simulation results are similar to what we have described in Section4.2.
### Simulation Results for the Proposed Debiasing Method With Nonparametrically Estimated Propensity Scores
As shown in Theorem 4 and Theorem 7, the consistency and asymptotic normality of the proposed debiasing method do not require any parametric assumption on the propensity score \(\text{P}(R=1|X)\) under the MAR condition. Therefore, we now conduct additional simulation studies for our debiasing method in which the propensity scores \(\pi(X_{i}),i=1,...,n\) are estimated by the following nonlinear/nonparametric machine learning methods based on the observed data \(\{(X_{i},R_{i})\}_{i=1}^{n}\). All of these methods are implemented in the scikit-learn package (Pedregosa et al., 2011) in Python.
**Naive Bayes ("NB"):** We adapt the Gaussian naive Bayes method (Zhang, 2004) to model the propensity scores through
\[\pi(X_{i})=\text{P}(R_{i}|X_{i})=\text{P}(R_{i}|X_{i1},...,X_{id})=\frac{\text {P}(R_{i})\cdot\prod_{k=1}^{d}\text{P}(X_{ik}|R_{i})}{\text{P}(X_{i1},...,X_{ id})}\]
for any \(i=1,...,n\), where \(\text{P}(X_{ik}|R_{i})\) is the density of \(\mathcal{N}(\mu_{R},\sigma_{R}^{2})\) and \(\mu_{R},\sigma_{R}\) are estimated by the maximum likelihood approach.
**Random Forest ("RF"):** We implement the random forest method (Breiman, 2001) with \(100\) trees, bootstrapping samples, and the Gini impurity to measure the quality of a split.
**Support Vector Machine ("SVM"):** We fit a support vector machine (Chen et al., 2005) with the Gaussian radial basis function to classify the missingness indicator \(R_{i}\) based on the observed covariate vector \(X_{i}\in\mathbb{R}^{d}\) for \(i=1,...,n\). The outputs of the trained SVM are regarded as the surrogates of primitive estimated propensity scores.
**Neural Network ("NN"):** We train a neural network model (Hinton, 1990) with two hidden layers of size \(80\times 50\) and use the rectified linear unit function \(h(x)=\max\{x,0\}\) as the activation function. The learning rate \(\eta_{\text{NN}}\) is initially set to \(0.001\) as long as the training loss keeps decreasing and is adaptively changed to \(\eta_{\text{NN}}/5\) when two consecutive epochs fail to decrease the training loss.
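As an illustration, the four propensity-score estimators could be set up in scikit-learn roughly as follows; the hyper-parameters follow the descriptions above, while the choice of the stochastic-gradient solver for the neural network (needed for the adaptive learning-rate rule) and the use of `predict_proba` for the SVM outputs are our own assumptions.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# X, R: observed covariates and missingness indicators
models = {
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(n_estimators=100, criterion="gini", bootstrap=True),
    "SVM": SVC(kernel="rbf", probability=True),          # assumption: probability outputs
    "NN": MLPClassifier(hidden_layer_sizes=(80, 50), activation="relu",
                        solver="sgd",                     # assumption: SGD solver
                        learning_rate="adaptive", learning_rate_init=0.001),
}

# Estimated propensity scores P(R_i = 1 | X_i) for each method
pi_hat = {name: clf.fit(X, R).predict_proba(X)[:, 1] for name, clf in models.items()}
```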
For the above methods, we also consider their calibrated versions through Platt's logistic model (Platt, 1999), which fits an extra logistic regression on the estimated propensity scores \(\widehat{\pi}_{i},i=1,...,n\). However, since both the errors of the estimated propensity scores and the final performance of the proposed debiased estimator worsen after we apply this calibration, we choose not to report the calibrated results here.
The covariate vectors \(X_{i}\in\mathbb{R}^{d},i=1,...,n\) are again sampled independently from \(\mathcal{N}_{d}(\mathbf{0},\Sigma^{\text{cs}})\) with \(d=1000\) and \(n=900\), and the noise variables \(\epsilon_{i},i=1,...,n\) are generated independently from \(\mathcal{N}(0,1)\) as in Section 4.1.
Figure 4: Simulation results with dense \(\beta_{0}^{de}\) and sparse \(x^{(4)}\) for the circulant symmetric covariance matrix \(\Sigma^{\text{cs}}\) under the Laplace \(\left(0,\frac{1}{\sqrt{2}}\right)\) distributed noises and the MAR setting as in Section4.1. **Top Middle:** Coverage probabilities with standard error bars. **Top Right:** Average lengths of confidence intervals with standard error bars. **Middle Panels:** Three comparative “QQ-plots” of the transformed studentized debiased estimators \(\Phi\left(\sqrt{n}\cdot\widehat{\sigma}_{n}^{-1}(x)\left[\widehat{m}(x)-m_{0} (x)\right]\right)\) obtained from different debiasing methods, where \(\widehat{\sigma}_{n}(x)^{2}\) is the estimated (asymptotic) variance of \(\widehat{m}(x)\) by each method and \(\Phi(\cdot)\) is the CDF of \(\mathcal{N}(0,1)\).
To increase the complexity of estimating the propensity scores, we generate the missingness indicators \(R_{i},i=1,...,n\) for the outcome variables \(Y_{i},i=1,...,n\) through a different MAR mechanism than the one in Section 4.1 as:
\[\mathrm{P}(R_{i}=1|X_{i})=\Phi\left(-4+\sum_{k=1}^{K}Z_{ik}\right), \tag{23}\]
where \(\Phi(\cdot)\) is the CDF of \(\mathcal{N}(0,1)\) and the vector \((Z_{i1},...,Z_{iK})\) contains all polynomial combinations of the first eight components \(X_{i1},...,X_{i8}\) of the covariate vector \(X_{i}\) with degrees less than or equal to two (_i.e._, including the linear, quadratic, and pairwise interaction terms).
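A minimal sketch of this mechanism, using scikit-learn's polynomial feature expansion of the first eight covariates and the standard normal CDF, could read as follows.

```python
import numpy as np
from scipy.stats import norm
from sklearn.preprocessing import PolynomialFeatures

# All linear, quadratic, and pairwise-interaction terms of X_{i1},...,X_{i8}
Z = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X[:, :8])

prop = norm.cdf(-4.0 + Z.sum(axis=1))                 # P(R_i = 1 | X_i) as in (23)
R = np.random.default_rng(0).binomial(1, prop)        # missingness indicators
```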
The simulation results for our debiasing method under the oracle propensity scores ("Oracle"), the Lasso-type logistic regression (27) ("LR"), and the aforementioned nonparametric methods for the propensity score estimation are shown in Figure 6 and Figure 7. We provide more comprehensive results with the evaluation metrics from Section 4.2 and an additional measure for the average mean absolute error ("Avg-MAE") for the estimated propensity scores in Table 7 as well as additional "QQ-plots" in Figure 9 of Section D.1.
Figure 5: Simulation results with pseudo-dense \(\beta_{0}^{pd}\) and dense \(x^{(4)}\) for the Toeplitz (or auto-regressive) covariance matrix \(\Sigma^{\mathrm{ar}}\) under the \(t_{2}\) distributed noises and the MAR setting as in Section 4.1. **Top Left:** Boxplots of the absolute bias. **Top Middle:** Coverage probabilities with standard error bars. **Top Right:** Average lengths of confidence intervals with standard error bars. **Middle Panels:** Three comparative “QQ-plots” of the transformed studentized debiased estimators \(\Phi\left(\sqrt{n}\cdot\widehat{\sigma}_{n}^{-1}(x)\left[\widehat{m}(x)-m_{0} (x)\right]\right)\) obtained from different debiasing methods, where \(\widehat{\sigma}_{n}(x)^{2}\) is the estimated (asymptotic) variance of \(\widehat{m}(x)\) by each method and \(\Phi(\cdot)\) is the CDF of \(\mathcal{N}(0,1)\).
Figure 6: Simulation results for our debiasing method with dense \(\beta_{0}^{de}\) and sparse \(x^{(2)}\) when the propensity scores are estimated by various machine learning methods under the new MAR setting (23). **Top Left:** Boxplots of the absolute bias. **Top Right:** Coverage probabilities with standard error bars. **Middle Left:** Average lengths of confidence intervals with standard deviation bars. **Middle Right:** Letter-valued plots of the MAE of estimated propensity scores. **Bottom Two Rows:** Histograms of the studentized debiased estimators under different methods. We also order the performance metrics according to their means in the first four panels.
Figure 7: Simulation results for our debiasing method with sparse \(\beta_{0}^{sp}\) and (weakly) dense \(x^{(4)}\) when the propensity scores are estimated by various machine learning methods under the new MAR setting (23). **Top Left:** Boxplots of the absolute bias. **Top Right:** Coverage probabilities with standard error bars. **Middle Left:** Average lengths of confidence intervals with standard deviation bars. **Middle Right:** Letter-valued plots of the MAE of estimated propensity scores. **Bottom Two Rows:** Histograms of the studentized debiased estimators under different methods. We also order the performance metrics according to their means in the first four panels.
The main conclusion from this simulation study is that the performance of our debiasing method can be further improved when the propensity scores are better estimated. Under the MAR mechanism (23) and the Gaussian design on the covariate vectors \(X_{i},i=1,...,n\), the neural network and Gaussian naive Bayes models outperform the other machine learning methods in estimating the propensity scores and lead to debiased estimators with lower biases, better coverages, and shorter confidence intervals in general. Conversely, the Lasso-type logistic regression is mis-specified and consequently degrades the performance of the resulting debiased estimator. Another interesting phenomenon from these simulation results is that our debiasing method sometimes performs better when using the estimated propensity scores \(\widehat{\pi}_{i},i=1,...,n\) than when plugging in the oracle propensity scores \(\pi_{i}=\pi(X_{i}),i=1,...,n\). Such a paradox has been analyzed in propensity score matching (Rosenbaum, 1987), the IPW estimator for causal inference (Robins et al., 1992; Su et al., 2023), and other general parameter estimation problems in the presence of a nuisance parameter (Henmi and Eguchi, 2004; Hitomi et al., 2008; Lok, 2021). A rigorous study of this paradox for our debiasing method is beyond the scope of this paper.
## 5 Real-World Application
We showcase a real application of our proposed debiasing method to the stellar mass inference on selected galaxies in the Sloan Digital Sky Survey, Fourth Phase and Data Release 16 (SDSS-IV DR16; Ahumada et al. 2020)1.
Footnote 1: See [https://www.sdss4.org/dr16](https://www.sdss4.org/dr16). Footnote 2: See [https://www.sdss4.org/dr17/spectro/galaxy_firefly/](https://www.sdss4.org/dr17/spectro/galaxy_firefly/).
### Background and Study Design
SDSS-IV DR16 contains data from three main surveys: the Extended Baryon Oscillation Spectroscopic Survey (eBOSS; Dawson et al. 2016), Mapping Nearby Galaxies at APO (MaNGA; Bundy et al. 2014), and APO Galactic Evolution Experiment 2 (APOGEE-2; Majewski et al. 2017), among which the eBOSS survey and its predecessor BOSS cover a broad range of cosmological scales and observe most galaxies in the Universe. While a couple of value-added catalogs for the estimated stellar masses of observed galaxies based on the BOSS spectra are available (Blanton et al., 2005; Conroy et al., 2009; Chang et al., 2015), we focus on the eBOSS Firefly value-added catalog (Comparat et al., 2017)2 for our stellar mass inference study here given its timeliness and completeness. In more detail, this catalog fits combinations of single-burst stellar population models to spectroscopic data that have a positive definite redshift and are classified as galaxies by SDSS-IV DR16, following an iterative best-fitting process controlled by the Bayesian Information Criterion in order to produce stellar masses and other stellar properties of the galaxies (Wilkinson et al., 2017). A fraction of the estimated stellar masses in the Firefly catalog are missing due to the limited usage of the observational run in SDSS-IV DR16 dedicated to galaxy targets, data contamination, and misclassification of galaxies as stars (Comparat et al., 2017). Given that we will incorporate various spectroscopic and photometric properties of the observed galaxies into the covariate vector
for our stellar mass inference task, it is reasonable that the estimated stellar masses are missing at random.
Since nearly all the existing catalogs only yield point estimates to the stellar mass, we intend to leverage our debiasing inference method to answer two scientific questions:
* _Question 1:_ How can we produce uncertainty measures (or confidence intervals) for the estimated stellar mass of a newly observed galaxy based on its observed spectroscopic and photometric properties?
* _Question 2:_ Is it statistically significant that the stellar mass is negatively correlated with its distance to the nearby cosmic filament structures?
To this end, we fetch the spectroscopic and photometric properties of \(n=1185\) observed galaxies within a thin redshift slice \(0.4\sim 0.4005\) from the SDSS-IV DR16 database as basic covariates. Such a thin redshift slice helps control the redshift distortions caused by the Kaiser effect (_i.e._, galaxies falling towards the galaxy cluster; Kaiser 1987) and the "Finger-Of-God" effect (_i.e._, the elongation of galaxy distributions along the line-of-sight direction; Jackson 1972). Among the galaxies in the selected redshift slice, 30.2% of their stellar masses are missing in the Firefly catalog under the "Chabrier" initial stellar mass function (Chabrier, 2003) and the "MILES" stellar population model (Maraston and Stromback, 2011). In addition, the covariate vector for each galaxy consists of right ascension, declination, ugriz bands and their measurement errors, model and cmodel magnitudes for ugriz bandpasses, galactic extinction corrections for ugriz bandpasses, median signal-to-noise per pixel within each ugriz bandpass, original and best-fit template spectra projected onto ugriz filters as well as their inverse variances, sky flux in each of the ugriz imaging filters, and signal-to-noise squared for spectrograph #1 and #2, at g=20.20, r=20.20, i=20.20 for SDSS spectrograph spectra. To capture the nonlinear patterns of stellar mass estimation and handle those extreme values, we generate additional covariates by applying the logarithmic transformation \(x\mapsto\mathrm{sign}(x)\cdot\log(1+|x|)\) to the ugriz bands, galactic extinction corrections, and those flux features. We also remove those covariates whose correlation coefficients are higher than 0.95 before generating univariate B-spline base covariates of polynomial order 3 with 40 knots. The reason why we adopt the B-spline base covariates instead of the usual polynomial combinations is that the covariates generated by univariate B-spline bases tend to be less linearly correlated and can facilitate the subsequent inference task. To address _Question 2_ above, we compute the angular diameter distances from the galaxies to the two-dimensional spherical cosmic filaments constructed by the directional subspace constrained mean shift algorithm in Zhang and Chen (2023); Zhang et al. (2022) on SDSS-IV DR16 data.3 We also control the confounding effects of nearby galaxy clusters by including the angular diameter distances from galaxies to the estimated local modes and intersections of filaments as extra covariates. The final dimension of the covariate vector for each galaxy in the selected redshift slice is \(d=1409\), and we take the usual logarithm of the Firefly stellar mass of each galaxy as the outcome variable.
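A rough Python sketch of this covariate construction is given below; the raw feature arrays (`X_phot` for the photometric/spectroscopic columns and `X_flux` for the flux-type columns receiving the signed-log transform) are placeholders, and scikit-learn's `SplineTransformer` is only one possible way to generate the univariate B-spline bases, not necessarily the one used for our results.

```python
import numpy as np
from sklearn.preprocessing import SplineTransformer

def signed_log(x):
    """The transform x -> sign(x) * log(1 + |x|) applied to flux-type features."""
    return np.sign(x) * np.log1p(np.abs(x))

def drop_correlated(X, threshold=0.95):
    """Greedily keep columns whose absolute correlation with already-kept columns <= threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep]

# X_phot, X_flux: placeholder arrays of raw galaxy properties (not from the paper)
X_raw = drop_correlated(np.column_stack([X_phot, signed_log(X_flux)]))
X_design = SplineTransformer(n_knots=40, degree=3).fit_transform(X_raw)
```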
### Results
When approaching _Question 1_, we randomly select a galaxy in the nearby redshift slice \(0.4005\sim 0.401\) as a new observation and construct its associated covariate vector as the query point \(x^{(Q1)}\) following the procedures in Section 5.1. As for _Question 2_, we set the query point as \(x^{(Q2)}=(0,0,1,0,...,0)^{T}\in\mathbb{R}^{d}\), where the only nonzero entry corresponds to the covariate of the angular diameter distance to cosmic filaments. We estimate the propensity scores (_i.e._, the probabilities of having a non-missing stellar mass) for the galaxies in the redshift slice by the Lasso-type logistic regression ("LR") in Section C and the neural network ("NN") model described in Section 4.4. Nevertheless, since the estimated propensity scores from the neural network model are too extreme, _i.e._, close to 0 or 1, as shown in the left panel of Figure 8, we also consider calibrating them through Platt's logistic model (Platt, 1999), which is denoted by "NNcal".
Figure 8 displays the inference results by our proposed debiasing method and the classical approaches in Section 4.1 applied to the complete-case data. Due to the sparsity structure of the design matrix created by B-spline bases, "DL-Jav" and "DL-vdG" fail to produce their inference results. For _Question 1_, the 95% confidence intervals obtained from our debiasing method cover the stellar mass of the new galaxy in the Firefly catalog no matter how we estimate the propensity scores or select the tuning parameter, while the Lasso refitting method underestimates the stellar mass at a statistically significant level. For _Question 2_, the 95% confidence intervals produced by our debiasing and Lasso refitting methods for the regression coefficient of the distance to cosmic filaments are both below 0, aligning with the existing findings that galaxies are less massive when they are farther away from the cosmic filaments (Alpaslan et al., 2016; Kraljic et al., 2018; Malavasi et al., 2022). On the contrary, the 95% confidence interval obtained from the ridge projection method contains zero and thus cannot support a statistically significant conclusion about the relation between the stellar masses of galaxies and their distances to cosmic filaments. These results consolidate the usefulness of our proposed debiasing method in practice.
Figure 8: Stellar mass inference with our debiasing and related methods. **Left Panel:** Estimated propensity scores by “LR”, “NN”, and “NNcal”. **Middle Panel:** 95% confidence intervals by different methods for the estimated stellar mass of the new galaxy, addressing _Question 1_. **Right Panel:** 95% confidence intervals by different methods for the estimated regression coefficient associated with the distance to cosmic filaments, addressing _Question 2_.
## 6 Discussion
This paper proposes a novel debiasing method for conducting valid inference on high-dimensional linear models with MAR outcomes. We establish the asymptotic normality of the debiasing method and illuminate its statistical and computational efficiencies through the dual formulation of the key debiasing program. Simulation studies and real-world applications of our proposed method demonstrate its superiority over the existing high-dimensional inference methods in the literature. There are several potential applications and extensions that can further advance the impacts of our debiasing method.
**1. Applications in causal inference:** As discussed by Ding and Li (2018); Chakraborty et al. (2019), the missing outcome setting in this paper incorporates the classical setup in causal inference under the potential outcome framework (Rubin, 1974) as a special case. Specifically, the observable data in causal inference problems are of the form \((\mathbb{Y},T,X)\in\mathbb{R}\times\{0,1\}\times\mathbb{R}^{d}\), where \(T\in\{0,1\}\) denotes a binary treatment assignment indicator and \(\mathbb{Y}=T\cdot Y(1)+(1-T)\cdot Y(0)\) with \(Y(0),Y(1)\) being the potential outcomes of \(Y\) within the control group \(T=0\) and the treatment group \(T=1\) respectively. Given that at most one of the potential outcomes \(Y(0),Y(1)\) for each subject is observed, the potential outcome framework can be related to our problem setting by taking \((Y,R)=(Y(1),T)\) in the treatment group or \((Y,R)=(Y(0),1-T)\) in the control group. Therefore, our debiasing inference method can be applied to high-dimensional causal inference problems.
First, our debiasing method can be used to conduct valid statistical inference on the regression function (or the conditional mean outcome) of the treatment group if we focus on the observational data \(\{(Y_{i},R_{i},X_{i})\}_{i=1}^{n}=\{(Y(1)_{i},T_{i},X_{i})\}_{i=1}^{n}\), or of the control group if the data of interest are \(\{(Y_{i},R_{i},X_{i})\}_{i=1}^{n}=\{(Y(0)_{i},1-T_{i},X_{i})\}_{i=1}^{n}\). Unlike the classical approaches in causal inference, where the observational data \(\{(\mathbb{Y}_{i},T_{i},X_{i})\}_{i=1}^{n}\) are split into treatment and control halves in order to conduct statistical inference on the estimand of interest, we utilize all the covariate vectors \(X_{i}\in\mathbb{R}^{d},i=1,...,n\) in both the treatment and control groups to address the above inference problems, providing extra efficiency gains.
Second, under Assumption 1, our proposed debiasing program (3) can also be extended to conduct statistical inference on the linear average conditional treatment effect (ACTE) \(\text{E}[Y(1)-Y(0)|X]\) with no unmeasured confounding. To this end, borrowing an idea from Giessing and Wang (2021), the debiasing program (3) can be modified as follows:
\[\operatorname*{arg\,min}_{\mathbf{w}_{(0)},\mathbf{w}_{(1)}\in\mathbb{R}^ {n}}\sum_{i=1}^{n}\left[\widehat{\pi}_{i}w_{i(1)}^{2}+(1-\widehat{\pi}_{i})w_ {i(0)}^{2}\right]\] \[\text{subject to }\left|\left|x-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}w_{i(1)} \cdot\widehat{\pi}_{i}\cdot X_{i}\right|\right|_{\infty}\leq\frac{\gamma_{1}} {n}\quad\text{ and }\quad\left|\left|x-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}w_{i(0)} \left(1-\widehat{\pi}_{i}\right)X_{i}\right|\right|_{\infty}\leq\frac{\gamma_ {2}}{n},\]
where \(\gamma_{1},\gamma_{2}>0\) are the tuning parameters and \(\widehat{\pi}_{i},i=1,...,n\) are the estimated treatment assignment probabilities (_i.e._, propensity scores). The debiased estimator (4) will become
\[\widehat{m}^{\text{debias}}(x;\widehat{\mathbf{w}}_{(1)},\widehat{\mathbf{ w}}_{(0)})\] \[=x^{T}\left(\widehat{\beta}_{(1)}-\widehat{\beta}_{(0)}\right)+ \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left[\widehat{w}_{i(1)}\cdot T_{i}\left(Y(1)_ {i}-X_{i}^{T}\widehat{\beta}_{(1)}\right)-\widehat{w}_{i(0)}\cdot(1-T_{i}) \left(Y(0)_{i}-X_{i}^{T}\widehat{\beta}_{(0)}\right)\right].\]
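As a purely illustrative sketch (not the implementation used in this paper), the modified debiasing program and the resulting ACTE estimate could be written with a generic convex solver such as cvxpy; here `X`, `Y`, `T`, `pi_hat`, `x`, `gamma1`, `gamma2`, and the Lasso pilot fits `beta1_hat`, `beta0_hat` are assumed to be given.

```python
import numpy as np
import cvxpy as cp

w1, w0 = cp.Variable(n), cp.Variable(n)
objective = cp.Minimize(cp.sum(cp.multiply(pi_hat, cp.square(w1))
                               + cp.multiply(1.0 - pi_hat, cp.square(w0))))
constraints = [
    cp.norm(x - X.T @ cp.multiply(pi_hat, w1) / np.sqrt(n), "inf") <= gamma1 / n,
    cp.norm(x - X.T @ cp.multiply(1.0 - pi_hat, w0) / np.sqrt(n), "inf") <= gamma2 / n,
]
cp.Problem(objective, constraints).solve()

# Debiased estimate of the linear ACTE at the query point x
acte_hat = (x @ (beta1_hat - beta0_hat)
            + np.sum(w1.value * T * (Y - X @ beta1_hat)) / np.sqrt(n)
            - np.sum(w0.value * (1 - T) * (Y - X @ beta0_hat)) / np.sqrt(n))
```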
However, this new estimator may no longer be asymptotically semi-parametrically efficient, and we will leave a thorough theoretical investigation to future work. Other related works for inferring the linear CATE with high-dimensional covariates can be found in Chernozhukov and Semenova (2018).
**2. Model mis-specification and robustness:** Theoretical guarantees of our debiasing method are validated under the linear model (1) and sub-Gaussianity assumptions. While we have conducted additional simulations in Section 4.3 to certify the robustness of our debiasing method against the violations of the sub-Gaussian noise assumption, it is of research interest to investigate the performances and theoretical implications of our debiasing method when the assumed linear model is misspecified. Given that our analysis of the inference with random design is unconditional on the covariate vector \(X\in\mathbb{R}^{d}\), the modification technique in Buhlmann and van de Geer (2015) might provide some insights.
**3. Missing covariates:** Extending our debiasing method to address issues related to incomplete covariates remains an open problem. Notably, the problem is more challenging under high-dimensional settings because there are numerous covariates, resulting in a large number of potential missing patterns (up to \(2^{d}\) at most). While there have been some efforts to leverage the MCAR assumption (Wang et al., 2019) to tackle this problem, it is noteworthy that MCAR is a strong assumption that may not hold in many real-world scenarios. A possible remedy is to use the multiple imputation method (Carpenter et al., 2023), but it is unclear how to design a reliable imputation model.
## Acknowledgement
We thank Armeen Taeb for helpful discussions about the computational part of this paper. YZ is supported in part by YC's NSF grant DMS-2141808. AG is supported by NSF grant DMS-2310578. YC is supported by NSF grants DMS-1952781, 2112907, 2141808, and NIH U24-AG07212.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics -- Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Ca
narias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatorio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
|
2309.07737 | Eleven Competing Phases in the Heisenberg-Gamma (J$Γ$) Ladder | The spin-orbit generated $\Gamma$ interaction is known to induce strong
frustration and to be significant in realistic models of materials. To gain an
understanding of the possible phases that can arise from this interaction, it
is of considerable interest to focus on a limited part of parameter space in a
quasi one-dimensional model where high precision numerical results can be
obtained. Here we study the Heisenberg-Gamma (J$\Gamma$) ladder, determining
the complete zero temperature phase diagram by analyzing the entanglement
spectrum (ES) and energy susceptibility. A total of 11 different phases can be
identified. Two of the phases, the antiferromagnetic Gamma (A$\Gamma$) and
ferromagnetic Gamma (F$\Gamma$) phases, have previously been observed in the
Kitaev-Gamma ladder, demonstrating that the A$\Gamma$-phase is a symmetry
protected topological phase (SPT) protected by $TR\times \mathcal{R}_{b}$
symmetry, the product of time-reversal ($TR$) and $\pi$ rotation around the
$b$-axis ($\mathcal{R}_{b}$), while the F$\Gamma$-phase is related to a
rung-singlet phase through a local unitary transformation. Three other phases,
$\Upsilon$, $\Omega$ and $\delta$ show no conventional order, a doubling of the
entanglement spectrum and for the $\Upsilon$ and $\Omega$-phases a gap is
clearly present. The $\delta$-phase has a significantly smaller gap and
displays incommensurate correlations, with a peak in the static structure
factor, $S(k)$ continuously shifting from $k/\pi\mathord{=}2/3$ to
$k\mathord{=}\pi$. In the $\Omega$-phase we find pronounced edge-states
consistent with a SPT phase protected by the same $TR\times \mathcal{R}_{b}$
symmetry as the A$\Gamma$-phase. The precise nature of the $\Upsilon$ and
$\delta$-phases is less clear. | Sebastien J. Avakian, Erik S. Sørensen | 2023-09-14T14:15:45Z | http://arxiv.org/abs/2309.07737v1 | # Eleven Competing Phases in the Heisenberg-Gamma (\(J\Gamma\)) Ladder
###### Abstract
The spin-orbit generated \(\Gamma\) interaction is known to induce strong frustration and to be significant in realistic models of materials. To gain an understanding of the possible phases that can arise from this interaction, it is of considerable interest to focus on a limited part of parameter space in a quasi one-dimensional model where high precision numerical results can be obtained. Here we study the Heisenberg-Gamma (\(J\Gamma\)) ladder, determining the complete zero temperature phase diagram by analyzing the entanglement spectrum (ES) and energy susceptibility. A total of 11 different phases can be identified, among them the well known rung-singlet (RS) phase and 5 other phases, FM, FM-Z, FM-XY, AF and AF-Z, with conventional long-range magnetic order. The 3 ferromagnetic phases, FM, FM-Z and FM-XY simultaneously have non-zero scalar chirality. Two other phases, the antiferromagnetic Gamma (\(A\Gamma\)) and ferromagnetic Gamma (\(F\Gamma\)) phases, have previously been observed in the Kitaev-Gamma ladder, demonstrating that the \(A\Gamma\)-phase is a symmetry protected topological phase (SPT) protected by \(TR\times{\cal R}_{b}\) symmetry, the product of time-reversal (\(TR\)) and \(\pi\) rotation around the \(b\)-axis (\({\cal R}_{b}\)), while the \(F\Gamma\)-phase is related to the RS phase through a local unitary transformation. The 3 remaining phases, \(\Upsilon\), \(\Omega\) and \(\delta\) show no conventional order, a doubling of the entanglement spectrum and for the \(\Upsilon\) and \(\Omega\)-phases a gap is clearly present. The \(\delta\)-phase has a significantly smaller gap and displays incommensurate correlations, with a peak in the static structure factor, \(S(k)\) continuously shifting from \(k/\pi{=}2/3\) to \(k{=}\pi\). In the \(\Omega\)-phase we find pronounced edge-states consistent with a SPT phase protected by the same \(TR\times{\cal R}_{b}\) symmetry as the A\(\Gamma\)-phase. The precise nature of the \(\Upsilon\) and \(\delta\)-phases is less clear.
## 1 Introduction
Often when modelling magnetic materials, the interaction terms are assumed not to depend on the direction of the bond, in the sense that terms that can only be distinguished by their spatial orientation are taken to be equivalent. Significant interest in models where this is not the case, and interactions depend on the direction of bonds, has arisen with Kitaev's exact solutions for the ground-state of a simple local Hamiltonian with bond-directional interactions on a honeycomb lattice, the Kitaev Honeycomb model (KHM) [1]. Bond-directional interactions have previously been considered in the wider context of quantum compass (Kugel-Khomskii) models [2, 3, 4, 5], however, for the KHM a spin liquid ground-state can rigorously be demonstrated [1]. The bond-directional Kitaev interaction (\(K\)) in the KHM is of the Ising type and can be realized in real materials, as demonstrated by Jackeli et al. [6]. This has established the class of Kitaev materials [7, 8, 9, 10, 11] that are currently being intensely studied. Among the most promising candidate materials is \(\alpha\)-RuCl\({}_{3}\) [12, 13, 14], a material with two-dimensional honeycomb layers. For \(\alpha\)-RuCl\({}_{3}\) there is growing consensus [8, 9, 15, 16] that the Kitaev interaction is ferromagnetic, \(K\)\(<\)0; however, other interactions are clearly also present [17, 18] with the \(\Gamma\)-interaction the largest [19, 20]. On a given bond with a Kitaev interaction of the form \(KS^{\gamma}S^{\gamma}\), the \(\Gamma\)-interaction takes the form \(\Gamma(S^{\alpha}S^{\beta}+S^{\beta}S^{\alpha})\) and it is estimated [8, 9, 15, 16] that \(\Gamma\)\(>\)0 in \(\alpha\)-RuCl\({}_{3}\). Interaction terms of the usual Heisenberg form with strength \(J\) are also believed to be non-negligible, but smaller than the \(\Gamma\)-interaction. Several other interaction terms, such as \(\Gamma^{\prime}\), \(J_{2}\) and \(J_{3}\), are sometimes also taken into consideration, but they are believed to be even smaller in magnitude for \(\alpha\)-RuCl\({}_{3}\), and we do not discuss them here even though they might crucially influence the phase-diagram due to the very high degree of frustration. The phase diagram of \(\alpha\)-RuCl\({}_{3}\) is of significant current interest due to the experimental observation of a magnetically disordered phase under an applied magnetic field [13, 14, 21], which has been interpreted as a spin liquid phase [22, 23]. The precise nature of this phase is debated [24, 25, 26, 27] and a full understanding of the complete phase diagram is lacking. Furthermore, it is clear that the complete phase-diagram of the \(K\)-\(J\)-\(\Gamma\) model of \(\alpha\)-RuCl\({}_{3}\) in a magnetic field is very complex and extremely challenging to determine precisely [17, 28, 29, 30]. It is therefore very valuable to study the phase diagram of low dimensional versions of this model in a highly restricted part of parameter space where almost exact results can be obtained from state of the art numerical calculations, and here we focus on the Heisenberg-Gamma (\(J\Gamma\)) model in a ladder geometry.
While the ladder is a highly restrictive geometry, it can still lead to important insights into the possible phases of the full two-dimensional models, and it includes crucial interactions not present in a purely one-dimensional model. We also note that classes of ladder materials exist that have been shown to closely model the ladder geometry [31, 32], so-called spin-ladder materials, and one might hope that it will be possible to find similar materials with bond-directional interactions. The ladder geometry is also very attractive since almost exact results can be obtained for extremely large systems or directly in the thermodynamic limit, in stark contrast to the two-dimensional lattice where exact diagonalization results are limited to very small sizes [17, 33]. Multi-leg models have been studied [34, 35] but systematic studies are challenging. Kitaev's [1] solution of the honeycomb model can be extended to include the ladder [36] but is not applicable when \(J{\neq}0\) or \(\Gamma{\neq}0\). The Kitaev-Heisenberg model in a ladder geometry has been studied using numerical techniques [37, 38, 39], finding 6 distinct phases at zero field as the ratio \(J/K\) is varied, in remarkably good agreement with exact diagonalization results for the honeycomb lattice [40]. Likewise, the Kitaev-Gamma ladder has also been investigated [41, 42, 43], and in this case 8 distinct phases can be identified in zero field versus \(\Gamma/K\). The Heisenberg-Gamma (\(J\Gamma\)) ladder is relatively less explored, and from the exact diagonalization results in Ref. [17] the line in the phase diagram of the honeycomb lattice corresponding to the \(J\Gamma\)-model appears to cross only a modest number of phases. Here we show that the phase diagram of the \(J\Gamma\)-ladder is significantly richer, with a total of 11 distinct phases appearing in zero field. In addition to 5 phases, FM, FM-Z, FM-XY, AF and AF-Z, with conventional long-range magnetic order, we observe three previously discussed phases, the RS, F\(\Gamma\) and A\(\Gamma\)-phases, where the RS and F\(\Gamma\)-phases are related by a local unitary transformation [44]. However, we also find three new potential SPT phases that we denote \(\Upsilon\), \(\Omega\) and \(\delta\). These 3 phases show no conventional order, a doubling of the entanglement spectrum, and, for the \(\Upsilon\) and \(\Omega\)-phases, a relatively clear gap, consistent with SPT behavior.
SPT phases in gapped one-dimensional spin systems can be classified using a projective symmetry analysis [45, 46, 47]. Usually, the site symmetry group \(D_{2}{=}\{E,R_{x},R_{y},R_{z}\}\) is considered, where \(R_{x}(R_{y},R_{z})\) is a \(\pi\) rotation about the \(x(y,z)\) axis. The projective analysis can be extended to ladders [48, 49, 50, 51, 52, 53] where the additional symmetry \(\sigma\), arising from interchanging the legs of the ladder, is also included, and the group \(D_{2}\times\sigma\) is considered. It is important to note that a local unitary transformation \(U_{6}\) exists [54] that maps the Kitaev-Gamma (\(K\Gamma\)) ladder to a model with \(D_{2}\times\sigma\) site symmetry. While \(\sigma\) is not a good symmetry for the Kitaev-Heisenberg (\(KJ\)) ladder, it does have the \(D_{2}\) symmetry. However, neither \(D_{2}\) nor \(\sigma\) is a good symmetry for the \(J\Gamma\)-ladder, and the \(U_{6}\) transformation is not useful. Instead, the \(\sigma\) symmetry is replaced by a non-symmorphic symmetry, involving both reflection and translation. It can thus be argued that the effect of the \(\Gamma\)-interaction is particularly relevant for the \(J\Gamma\)-ladder that is our focus here, due to the significant reduction in the symmetry. In the following, we therefore mainly focus on the time-reversal (TR) symmetry present in the \(J\Gamma\)-ladder in zero field.
The outline of the paper is as follows. In section 2, we introduce the \(J\Gamma\)-ladder, the geometry and the parametrization. The bulk of our results are obtained using DMRG and iDMRG techniques, and in section 3 we discuss the specifics of our numerical methods along with the conventions used. In section 4 we present our main results for the phase-diagram of the \(J\Gamma\)-ladder, demonstrating the presence of 11 distinct phases. In section 5 the magnetically ordered phases are discussed along with the chiral ordering we observe in the ferromagnetic phases. The RS, A\(\Gamma\) and FI-phases are discussed in section 6. Finally, the three new potential SPT phases, \(\Upsilon\), \(\Omega\) and \(\delta\) and their classification are discussed in section 7 where we also discuss the projective symmetry analysis of the time-reversal (TR) symmetry.
## 2 The \(J\Gamma\)-ladder
Our focus is on the two leg ladder derived from the honeycomb lattice. To get as faithful a representation of the honeycomb lattice as possible, we consider a small strip of the honeycomb material and create a two leg ladder by ensuring that the bonds that are cut perpendicular to the length of the ladder are paired together, effectively imposing periodic boundary conditions in the perpendicular direction. This is illustrated in Fig. 1 where the dashed bonds arise due to the periodic boundary conditions. We assume these interactions to be of the same strength as the direct coupling between the legs, shown as the solid rungs in Fig. 1. On each bond of the ladder, we introduce an isotropic Heisenberg interaction of strength \(J\). The second interaction is the \(\Gamma\)-interaction, an asymmetric exchange interaction that crucially varies between bonds and is not the same for every bond. The corresponding Hamiltonian is then
\[H=\sum_{\langle i,j\rangle}J{\bf S}_{i}\cdot{\bf S}_{j}\ +\sum_{\langle i,j \rangle_{\gamma}}\Gamma(S_{i}^{\alpha}S_{j}^{\beta}+S_{i}^{\beta}S_{j}^{ \alpha}), \tag{1}\]
where \(\langle i,j\rangle_{\gamma}\) denotes the nearest neighbor bond of type \(\gamma\). The possible kinds of bonds are \(\gamma=x,y,z\), labeling the possible values of \((\alpha,\beta)\) as \((y,z)\), \((x,z)\), and \((x,y)\) respectively. In other words, \(\gamma\) labels the spin component that does not appear in the exchange \(S_{i}^{\alpha}S_{j}^{\beta}\).
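For illustration, a minimal exact-diagonalization sketch (in Python) of how the bond-dependent terms in Eq. (1) can be assembled on a small cluster is given below. The explicit bond list, i.e. the assignment of \(x\), \(y\) and \(z\) bonds to the rungs and legs of Fig. 1, is left as an input and is not meant to reproduce the precise geometry used in our calculations.

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix

# Spin-1/2 operators
sx = csr_matrix([[0, 0.5], [0.5, 0]])
sy = csr_matrix([[0, -0.5j], [0.5j, 0]])
sz = csr_matrix([[0.5, 0], [0, -0.5]])
S = {'x': sx, 'y': sy, 'z': sz}
AB = {'x': ('y', 'z'), 'y': ('x', 'z'), 'z': ('x', 'y')}   # (alpha, beta) for a gamma-bond

def two_site(op_i, op_j, i, j, N):
    """Embed op_i on site i and op_j on site j in the 2^N-dimensional Hilbert space."""
    ops = [identity(2, format='csr')] * N
    ops[i], ops[j] = op_i, op_j
    full = ops[0]
    for o in ops[1:]:
        full = kron(full, o, format='csr')
    return full

def h_jgamma(bonds, N, J, G):
    """H for a list of bonds (i, j, gamma) with gamma in {'x','y','z'} and couplings J, Gamma."""
    H = csr_matrix((2 ** N, 2 ** N), dtype=complex)
    for i, j, g in bonds:
        a, b = AB[g]
        for c in 'xyz':                                    # isotropic Heisenberg term
            H = H + J * two_site(S[c], S[c], i, j, N)
        H = H + G * (two_site(S[a], S[b], i, j, N) + two_site(S[b], S[a], i, j, N))
    return H
```

With the bond list encoding the rung \(z\)-bonds and the alternating \(x\)/\(y\) leg bonds of Fig. 1, and with \(J\) and \(\Gamma\) set by the parametrization introduced below, the lowest eigenvalues of such a cluster can be obtained with `scipy.sparse.linalg.eigsh`.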
Figure 1: The two clusters of the two leg \(J\Gamma\)-ladder with alternating \(x\) and \(y\) bonds along the legs of the ladder, connected by \(z\) bonds along the rungs. Here the \(x\), \(y\) and \(z\) refer to the variation in the \(\Gamma\)-interaction. The dashed lines indicate \(z\) bonds arising from imposing periodic boundary conditions in the perpendicular direction. (a) Cluster A formed from a regular ladder with the red line indicating a bond cut and the blue line a rung cut, relevant for forming the reduced density matrix. (b) Cluster B formed from cutting the rungs from cluster A.
### Clusters and Boundaries
The \(J\)-\(\Gamma\) ladder, Eq. (1), comprises alternating \(x\) and \(y\) bonds along both legs, connected by \(z\) bonds along the rungs. The unit cell of the ladder then consists of 4 sites, as shown in Fig. 1(a) and (b). When discussing properties of the model derived from the reduced density matrix of a bipartition of the lattice, it is important to take into account whether the partition cuts a rung or only the legs of the ladder. This is illustrated by the blue and red lines in Fig. 1(a), showing a rung (blue) and a bond (red) cut, respectively. When considering edge-states that may appear in SPT phases, we shall study finite open segments of the ladder, and it is then crucial to specify how the open boundary conditions are imposed. For the ladder, we consider the two possibilities shown in Fig. 1(a) and (b). We refer to the first (regular) cluster as A and the second (rung cut) cluster as B. The degeneracy of the ground-state in a SPT phase strongly depends on whether open or periodic boundary conditions are applied, however, as we shall see in the following, the degeneracy of the ground-state can also depend on whether cluster A or B is used.
### Parametrization and connections to known models
The overall scale of the coupling constants \(J\) and \(\Gamma\) is not relevant, and it is therefore convenient to parameterize them in the following way
\[J=\sin(\phi),\ \Gamma=\cos(\phi). \tag{2}\]
The phase space of the model can then be parameterized by the angle \(\phi\). Some points in the phase-diagram correspond to models that have previously been studied in detail. At \(\phi{=}\pi/2\) the \(J\Gamma\)-ladder is simply the antiferromagnetic Heisenberg ladder for which it has been established that the ground-state is a rung-singlet (RS) state with a sizable gap [55, 56, 31]. Similarly, at \(\phi{=}3\pi/2\) we find the well known ferromagnetic Heisenberg ladder that we expect to show gapless spin wave excitations.
The model with a pure antiferromagnetic \(\Gamma\)-interaction, occurring at \(\phi{=}0\), has previously been studied in detail [41, 42, 43] and it is known that the model is in a SPT phase with a gap. In addition, a string order parameter has been found [43]. Interestingly, the same antiferromagnetic \(\Gamma\)-model on the two-dimensional honeycomb lattice is believed to exhibit a gapless spin liquid phase [57], although other scenarios have been discussed [58, 28, 35].
Finally, at \(\phi{=}\pi\) we find the ferromagnetic \(\Gamma\)-ladder. If we at this point apply the local unitary \(U_{6}\) transformation [54], the \(\Gamma\)-ladder can be mapped to an _antiferromagnetic_ (AF) spin ladder with nearest neighbor interactions only of the type \(S_{i}^{x}S_{j}^{x}\), \(S_{i}^{y}S_{j}^{y}\) and \(S_{i}^{z}S_{j}^{z}\). Such an AF spin ladder has been shown [42] to be in the same phase as the AF spin ladder with isotropic \(\mathbf{S}_{i}\cdot\mathbf{S}_{\mathbf{j}}\) interactions on each bond, which is known to be in the rung-singlet phase, as discussed above. The F\(\Gamma\)-phase, of which the point \(\phi{=}\pi\) is part, can therefore also be labelled RS\({}_{U_{6}}\) since it is related to the RS-phase through the local unitary \(U_{6}\) transformation.
## 3 Methods
The main tools used in this analysis are the finite-size density matrix renormalization group (DMRG) [59, 60, 61, 62, 63, 64] and its infinite-size version, the infinite density matrix renormalization group (iDMRG) [65]. The finite-size version will be used to obtain the ground state and the next 4 excited states with open boundary conditions (OBC) and periodic boundary conditions (PBC). For the OBC, we mainly use a maximal bond dimension \(D{=}1000\) and a precision of \(\epsilon{=}10^{-13}\), while with PBC we typically use \(D{=}1200\) and \(\epsilon{=}10^{-11}\). To obtain the ground state in the thermodynamic limit, produce the phase diagram, and calculate the bulk correlation functions, we use iDMRG with \(D{=}1000\) and \(\epsilon{=}10^{-11}\). In order to ensure that we detect all possible phases, the finest resolution we use is \(\Delta\phi/\pi=0.001\).
To detect the quantum critical points (QCP), we use two measures of the ground state wavefunction, the first being the susceptibility of the ground state energy per spin \(e_{0}\) with respect to \(\phi\)
\[\chi^{e}_{\phi}=-\frac{\partial^{2}e_{0}}{\partial\phi^{2}}. \tag{3}\]
In finite systems, at a quantum critical point, \(\chi^{e}\) has been shown to scale as [66, 67, 68]
\[\chi^{e}\sim N^{2/\nu-d-z}. \tag{4}\]
Here \(\nu\) and \(z\) are the correlation length and dynamical critical exponents and \(d\) is the dimension. Hence, only when \(2/\nu{>}d+z\) will \(\chi^{e}\) diverge. If we assume that \(z{=}1\), then with \(d=1\) we find \(\nu<1\) as a condition for a divergence to occur. In addition, the divergence might be very narrow and could be missed if \(\Delta\phi\) is not sufficiently small. In principle, when studying systems in the thermodynamic limit with iDMRG, \(\chi^{e}\) should be infinite at the QCP, but due to the finite resolution in \(\phi\) it will instead show up as a very sharp spike. It is therefore very useful to have a complementary way of determining the phase diagram, and for quasi one-dimensional models this can be obtained from the entanglement spectrum. If we cut the ladder across the bond \(n\) and form the reduced density matrix \(\rho_{n}\), the entanglement spectrum (ES) can be obtained from the eigenvalues \(\lambda_{i}\) of \(\rho_{n}\). The eigenvalues change slowly away from a quantum critical point but rapidly near a QCP. Sometimes the so-called Schmidt gap [69, 70, 71, 72], the difference between the two largest eigenvalues, is studied, but here we focus on just the leading eigenvalue \(\lambda_{1}\) which defines the single copy entanglement [73]
\[SCE=-\ln(\lambda_{1}). \tag{5}\]
When the ground state is in a product state, we must have that \(\lambda_{1}{=}1\) and \(\lambda_{n}{=}0\), \(\forall n{>}1\), implying that \(SCE\) = 0. On the other hand, if our system is not in a product state, \(\lambda_{1}{<}1\), we must have \(SCE>0\). In the ladder geometry, the only two unique bipartitions are made by either cutting through two leg bonds or through two leg bonds and a rung, as shown in Fig. 1(a). As previously outlined, we shall refer to these as a 'bond' cut and a 'rung' cut, respectively. While either cut can be used for our purposes, we mainly use the rung cut, the blue line in Fig. 1(a), when studying the \(SCE\).
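As a small illustration, both diagnostics are straightforward to evaluate numerically; the sketch below (Python/numpy) computes \(\chi^{e}_{\phi}\) by a second-order central difference of \(e_{0}(\phi)\) on a uniform grid and the \(SCE\) from the Schmidt coefficients of a given cut.

```python
import numpy as np

def energy_susceptibility(e0, dphi):
    """chi^e = -d^2 e0 / dphi^2 via central differences on a uniform phi-grid."""
    e0 = np.asarray(e0)
    return -(e0[2:] - 2.0 * e0[1:-1] + e0[:-2]) / dphi ** 2

def single_copy_entanglement(schmidt_values):
    """SCE = -ln(lambda_1), with lambda_i = s_i^2 the reduced-density-matrix eigenvalues."""
    lam1 = np.max(np.asarray(schmidt_values) ** 2)
    return -np.log(lam1)
```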
In order to characterize the magnetic ordering of the phases, we study the spin correlation functions \(\langle S^{\alpha}_{i}S^{\alpha}_{i+n}\rangle\) as well as the on-site magnetization \(\langle S^{\alpha}_{i}\rangle\). In addition, we also study the
scalar chirality. For any 3 spins S\(=\)\(\sigma/2\) at sites \(i\), \(j\) and \(k\) the scalar chirality is defined as follows,
\[\kappa=\langle\sigma_{i}\cdot(\sigma_{j}\times\sigma_{k})\rangle. \tag{6}\]
From this definition it seems likely that a non-zero \(\kappa\) will be accompanied by more conventional magnetic ordering and in section 5 we discuss the chiral ordering in more detail.
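A simple way to evaluate \(\kappa\) numerically on a small cluster is to build the three-site operator from Pauli matrices and take its expectation value in the ground state; a self-contained Python sketch is shown below.

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix

# Pauli matrices (sigma = 2S for spin-1/2)
pauli = [csr_matrix([[0, 1], [1, 0]]),
         csr_matrix([[0, -1j], [1j, 0]]),
         csr_matrix([[1, 0], [0, -1]])]

def site_op(op, site, N):
    """Embed a single-site operator at the given site of an N-site cluster."""
    ops = [identity(2, format='csr')] * N
    ops[site] = op
    full = ops[0]
    for o in ops[1:]:
        full = kron(full, o, format='csr')
    return full

def scalar_chirality(psi, i, j, k, N):
    """<sigma_i . (sigma_j x sigma_k)> = sum_{abc} eps_{abc} <sigma_i^a sigma_j^b sigma_k^c>."""
    eps = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
           (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}
    val = 0.0 + 0.0j
    for (a, b, c), sign in eps.items():
        op = site_op(pauli[a], i, N) @ site_op(pauli[b], j, N) @ site_op(pauli[c], k, N)
        val += sign * np.vdot(psi, op @ psi)
    return val.real
```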
## 4 Phase Diagram
Our main results for the phase diagram are shown in Fig. 2 where we show iDMRG results for \(\chi^{e}\) in the top panel along with results for \(SCE\) in the bottom panel. The quantum critical points are indicated by the dashed vertical lines. An astonishingly large number of phases is observed, 11 in total. From left to right we label these phases as F\(\Gamma\), FM, FM-Z, \(\Upsilon\), \(\Omega\), FM-XY, A\(\Gamma\), \(\delta\), AF, RS and AF-Z. While most of these phases show some type of magnetic ordering, the F\(\Gamma\), \(\Upsilon\), \(\Omega\), A\(\Gamma\), \(\delta\) and RS phases do not. We discuss all phases in further detail in the subsequent sections. At most of the quantum critical points (QCP) we find complete agreement between the divergence in \(\chi^{e}\) and sharp features in the \(SCE\). One exception is the FM-XY to A\(\Gamma\) transition, which does not show a clear divergence in \(\chi^{e}\); on the other hand, it is clearly visible in the \(SCE\). This is consistent with a value of the correlation length exponent \(\nu\)\(>\)\(1\) at this transition. A similar observation can be made about the RS to AF-Z transition. A summary of the results from Fig. 2 can be found in Table 1 where the critical values of \(\phi\) are listed for all phases along with their characteristics.
Figure 2: Phase diagram of the \(J\Gamma\)-ladder as function of \(\phi/\pi\) from iDMRG with a unit cell of \(N\)=24 and a resolution \(\Delta\phi/\pi=0.001\). The top panel shows \(\chi^{e}\) while the bottom panel the \(SCE\) from bond \(N/2-1\) in a rung cut. The dashed lines indicates the quantum critical points.
### Degeneracies
A characteristic feature of SPT phases is that edge-states appear under open boundary conditions. A well-known example is the \(S\)=1 spin chain, where \(S\)=1/2 edge-states appear [74, 75, 76], resulting in a four-fold degenerate ground-state with OBC. Another characteristic is a degeneracy of all eigenvalues in the entanglement spectrum [45, 77, 78, 79, 46]. Such a degeneracy is necessary for non-trivial transformations under the projective symmetry, as we discuss further in section 7. For the determination of the ground-state degeneracy with OBC it is crucial to distinguish between the two different clusters from Fig. 1(a) and (b), and we therefore refer to the resulting ground-state degeneracies as \(d^{OBC_{A}}_{gs}\) and \(d^{OBC_{B}}_{gs}\). For the entanglement spectrum, which we obtain from iDMRG calculations, it is important to distinguish between the rung cut and bond cut discussed above and shown as the blue and red line in Fig. 1(a). Our results for these degeneracies for all the potential SPT phases are listed in Table 2.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Phase & \(\phi_{c}\) / \(\pi\) & Magnetic Ordering & Energy Gap \\ \hline A\(\Gamma\) & 1.983 - 0.025 & None & Yes \\ \(\delta\) & 0.025 - 0.077693 & None & Possibly Gapless \\ AF & 0.077693 - 0.380 & AFM & Yes \\ RS & 0.380 - 0.790 & RS & Yes \\ AF-Z & 0.790 - 0.840 & AFM-Z & Yes \\ F\(\Gamma\) & 0.840 - 1.110 & None & Yes \\ FM & 1.110 - 1.500 & FM & Yes \\ FM-Z & 1.500 - 1.775 & FM-Z & Yes \\ \(\Upsilon\) & 1.775 - 1.820 & None & Yes \\ \(\Omega\) & 1.820 - 1.840 & None & Yes \\ FM-XY & 1.840 - 1.983 & FM-XY & Yes \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the main features of all phases of the \(J\Gamma\)-ladder. The phase symbol and the critical values of \(\phi/\pi\) for which the phase exists are listed, as well as the magnetic ordering. The last column indicates the presence of an energy gap in the spectrum in the thermodynamic limit
\begin{table}
\begin{tabular}{c|c c c c c|c c} \hline \hline Phase & \(d_{rung}\) & \(d_{bond}\) & \(d^{OBC_{A}}_{gs}\) & \(d^{OBC_{B}}_{gs}\) & \(d^{PBC}_{gs}\) & \(\mathcal{O}^{A}_{\mathrm{TR}}\) & \(\mathcal{O}^{B}_{\mathrm{TR}}\) \\ \hline A\(\Gamma\) & 1 & 2 & 4 & 1 & 1 & -1 & 1 \\ \(\delta\) & 2 & 1 & 1 & 4 & 1 & 1 & -1 \\ \(\Upsilon\) & 2 & 1 & 1 & 4 & 1 & 1 & -1 \\ \(\Omega\) & 1 & 2 & 4 & 1 & 1 & -1 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of the main features of the potential SPT phases of the \(J\Gamma\)-ladder. The \(d_{rung}\) and \(d_{bond}\) are the degeneracies in the spectrum of the reduced density matrix formed on cluster A or B respectively. The \(d^{OBC_{A}}_{gs}\), \(d^{OBC_{B}}_{gs}\), and \(d^{PBC}_{gs}\) are the ground state degeneracies in open or periodic boundary conditions, with \(OBC_{A}\) and \(OBC_{B}\) referring to cluster A and B respectively. \(\mathcal{O}^{A}_{\mathrm{TR}}\) and \(\mathcal{O}^{B}_{\mathrm{TR}}\) are the projective phase factors under time-reversal (see Section 7.4).
non-degenerate. Furthermore, if the 4-fold degeneracy is present on the A(B) cluster, the ES shows degeneracy on the bond cut (rung cut) and no degeneracy on the alternate cut. For completeness, we also list the degeneracy under PBC in Table 2. These observations are consistent with the presence of SPT phases. Of these 5 phases, the A\(\Gamma\) and F\(\Gamma\)-phases have been studied elsewhere [41, 42, 43], but before analyzing the remaining 3 potential SPT phases, \(\Upsilon\), \(\Omega\) and \(\delta\), we turn to a discussion of the magnetically ordered phases.
## 5 Magnetically Ordered Phases
### AF Phases
There are two phases with clear long-range AF magnetic ordering, the AF and AF-Z-phases. In Fig. 3(b) and (d) we show results for the spin correlations for each phase.
* AF-phase: For \(\phi/\pi\in(0.077693,0.380)\), we have the AF-phase. As can be seen in Fig. 3(b) the spin correlations are clearly long-range and isotropic between the \(x\), \(y\) and \(z\) components. The \(\Gamma\)-term is non-zero throughout the AF-phase, and contrary to what one might expect, the AF-phase is not gapless. In fact, as we discuss in section 7, the correlation length in this phase is rather short, indicating the presence of a well-defined gap.
Figure 3: Spin correlation functions \(\langle S_{1}^{\alpha}S_{1+n}^{\alpha}\rangle\) along the lower leg of the ladder from \(n\)=3 to \(n\)=199, as obtained from iDMRG. Here, \(n\) is the site index from Fig. 1. (a) The A\(\Gamma\)-phase at \(\phi{=}0.064\pi\). (b) The AF-phase at \(\phi{=}0.249\pi\). (c) The RS-phase at \(\phi{=}0.499\pi\). (d) The AF-Z-phase at \(\phi{=}0.799\pi\).
* AF-Z-phase: For \(\phi/\pi\in(0.79,0.84)\), the spin correlations look similar to those of the AF phase and are again long-range. However, in this phase the \(S_{1}^{z}S_{n}^{z}\) correlations are larger than the \(S_{1}^{x}S_{n}^{x}\) and \(S_{1}^{y}S_{n}^{y}\) correlations, which are equal, as can be seen from our iDMRG results shown in Fig. 3(d). Hence, we denote this phase the AF-Z-phase. The correlation length is finite, indicating a well-defined gap.
### FM Phases and chiral ordering
Three of the phases, FM, FM-Z and FM-XY, have long-range ferromagnetic correlations, as can be seen from the results for the spin correlation functions \(\langle S_{1}^{\alpha}S_{1+n}^{\alpha}\rangle\) shown in Fig. 4.
* FM-phase: For \(\phi/\pi\in(1.110,1.500)\) the spin correlations shown in Fig. 4(b) show clear long-range ferromagnetic order. Furthermore, all three spin correlation functions appear identical and the phase can be identified as an isotropic ferromagnetic phase. As was the case for the AF phases, the FM-phase is gapped, a fact that we infer from the presence of a relatively short correlation length.
* FM-Z-phase: Neighboring the FM-phase is the FM-Z-phase for \(\phi/\pi\in(1.500,1.775)\). This phase is similar to the FM-phase, but in the FM-Z-phase the \(S_{1}^{z}S_{n}^{z}\) correlations are larger than the \(S_{1}^{x}S_{n}^{x}\) and \(S_{1}^{y}S_{n}^{y}\), which are equal, as illustrated in Fig. 4(c). We therefore
Figure 4: Spin correlation functions \(\langle S_{1}^{\alpha}S_{1+n}^{\alpha}\rangle\) as obtained from iDMRG along the lower leg (odd numbered sites) of the ladder starting at \(n=3\) and ending at \(n=199\). Here, \(n\) is the site index from Fig. 1. (a) The F\(\Gamma\)-phase at \(\phi/\pi\)=0.899. (b) The FM-phase at \(\phi/\pi\)=1.299. (c) The FM-Z-phase at \(\phi/\pi\)=1.649. (d) The FM-XY-phase at \(\phi/\pi\)=1.849.
denote the phase FM-Z. Similar to the FM-phase, the FM-Z-phase has a finite correlation length and a gap.
* FM-XY-phase: The last ferromagnetic phase is the FM-XY-phase appearing for \(\phi/\pi\in(1.840,\!1.983)\). Depending on which leg of the ladder is analyzed, the spin correlations have either the \(S_{1}^{x}S_{n}^{x}\) or \(S_{1}^{y}S_{n}^{y}\) correlation marginally larger than the other at small \(n\) and then finally equalling each other at larger \(n\). Along both legs, the \(S_{1}^{z}S_{n}^{z}\) correlations are smaller than the other two and non-zero. We therefore denote the phase FM-XY. Correlations in the FM-XY-phase are characterized by a sizable, but still finite correlation length and therefore a finite gap.
It is interesting to note that precisely at \(\phi/\pi\)=3/2 the \(J\Gamma\)-ladder is simply a ferromagnetic Heisenberg ladder, since \(\Gamma\)=0. For the FM Heisenberg ladder we would expect gapless spin-wave excitations and an infinite correlation length. This is indeed the case, since for the \(J\Gamma\)-ladder this point corresponds to the transition between the FM and FM-Z phases. However, both the FM and FM-Z-phases are gapped, demonstrating the strong effect of the \(\Gamma\)-interaction.
From the structure of the \(\Gamma\) term, it is plausible that it will favor chiral ordering. Such ordering can accompany conventional magnetic ordering but is sometimes observed even in the absence of magnetic ordering if three- or four-spin interaction terms are present [80, 81]. Since we are considering a ladder, we have to be careful about the handedness of the 3 spins used to measure the chirality if we want to have a consistent sign convention. If we label the \(i\)'th spin on the two legs of the ladder as \({\bf S}_{i,1}\) and \({\bf S}_{i,2}\), where \(1\) and \(2\) refer to the bottom leg and top leg, respectively, we can then define the scalar chirality by going around clockwise as
Figure 5: Scalar chirality in the FM-XY-phase. Top panel: The scalar chirality \(|\kappa|\) versus \(\phi/\pi\). Lower panel: An example of the staggered pattern of chirality in the FM-XY-phase at \(\phi=1.9\pi\). Each triangle has \(|\kappa|\)=\(0.133\). The scalar chirality in the FM and FM-Z-phases displays an identical staggering, but \(|\kappa|\) is an order of magnitude weaker.
follows.
\[\kappa=\langle\sigma_{i,1}\cdot(\sigma_{i,2}\times\sigma_{i+1,1})\rangle. \tag{7}\]
We get a consistent sign by always going around clockwise. For example, for the upper triangles we get \(\kappa=\langle\sigma_{i,2}\cdot(\sigma_{i+1,2}\times\sigma_{i+1,1})\rangle\). For a pictorial representation, if \(\kappa\) is positive (negative), we assign blue (red) arrows \(i\to j\to k\) for \(\kappa=\langle\sigma_{i}\cdot(\sigma_{j}\times\sigma_{k})\rangle\) and all even permutations of \(i,j,k\), with clockwise (anti-clockwise) circulation.
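To make the sign convention explicit, the short sketch below (Python with NumPy; the spin configuration is a made-up classical example used only to illustrate the clockwise ordering, whereas in our calculations the expectation value is of course evaluated in the iDMRG ground state) computes \(\kappa\) for a lower and an upper triangle of one plaquette.

```python
import numpy as np

def kappa(s1, s2, s3):
    """Scalar chirality s1 . (s2 x s3) for three spins listed in clockwise order."""
    return float(np.dot(s1, np.cross(s2, s3)))

# Hypothetical classical spin configuration on one plaquette; keys are (i, leg),
# with leg 1 the bottom leg and leg 2 the top leg, as in the text.
S = {(0, 1): np.array([1.0, 0.0, 0.0]),
     (0, 2): np.array([0.0, 1.0, 0.0]),
     (1, 1): np.array([0.0, 0.0, 1.0]),
     (1, 2): np.array([1.0, 1.0, 0.0]) / np.sqrt(2)}

# Lower triangle, Eq. (7): (i,1) -> (i,2) -> (i+1,1), going around clockwise.
kappa_lower = kappa(S[(0, 1)], S[(0, 2)], S[(1, 1)])
# Upper triangle: (i,2) -> (i+1,2) -> (i+1,1), again clockwise.
kappa_upper = kappa(S[(0, 2)], S[(1, 2)], S[(1, 1)])
print(kappa_lower, kappa_upper)
```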
The scalar chirality is non-zero in all three ferromagnetic phases, as can be seen from the top panel of Fig. 5. It is largest in the FM-XY-phase, where a staggered pattern of chirality is observed, alternating in sign between neighboring plaquettes. A sketch of the staggered pattern is shown in the lower panel of Fig. 5. At \(\phi\)=\(1.9\pi\) in the FM-XY-phase we find \(|\kappa|\)=\(0.133\) with \(|\kappa|\) going to zero at the quantum critical points of the FM-XY-phase as shown in the top panel of Fig. 5. The same staggered pattern of the chirality is also observed in the FM and FM-Z-phases, but the overall magnitude of \(|\kappa|\) is about an order of magnitude weaker.
## 6 A\(\Gamma\), F\(\Gamma\) and RS-phases
As already discussed briefly in section 2, the points \(\phi\)=0, \(\pi/2\) and \(\pi\) within the A\(\Gamma\), RS and F\(\Gamma\) phases respectively, (see Fig. 2), have previously been studied for ladder systems. Below we list the corresponding phases and their associated properties.
* RS-phase: For \(\phi/\pi\in(0.380,0.790)\), we find a rung singlet (RS) phase. This follows from the fact that the phase contains the point \(\phi\)=\(\pi/2\), where \(J\)=1 and \(\Gamma\)=0, corresponding to the antiferromagnetic Heisenberg ladder. Its ground state is known to be in a disordered rung singlet phase [55, 56], where the spins on each rung of the ladder are coupled into a spin singlet. This can be confirmed by increasing the Heisenberg coupling of the rungs of the ladder, approaching the product state of rung-singlets. The spin correlations for this phase are shown in Fig. 3(c) and do not show any long-range magnetic ordering as one would expect. The RS-phase is gapped, and it has been classified as a trivial SPT phase [48].
* F\(\Gamma\)-phase: This phase extends over the region \(\phi/\pi\in(0.840,1.110)\) and includes the point \(\phi\)=\(\pi\) corresponding to the pure ferromagnetic \(\Gamma\) point with \(J\)=0, \(\Gamma\)=-1. A local unitary transformation, \(U_{6}\), has been found [54] that maps the ferromagnetic \(\Gamma\)-ladder to a model with _antiferromagnetic_ anisotropic Heisenberg couplings, which is known [41, 42] to be in the same phase as the isotropic AF Heisenberg ladder known to be in the RS-phase. Counterintuitively, the F\(\Gamma\)-phase is then simply related to the RS-phase through the \(U_{6}\), a phase that is usually associated with antiferromagnetic interactions, and the F\(\Gamma\)-phase is therefore often labelled RS\({}_{U_{6}}\). As to be expected, spin correlations in the F\(\Gamma\)-phase do not show long-range order, as shown in Fig. 4(a). The F\(\Gamma\)-phase is gapped, with a finite correlation length. Since the F\(\Gamma\)-phase is related to the RS-phase through the local unitary \(U_{6}\) transformation, the F\(\Gamma\)-phase is also a trivial SPT [48].
* A\(\Gamma\)-phase: This phase extends over only a small region \(\phi/\pi\in(1.983,0.025)\), and the pure AF \(\Gamma\) point at \(\phi\)=0, with \(J\)=0 and \(\Gamma\)=1, has previously been studied in detail [41, 42]. Spin correlations, shown in Fig. 3(a), show no long-range magnetic order but a characteristic period-3 variation along the leg of the ladder. The phase has a small gap and a sizeable correlation length, with \(\xi\sim 41a\) at the pure AF \(\Gamma\) point. The A\(\Gamma\)-phase is an SPT phase protected by \(TR\times\mathcal{R}_{b}\) symmetry, the product of time-reversal (\(TR\)) and \(\pi\) rotation around the \(b\)-axis (\(\mathcal{R}_{b}\)), and a string-order parameter has been found [43].
Above, we have briefly discussed the 5 magnetically ordered phases, AF, AF-Z, FM, FM-Z and FM-XY, along with the 3 previously known phases, RS, F\(\Gamma\) and A\(\Gamma\). We now turn to a discussion of the 3 remaining phases, \(\Upsilon\), \(\Omega\) and \(\delta\), all of which show no long-range magnetic order and can be considered as potential SPT phases.
## 7 Potential new SPT phases
### Correlation length
As is often the case, the most complex part of the phase diagram in Fig. 2 is the proliferation of phases around the antiferromagnetic \(\Gamma\)-point at \(\phi\)=0. We therefore study part of this phase diagram in more detail by explicitly calculating the correlation length. For translationally invariant matrix product states (MPS), obtained from iDMRG calculations, the transfer matrix can be defined. For normalized states, the largest eigenvalue of the transfer matrix must be unity, and the second-largest eigenvalue determines the correlation length through the relation \(\xi=-N_{c}/\ln(|\lambda_{2}|)\). Here, \(N_{c}\) is the number of sites in the unit cell used to define the transfer matrix. For quasi one-dimensional systems, the correlation length is related to the gap, \(\Delta\)
Figure 6: The correlation length, \(\xi\), versus \(\phi/\pi\) for \(\phi/\pi\in[-0.25,0.1]\) as obtained from the transfer matrix with bond dimension \(D=120,200,300\). The inset shows a close up of the \(\delta\)-AF transition. The dotted lines correspond to the transitions listed in Table 1.
through the relation \(\xi=v/\Delta\)[82], with \(v\) a characteristic velocity, expected to be \({\cal O}(1)\). If the MPS is obtained with a bond dimension \(D\), the transfer matrix is a \(D^{2}\times D^{2}\) matrix, hindering calculations of \(\xi\) at very large bond dimension. However, a significant advantage is that an estimate of the correlation length can be obtained without explicit calculations of correlation functions.
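As an illustration of how \(\xi\) is obtained in practice, the following minimal sketch (Python, using SciPy's sparse eigensolver; the transfer matrix `T` and the unit-cell size `n_cell` are assumed to be available from the iDMRG tensors) extracts the two leading eigenvalues and returns \(\xi=-N_{c}/\ln|\lambda_{2}|\).

```python
import numpy as np
from scipy.sparse.linalg import eigs

def correlation_length(T, n_cell):
    """Correlation length xi = -N_c / ln|lambda_2| from a D^2 x D^2 transfer matrix T."""
    # Two largest-magnitude eigenvalues; for a normalized MPS the leading one is 1.
    vals = eigs(T, k=2, which='LM', return_eigenvectors=False)
    lam1, lam2 = sorted(np.abs(vals), reverse=True)
    assert np.isclose(lam1, 1.0, atol=1e-6), "transfer matrix should be normalized"
    return -n_cell / np.log(lam2)
```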
In Fig. 6 we show results for the correlation length in the region \(\phi/\pi\in[-0.25,0.1]\) from iDMRG calculations with a bond dimension of \(D\)=120, 200 and 300. We first note that \(\xi\) shows a divergence at all previously noted quantum critical points. Secondly, all phases shown, FM-Z, \(\Upsilon\), \(\Omega\), FM-XY, A\(\Gamma\), \(\delta\) and AF, have finite correlation lengths corresponding to a gapped phase. At the mid-points of the potential new SPT phases, we find approximately \(\xi_{\Upsilon}\sim 21a\) for the \(\Upsilon\)-phase, \(\xi_{\Omega}\sim 30a\) for the \(\Omega\)-phase and \(\xi_{\delta}\sim 57a\) for the \(\delta\)-phase, with \(a\) the lattice spacing. Note that the spin correlation functions in Figs. 3, 4 and 8 are shown along a single leg of the ladder, but versus the site index \(n\) from Fig. 1. As a function of \(n\), they should therefore decay on a length scale that is given by the correlation length in Fig. 6, obtained from the transfer matrix.
### Spin gap
To confirm the presence of a spin gap in the \(\Upsilon\), \(\Omega\) and \(\delta\) phases, we have explicitly evaluated the gap using exact diagonalization on small systems, and finite-size DMRG calculations with periodic boundary conditions (PBC) on somewhat larger systems. Our results are shown in Fig. 7. The dependence on the system size \(N\) is not smooth, as one might have expected from
Figure 7: Energy gaps of the \(\delta\)-phase at \(\phi\)=0.064\(\pi\), the \(\Upsilon\)-phase at \(\phi\)=1.799\(\pi\) and the \(\Omega\)-phase at \(\phi\)=1.829\(\pi\) between the ground state and the first excited state, obtained through finite DMRG with a maximal bond dimension \(D\)=1200 and periodic boundary conditions. The system sizes shown range from \(N\)=12 to \(N\)=48 sites in increments of 4 sites.
the high degree of frustration present in the systems. However, it seems clear that the results for the \(\Upsilon\)- and \(\Omega\)-phases will converge to a small finite value in the thermodynamic limit, with the gap in the \(\Upsilon\)-phase slightly larger than in the \(\Omega\)-phase. This is consistent with our results for \(\xi\) that indicate a smaller \(\xi\) in the \(\Upsilon\)-phase and therefore likely also a larger gap, when compared to the \(\Omega\)-phase, if the velocities are assumed the same. The results for the \(\delta\)-phase are more ambiguous, and it seems possible that the gap will tend to zero as \(N\rightarrow\infty\). However, our results for \(\xi\) in the \(\delta\)-phase close to the \(\delta\)-AF transition are very stable and only show a small dependence on the bond dimension \(D\). At \(\phi\)=0.077\(\pi\) we have the previously quoted value of \(\xi\sim 57a\) obtained with \(D\)=300, \(\xi\sim 53a\) (\(D\)=200) and \(\xi\sim 48a\) (\(D\)=120). Although these results indicate a sizable correlation length of \(\xi\sim 62a\) as \(D\rightarrow\infty\), the calculations are very stable, excluding the possibility of a correlation length diverging with \(D\) and lending strong support to the presence of a small but finite gap in the \(\delta\)-phase. It would be interesting to explore the alternative scenario of a gapless \(\delta\) phase by studying the spin stiffness [83] in this phase.
Figure 8: Spin correlation functions, \(\langle S_{1}^{\alpha}S_{1+n}^{\alpha}\rangle\) of the potential SPT phases versus \(n\), the site index from Fig. 1. Results are for the (a) \(\delta\)-phase at \(\phi\)=0.064\(\pi\), (b) the \(\Upsilon\)-phase at \(\phi\)=1.799\(\pi\) and (c) the \(\Omega\)-phase at \(\phi\)=1.829\(\pi\) as obtained from iDMRG. Results are shown for correlations along the first leg of the ladder, starting at \(n=3\) and ending at \(n=199\).
### Spin correlation functions
The spin correlation functions for the three phases, \(\Upsilon\), \(\Omega\) and \(\delta\) are shown in Fig. 8 as obtained from iDMRG calculations. While the \(\Upsilon\) and \(\Omega\)-phases show largely ferromagnetic correlations, the \(\delta\)-phase correlations are intermittently negative, showing a more antiferromagnetic nature. However, in all 3 phases, the spin correlation functions quickly approach zero. No long range magnetic order is observed. As can be seen in Fig. 5 there are no chiral correlations in the \(\Upsilon\) and \(\Omega\) phases, and we have verified that the same is the case for the \(\delta\) phase. The three phases therefore appear to have no discernible order, consistent with the phases being gapped SPT phases.
### Projective symmetry analysis of time reversal
While the usual Landau classification of phases does not distinguish between SPT phases, it is possible to develop a classification of such phases based on a projective symmetry analysis [84, 46, 85, 47, 86, 87]. This classification takes its starting point in the MPS form of the wave functions for gapped SPT phases, where a crucial degeneracy of the entire entanglement spectrum was noted [45, 78].
To proceed, one writes the MPS wavefunction in its canonical form [88, 89, 90, 91]:
\[|\Psi\rangle=\sum_{j_{1},\ldots,j_{N}}\Gamma^{[1]}_{j_{1}}\Lambda^{[2]}\Gamma^ {[2]}_{j_{2}}\ldots\Lambda^{[N]}\Gamma^{[N]}_{j_{N}}|j_{1},\ldots,j_{N}\rangle, \tag{8}\]
where the \(\Gamma^{[n]}_{j_{n}}\) are complex matrices and the \(\Lambda^{[n]}\) are real, positive, square diagonal matrices. If
Figure 9: The time-reversal phase factor \(|\lambda|\mathcal{O}_{\rm TR}\) versus \(\phi/\pi\) as obtained from iDMRG, with \(|\lambda|\) the leading eigenvalue of the generalized transfer matrix (10). The red circles correspond to cluster A from Fig. 1(a) and the blue triangles to cluster B from Fig. 1(b).
we consider infinite systems with translational symmetry from the perspective of iDMRG, the set of matrices on any unit cell becomes the same, \(\Gamma_{j}^{[n]}\)=\(\Gamma_{j}\), \(\Lambda^{[n]}\)=\(\Lambda\) for all \(n\), although they may vary within the unit cell. We now consider a site symmetry operation \(g\). In the spin basis this symmetry operation will be represented by a unitary matrix, \(\Sigma_{jj^{\prime}}(g)\). One can then establish [92, 45] that the \(\Gamma_{j}\) matrices of bond dimension \(D\) must transform as [45, 79]:
\[\sum_{j^{\prime}}\Sigma_{jj^{\prime}}(g)\Gamma_{j^{\prime}}=e^{i\theta}U^{ \dagger}(g)\Gamma_{j}U(g), \tag{9}\]
Here, \(e^{i\theta}\) is a phase factor, and the unitary matrices \(U(g)\) commute with the \(\Lambda\) matrices and form a \(D\)-dimensional projective representation of the symmetry group of the wave-function. Exploiting the full machinery of the MPS formulation, one can show that the \(U(g)\) matrices can be determined from the unique eigenvector of the generalized transfer matrix with eigenvalue \(|\lambda|\)=1 [45, 79]. The generalized transfer matrix is here defined as:
\[T^{\Sigma}_{\alpha\alpha^{\prime};\beta\beta^{\prime}}=\sum_{j}\left(\sum_{j^ {\prime}}\Sigma_{jj^{\prime}}\Gamma_{j^{\prime},\alpha\beta}\right)(\Gamma_{ j,\alpha^{\prime}\beta^{\prime}})^{*}\Lambda_{\beta}\Lambda_{\beta^{\prime}}, \tag{10}\]
and it is therefore possible to determine the \(U(g)\) matrices numerically once the ground-state wavefunction has been determined in a translationally invariant MPS form. The projective representation is reflected in the fact that if \(\Sigma(g)\Sigma(h)\)=\(\Sigma(gh)\), then
\[U(g)U(h)=e^{i\phi(g,h)}U(gh), \tag{11}\]
where the phases \(\phi(g,h)\) are characteristic of the topological phase.
For the \(J\Gamma\)-ladder there are few site symmetries and the ladder does not satisfy \(D_{2}\) symmetry, nor is it symmetric with respect to interchanging the legs, \(\sigma\). We therefore focus only on time-reversal \((TR)\), defined by \(\Gamma_{j}\rightarrow\sum_{j^{\prime}}\left[e^{i\pi S^{y}}\right]_{jj^{\prime} }\Gamma_{j^{\prime}}^{*}\), with \(\star\) denoting complex conjugation. In this case, it can be established that [45]\(U_{\rm TR}U_{\rm TR}^{\star}\)=\(e^{i\phi(TR,TR)}\mathbb{1}\) from which it follows that \(\phi(TR,TR)\)=0 or \(\pi\). If \(\phi(TR,TR)\)=\(\pi\) we see that \(U_{\rm TR}\) is an antisymmetric matrix and one can then show [45] that this is only possible if all eigenvalues of the entanglement spectrum have even multiplicity, thereby linking the non-trivial value of \(e^{i\phi(TR,TR)}\)=-1 to the degeneracy of the entanglement spectrum. Furthermore, if the largest eigenvalue of the generalized transfer matrix is smaller than one, \(|\lambda|\)\(<\)\(1\), then time-reversal is not a good symmetry of the MPS representing the phase.
For time reversal, the phase factor \(e^{i\phi(TR,TR)}\) can be extracted by defining [79]:
\[\mathcal{O}_{\rm TR}\equiv\frac{1}{D}\Tr\left(U_{\rm TR}U_{\rm TR}^{\star} \right), \tag{12}\]
with the \(D\times D\) matrices \(U_{\rm TR}\) extracted numerically from the generalized transfer matrix (10). For instance, for the \(S\)=1 spin chain in the Haldane phase, one finds \(\mathcal{O}_{\rm TR}\)=\(-1\) [45, 78]. In Fig. 9 we show results for \(|\lambda|\mathcal{O}_{\rm TR}\) versus \(\phi/\pi\) for the two different clusters A and B from Fig. 1. We multiply by \(|\lambda|\), the leading eigenvalue of the generalized transfer matrix, so that one may immediately see when a phase does not respect the TR symmetry, which should be the case for the magnetically ordered phases. This is clearly the case for the FM-Z, FM-XY and AF phases in Fig. 9, where \(|\lambda|\mathcal{O}_{\rm TR}\) quickly deviates from \(\pm\)1. In section 4.1 we discussed the degeneracy of the ES in the different phases for both cluster A and B. As is clear from
Fig. 9, the non-trivial phase factor \(\mathcal{O}_{\rm TR}{=}-1\) follows the ES degeneracy and jumps between cluster A and B precisely at the quantum critical points. A summary of the results for \(\mathcal{O}_{\rm TR}^{A,B}\) for the different phases is included in Table 2. Fig. 9 shows that if the corresponding cluster is selected, the \(\Upsilon\), \(\Omega\), and \(\delta\) phases all transform non-trivially under TR, as would a non-trivial SPT phase. However, we also note that the RS-phase has \(\mathcal{O}_{\rm TR}^{A}{=}1\) and \(\mathcal{O}_{\rm TR}^{B}{=}-1\), and this phase is known to be a trivial SPT phase [48]. For a definite classification of the \(\Upsilon\), \(\Omega\), and \(\delta\) phases as non-trivial SPT phases, a further analysis is therefore needed.
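For completeness, we sketch the numerical extraction of \(\mathcal{O}_{\rm TR}\) below (Python/NumPy). The sketch assumes a one-site translationally invariant MPS with tensors `Gamma` of shape (d, D, D) and Schmidt values `Lam`; for the ladder, the sites of a unit cell first have to be blocked into one effective site, and the function and variable names are ours rather than part of any standard package.

```python
import numpy as np
from scipy.linalg import expm

def time_reversal_phase(Gamma, Lam, Sy):
    """Sketch of Eqs. (10)-(12): returns (|lambda|, O_TR) for a uniform MPS with
    tensors Gamma[j, a, b] (j = physical index, bond dimension D) and Schmidt values Lam."""
    d, D, _ = Gamma.shape
    Sigma = expm(1j * np.pi * Sy)                        # spin-basis matrix of exp(i*pi*S^y)
    # Time reversal acts antiunitarily: Gamma_j -> sum_j' Sigma_{jj'} conj(Gamma_j').
    Gamma_tr = np.einsum('jk,kab->jab', Sigma, np.conj(Gamma))
    # Generalized transfer matrix of Eq. (10); rows (alpha, alpha'), columns (beta, beta').
    T = np.einsum('jab,jcd,b,d->acbd', Gamma_tr, np.conj(Gamma), Lam, Lam).reshape(D * D, D * D)
    w, v = np.linalg.eig(T)
    k = np.argmax(np.abs(w))
    U = v[:, k].reshape(D, D) * np.sqrt(D)               # unitary up to a phase if TR is a symmetry
    return np.abs(w[k]), float(np.real(np.trace(U @ np.conj(U)) / D))   # Eq. (12)
```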
For a more complete picture, it is therefore interesting to study the non-symmorphic symmetry, \(\sigma\times{\rm tr}(1)\) where \(\sigma\) is the aforementioned operator that interchanges the legs of the ladder while \({\rm tr}(1)\) is a translation by 1 lattice spacing in the direction _along_ the leg of the ladder. This is a symmetry of the Hamiltonian, Eq. (1), but, as far as we can tell, not a symmetry of any of the \(\Upsilon\), \(\Omega\), and \(\delta\) phases. Still, these phases could be protected by other symmetries that we have not been able to analyze.
### Uniform field as an active operator
Each of the three phases, \(\Upsilon\), \(\Omega\), and \(\delta\), shows a 4-fold degenerate ground-state. It is of considerable interest to determine what perturbations will split the degeneracy between these four states, the so-called active operators [47, 48, 85, 86]. The \(J\Gamma\)-ladder does not have the usual site symmetries associated with 180\({}^{\circ}\) rotation about the \(x(y,z)\) axis. However, it does possess the previously mentioned \(\sigma\times{\rm tr}(1)\) symmetry. To study the active operators, we therefore first consider the behavior of the operator \(S_{\alpha}^{T}{=}\sum_{i}S_{i}^{\alpha}\) within the manifold of the 4 ground-states that we label \(|1\rangle,|2\rangle,|3\rangle\) and \(|4\rangle\). We note that these operators do not break the \(\sigma\times{\rm tr}(1)\) symmetry. We must have \(\langle i|S_{\alpha}^{T}|i\rangle\)=0 \(\forall i\), since the \(\Upsilon\), \(\Omega\), and \(\delta\) phases are not magnetically ordered. However, \([S_{\alpha}^{T},H]\neq 0\), so we can diagonalize the matrix \(\langle i|S_{\alpha}^{T}|j\rangle\) within this manifold and study its eigenvalues, which we denote \(s_{\alpha}\). A non-zero \(s_{\alpha}\) indicates that the \(S_{\alpha}^{T}\) operator is active, splitting the states. In the present case, with \(\alpha=x,y,z\), all 4 eigenvalues are sometimes non-zero, which is difficult to interpret. However, given the underlying honeycomb lattice, it is natural to instead study the eigenvalues of \(S_{\alpha}^{T}\) with \(\alpha=a,b,c\), the axes of the honeycomb lattice. Here, \(a\) is a unit vector in the \([11\bar{2}]\) direction, \(b\) in the \([1\bar{1}0]\) direction and \(c\) in the \([111]\) direction. In this case, the results are of the much simpler form \((s_{\alpha},-s_{\alpha},0,0)\), closely resembling what one finds for the \(S\)=1 spin chain in the Haldane phase, where the 4 ground-states correspond to two free \(S\)=1/2 excitations, one at each end, yielding \((1,-1,0,0)\) for \(S_{x,y,z}^{T}\). Results for \(s_{\alpha}\) with \(\alpha=a,b,c\) versus system size, \(N\), for the \(\Upsilon\), \(\Omega\), and \(\delta\) phases are shown in Fig. 10, in each case with cluster A or B from Fig. 1 yielding the 4 degenerate ground-states. For finite systems, the 4 states are not completely degenerate but split by a small amount, decreasing with \(N\). Some variation with \(N\) is therefore to be expected. In addition, given the results from Fig. 6, showing a large correlation length in all three phases, it is natural to expect that rather large system sizes are needed to see a clear separation of any states localized at the end of the open segments. From studies of the edge excitations in the \(S\)=1 chains it is known that these excitations fall off as \(\exp(-x/\xi)\) from the end of the chain, with \(\xi\) the bulk correlation length [93], extending far into the chain as the borders of the Haldane phase
are approached [94]. The clearest results are obtained for the \(\Omega\) phase, where the results in Fig. 10(c) show that there is no response to a field in the \(a\) and \(c\) directions. Furthermore, \(s_{b}\) seems to stabilize around a value \(s_{b}\sim 3-4\), consistent with well-defined edge states. This is the same behavior observed in the A\(\Gamma\) phase which was interpreted as a SPT phase protected by \(TR\times\mathcal{R}_{b}\) symmetry, the product of time-reversal (\(TR\)) and \(\pi\) rotation around the \(b\)-axis (\(\mathcal{R}_{b}\)) [43].
The results for the \(\delta\) phase, shown in Fig. 10(a) are more difficult to interpret. With the limited size available, it seems possible that all \(s_{\alpha}\) could attain a finite small value as \(N\rightarrow\infty\), or \(s_{a}\) and \(s_{c}\) could reach zero with \(s_{b}\) finite, or all could go to zero. The \(\delta\) phase has the largest correlation length of the three phases, and values of \(N\) beyond what we have been able to reach are needed to resolve this.
For the \(\Upsilon\) phase, shown in Fig. 10(b), \(s_{c}\) quickly reaches a small finite value \(s_{c}\sim 0.5\),
Figure 10: Leading eigenvalue \(s_{\alpha}\) of the total magnetization \(S_{\alpha}^{T}\) along axis \(\alpha\), with \(\alpha=a,b,c\), versus cluster size \(N\). (a) \(\delta\)-phase at \(\phi{=}0.064\pi\) with cluster B. (b) \(\Upsilon\)-phase at \(\phi{=}1.799\pi\) with cluster B. (c) \(\Omega\)-phase at \(\phi{=}1.829\pi\) with cluster A. For all sizes up to \(N=28\), \(s_{\alpha}\) is obtained with exact diagonalization (in blue, orange, and green), while the remaining larger sizes are obtained with finite DMRG (red, purple, and brown).
consistent with \(S_{c}^{T}\) being an active operator. However, surprisingly, \(s_{a}\) and \(s_{b}\) increase with \(N\) out to the largest value of \(N\). This is not consistent with \(S_{a}^{T}\) and \(S_{b}^{T}\) being active operators. We now turn to a brief description of some specific results for the three potential SPT phases.
### \(\Omega\) phase
The most promising SPT candidate is the \(\Omega\) phase, where we have shown that \(S_{b}^{T}\) is an active operator. It is then interesting to explicitly demonstrate the appearance of the edge states by applying a small field term of the form \(-h_{b}S_{b}^{T}\) to the ladder. However, since the states are not eigenstates of \(S_{b}^{T}\), this coupling is not simply a Zeeman term, although for small enough \(h_{b}\) the change in energy should be linear in \(h_{b}\). Hence, such a linear regime has to be located, and the field carefully applied within the linear regime. We select a field strength of \(h_{b}=10^{-5}\), small enough that for \(N\)=200 the change in energy is significantly smaller than the gap in the system. Yet, this field is large enough that the very small finite splitting of the four states is irrelevant. The resulting edge states are shown in Fig. 11 as obtained from finite DMRG calculations with OBC using cluster A from Fig. 1. Here \(n\) corresponds to the site index, with odd \(n\) for the lower leg of the ladder and even \(n\) for the upper. Evidently, \(\langle S_{n}^{b}\rangle\) is the same for both legs. Furthermore, the peak in \(\langle S_{n}^{b}\rangle\) is not at sites 1, 2 but instead occurs for sites \(n\)=5,6. The decay of the amplitude of \(\langle S_{n}^{b}\rangle\) is consistent with the previous estimate of the bulk correlation length of \(\xi_{\Omega}\sim\)30\(a\), and we note that with the small field applied we find \(\langle S_{b}^{T}\rangle\sim\)3, slightly below the value of 3-4 estimated from the results in Fig. 10 from much
Figure 11: Magnetization \(\langle S_{n}^{b}\rangle\) from finite DMRG calculations along the \(b=[1\bar{1}0]\) direction with a uniform field of \(h_{b}=10^{-5}\) applied in the \(b\) direction at every site. Results are shown for a \(N=200\) ladder in the \(\Omega\) phase with \(\phi\)=\(1.829\pi\). The magnetization on each leg is almost identical.
smaller systems with \(h_{b}\)=0.
### \(\Upsilon\) phase
In a manner similar to the \(\Omega\) phase, we can study the edge-states appearing in the \(\Upsilon\) phase when the four degenerate ground-states are split by a small field \(h_{c}\), along the \(c\) direction, \([111]\). We use a field strength of \(|h_{c}|\)=3\(\times\)10\({}^{-4}\) in the linear regime of the field. The results are shown in Fig. 12 as obtained from finite DMRG calculations with OBC using cluster B from Fig. 1. Edge states at either end of the ladder are clearly visible. In contrast to the results for the \(\Omega\) phase, the two legs of the ladder do not show identical behavior. In fact, due to the shape of cluster B, \(\langle S_{n}^{c}\rangle\) on the upper leg at the left half of the ladder is identical to the results on the _lower_ leg on the right half of the ladder. As expected, the results in Fig. 12 show \(S_{c}^{T}\sim 0.52\), consistent with our finding of \(s_{c}\sim 0.5\) in the \(\Upsilon\) phase.
### \(\delta\) phase
For the \(\delta\) phase, we have not been able to obtain a clear picture of any possible edge states. One reason for this is likely the very large correlation length, in excess of \(\xi_{\delta}\sim\)57\(a\) throughout the phase. However, another important effect is the appearance of pronounced incommensurate correlations, as we shall now discuss. The first thing we note is that from the results presented in Fig. 2 and Fig. 6, it is clear that the correlation length diverges and the gap goes to zero at either end of the \(\delta\) phase. The \(\delta\) phase is then a well
Figure 12: Magnetization \(\langle S_{n}^{c}\rangle\) from finite DMRG calculations along the \(c=[111]\) direction with a uniform field of \(|h_{c}|\)=3\(\times\)10\({}^{-4}\) applied in the \([111]\) direction at every site. Results are shown for a \(N=202\) ladder on cluster B with OBC, in the \(\Upsilon\) phase with \(\phi\)=1.799\(\pi\).
defined phase and not simply a part of either the A\(\Gamma\) or AF phases marked by the onset of incommensurate correlations. We can analyze the correlations by Fourier transforming the \(\langle S_{i}^{z}S_{i+n}^{z}\rangle\) correlation functions along the first leg of the ladder. The resulting structure factors \(S^{zz}(k)\) are shown in Fig. 13 for values of \(\phi/\pi\) starting in the A\(\Gamma\) phase and ending in the AF phase. In the A\(\Gamma\) phase at \(\phi\)=0 the correlations along a leg have a simple periodicity of 3, corresponding to a peak in the structure factor at \(k\)=2/3\(\pi\). On the other hand, in the AF phase, the peak in \(S^{zz}(k)\) must be at \(k\)=\(\pi\). However, inside the \(\delta\) phase, the peak in \(S^{zz}(k)\) moves continuously from \(k\)=2/3\(\pi\) to \(k\)=\(\pi\). Close to the \(\delta\)-AF transition we show in the inset of Fig. 13 higher-precision results for the behavior of \(S^{zz}(k)\). To within our numerical precision, it appears that the peak position of \(S^{zz}(k)\) does not jump at the quantum critical points, but indeed moves continuously between the two limits.
Figure 13: Structure factor \(S^{zz}(k)\) from the \(\langle S_{i}^{z}S_{i+r}^{z}\rangle\) correlation functions of the A\(\Gamma\), \(\delta\) and AF phases obtained from iDMRG along the first leg of the ladder, with \(r\) measured along the leg. The red color indicates where \(S^{zz}(k)\) is in the A\(\Gamma\) phase, transitioning to the \(\delta\) phase in blue, and ending in the AF phase in green. \(S^{zz}(k)\) at \(\phi=0\) is pointed out as having two peaks, one at \(k=0\) and one at \(k\approx 2/3\). The last point in the \(\delta\) phase in this sweep is also pointed out at \(\phi=0.077\pi\), before the transition to the AF phase occurs. The inset is a higher-precision calculation of \(S^{zz}(k)\) near the \(\delta\)-AF transition. Starting at \(\phi=0.077314\pi\) in the \(\delta\) phase in blue, \(\phi\) is increased to a maximum value of \(\phi=0.077716\pi\) in green in the AF phase. The last value in the \(\delta\) phase occurs at \(\phi=0.077675\pi\) while the first value in the AF phase occurs at \(\phi=0.077695\pi\), both centered around \(k=1\).
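The structure factors in Fig. 13 follow from a straightforward (cosine) Fourier transform of the real-space correlations. A minimal sketch, with a toy period-3 correlation function standing in for the iDMRG data, is:

```python
import numpy as np

def structure_factor(czz, ks):
    """S^zz(k) ~ sum_r cos(k r) czz[r] for r = 1..R, using czz(-r) = czz(r);
    the constant r = 0 term only shifts S(k) and is dropped here."""
    r = np.arange(1, len(czz) + 1)
    return np.array([czz @ np.cos(k * r) for k in ks])

# Toy input: period-3 oscillation decaying on a scale of 40 sites (stand-in for iDMRG data).
r = np.arange(1, 200)
czz = 0.25 * np.cos(2 * np.pi * r / 3) * np.exp(-r / 40.0)
ks = np.linspace(0, np.pi, 400)
Szz = structure_factor(czz, ks)
print(ks[np.argmax(Szz)] / np.pi)   # ~2/3, i.e., the peak sits near k = 2*pi/3
```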
## 8 Conclusion
Stranger things have happened, but the observation of eleven well-defined phases in the zero-field phase diagram of the \(J\Gamma\)-ladder is remarkable. The proliferation of phases is due to the presence of the \(\Gamma\) interaction term, which lowers the symmetry of the model, allowing for a finely tuned competition between the various phases. It would be of considerable interest to identify low-dimensional materials representative of this model. Presently, we are not aware of any clear candidates. However, the class of Kitaev materials is rapidly expanding, and it is thus plausible that materials with dominant antiferromagnetic Heisenberg interactions and subdominant \(\Gamma\)-interactions can be found.
Among the eleven phases we have identified three new phases, the \(\Upsilon\), \(\Omega\) and \(\delta\) phases, which do not show signs of any ordinary long-range magnetic order and could potentially be SPT phases. We have also not found any indication of valence bond ordering. However, we cannot rigorously rule out that the states can be reduced to trivial product states on a large enough length scale. Such a length scale would, however, have to be sizable, and this scenario seems unlikely. Among the three phases, the \(\Omega\) phase appears as the most likely SPT phase, and clear edge states are observed when a magnetic field is applied along the \([1\bar{1}0]\) direction. Similarly, for the \(\Upsilon\) phase, the application of a field along the \([111]\) direction induces clear edge states.
In future work, it would be fascinating to investigate the phase diagram of the \(J\Gamma\) ladder in the presence of an applied field, which we expect to show an abundance of new and intriguing phases.
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through Discovery Grant No. RGPIN-2017-05759. This research was enabled in part by support provided by SHARCNET (sharcnet.ca) and the Digital Research Alliance of Canada (alliancecan.ca). Part of the numerical calculations were performed using the ITensor library [95].
|
2309.05120 | Routing and charging game in ride-hailing service with electric vehicles | This paper studies the routing and charging behaviors of electric vehicles in
a competitive ride-hailing market. When the vehicles are idle, they can choose
whether to continue cruising to search for passengers, or move to a charging
station to recharge. The behaviors of individual vehicles are then modeled by a
Markov decision process (MDP). The state transitions in the MDP model, however,
depend on the aggregate vehicle flows both in service zones and at charging
stations. Accordingly, the value function of each vehicle is determined by the
collective behaviors of all vehicles. With the assumption of the large
population, we formulate the collective routing and charging behaviors as a
mean-field Markov game. We characterize the equilibrium of such a game, prove
its existence, and numerically show that the competition among vehicles leads
to ``inefficient congestion" both in service zones and at charging stations. | Kenan Zhang, John Lygeros | 2023-09-10T19:40:32Z | http://arxiv.org/abs/2309.05120v1 | # Routing and charging game in ride-hailing service with electric vehicles
###### Abstract
This paper studies the routing and charging behaviors of electric vehicles in a competitive ride-hailing market. When the vehicles are idle, they can choose whether to continue cruising to search for passengers, or move to a charging station to recharge. The behaviors of individual vehicles are then modeled by a Markov decision process (MDP). The state transitions in the MDP model, however, depend on the aggregate vehicle flows both in service zones and at charging stations. Accordingly, the value function of each vehicle is determined by the collective behaviors of all vehicles. With the assumption of a large population, we formulate the collective routing and charging behaviors as a mean-field Markov game. We characterize the equilibrium of such a game, prove its existence, and numerically show that the competition among vehicles leads to "inefficient congestion" both in service zones and at charging stations.
## I Introduction
The past decade has witnessed rapid growth in the electric vehicle (EV) market. In 2021, global EV sales increased by 109%, more than doubling from 2020, and maintained a considerable growth rate of 55% in 2022 [1]. Governments and local authorities play a critical role in promoting the adoption of EVs. In 2021, the US announced its target of a 50% share of EVs in new car sales by 2030, along with a funding package of $7.5 billion for charging infrastructure [2]. Early this year, the EU gave its final approval to end the sale of fossil fuel vehicles by 2035 [3].
Envisioning the wide adoption of EVs, recent research has been devoted to modeling their coupled charging and routing behaviors in the transportation system [4]. The early studies mostly focus on the planning of charging stations [5, 6]. On the other hand, studies at the operational level often take the perspective of grid operators. To induce a desirable EV charging pattern, the grid operator either directly optimizes the charging price [7], or indirectly designs the power generation plan, which then gives the charging price via the locational marginal pricing (LMP) mechanism [8, 9]. All these studies target private EVs and assume they must recharge at least once during their trips. However, the current battery capacity of most EV models is more than enough for a single trip within the city. A recent report also shows that most private EVs charge at home or at work [10]. In contrast, routing and charging are indeed important decisions for EVs in ride-hailing services (e.g., taxi and ride-sourcing). These vehicles usually travel much longer distances every day than private vehicles [11]. Moreover, ride-hailing vehicles are driven by travel demand and move across different regions in the city. Hence, they are more likely to charge at public charging stations. This routing and charging problem is expected to become more prevalent given the increasing popularity of shared mobility services [12] and the fact that major players (e.g., Uber and Lyft) are electrifying their fleets [13]. Research on this topic, however, is still rare in the literature, with most existing studies assuming that vehicle routing and charging are centrally controlled by the service operator [14, 15, 16]. Two exceptions are [17] and [18], both of which consider the case where EVs provide a transport service by ride-hailing and an energy service by vehicle-to-grid. Yet, both models are static and ignore charging behaviors.
In this paper, we extend our previous work [19] to study the electric ride-hailing vehicle routing problem (eRIVER) in a spatiotemporal market. Specifically, we assume that each vehicle makes routing and charging decisions to maximize its own profit over a single-day operation. We model each vehicle's behaviors by a Markov decision process (MDP) and carefully design the state transition functions to reflect the physical interactions among vehicles in each service zone and at each charging station. We then show that the value function of each vehicle depends on the collective behaviors of all vehicles. When the fleet size is sufficiently large, the impact of each vehicle on the aggregate vehicle flows is negligible. This motivates us to formulate the collective vehicle behaviors as a mean-field game. In the remainder of this paper, we first present the eRIVER model, define the equilibrium and discuss its existence, and then conduct numerical experiments to explore the equilibrium vehicle flows in a stylized network market.
## II The driver model
Consider a spatiotemporal ride-hailing market that is discretized into \(N\) zones containing \(L\) charging stations, and \(T\) time steps with equal length \(\Delta\). For simplicity, we assume all charging stations are homogeneous with capacity \(C\) and charging efficiency \(e\). Let \(\mathcal{N}\) and \(\mathcal{L}\) be the sets of service zones and charging stations, respectively. The travel time from zone \(i\in\mathcal{N}\) to zone \(j\in\mathcal{N}\) (station \(l\in\mathcal{L}\)) is denoted by \(\tau_{ij}^{t}\) (\(\tau_{il}^{t}\)) and is measured in units of \(\Delta\). Similarly, the vehicle charging time is measured in units of \(\Delta\) as well. Consider a fleet of \(M\) self-interested and homogeneous electric vehicles operating in the market, each with battery capacity \(B\). To be consistent with other variables, the state of charge (SOC) is also discretized and measured in units of \(\Delta\) and the battery consumption rate is \(\xi\) per time step. We further assume vehicles do not leave the charging station until they are fully charged. Hence, a vehicle that starts charging
at SOC \(b=0,\ldots,B-1\) will leave the charging station after \(\hat{\tau}_{b}=\lceil(B-b)/e\rceil\) time steps.
While the fleet size \(M\) is finite, we consider it to be sufficiently large that the aggregate behaviors of vehicles can be represented by continuous flows. This is a common assumption in transportation research [20] and is closely related to mean-field games [21], the game-theoretical framework adopted in eRIVER. Specifically, two vehicle flows are defined as follows:
* \(y^{t}_{i,b}\): idle vehicles in zone \(i\) at time \(t\) with SOC \(b\).
* \(z^{t}_{l,b}\): vehicles arriving at station \(l\) at time \(t\) with SOC \(b\).
### _Matching in zones_
Let \(q^{t}_{i}\) be the number of passengers arriving in zone \(i\) at time \(t\). Assume passengers can only be matched with vehicles in the same zone, and they leave the ride-hailing market for an alternative travel mode after one period of wait. Meanwhile, each idle vehicle can only be matched with one passenger (i.e., no ride-pooling). Then, for each idle vehicle in the same zone, the probability of successfully picking up a passenger after one period of search is given by
\[m^{t}_{i}=f(q^{t}_{i},y^{t}_{i};\theta^{t}_{i}), \tag{1}\]
where \(y^{t}_{i}=\sum_{b>0}y^{t}_{i,b}\) and \(\theta^{t}_{i}\) denotes time- and location-specific parameters (e.g., road density, travel speed). In what follows, \(m^{t}_{i}\) is referred to as the _meeting probability_. To ensure the model is realistic, we assume that the function \(f\) increases with passenger demand (\(\partial f/\partial q\geq 0\)) and decreases with vehicle supply (\(\partial f/\partial y\leq 0\)).
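The exact matching function used in our experiments is given in Appendix A. Purely as an illustration of a function satisfying these monotonicity requirements (a hypothetical example, not the specification from the appendix), one may take an exponential matching function:

```python
import numpy as np

def meeting_probability(q, y, theta=1.0):
    """Illustrative meeting probability m = 1 - exp(-theta * q / y).
    Increasing in demand q, decreasing in idle supply y, and bounded in [0, 1]."""
    y = np.maximum(y, 1e-12)          # guard against an empty zone
    return 1.0 - np.exp(-theta * q / y)

# Example: 20 waiting passengers and 50 idle vehicles in the zone
print(meeting_probability(q=20, y=50, theta=1.0))   # ~0.33
```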
Accordingly, after one period of search, \(m^{t}_{i}\) of the idle vehicles in zone \(i\) successfully pick up a passenger. They will start delivering their trips from the next time step and become idle again after several time steps (depending on the trip duration). The other vehicles are going to make a new routing and charging decision in zone \(i\) at time \(t+1\).
### _Charging at stations_
When a vehicle arrives at a charging station, some vehicles could already be there, either charging or waiting to charge. Besides, there could be a group of other vehicles arriving at the same time. Hence, the waiting time until the vehicle starts charging, denoted by \(\omega\), is probabilistic, depending on both the vehicles arriving at time \(t\) and those arriving earlier, which are collectively represented by \(\mathbf{z}_{\leq t,l}=\{z^{t^{\prime}}_{l,b}\}_{t^{\prime}\leq t,b}\). The probability mass function of the waiting time \(\omega\) for a vehicle arriving at station \(l\) at time \(t\) is characterized as
\[w^{t}_{l}(\omega)=g(\mathbf{z}_{\leq t,l},\omega). \tag{2}\]
By definition, \(w^{t}_{l}(\omega)\in[0,1],\forall\omega\in\mathbb{N}\) and \(\sum_{\omega}w^{t}_{l}(\omega)=1\). Accordingly, a newly arrived vehicle with SOC \(b\) first waits for \(\omega\) time steps with probability \(w^{t}_{l}(\omega)\) and then charges for \(\hat{\tau}_{b}\) time steps before leaving with a full battery.
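Appendix B derives a closed-form expression for \(g\). The Monte-Carlo sketch below is only meant to illustrate the queueing logic behind \(w^{t}_{l}(\omega)\): it assumes FIFO service, a common charging duration for all vehicles, and a uniformly random position of the tagged vehicle within its arrival batch, all of which are simplifications of the actual model.

```python
import numpy as np
from collections import Counter

def waiting_time_pmf(arrivals, charge_steps, C, t_query, n_samples=2000, rng=None):
    """Empirical waiting-time distribution for a vehicle arriving at step t_query.
    arrivals[t] is the number of other vehicles arriving at step t; C is the number
    of chargers; charge_steps is the (common) charging duration in time steps."""
    rng = rng or np.random.default_rng(0)
    counts = Counter()
    for _ in range(n_samples):
        free_at = np.zeros(C)                   # time at which each charger becomes free
        wait = 0
        for t, a in enumerate(arrivals):
            n = a + (1 if t == t_query else 0)  # the tagged vehicle joins the batch at t_query
            order = rng.permutation(n)          # random position within the batch
            for pos in range(n):
                k = int(np.argmin(free_at))     # next charger to become available (FIFO)
                start = max(t, free_at[k])
                free_at[k] = start + charge_steps
                if t == t_query and order[pos] == 0:
                    wait = int(start - t)       # waiting time of the tagged vehicle
        counts[wait] += 1
    return {w: c / n_samples for w, c in sorted(counts.items())}

print(waiting_time_pmf(arrivals=[5, 8, 0, 0], charge_steps=2, C=4, t_query=1))
```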
### _MDP for individual vehicle_
Now, we are ready to specify the finite-horizon Markov decision process (MDP) for each vehicle.
#### Iii-B1 State \(s\in\mathcal{S}\)
Given by the time step \(t\), location \(k\in\mathcal{N}\cup\mathcal{L}\cup\{0\}\), and SOC \(b\). Here, location \(0\) denotes an offline state when the vehicle runs out of battery.
#### Iii-B2 Action \(a\in\mathcal{A}\)
The set of feasible actions depends on the vehicle's state. When the vehicle is in zone \(i\), it can choose either to continue cruising or move to a charging station. Hence, the action set is given by \(\mathcal{N}_{i}\cup\mathcal{L}\), where \(\mathcal{N}_{i}\) includes zone \(i\) itself and its neighbor zones. When the vehicle is about to leave station \(l\), it can choose one of the neighbor zones for cruising and thus the action belongs to the set of neighbor zones of station \(l\), denoted by \(\mathcal{N}_{l}\). Once the vehicle runs out of battery, it can do nothing but stay offline.
#### Iii-B3 State transition \(P:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\)
The probability of transitions between every pair of states under each action. The meeting probability \(m^{t}_{i}\) and the waiting time distribution \(w^{t}_{l}\) play a critical role here. For instance, a transition from state \(s=(t,i,b)\) to state \(s^{\prime}=(t+1,j,b-1)\) means the vehicle starting from zone \(i\) fails to find a passenger in zone \(j\) and thus the transition probability is given by \(P(s^{\prime}|s,a)=1-m^{t}_{j}\). If the same vehicle picks up a passenger traveling from zone \(j\) to zone \(k\) and the vehicle has sufficient battery to finish the trip, then \(s^{\prime}=(t+1+\tau_{jk},k,b-1-\tau_{jk})\) and \(P(s^{\prime}|s,a)=m^{t}_{j}\alpha^{t}_{jk}\), where \(\alpha^{t}_{jk}\) denote the fraction of passengers in zone \(j\) traveling to zone \(k\) at time \(t\). After making a charging decision, the vehicle first travels to the station and then waits for a while if there is a queue. If the waiting time is \(\omega\), the next state becomes \(s^{\prime}=(t+\tau_{il}+\omega+\hat{\tau}_{b-\tau_{il}},l,B)\). The corresponding transition probability is \(P(s^{\prime}|s,a)=w^{t+\tau_{il}}_{l}(\omega)\).
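To make the bookkeeping concrete, a partial sketch of how these transitions can be enumerated for a single cruising action is given below (Python; the data layout and function name are illustrative only). Corner cases that the text above does not spell out explicitly, such as running out of battery mid-trip or hitting the end of the horizon, are deliberately left to the caller and reported as leftover probability mass.

```python
def cruise_transitions(t, i, b, j, m, alpha, tau, fare):
    """Illustrative (incomplete) enumeration of transitions for the action
    'cruise to zone j' taken in state (t, i, b). m[t][j] is the meeting probability,
    alpha[t][j][k] the destination split, tau[j][k] the trip time, fare[j][k] the fare."""
    transitions = []   # (next_state, probability, reward)
    # Fail to meet a passenger in zone j after one period of search.
    transitions.append(((t + 1, j, b - 1), 1.0 - m[t][j], 0.0))
    # Meet a passenger in zone j travelling to zone k, battery permitting.
    for k, share in enumerate(alpha[t][j]):
        if b - 1 - tau[j][k] >= 0:
            transitions.append(((t + 1 + tau[j][k], k, b - 1 - tau[j][k]),
                                m[t][j] * share, fare[j][k]))
    leftover = 1.0 - sum(p for _, p, _ in transitions)
    return transitions, leftover
```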
Note that we restrict the action set of cruising to the current and neighbor zones because idle ride-hailing vehicles continuously cruise before picking up a passenger. Even if they have a clear search target, the path can be decomposed into a sequence of local cruising destinations. For the same reason, we do not specify the cruising time between zones. Besides, vehicles only need to make decisions when they are idle and when they have just finished charging, thanks to the fixed travel times and charging efficiency. Hence, the state vector does not include the vehicle operation status, and the state transitions are not synchronized.
#### Iii-B4 Reward \(r:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\)
The immediate reward associated with each state transition. Its value is non-zero in three cases: (i) picking up a passenger, (ii) starting to charge, (iii) going offline. The first case induces a positive reward \(p_{jk}\), defined as the trip fare between zones \(j\) and \(k\), while the other two lead to a negative reward. The reward in (ii) is \(-c_{l}\hat{\tau}_{b}\), where \(c_{l}\) is the charging price per unit of time. That in (iii) is an arbitrarily large penalty \(\kappa\) for running out of battery during the day.
#### Iii-B5 Discount factor \(\gamma\in(0,1]\)
expresses how much future rewards are taken into consideration at the current time step. In this study, \(\gamma\) is set to 1 because we study a single-day operation. The notation is thus omitted in the equations hereafter for simplicity.
The objective of each vehicle is to maximize its cumulative reward over time under a policy \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) and initial
state distribution \(\rho\), which is given by
\[V_{\rho}(\pi|\mathbf{y},\mathbf{z})=\] \[\mathbb{E}_{s_{0}\sim\rho}\left[\mathbb{E}_{a\sim\pi(\cdot|s),(s,a, s^{\prime})\sim P(\cdot|\mathbf{y},\mathbf{z})}\left[\left.\sum_{(s,a,s^{ \prime})}r(s,a,s^{\prime})\right|s_{0}\right]\right]. \tag{3}\]
The key difference of the value function (3) from a classic MDP is the conditioning on the aggregate vehicle flows \(\mathbf{y}=\{y_{i,b}^{t}\}_{t,i,b}\) and \(\mathbf{z}=\{z_{l,b}^{t}\}_{t,l,b}\). When these are fixed, one can solve for the optimal policy via dynamic programming. However, as will be shown in the next section, \(\mathbf{y}\) and \(\mathbf{z}\) are induced by the mean policy among the vehicles. Therefore, vehicles are not solving independent MDP problems but playing a mean-field Markov game.
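For fixed aggregate flows \((\mathbf{y},\mathbf{z})\), the best response is a standard finite-horizon dynamic program. A generic sketch is shown below (Python; `states_by_time`, `actions` and `transitions` are placeholder containers and callables rather than objects defined in this paper, and transitions are assumed to be truncated at the horizon), with the discount factor omitted since \(\gamma=1\).

```python
def backward_induction(states_by_time, actions, transitions, T):
    """Generic finite-horizon dynamic program. transitions(s, a) returns a list of
    (next_state, probability, reward) triples computed from the fixed flows (y, z);
    actions(s) is assumed non-empty for every non-terminal state."""
    V = {s: 0.0 for s in states_by_time[T]}          # terminal values
    policy = {}
    for t in range(T - 1, -1, -1):                   # sweep backward in time
        for s in states_by_time[t]:
            best_a, best_q = None, float('-inf')
            for a in actions(s):
                q = sum(p * (r + V[s_next]) for s_next, p, r in transitions(s, a))
                if q > best_q:
                    best_a, best_q = a, q
            V[s], policy[s] = best_q, best_a
    return V, policy
```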
### _Mean-field equilibrium of eRIVER_
Since the vehicles are homogeneous and the fleet size is sufficiently large, we may use the mean policy among all vehicles to represent their collective routing strategies. Let \(\delta(\pi)\) be the probability density function of policies among the vehicles and \(\Omega\) be the set of feasible policies, the mean policy is given by
\[\bar{\pi}=\int_{\pi\in\Omega}\pi\delta(\pi)\mathrm{d}\pi. \tag{4}\]
Note that vehicles do not necessarily share the same policy even though they are homogeneous. In what follows, we show how the vehicle flows \(\mathbf{y}\) and \(\mathbf{z}\) can be fully determined by \(\bar{\pi}\) and the initial vehicle distribution \(\rho\). To this end, we introduce \(x_{k,b}^{t},k\in\mathcal{N}\cup\mathcal{L}\) to denote the vehicle flow in each zone and at each station before making the next routing and charging decision. Accordingly, we have
\[y_{i}^{t}=\sum_{k\in\mathcal{N}_{i}}\sum_{b>0}x_{k,b}^{t}\bar{\pi}(a=i|s=(t,k,b))+\sum_{k\in\mathcal{L}_{i}}x_{k,B}^{t}\bar{\pi}(a=i|s=(t,k,B)), \tag{5}\]
\[z_{l}^{t}=\sum_{k\in\mathcal{N}}\sum_{b\geq\tau_{kl}}x_{k,b}^{t-\tau_{kl}}\bar{\pi}(a=l|s=(t-\tau_{kl},k,b)). \tag{6}\]
On the other hand, \(x_{k,b}^{t}\) can also be written as a function of \(\mathbf{y}\) and \(\mathbf{z}\). Specifically, for \(k\in\mathcal{N}\),
\[x_{k,b}^{t}=(1-m_{k}^{t-1})y_{k,b+1}^{t-1}+\sum_{t^{\prime}=1}^{t}\sum_{i\in \mathcal{N}(k,t,t^{\prime})}\alpha_{ik}^{t^{\prime}}m_{i}^{t^{\prime}}y_{i,b+ \tau_{ik}}^{t^{\prime}}, \tag{7}\]
where \(\mathcal{N}(k,t,t^{\prime})=\{i\in\mathcal{N}:t=t^{\prime}+1+\tau_{ik}\}\). Likewise for \(k\in\mathcal{L}\),
\[x_{k,B}^{t}=\sum_{b}\sum_{\omega}w_{k}^{t-\omega-\hat{\tau}_{b}}(\omega)z_{k, b}^{t-\omega-\hat{\tau}_{b}}. \tag{8}\]
Finally, the initial vehicle flows are determined by the initial state distribution, i.e., \(x_{k,b}^{0}=M\rho(s=(0,k,b))\).
Accordingly, we introduce a mapping \(\mu\) such that \((\mathbf{y},\mathbf{z})=\mu(\bar{\pi})\) and define the mean-field equilibrium as follows.
**Definition 1** (Mean-field equilibrium in eRIVER): _A mean policy \(\bar{\pi}^{*}\) is called a mean-field equilibrium (MFE) of eRIVER if it satisfies_
\[\bar{\pi}^{*}\in\arg\max_{\pi}V_{\rho}(\pi|\mu(\bar{\pi}^{*})). \tag{9}\]
Classically, MFE for a Markov game is usually defined on a tuple of stationary policy and state (or state-action) distribution. See, for example, [22, 23]. The notion of stationarity is required due to the infinite-horizon setting. Since eRIVER is modeled in a finite horizon, the mean-field distribution is determined by the mean policy. Besides, all information required for the routing problem is expressed by the aggregate flow \((\mathbf{y},\mathbf{z})\), which can also be seen as an integrated version of state-action distribution when normalized by the fleet size \(M\).
Under mild conditions, one can show that an MFE of eRIVER is guaranteed to exist.
**Proposition 1** (Existence of equilibrium in eRIVER): _If \(f\) and \(g\) are continuous in \((\mathbf{y},\mathbf{z})\), there exists at least one MFE for the eRIVER problem._
Note that (9) can be written as a fixed point \(\bar{\pi}^{*}\in\phi(\bar{\pi}^{*})\), where \(\phi\) is a set-valued function and can be decomposed as \(\phi=\psi\circ\mu\). Specifically, \(\psi\) maps from the aggregate vehicle flow to the set of optimal policies and \(\mu\) maps from a mean policy to the aggregate vehicle flows. As shown in [19], the existence of this fixed point can be proved by Kakutani's fixed point theorem [24, Theorem 8.6]. The only non-trivial condition is that \(\phi\) has a closed graph. In our setting (i.e., the feasible set of policies \(\Omega\) is a compact set in a Hausdorff space), this is equivalent to proving \(\phi\) is upper hemicontinuous. Following [19], we first show \(\mu\) is single-valued and continuous with \(\bar{\pi}\), and then prove \(\psi\) is hemicontinuous.
By (5) and (6), \(\mathbf{y}\) and \(\mathbf{z}\) are continuous in \(\mathbf{x}=\{x_{k,b}^{t}\}_{t,k,b}\) and \(\bar{\pi}\). In turn, (7) and (8) suggest that \(\mathbf{x}\) is continuous in \(\mathbf{y}\) and \(\mathbf{z}\) given that \(f\) and \(g\) are continuous functions of \((\mathbf{y},\mathbf{z})\). Therefore, \(\mu\) is continuous in \(\bar{\pi}\) and the mapping is single-valued. The remaining task is to show \(\psi\) is hemicontinuous. This is done by invoking Berge's maximum theorem [25, Theorem 3.5] with observations: (i) \(\Omega\) is independent of the aggregate vehicle flows \((\mathbf{y},\mathbf{z})\), and (ii) the value function (3) is continuous due to continuous state transition \(P(s^{\prime}|a,s)\). The latter also results from the continuity of \(f\) and \(g\).
We note that the assumption imposed in Proposition 1 often holds. First, \(f\) can be easily designed to be a continuous function of \(\mathbf{y}\) (see Appendix A). Establishing the continuity of waiting time distribution is more challenging, as it depends on a sequence of vehicle flows \(\mathbf{z}_{\leq t,l}\). In Appendix B, we derive a closed-form expression of \(g\) and show it is indeed continuous in \(\mathbf{z}\).
## III Numerical Analysis
### _Settings and solution algorithm_
We analyze the equilibria of eRIVER on a stylized network, shown in Figure 1. The default values of exogenous model parameters are reported in Table I.
The meeting probability and charging waiting time distribution are specified according to Appendices A and B. All vehicles are idle at the beginning of the study horizon and, in the first set of experiments, they are evenly distributed over service zones with a full battery, i.e., \(\rho(s_{0}=(0,i,B))=1/N,\ i\in\mathcal{N}\).
We apply the Frank-Wolfe algorithm [26] to compute the equilibrium. In each iteration, we first perform a forward propagation to load the vehicle flows using the current mean policy and then conduct a backward propagation to solve an optimal policy and update the mean policy with it. This iterative procedure is summarized in Algorithm 1.
```
0: Demand \(\{q_{i}^{t}\}\); parameters in Tab. I; gap threshold \(\delta\)
0: Equilibrium policy \(\bar{\pi}^{*}\)
0: Initiate random policy \(\bar{\pi}^{(0)}\).
1:for\(n=0,1,\dots,\)do
2: Forward propagation: Load vehicle flows by (5) and (6) using policy \(\bar{\pi}^{(n)}\).
3: Backward propagation: Solve a policy \(\hat{\pi}\) that maximizes (3) by dynamic programming.
4: Update policy \(\bar{\pi}^{(n+1)}=(1-\eta)\bar{\pi}^{(n)}+\eta\hat{\pi}\) with step size \(\eta=1/(n+1)\).
5: Compute gap \(g=||\bar{\pi}^{(n+1)}-\bar{\pi}^{(n)}||_{1}\).
6:if\(g<\delta\)then
7: break
8:endif
9:endfor
10:return\(\bar{\pi}^{*}=\bar{\pi}^{(n)}\)
```
**Algorithm 1** Solution algorithm for eRIVER
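For concreteness, the fixed-point iteration of Algorithm 1 can be sketched in a few lines of Python. The callables `forward_load` and `backward_dp` are placeholders for the flow-loading equations (5)-(6) and the dynamic-programming best response to (3); their names and signatures are assumptions of this illustration, not part of the original implementation.

```
import numpy as np

def solve_equilibrium(pi0, forward_load, backward_dp, delta=1e-4, max_iter=1000):
    """Fixed-point iteration of Algorithm 1 with a diminishing step size.

    pi0          : initial mean policy (array of action probabilities per state)
    forward_load : callable pi -> (y, z), loading aggregate vehicle flows as in (5)-(6)
    backward_dp  : callable (y, z) -> best-response policy maximizing (3) by dynamic programming
    """
    pi = np.asarray(pi0, dtype=float)
    for n in range(max_iter):
        y, z = forward_load(pi)              # forward propagation under the current mean policy
        pi_hat = backward_dp(y, z)           # backward propagation: optimal policy given the flows
        eta = 1.0 / (n + 1)                  # diminishing step size
        pi_new = (1.0 - eta) * pi + eta * pi_hat
        gap = np.abs(pi_new - pi).sum()      # L1 policy gap
        pi = pi_new
        if gap < delta:
            break
    return pi
```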
We compute the eRIVER equilibria for three demand profiles, where the origin-destination (OD) pattern is always assumed to be balanced, i.e., \(\alpha_{ij}^{t}=1/N,\forall i,j,t\).
* Uniform: invariant over time and space (\(q_{i}^{t}=20,\forall t,i\));
* Peak/offpeak: uniform over space with a temporal pattern shown in Figure 2;
* Central/peripheral: uniform over time but concentrated in the central zone (\(q_{i}^{t}=40\) for \(i=6\) and \(q_{i}^{t}=15\) otherwise).
Figure 3 illustrates the performance of Algorithm 1 in the case of _Peak/offpeak_ demand; results in other cases are similar. The policy gap reduces to a magnitude of 10\({}^{-4}\) within 500 iterations. Yet, due to the diminishing step size, the convergence is sublinear and the value gap (i.e., the normalized difference between the value function and the Q-values of actions with positive vehicle flows) stabilized around 10\({}^{-3}\).
### _"Congestion" in service zones and charging stations_
Figure 4 plots equilibrium vehicle flows in service zones and at charging stations, normalized by their respective maximum values in each case. The darker the color, the larger the vehicle flows. In all three cases, vehicles show a strong preference for the central zone even though it has the same demand as other zones in the cases of _Uniform_ and _Peak/offpeak_ demand. Besides, severe charging queues are observed at time \(t=4\) because most vehicles arrive at the charging stations at time \(t=3\) and \(t=4\).
Thanks to the simple network structure, results in Figure 4 are symmetric. Hence, in what follows, we only compare the vehicle flows between the central zone (\(i=6\)) and a peripheral zone (\(i=0\)), and between an inner station
\begin{table}
\begin{tabular}{|l|c|c||l|c|c|l|} \hline Notation & Unit & Value & Notation & Unit & Value & Condition \\ \hline \(N\) & & 7 & \(\tau_{ij}\) & \(\Delta\) & 1 & \(j\in\mathcal{N}_{i}\) \\ \(L\) & & 6 & & & 2 & \(j\notin\mathcal{N}_{i}\) \\ \(T\) & & 12 & \(\tau_{il}\) & \(\Delta\) & 1 & \(l\in\mathcal{L}_{i}\) \\ \(\Delta\) & hr & 0.25 & & & 2 & \(l\in\mathcal{L}_{N_{i}}\) \\ \(M\) & veh & 500 & & & 3 & otherwise \\ \(B\) & \(\Delta\) & 4 & \(p_{ij}\) & \(\$\) & 1 & \(\tau_{ij}=1\) \\ \(C\) & veh & 20 & & & 2 & \(\tau_{ij}=2\) \\ \(e\) & \(\Delta\) & 3 & \(c_{l}\) & \(\$\) & 0.5 & \(l\in\mathcal{L}\) \\ \(\xi\) & \(\Delta\) & 1 & \(\kappa\) & \(\$\) & -10 & \\ \hline \end{tabular} Note: \(\mathcal{L}_{i}\) denotes the set of charging stations at the boundary of zone \(i\), and \(\mathcal{L}_{N_{i}}\) denotes the set of charging stations at the boundary of neighbor zones of zone \(i\).
\end{table} TABLE I: Default values of exogenous variables.
Fig. 1: Stylized network with seven service zones \(\mathcal{N}=\{0,1,\dots,6\}\) and six charging stations \(\mathcal{L}=\{A,B,\dots,F\}\).
Fig. 3: Gap over iteration in case Peak/offpeak.
Fig. 2: Temporal demand pattern in case Peak/offpeak.
(\(l=\text{B}\)) and an outer station (\(l=\text{A}\)). Figure 5 illustrates the idle vehicles over time along with the demand in each zone represented by the grey dashed line. It can be seen that in the cases of _Uniform_ and _Peak/offpeak_ demand, a fraction of demand is lost between \(t=2\) and \(t=4\) because most vehicles are either delivering trips or charging. Yet, this issue is rather minor in _Central/peripheral_. Another observation is that, due to its popularity, the central zone
Fig. 4: Normalized vehicle flows in service zones and at charging stations for the three demand patterns. The number under each panel indicates its time step.
Fig. 5: Cruising vehicles in central and peripheral zones.
Fig. 6: Arriving charging vehicles at inner and outer stations.
Fig. 7: Arriving charging vehicles at inner and outer stations under different initial state distributions.
tends to be oversupplied in the second half of the study horizon, while the peripheral zones are undersupplied. This implies that the selfish behaviors of vehicles do not lead to efficient system performance, in line with well-known results in congestion games [27].
In Figure 6, we plot the arriving vehicles with the station capacity. It clearly illustrates the peak of vehicle arrivals at time \(t=3\) and \(t=4\), as well as a secondary peak at time \(t=7\) in _Uniform_ and _Peak/offpeak_. While the first peak is more significant at inner stations, the second one has a more profound impact on outer stations.
### _Impact of initial state distribution_
One possible cause of the severe charging queue at \(t=4\) is that all vehicles are initialized with a full battery and thus tend to run out of battery around the same time if they operate continuously from the very beginning. Hence, we conduct another experiment with random initial state distributions. Due to limited space, we only present the results of _Uniform_; the main findings in other cases are similar.
Figure 7 presents the arriving vehicle flows in two random samples of initial SOC and one random sample of initial locations. It can be seen that the congestion at the charging stations persists and is more sensitive to initial SOC compared to locations. In the two tested cases, the highest peak happens earlier at time \(t=1\), largely because there are more vehicles starting the day with a partial state of charge. Considering that the market has a much larger supply at the beginning (as all vehicles are idle at time \(t=0\)), these vehicles tend to charge at early time steps.
## IV Conclusions
This paper presents a game-theoretical model for the routing and charging behaviors of electric ride-hailing vehicles. Each vehicle's decisions are characterized by a Markov decision process (MDP). We show that the state transitions in this MDP depend on the aggregate vehicle flows, which are induced by the mean policy over all vehicles. Accordingly, the collective behaviors can be cast in a mean-field Markov game. We define the equilibrium, prove its existence, and investigate the equilibrium vehicle flows through numerical experiments. The results demonstrate the inefficiency of selfish routing and charging decisions, which causes significant congestion both in service zones and at charging stations. This indicates a necessity to design appropriate control and intervention schemes to optimize the coupled system of on-demand mobility service and electric vehicle charging. Moreover, vehicle travel times within and between zones have been assumed to be fixed, which may deviate from reality. Hence, besides "congestion" in the matching and charging processes, future studies should also take traffic congestion caused by ride-hailing vehicles into consideration.
## Appendix
### _Specification of meeting probability_
We apply the meeting probability of e-hailing derived in [19] as follows:
\[m_{i}^{t}=\begin{cases}1-\exp[-\theta_{i,1}^{t}(\varphi_{i}^{t})^{2}],&\varphi_ {i}^{t}>\tilde{\varphi},\\ 1-\exp(-\theta_{i,2}^{t}\varphi_{i}^{t}),&\varphi_{i}^{t}\leq\tilde{\varphi}, \end{cases} \tag{10}\]
where \(\varphi_{i}^{t}=q_{i}^{t}/y_{i}^{t}\) denotes the demand-supply ratio in the local market and \(\tilde{\varphi}\) is a threshold value that dictates the oversupplied market condition. The values of \(\theta_{i,1}^{t}\), \(\theta_{i,2}^{t}\) and \(\tilde{\varphi}\) are set according to [19], which are calibrated from agent-based simulations.
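As an illustration, the piecewise form of (10) can be evaluated as follows; the argument names are ours, and the calibrated parameter values come from [19] rather than being shown here.

```
import numpy as np

def meeting_probability(q, y, theta1, theta2, phi_bar):
    """Piecewise meeting probability of (10) for one zone and time step.

    q, y          : local demand and idle-vehicle supply
    theta1, theta2: calibrated parameters for the two market regimes
    phi_bar       : threshold on the demand-supply ratio
    """
    phi = q / y                              # demand-supply ratio
    if phi > phi_bar:
        return 1.0 - np.exp(-theta1 * phi ** 2)
    return 1.0 - np.exp(-theta2 * phi)
```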
### _Specification of waiting time distribution_
To specify the probability of waiting time at station \(l\) for any vehicle arriving at time \(t\), we introduce another intermediate variable \(\zeta_{l}^{t}(\omega)\) to denote the available charging spots at time \(t+\omega\). We again treat \(\zeta_{l}^{t}\in[0,C]\) as a continuous variable. Then, (1) is rewritten as
\[w_{l}^{t}(\omega)=\min\left(\frac{\zeta_{l}^{t}(\omega)}{z_{l}^ {t}+\varepsilon},\max\left(0,1-\sum_{\omega^{\prime}<\omega}w_{l}^{t}(\omega^ {\prime})\right)\right), \tag{11}\]
where \(z_{l}^{t}=\sum_{b}z_{l,b}^{t}\) is the total vehicle arrivals at time \(t\), \(w_{l}^{t}(\omega^{\prime})=0\) for \(\omega^{\prime}<0\), and \(\varepsilon>0\) is an arbitrarily small constant that avoids division by zero when no vehicles arrive.
Let us first investigate the continuity of (11) with respect to the vehicle flows \(z_{l,b}^{t}\). Note that \(\zeta_{l}^{t}(\omega)\) is independent of \(z_{l,b}^{t}\) for \(\omega=0\) but not necessarily for \(\omega>0\), because newly arrived vehicles that finish waiting would take some charging spots. Specifically, after (11) is called to compute \(w_{l}^{t}(\omega)\), \(\zeta_{l}^{t}(\omega+1)\) will be updated based on the average charging time \(\bar{\tau}_{l}^{t}=\sum_{b}z_{l,b}^{t}\tau_{b}/\sum_{b}z_{l,b}^{t}\). The new value of \(\zeta_{l}^{t}(\omega+1)\) will then be used to compute \(w_{l}^{t}(\omega+1)\). Nevertheless, \(\zeta_{l}^{t}(\omega)\) remains continuous in \(z_{l,b}^{t}\) as long as \(w_{l}^{t}(0)\) is continuous in \(z_{l,b}^{t}\), which is guaranteed by its reduced expression \(\min\left(\frac{\zeta_{l}^{t}(0)}{z_{l}^{t}+\varepsilon},1\right)\). Therefore, it is safe to conclude that \(w_{l}^{t}(\omega)\) is continuous in \(z_{l,b}^{t}\), which is numerically demonstrated in Figure 8.
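The recursion in (11) can be sketched as follows; the helper `update_spots`, which advances the available charging spots using the average charging time \(\bar{\tau}_{l}^{t}\), is deliberately left abstract since its exact bookkeeping is problem-specific, and all names are illustrative.

```
def waiting_time_distribution(zeta0, z_total, update_spots, horizon, eps=1e-6):
    """Recursive evaluation of (11) for one station and one arrival time.

    zeta0        : charging spots available at waiting time omega = 0
    z_total      : total vehicle arrivals z_l^t at time t
    update_spots : callable (omega, w_omega) -> spots available at omega + 1,
                   reflecting newly seated vehicles and the average charging time
    horizon      : largest waiting time omega evaluated
    """
    w = []
    zeta = zeta0
    for omega in range(horizon + 1):
        remaining = max(0.0, 1.0 - sum(w))                    # probability mass not yet served
        w_omega = min(zeta / (z_total + eps), remaining)      # Eq. (11)
        w.append(w_omega)
        zeta = update_spots(omega, w_omega)                   # spots available at omega + 1
    return w
```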
Since the available charging spots in the future jointly depend on \(z_{l}^{t}\), \(w_{l}^{t}(\omega)\) and \(\bar{\tau}_{l}^{t}\) in a continuous way, it is easy to
show \(\zeta_{l}^{t}(\omega)\) is continuous in \(\mathbf{z}_{\leq t,l}\) by induction. This result is numerically demonstrated in Figure 9.
|
2309.05646 | A Novel Supervised Deep Learning Solution to Detect Distributed Denial
of Service (DDoS) attacks on Edge Systems using Convolutional Neural Networks
(CNN) | Cybersecurity attacks are becoming increasingly sophisticated and pose a
growing threat to individuals, and private and public sectors. Distributed
Denial of Service attacks are one of the most harmful of these threats in
today's internet, disrupting the availability of essential services. This
project presents a novel deep learning-based approach for detecting DDoS
attacks in network traffic using the industry-recognized DDoS evaluation
dataset from the University of New Brunswick, which contains packet captures
from real-time DDoS attacks, creating a broader and more applicable model for
the real world. The algorithm employed in this study exploits the properties of
Convolutional Neural Networks (CNN) and common deep learning algorithms to
build a novel mitigation technique that classifies benign and malicious
traffic. The proposed model preprocesses the data by extracting packet flows
and normalizing them to a fixed length which is fed into a custom architecture
containing layers regulating node dropout, normalization, and a sigmoid
activation function to output a binary classification. This allows for the model
to process the flows effectively and look for the nodes that contribute to DDoS
attacks while dropping the "noise" or the distractors. The results of this
study demonstrate the effectiveness of the proposed algorithm in detecting DDOS
attacks, achieving an accuracy of .9883 on 2000 unseen flows in network
traffic, while being scalable for any network environment. | Vedanth Ramanathan, Krish Mahadevan, Sejal Dua | 2023-09-11T17:37:35Z | http://arxiv.org/abs/2309.05646v1 | A Novel Supervised Deep Learning Solution to Detect Distributed Denial of Service (DDoS) attacks on Edge Systems using Convolutional Neural Networks (CNN)
###### Abstract
Cybersecurity attacks are becoming increasingly sophisticated and pose a growing threat to individuals and the private and public sectors. Distributed Denial of Service attacks are one of the most harmful of these threats in today's Internet, disrupting the availability of essential services. This project presents a novel deep learning-based approach for detecting DDoS attacks in network traffic using the industry-recognized DDoS evaluation dataset from the University of New Brunswick, which contains packet captures from real-time DDoS attacks, creating a broader and more applicable model for the real world. The algorithm employed in this study exploits the properties of Convolutional Neural Networks (CNN) and common deep learning algorithms to build a novel mitigation technique that classifies benign and malicious traffic. The proposed model preprocesses the data by extracting packet flows and normalizing them to a fixed length, which is fed into a custom architecture containing layers regulating node dropout, normalization, and a sigmoid activation function to output a binary classification. This allows the model to process the flows effectively and look for the nodes that contribute to DDoS attacks while dropping the "noise" or the distractors. The results of this study demonstrate the effectiveness of the proposed algorithm in detecting DDoS attacks, achieving an accuracy of .9883 on 2000 unseen flows in network traffic, while being scalable for any network environment.
**Keywords: Distributed Denial of Service, Convolutional Neural Networks, Deep Learning, Intrusion Detection, Network Traffic Analysis**
## 1 Introduction
In today's interconnected world, the internet plays a vital role in various domains such as communication, education, business, government, and more. However, with its widespread usage, the prevalence of cyber crimes has also increased, including activities such as spreading misinformation, hacking, and various types of attacks. Among these attacks, Distributed Denial of Service (DDoS)
attacks have emerged as a significant threat, posing risks to basic internet standards and security. These attacks can cause temporary paralysis of business processes, disrupt critical services, and flood networks with malicious traffic [1].
### Impact of DDoS Attacks
In the first half of 2022, the world witnessed a staggering \(6,019,888\) DDoS attacks alone [2]. The sheer volume of these attacks has resulted in substantial financial losses and a lack of consumer trust. A recent study revealed that a single DDoS attack can cost a company over $1.6 million, a huge cost for companies of any size [3]. Also, the financial impact of DDoS attacks goes beyond immediate revenue loss, affecting various aspects of a corporation's operations. During an attack, the targeted service or website becomes inaccessible, leading to a loss of potential revenue and customers. Moreover, reputation damage and loss of consumer trust can have long-term consequences for businesses. The increasing frequency of these attacks necessitates the development of effective mitigation techniques to safeguard services and prevent revenue loss.
Aside from business damages, DDoS attacks have emerged as a significant factor in geopolitics, demonstrating their potential to impact international relations and national security. For example, state-sponsored threat actors targeted 128 governmental organizations in 42 countries supporting Ukraine during the Russia-Ukraine conflict [4]. By targeting such entities, the threat actors seek to create a sense of chaos, confusion, and instability within the geopolitical landscape.
### Legacy Detection Methods
Current DDoS detection methods often rely on traditional approaches such as IP filtering or rate-limiting techniques [5] as shown in Figure 2. While these methods have been used for some time and have shown some effectiveness in certain scenarios, they also come with notable limitations that hinder their ability to provide comprehensive protection against evolving DDoS attack techniques.
**Lack of Adaptability:** Traditional methods can struggle to adapt to new and sophisticated DDoS attack patterns. As attackers continuously develop novel methods to bypass existing defenses, traditional techniques may fail to keep up with these dynamic threats [6]. This can lead to an increased number of false negatives, allowing malicious traffic to go undetected.
**Resource Intensiveness:** Some traditional solutions, such as rate-limiting, can consume significant network resources and processing power. Implementing these techniques may impact the overall performance and responsiveness of the network, potentially affecting legitimate user traffic
Figure 1: DDoS Attack Architecture
Figure 2: Current State of DDoS Mitigation Techniques
and leading to service degradation [7]. Further, in commonplace networks, existing defense mechanisms against DDoS attacks have limited success because they cannot meet the considerable challenge of achieving simultaneously efficient detection, effective response, acceptable rate of false alarm, and the real-time transfer of all packets.
**Dependency on Signatures in Attacks:** Some legacy systems rely heavily on signature-based detection, which involves matching incoming traffic patterns against known attack signatures. They use an index of patterns to match the incoming traffic with the known signatures to identify attacks [8]. While this can be effective against known attack types, it falls short against zero-day attacks or variants that have not been previously identified [9]. A common drawback to these entropy-based techniques is the requirement to select an appropriate detection threshold. Given the variation in traffic type and volume across different networks, it is a challenge to identify the appropriate detection threshold that minimizes false positive and false negative rates in different attack scenarios.
**Limited Scalability:** Traditional solutions face scalability issues, particularly when dealing with large-scale attacks that generate massive amounts of traffic. Scaling up these methods to handle such attacks is challenging and resource-intensive [10].
### Deep Learning Based Detection Methods
In the modern state of DDoS detection, there is increasing usage of neural networks and deep learning techniques (Figure 2). This section provides an overview of some recent research contributions in this space.
In de Assis et al. [11], the authors used an SDN (Software-Defined Networking) model to detect and mitigate DDoS attacks over a targeted server. The proposed SDN model was compared to baseline logistic regression (LR) models, multilayer perceptron (MLP) networks, and Dense MLP [12]. The authors tested the above detection methods over two test scenarios: one using simulated SDN data, and the other using a broader dataset. The overall results showed that the CNN is efficient in detecting DDoS attacks for all these test scenarios and operates autonomously to allow for speed in the detection and mitigation processes. However, a key weakness of this model is its weak result on a more comprehensive dataset, such as CICDDoS 2019 [13].
In Shaaban et al. [14], the authors proposed a neural network-based approach for DDoS attack detection. They compared their proposed model with classification algorithms such as k-nearest neighbors (KNN) and decision trees (DT). It was found that their proposed model performed well compared to other classification algorithms, achieving 99% accuracy on both datasets. However, the data was converted to matrix form by single-column padding, which may affect the learning of the model [12], as the spatial dimensions of the input data changed the way the convolution filters interacted with the data. In addition, their dataset lacked many common DDoS attacks (such as Man In The Middle), while only TCP and HTTP flood DDoS attacks were considered in their dataset.
Based on these deep learning-based models, this research aims to build upon them by using better datasets that are specialized, such as the CIC DDoS 2019 dataset [13]. By incorporating key improvements such as scalability, flexibility, and reliability (see subsection 1.2), this research will work to improve upon existing models to effectively detect and mitigate DDoS attacks on edge systems.
### Proposed Solution
Motivated by the limitations of current approaches and the demand for an advanced DDoS detection solution, this research aims to develop a novel supervised machine learning model capable of handling any data size and accurately differentiating between malicious and benign traffic. In commonplace networks, existing defense mechanisms against DDoS attacks have limited success because they cannot meet the considerable challenge of achieving simultaneously efficient detection and the real-time transfer of packets [6]. To meet this objective, we leverage Convolutional Neural Networks (CNNs), a deep learning approach that has shown promising success in malware detection [15] but remains relatively under-researched and underutilized in the field of cybersecurity.
Convolutional Neural Networks (CNNs) are a type of deep learning algorithm that has demonstrated success in various applications, including pattern recognition and in industries such as medicine and biology [16]. Specifically well-suited for analyzing visual imagery, CNNs can learn and extract features from raw data, making them a powerful tool for image classification and object recognition tasks.
In the context of cybersecurity, CNNs can be effectively employed to detect and classify malicious network traffic. By analyzing network traffic data, CNNs can learn to identify patterns and features associated with DDoS attacks, enabling them to accurately differentiate between benign and malicious traffic.
### Benchmark Standards
To address the challenge of DDoS attacks, state-of-the-art mitigation techniques should possess certain characteristics.
**Scalability:** Allows the solution to adapt to business growth and handle the increasing size of attacks. Attacks larger than 2 terabits per second (Tbps) have occurred, and there's no indication that attack traffic size will plateau or trend downward in the future.1 For this reason, attacks of large magnitudes should be expected and mitigated.
Footnote 1: [https://www.cloudflare.com/learning/ddos/ddos-mitigation](https://www.cloudflare.com/learning/ddos/ddos-mitigation)
**Flexibility:** Enabling the creation of ad hoc policies and patterns to respond to emerging threats in real time. The system must be adaptable to recognize attacks even when there are large fluctuations in legitimate traffic.2
Footnote 2: [https://www.fortinet.com/resources/cyberglossary/implementation-dos-mitigation-strategy](https://www.fortinet.com/resources/cyberglossary/implementation-dos-mitigation-strategy)
**Reliability:** Ensuring the functionality of the DDoS protection system. Although various methods have been proposed to detect and identify DDoS attacks, many existing approaches do not fully meet these requirements.
**Predictability:** DL methods exhibit the capability to extract features and classify data even with incomplete information [17]. By learning long-term dependencies of temporal patterns, DL methods should effectively identify low-rate attacks.
Based on these standards, this paper aims to contribute to the field by introducing a DL-based DDoS detection architecture on edge systems. By employing CNNs, our proposed model reduces the need for extensive feature engineering and exhibits high detection accuracy. This novel solution has the potential to be deployed by customers and organizations to effectively detect and mitigate DDoS attacks on edge systems. The objective of this project is to create a supervised model capable of handling any data size and consistently and accurately differentiating malicious from benign traffic. Furthermore, this model can be implemented on any network size and is functional on private and public networks alike. The engineering goal of this project is to design and develop a dynamic deep learning model that can accurately identify malicious and benign network traffic across a wide range of attack methods and situations, even when dealing with large amounts of real-time data in short time constraints. Leveraging the capabilities of CNNs and their proven success in other domains, this study aims to develop a state-of-the-art model that can effectively detect and mitigate DDoS attacks on edge systems.
## 2 Methodology
The CNN this study proposes is designed to learn malicious activity from traffic and identify DDoS patterns regardless of their topological positioning. This is a key advantage of CNNs in classic examples of image recognition [18], as they produce consistent output regardless of where a pattern appears in the input. By utilizing this feature in the preprocessing method, this research can utilize the key advantage of CNNs in the context of anomaly detection. This feature learning during model training eliminates the need for extensive feature engineering, ranking, and selection. We employ a novel network traffic preprocessing technique that creates a spatial data representation as input to the CNN to support real-time attack detection. This section introduces the network traffic preprocessing method, the CNN model architecture, and the learning process.
### Dataset
The DDoS evaluation dataset (CIC-DDoS2019) is a dataset of PCAP files that contains both benign and DDoS traffic. This dataset is beneficial to our DDoS attack detection task because it contains real-world examples of DDoS traffic that provide more realistic and accurate results than synthetic datasets. The CIC DDoS2019 dataset has several features that are helpful for our analysis, including the inclusion of benign and DDoS traffic and the use of multiple types of DDoS attacks, including SYN (Synchronized) floods, UDP (User Datagram Protocol) floods, and HTTP floods. [19]
The Canadian Institute of Cybersecurity has also split various attacks into unique timestamps that are used to visualize the dataset in a CSV format. While this research aims to make our model as dataset-agnostic as possible, this will allow us to create a comprehensive frame of reference to visualize and train/test on. (Figure 3)
### Preprocessing Procedure
This section elucidates the imperative process of rendering input data amenable to the Convolutional Neural Network (CNN) model, all the while ensuring that this preprocessing is non-specific to any particular dataset. The essence of this procedure is to construct a dataset-agnostic preprocessing mechanism capable of generating traffic observations following those observed in contemporary online systems, thereby broadening the scope for testing and training, and enhancing the model's effectiveness.
To rigorously analyze the dataset and efficiently implement our CNN model, data preprocessing is a requisite preliminary step. The primary objective is to ensure a fair and balanced distribution of data to attain the utmost precision in results. To achieve this, we use PCAP (Packet Capture) files housing network traffic data and employ the Pyshark library for data extraction. These extracted data components are then organized into discrete "flows." This structuring of input data in the form of packet flows gives rise to a spatial data representation, thereby endowing the CNN model with the capacity to discern salient features characterizing both DDoS (Distributed Denial of Service) attacks and benign network traffic [20].
In this comprehensive preprocessing endeavor, we present Algorithm 1, designed to effectuate the transformation of raw PCAP data into labeled samples, aptly tailored for CNN input. This algorithm ingests multiple parameters, including the original PCAP data, a user-defined time interval (\(t\)) for aggregating packets into flows, the maximum permissible number of packets per sample (\(m\)), and labels (\(l\)) that serve to classify each packet, discerning between categories such as DDoS attacks and benign traffic. The algorithm's core aim is to standardize the input data format, thereby simplifying the training and testing of CNN models while preserving data fairness. Symbols and their respective definitions are presented comprehensively in Table 1.
The procedural sequence of the Data Preprocessing algorithm unfolds as follows: commencing with the initialization of an empty set (s) designated for storing flow data, the algorithm proceeds to establish a local variable (\(t_{0}\)), initially set to
Figure 3: Illustration of Proposed Procedure
zero, to function as a time counter. Simultaneously, an identifier (\(id\)) is introduced for packet labeling. Subsequently, the algorithm iteratively processes each packet from the PCAP data, continually updating the identifier (\(id\)) with pertinent packet headers, including Source IP and Destination IP, thereby facilitating accurate labeling. It ascertains whether the current packet signifies the commencement of a fresh time flow, contingent on the evaluation of the time counter (\(t_{0}\)) about the user-specified time interval (\(t\)). Should the number of packets within the current time flow fall below the stipulated maximum (\(m\)), the algorithm appends the packet to the ongoing flow. Consequently, the resultant sample undergoes normalization to accommodate any space, ensuring uniformity. Finally, the algorithm assigns labels to each flow within the model, contingent on the labels (\(l\)) provided, based on their respective identifiers, thereby culminating in the production of a labeled sample, aptly primed for CNN input.
Furthermore, the intrinsic advantages of this algorithm extend to the emulation of the traffic-capturing process inherent to online Intrusion Detection Systems (IDSs). In this context, traffic is collected over a specified time interval (\(t\)) before being submitted to the anomaly detection algorithm. Consequently, such algorithms are necessitated to make decisions based on subsets of traffic flows, devoid of comprehensive knowledge regarding their entire lifespan. To replicate this operational paradigm, attributes of packets associated with the same bi-directional traffic flow are methodically grouped in chronological order.
A rigorous normalization and zero-padding procedure is employed to ensure homogeneity in input sequence lengths. Herein, each attribute value is normalized to a \([0,1]\) scale. Additionally, the samples are augmented with zero-padding to ensure uniformity, with each sample achieving a fixed length (\(n\)), a prerequisite for effective CNN learning over the entire sample set. To preempt any inherent bias towards one class or the other, a
\begin{table}
\begin{tabular}{|c|p{284.5pt}|} \hline
**Symbol** & **Description** \\ \hline \(pcap\) & Input PCAP data, which contains network traffic information \\ \hline \(t\) & Time interval for grouping packets into flows \\ \hline \(m\) & Maximum number of packets per sample (flow) \\ \hline \(l\) & Label for each packet (e.g., distinguishing between DDoS attacks and benign traffic) \\ \hline \(sample\) & Output, labeled samples for input to a CNN model \\ \hline \(s\) & Temporary storage for flow data \\ \hline \(t_{0}\) & Local variable representing the current time counter \\ \hline \(id\) & Identifier for each packet (e.g., based on packet headers like Source IP, Dest IP) \\ \hline \(packet\) & Individual packets within a flow \\ \hline \(flows\) & Individual flow extracted from the network traffic \\ \hline \end{tabular}
\end{table}
Table 1: Symbols for Preprocessing algorithm
balancing procedure is instituted, affording more weight to the minority class or vice versa.
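As a rough illustration of this preprocessing pipeline (not the authors' exact implementation), the sketch below groups already-parsed packets into fixed-size flow samples, normalizes attribute values to \([0,1]\), and zero-pads each sample; in practice the packet tuples would be extracted from PCAP files with Pyshark, and the per-sample, per-attribute normalization shown here is a simplifying assumption.

```
import numpy as np
from collections import defaultdict

def build_samples(packets, t_window, max_packets, labels, n_features=11):
    """Group parsed packets into fixed-size, normalized flow samples.

    packets     : iterable of (timestamp, flow_id, feature_vector) tuples; timestamps
                  are assumed numeric seconds, feature vectors of length n_features
    t_window    : time interval t used to split traffic into flows
    max_packets : maximum number of packets m kept per flow sample
    labels      : dict mapping flow_id -> 0 (benign) or 1 (DDoS)
    """
    flows = defaultdict(list)
    t0 = None
    for ts, fid, feats in sorted(packets, key=lambda p: p[0]):
        if t0 is None or ts - t0 >= t_window:
            t0 = ts                                           # start a new time window
        if len(flows[(t0, fid)]) < max_packets:
            flows[(t0, fid)].append(feats)

    samples, targets = [], []
    for (_, fid), rows in flows.items():
        arr = np.zeros((max_packets, n_features))
        arr[:len(rows)] = np.asarray(rows, dtype=float)       # zero-padding to a fixed length
        arr = arr / (arr.max(axis=0, keepdims=True) + 1e-9)   # normalize attributes to [0, 1]
        samples.append(arr)
        targets.append(labels.get(fid, 0))
    return np.stack(samples), np.array(targets)
```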
### Final Model Architecture
In the following phase, we proceed with the implementation of our Convolutional Neural Network (CNN) model. The architecture of our CNN model, as illustrated in Figure 3, encompasses a sequence of designed layers, each of which has been rigorously substantiated in a plethora of publications.
**Input Layer:** The initiation of our CNN model involves taking the output generated by Algorithm 1 as the input (Figure 4) for the express purpose of online attack detection. This model functions to classify traffic flows into one of two distinct categories: malicious (i.e., representing Distributed Denial of Service (DDoS) attacks) or benign. The paramount aim here is to optimize the model's simplicity and computational efficiency, rendering it suitable for deployment on resource-constrained devices. In terms of size, the input layer is \(n\times m\), where \(m=11\) since \(11\) features are read by the algorithm.
The output produced by the preprocessing algorithm serves as the input for the proposed CNN Architecture to undergo training (Figure 5).
**2D Convolutional Layer:** Our architecture incorporates a 2D convolutional layer equipped with 64 filters, each having a kernel size of 3 x 3. This layer assumes the responsibility of feature extraction from the input data. It achieves this by employing filter sliding mechanisms over the input, calculating dot products [20]. It should be noted that this layer is attuned to accommodate the modified array data detailed in section 2.2.
**Dropout Layer:** Following the convolutional layer, we introduce a dropout layer, employing a recommended dropout rate of 0.5 [21]. This layer's role is to randomly deactivate a certain percentage of input units during each training update, mitigating the risk of overfitting. Within this layer, we employ the Rectified Linear Unit (ReLU) activation function to introduce non-linearity into the model. The ReLU function is expressed mathematically as
\[f(x)=max(0,x)\]
where it essentially replaces negative inputs with zero, thereby turning off these neurons. This layer discerns the relevance of input nodes in the model's decision-making process.
**GlobalMaxPooling2D Layer:** A pivotal component, the GlobalMaxPooling2D layer executes max pooling on the input data, serving to reduce spatial dimensions while preserving salient features. By including max pooling, the model can focus on the most important features that separate benign traffic from a DDoS attack, making it much more efficient. After max pooling, the output is flattened to produce a final one-dimensional feature vector, which is used as input to the classification layer. This allows the model to make its final prediction on whether the input represents a benign or malicious traffic flow.
**Final Fully Connected Layer:** The ultimate layer, in the form of a fully connected layer, is equipped with a sigmoid activation function, as described in Roopak's study on malicious traffic using CNN [22]. This layer serves the critical function of computing the final output, delivering a probability estimation regarding the input being a DDoS attack. The sigmoid function is formally represented as
\[f(z)=\frac{1}{1+e^{-z}}\]
The output of this function, denoted as \(p\), ranges between 0 and 1, making it particularly suited for models wherein probability prediction is pivotal. When \(p\) exceeds 0.5, the traffic is classified as a DDoS attack; otherwise, it is classified as benign.
In summary, this model architecture holds notable advantages, especially its fully connected
Figure 4: Output of Preprocessing Algorithm
structure. It exhibits enhanced computational efficiency, with biases and weights exerting a less pronounced impact on the model's performance. This structural attribute augments its suitability for resource-constrained environments and applications.
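A minimal Keras sketch of the described stack (a Conv2D layer with 64 filters of size 3 x 3 and ReLU, dropout of 0.5, GlobalMaxPooling2D, and a sigmoid output) is shown below; the flow length `n_packets`, the padding mode, and the compilation settings are illustrative assumptions rather than the authors' exact configuration.

```
from tensorflow.keras import layers, models

def build_model(n_packets=100, n_features=11):
    """Minimal sketch of the described architecture; n_packets is an assumed flow length."""
    model = models.Sequential([
        layers.Input(shape=(n_packets, n_features, 1)),   # flow sample treated as a 2D array
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.Dropout(0.5),                              # regularization against overfitting
        layers.GlobalMaxPooling2D(),                      # keep the strongest feature responses
        layers.Dense(1, activation="sigmoid"),            # p > 0.5 => DDoS, otherwise benign
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```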
## 3 Experimental Findings
This section comprehensively outlines the training and evaluation procedures for our CNN model, accompanied by a report of commonly used evaluation metrics for measuring the performance of the model. Supervised learning serves as the foundation of our methodology, leveraging labeled datasets where each network traffic flow is distinctly categorized as either a DDoS attack or benign traffic (see 2.2).
### Common Performance Metrics
To evaluate the performance of our CNN model in the realm of DDoS attack detection, we report a battery of well-established performance metrics. These metrics provide invaluable insights into the model's capacity to accurately distinguish between benign and DDoS traffic. Our analysis commences with a confusion matrix--a cornerstone for assessing classification algorithm performance. This matrix comprises four essential values: True Positives (\(TP\)), False Positives (\(FP\)), True Negatives (\(TN\)), and False Negatives (\(FN\)), where a positive prediction is used to flag potential malicious traffic. These values serve as the building blocks for widely used evaluation metrics:
**Precision:** Precision quantifies the proportion of true positive predictions relative to the total positive predictions
\[Precision=\frac{TP}{TP+FP}\]
It reflects the model's ability to accurately classify DDoS attacks without mislabeling benign traffic.
**Recall:** Recall measures the model's ability to correctly identify all actual positive instances
\[Recall=\frac{TP}{TP+FN}\]
It highlights the model's effectiveness in capturing all DDoS attacks present in the dataset.
\(F1\) **Score:** The \(F1\) score represents a harmonic mean between precision and recall, offering a balanced assessment of the model's performance.
\[F1=\frac{2\cdot(Precision\cdot Recall)}{Precision+Recall}\]
The F1 score takes into account both false positives and false negatives, making it a valuable measure of overall performance.
**Accuracy \((A)\):** Accuracy is defined as the proportion of correctly classified instances and is calculated using the formula:
\[A=\frac{TP+TN}{TP+TN+FP+FN}\]
It provides an overarching view of the model's correctness in its predictions.
These performance metrics, derived from the confusion matrix, allow us to assess the CNN model's ability to distinguish between benign and DDoS traffic effectively. Furthermore, they enable comparisons with other state-of-the-art DDoS detection methods and provide insights into areas for potential improvement. Results are presented in tables and graphs, complemented by statistical analysis to determine the significance of observed differences.
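For reference, all four metrics follow directly from the confusion-matrix counts, as in the short helper below.

```
def classification_metrics(tp, fp, tn, fn):
    """Compute the reported metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}
```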
### Training
To train the model, we used the CIC DDoS 2019 Dataset, as discussed in 2.1, renowned as the
Figure 5: Architecture for CNN Model
standard benchmark dataset in the domain of anomaly detection [19]. Following convention, we split the dataset into training, validation, and testing sets with an \(80:10:10\) distribution (Table 2). The validation set is used to tune hyperparameters that fine-tune the model's predictions; without hyperparameter tuning, such a split would not be necessary [23].
For optimization during training, we employ the Adam optimizer, wherein key hyperparameters such as learning rate, batch size, and the number of epochs are tuned. Cross-validation is incorporated to assess the model's performance while mitigating the effects of overfitting. Training and evaluation occur on the preprocessed dataset, utilizing the common performance metrics described above. The inclusion of a validation dataset consistently enhanced accuracy over each epoch, highlighting the model's robustness and capacity to generalize effectively.
The training process involved grid-search cross-validation to perform hyperparameter tuning. A maximum of 1000 epochs is permitted for each grid point. Training halts if no discernible improvement in loss minimization is observed for ten consecutive epochs, as determined by the patience variable preset to 10. Through this process, the model attained a training accuracy of .987.
Performance is gauged by the F1 score, which reached a maximum of .984. It was observed that the inclusion of more samples (\(n\)) contributed to higher F1 scores and accuracy.
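A hedged sketch of this training loop with early stopping (patience of 10, up to 1000 epochs) is given below, reusing the `build_model` sketch above; the batch size and the data-split variables (`X_train`, `y_train`, etc.) are illustrative assumptions.

```
from tensorflow.keras.callbacks import EarlyStopping

# X_train, y_train, X_val, y_val, X_test, y_test are assumed to come from the
# preprocessing step; the batch size shown is illustrative, not a tuned value.
early_stop = EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)

model = build_model()
history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=1000,                     # upper bound per grid point
    batch_size=64,
    callbacks=[early_stop],
)
test_loss, test_accuracy = model.evaluate(X_test, y_test)
```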
## 4 Results
The proposed CNN model demonstrates proficiency in classifying previously unseen traffic flows, distinguishing them as benign or malicious, and specifically identifying DDoS attacks. The model was evaluated against a dataset comprising 2000 previously unseen DDoS flow samples from the CIC Dataset. We used a confusion matrix (Figure 6) to calculate the metrics that were outlined in section 3.1.
The results outlined in Table 3 underscore the model's capability to effectively classify network traffic flows, distinguishing between benign and malicious (DDoS) attacks with remarkable precision. Notably, the recall value (0.9784) emphasizes the model's proficiency in correctly identifying a substantial proportion of actual malicious flows.
The model's accuracy of .9883, while maintaining a high True Positive Rate (\(TPR\)) and low False Positive Rate (\(FPR\)) (less than .01), further highlights its robustness in distinguishing between benign and malicious traffic flows. Moreover, the \(F1\) score of .9824 attests to the model's balance between precision and recall.
One of the unique features of this model is its efficiency in processing data, as elucidated in Section 2. The testing set, which encompassed a
\begin{table}
\begin{tabular}{|c|c|} \hline
**Dataset** & **Number of Samples** \\ \hline Training Set & 18,735 \\ Validation Set & 2,082 \\ Test Set & 2,313 \\ \hline Total & 23,130 \\ \hline \end{tabular}
\end{table}
Table 2: Dataset Distribution
\begin{table}
\begin{tabular}{|l|c|} \hline
**Metric** & **Value** \\ \hline Precision & 0.9864 \\ \hline Recall & 0.9784 \\ \hline F1 Score & 0.9824 \\ \hline Accuracy & 0.9883 \\ \hline \end{tabular}
\end{table}
Table 3: Performance Metrics
Figure 6: Confusion Matrix for Results of Proposed Model
significantly larger number of packets, was processed in just 0.28 seconds while consistently achieving high true positive rates and minimizing both false positive and false negative rates (both less than .01), as seen in Figure 6.
Collectively, these exceptional metrics illustrate the model's potential for practical deployment in network security, particularly concerning DDoS attack detection, where timely identification and mitigation are paramount.
## 5 Discussion
The successful implementation and evaluation of our Convolutional Neural Network (CNN) model for DDoS attack detection in network traffic data exemplifies the promising potential of deep learning techniques in the cybersecurity domain. In this section, we compare the model against state-of-the-art methods, deliberate on the strengths and weaknesses of our approach, and offer avenues for future exploration.
### Comparison with State-of-the-Art Methods
In this subsection, we draw comparisons with the studies referenced in 1.3.
In comparison to de Assis et al.'s work [11], which achieved an accuracy of .954 on the CIC-DDoS 2019 dataset, the proposed CNN model outperforms it across all categories, showcasing its strong performance on the task of DDoS attack detection. While efficient, their model demonstrated lower accuracy when tested on datasets with a greater variety of attacks and higher volume, such as the dataset used in this research.
Concerning Shaaban et al.'s work [14], no specific efficiency or performance ratings were reported for comparison. The proposed CNN model in this study contributes to the existing research landscape by providing a robust and high-performing solution for DDoS attack detection, demonstrating its potential applicability in various cybersecurity contexts.
### Strengths and Limitations
#### Strengths:
* **Effective Feature Identification:** The preprocessing algorithm adeptly extracts critical features from network traffic data, empowering the CNN model to acquire robust feature representations. This significantly contributed to the model's high accuracy in distinguishing DDoS attacks from benign traffic.
* **Automated Hyperparameter Tuning:** Our approach incorporates automated hyperparameter tuning, optimizing the model for the specific characteristics of the dataset. This adaptability ensures that the model attains peak performance.
* **Validation-Test Split:** Through the deployment of a validation-test split, our model can adapt to different features within PCAP files, rendering it versatile and adaptable to diverse network conditions. Further research could examine how the number of tuned hyperparameters should determine the size of the split [24].
* **ReLU Activation and Kernel Technique:** The utilization of the Rectified Linear Unit (ReLU) activation function and kernel techniques proved effective in discerning the significance of specific features, enhancing the model's interpretability and predictive capabilities.
* **Generalizability:** Our model demonstrated its ability to generalize beyond the training dataset, showcasing its potential for identifying unseen attack patterns effectively.
#### Limitations:
* **Dataset Dependency:** The model's performance is heavily contingent on the quality and diversity of the training dataset. Enhancing its robustness necessitates the inclusion of more diverse data sources and attacks.
* **Zero-day Attacks:** Like many machine learning models [25], our CNN-based approach may grapple with the detection of zero-day attacks or those featuring previously unseen patterns. Continual model updates are imperative to address this limitation.
### Future Work
Though this study has very high promises and outcomes, there are still critical considerations regarding the impact of data preprocessing techniques and other decisions chosen in this model.
While our current methodology, which includes normalization and padding, has yielded favorable results, there is still room for exploration in evaluating alternative preprocessing techniques and optimizing these procedures. Furthermore, although our model exhibits proficiency within a controlled laboratory environment and a structured dataset, there is ample scope for its deployment in more complex real-world scenarios. Our model's role as a Detection System has the potential to expand towards proactive detection and quarantine mechanisms, significantly contributing to network security enhancement.
Further, the high accuracy achieved by the model in controlled testing environments suggests its potential effectiveness in real-world scenarios. The logical next step involves deploying and integrating the model within a practical network security system where efficient and accurate DDoS threat detection is imperative. Additionally, applying and enhancing the model's performance requires careful attention and the adoption of supplementary performance metrics. In this regard, we propose incorporating Receiver Operating Characteristic (ROC) curves and computing the Area Under the ROC Curve (AUC). These metrics extend the model evaluation toolkit and offer a nuanced perspective on its discrimination capabilities.
Addressing the inherent challenge of zero-day attacks, characterized by novel and previously unseen patterns, is imperative for ongoing research. While machine learning models excel under training and evaluation conditions that mirror known patterns, the dynamic nature of cybersecurity necessitates regular model updates to effectively accommodate emerging threats [12].
## 6 Conclusion
This research highlights the potential of Convolutional Neural Networks in the realm of security and anomaly detection broadly. We have fashioned an efficient and accurate DDoS attack detection model that surpasses state-of-the-art methodologies in key metrics. Our approach's adaptability, versatility, and generalization capabilities position it as a promising candidate for real-world deployment in network security systems, where the timely identification and mitigation of DDoS threats are paramount.
DDoS Attacks pose a challenge to not only business servers but individuals as well. In this work, we have presented a CNN-based DDoS detection architecture that offers an effective solution but also advances the field of threat detection and network security in the digital age. The robust performance of our model paves the way for enhanced security measures, protecting critical networks and systems from evolving cybersecurity threats.
Acknowledgments.Many thanks to Gayathri Easwaran and Ravi Ramanathan for providing constant support through the entire process.
## Declarations
### Funding
Not applicable.
### Competing interests
The authors declare that they have no competing interests.
### Ethics approval
Not applicable.
### Consent to participate
Not applicable.
### Consent for publication
Not applicable.
### Availability of data and materials
The dataset supporting the conclusions of this article is available in the DDoS Evaluation Dataset repository,
[https://doi.org/10.1109/CCST.2019.888841](https://doi.org/10.1109/CCST.2019.888841).
### Authors' contributions
VR and KM outlined the motivation and procedure of the research. VR collaborated with KM to produce the algorithm and preprocessing methods. VR, KM, and SD joined the discussion of the work and provided suggestions for future work regarding algorithms and data processing. VR and SD reviewed the manuscript and gave
suggestions on the revision of the details of the article. All authors read and approved the final manuscript.
|
2309.04558 | Towards Interpretable Solar Flare Prediction with Attention-based Deep
Neural Networks | Solar flare prediction is a central problem in space weather forecasting and
recent developments in machine learning and deep learning accelerated the
adoption of complex models for data-driven solar flare forecasting. In this
work, we developed an attention-based deep learning model as an improvement
over the standard convolutional neural network (CNN) pipeline to perform
full-disk binary flare predictions for the occurrence of $\geq$M1.0-class
flares within the next 24 hours. For this task, we collected compressed images
created from full-disk line-of-sight (LoS) magnetograms. We used data-augmented
oversampling to address the class imbalance issue and used true skill statistic
(TSS) and Heidke skill score (HSS) as the evaluation metrics. Furthermore, we
interpreted our model by overlaying attention maps on input magnetograms and
visualized the important regions focused on by the model that led to the
eventual decision. The significant findings of this study are: (i) We
successfully implemented an attention-based full-disk flare predictor ready for
operational forecasting where the candidate model achieves an average
TSS=0.54$\pm$0.03 and HSS=0.37$\pm$0.07. (ii) we demonstrated that our
full-disk model can learn conspicuous features corresponding to active regions
from full-disk magnetogram images, and (iii) our experimental evaluation
suggests that our model can predict near-limb flares with adept skill and the
predictions are based on relevant active regions (ARs) or AR characteristics
from full-disk magnetograms. | Chetraj Pandey, Anli Ji, Rafal A. Angryk, Berkay Aydin | 2023-09-08T19:21:10Z | http://arxiv.org/abs/2309.04558v1 | # Towards Interpretable Solar Flare Prediction with Attention-based Deep Neural Networks
###### Abstract
Solar flare prediction is a central problem in space weather forecasting and recent developments in machine learning and deep learning accelerated the adoption of complex models for data-driven solar flare forecasting. In this work, we developed an attention-based deep learning model as an improvement over the standard convolutional neural network (CNN) pipeline to perform full-disk binary flare predictions for the occurrence of \(\geq\)M1.0-class flares within the next 24 hours. For this task, we collected compressed images created from full-disk line-of-sight (LoS) magnetograms. We used data-augmented oversampling to address the class imbalance issue and used true skill statistic (TSS) and Heidke skill score (HSS) as the evaluation metrics. Furthermore, we interpreted our model by overlaying attention maps on input magnetograms and visualized the important regions focused on by the model that led to the eventual decision. The significant findings of this study are: (i) We successfully implemented an attention-based full-disk flare predictor ready for operational forecasting where the candidate model achieves an average TSS=0.54\(\pm\)0.03 and HSS=0.37\(\pm\)0.07. (ii) we demonstrated that our full-disk model can learn conspicuous features corresponding to active regions from full-disk magnetogram images, and (iii) our experimental evaluation suggests that our model can predict near-limb flares with adept skill and the predictions are based on relevant active regions (ARs) or AR characteristics from full-disk magnetograms.
space weather, solar flares, deep neural networks, attention, and interpretability. +
Footnote †: publicationid: _This is a preprint accepted at the 6th International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), 2023. OIEEE_
## I Introduction
Solar flares are relatively short-lasting events, manifested as the sudden release of huge amounts of energy with significant increases in extreme ultraviolet (EUV) and X-ray fluxes, and are one of the central phenomena in space weather forecasting. They are detected by the X-ray Sensors (XRS) instrument onboard Geostationary Operational Environmental Satellite (GOES) [1] and classified according to their peak X-ray flux level, measured in watts per square meter (\(Wm^{-2}\)) into the following five categories by the National Oceanic and Atmospheric Administration (NOAA): X (\(\geq 10^{-4}Wm^{-2}\)), M (\(\geq 10^{-5}\) and \(<10^{-4}Wm^{-2}\)), C (\(\geq 10^{-6}\) and \(<10^{-5}Wm^{-2}\)), B (\(\geq 10^{-7}\) and \(<10^{-6}Wm^{-2}\)), and A (\(\geq 10^{-8}\) and \(<10^{-7}Wm^{-2}\)) [2]. In solar flare forecasting, M- and X-class flares are large and relatively scarce events and are usually considered to be the class of interest as they are more likely to have a near-Earth impact that can affect both space-based systems (e.g., satellite communication systems) and ground-based infrastructures (e.g., electricity supply chain and airline industry) and even pose radiation hazards to astronauts in space. Therefore, it is essential to have a precise and reliable approach for predicting solar flares to mitigate the associated life risks and infrastructural damages.
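For illustration only, this NOAA class assignment from a GOES peak soft X-ray flux can be written as a simple threshold mapping; the function name and the label used for fluxes below the A-class floor are ours.

```
def goes_flare_class(peak_flux):
    """Map a GOES peak X-ray flux in W/m^2 to its NOAA flare class."""
    if peak_flux >= 1e-4:
        return "X"
    if peak_flux >= 1e-5:
        return "M"
    if peak_flux >= 1e-6:
        return "C"
    if peak_flux >= 1e-7:
        return "B"
    if peak_flux >= 1e-8:
        return "A"
    return "sub-A"
```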
Active regions (ARs) are the areas on the Sun (visually indicated by scattered red flags in full-disk magnetogram image, shown in Fig. 1) with disturbed magnetic field and are considered to be the initiators of various solar activities such as coronal mass ejections (CMEs), solar energetic particle (SEP) events, and solar flares [3]. The majority of the approaches for flare prediction primarily target these ARs as regions of interest and generate predictions for each AR individually. The magnetic field measurements, which are the dominant feature employed by the AR-based forecasting techniques, are susceptible to severe projection effects as ARs get closer to limbs to the degree that after \(\pm\)60\({}^{\circ}\) the magnetic field readings are distorted [4]. Therefore, the aggregated flare occurrence probability (for the whole disk), in fact, is restricted by the capabilities of AR-based models. This is because the input data is restricted to ARs located in an area within \(\pm\)30\({}^{\circ}\) (e.g., [5]) to \(\pm\)70\({}^{\circ}\) (e.g., [6]) from the center due to severe projection effects [7]. As AR-based models include data up to \(\pm\)70\({}^{\circ}\), in the context of this paper, this upper limit (\(\pm\)70\({}^{\circ}\)) is used as a boundary between the central location (within \(\pm\)70\({}^{\circ}\)) and near-limb regions (beyond \(\pm\)70\({}^{\circ}\)), as shown in Fig. 1.
Fig. 1: An annotated full-disk magnetogram image as observed on 2013-05-13 at 02:00:00 UTC, showing the approximate central location (within \(\pm\)70\({}^{\circ}\)) and near-limb (beyond \(\pm\)70\({}^{\circ}\) to \(\pm\)90\({}^{\circ}\)) region with all the visible active regions present at the noted timestamp, indicated by the red flags. Note that the directions East (E) and West (W) are reversed in solar coordinates.
Furthermore, to issue a full-disk forecast using an AR-based model, the usual approach involves aggregating the flare probabilities from each AR by applying a heuristic function, as outlined in [8]. This aggregated result estimates the probability of at least one AR experiencing a flare event, assuming that the occurrence of flares in different ARs is conditionally independent and assigning equal weights to each AR during full-disk aggregation. This uniform weighting approach may not accurately capture the true impact of each AR on the probability of predicting full-disk flares [9]. It is essential to note that the specific weights for these ARs are generally unknown, and there are no established methods for precisely determining these weights. While AR-based models are limited to central locations and require a heuristic to aggregate and issue comprehensive forecasts, full-disk models use complete, often compressed, magnetograms corresponding to the entire solar disk. These magnetograms are used for shape-based parameters such as size, directionality, sunspot borders [10], and polarity inversion lines [11]. Although projection effects still prevail in the original magnetogram rasters, deep-learning models can learn from the compressed full-disk images as observed in [12, 13, 14] and issue the flare forecast for the entire solar disk. Therefore, a full-disk model is appropriate to complement the AR-based counterparts as these models can predict the flares that appear on the near-limb regions of the Sun and add a crucial element to the operational systems.
Deep learning-based approaches have significantly improved results in generic image classification tasks; however, these models are not easily interpretable due to the complex modeling that obscures the rationale behind the model's decision. Understanding the decision-making process is critical for operational flare forecasting systems. Recently, several empirical methods have been developed to explain and interpret the decisions made by deep neural networks. These are post hoc analysis methods (attribution methods) (e.g., [15]), meaning they focus on the analysis of trained models and do not contribute to the model's parameters during training. In this work, we primarily focus on developing a convolutional neural network (CNN) based full-disk model with trainable attention modules that can amplify the relevant features and suppress the misleading ones while predicting \(\geq\)M1.0-class solar flares. We also evaluate and explain our model's performance by overlaying the attention maps on the input magnetograms to understand which regions of the magnetogram were considered relevant for the corresponding decision. To validate and compare our results, we train a baseline model with the same architecture as our attention model, which, however, follows the standard CNN pipeline, where a global image descriptor for an input image is obtained by flattening the activations of the last convolutional layer.
By integrating attention modules into the standard CNN pipeline, we attain two significant advantages: enhanced model performance and the ability to gain insight into the decision-making process. This integration not only improves the predictive abilities but also provides an interpretable model that reveals the significant features influencing the model's decisions. The architecture combines the CNN pipeline with trainable attention modules as mentioned in [16]. Both of our models' architectures are based on the general CNN pipeline; details are described later in Sec. IV. The novel contributions of this paper are as follows: (i) We introduce a light-weight attention-based model that improves the predictive performance of traditional CNNs for full-disk solar flare prediction; (ii) we utilize the attention maps from the model to understand the model's rationale behind a prediction decision and show that the model's decisions are linked to relevant ARs; and (iii) we show that our models can tackle the prediction of flares appearing on near-limb regions of the Sun.
The remainder of this paper is organized as follows: In Sec. II, we outline the various approaches used in solar flare prediction with contemporary work using deep learning. In Sec. III, we explain our data preparation and class-wise distribution for binary prediction mode. In Sec. IV, we present a detailed description of our flare prediction model. In Sec. V, we present our experimental design and evaluations. In Sec. VI we present case-based qualitative interpretations of attention maps, and, lastly, in Sec. VII, we provide our concluding remarks with avenues for future work.
## II Related Work
Solar flare prediction currently, to the best of our knowledge, relies on four major strategies: (i) empirical human prediction (e.g., [17, 18]), which involves manual monitoring and analysis of solar activity using various instruments and techniques, to obtain real-time information about changes in the Sun's magnetic field and surface features, which are often precursors to flare activity; (ii) physics-based numerical simulations (e.g., [19, 20]), which involves a detailed understanding of the Sun's magnetic field and the processes that drive flare activity and running simulation models to predict the occurrence of flares; (iii) statistical prediction (e.g., [21, 22]), which involves studying the historical behavior of flares to predict their likelihood in the future using statistical analysis and is closely related to (iv) machine learning and deep learning approaches (e.g., [23, 24, 25, 26, 27, 28, 29, 30]), which involves training algorithms with vast amounts of historical data and creating data-driven models that detect subtle patterns associated with flares in solar activity and make predictions.
The rapid advancements in deep learning techniques have significantly accelerated research in the field of solar flare prediction. A CNN-based flare forecasting model trained with solar AR patches extracted from line-of-sight (LoS) magnetograms within \(\pm\)30\({}^{\circ}\) of the central meridian to predict \(\geq\)C-, \(\geq\)M-, and \(\geq\)X-class flares was presented in [5]. Similarly, [26] use a CNN-based model to issue binary class predictions for both \(\geq\)C- and \(\geq\)M-class flares within 24 hours using Space-Weather Helioseismic and Magnetic Imager Active Region Patches (SHARP) data [31] extracted from solar magnetograms using AR patches located within \(\pm 45^{\circ}\) of the central meridian. Both of these models are limited to a small portion of the observable disk in central locations (\(\pm 30^{\circ}\) and \(\pm 45^{\circ}\)) and thus have limited operational capability. Moreover, in our previous studies [27, 28], we presented deep learning-based full-disk flare prediction models. These
models were trained using smaller datasets and these proof-of-concept models served as initial investigations into their potential as a supplementary component for operational forecasting systems. More recently, we presented explainable full-disk flare prediction models [12, 13], utilizing attribution methods to comprehend the models' effectiveness for near-limb flare events. We observed that the deep learning-based full-disk models are capable of identifying relevant areas in a full-disk magnetogram, which eventually translates into the model's prediction. However, these models utilized a post-hoc approach for model explanation, which does not contribute to further improving the model's performance.
In recent years, attention-based models, particularly Vision Transformers (ViTs) [32], have emerged as powerful contenders for image classification tasks, achieving competent results on large-scale datasets. ViTs leverage self-attention mechanisms to effectively capture long-range dependencies in images, enabling them to excel in complex visual recognition tasks. While ViTs offer state-of-the-art performance, they often come with a large number (86 to 632 million) of trainable parameters, making them resource-intensive and less practical for scenarios with limited computational resources or small-sized datasets. To address this issue, for our specific use case with a small dataset, we are exploring alternative models that strike a balance between accuracy and efficiency. By incorporating attention blocks into a standard CNN pipeline, we obtain a much lighter model, consisting of \(\sim\)7.5 million parameters. This approach allows for computationally efficient near-real-time predictions with relatively less resource demand on deployment infrastructure while ensuring competent performance for solar flare prediction compared to our prior work [13, 14] with customized AlexNet-based [33] full-disk model, with \(\sim\)57.25 million parameters and fine-tuned VGG16 [34] full-disk model in [12] with \(\sim\)134 million parameters.
## III Data
We use full-disk line-of-sight (LoS) magnetogram images obtained from the Helioseismic and Magnetic Imager (HMI) [35] instrument onboard the Solar Dynamics Observatory (SDO) [36], publicly available from Helioviewer [37]. We collected hourly instances of magnetogram images at [00:00, 01:00,...,23:00] each day from December 2010 to December 2018. We labeled the magnetogram images for binary prediction mode (\(\geq\)M1.0-class flares) based on the peak X-ray flux, converted to NOAA flare classes, with a prediction window of the next 24 hours. To elaborate, if the maximum GOES-observed peak X-ray flux within the prediction window is weaker than M1.0, the corresponding magnetogram instances are labeled as "No Flare" (NF: \(<\)M1.0), and larger ones are labeled as "Flare" (FL: \(\geq\)M1.0) as shown in Fig.2.
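The 24-hour labeling rule can be summarized by a minimal sketch like the one below. It assumes hourly observation timestamps and a mapping from flare peak times to GOES peak fluxes; all names are illustrative rather than taken from the paper's code.

```python
from datetime import datetime, timedelta

M1_FLUX = 1e-5  # W/m^2, lower bound of an M1.0 flare

def label_magnetogram(obs_time: datetime, flare_peaks: dict) -> str:
    """flare_peaks maps a flare's peak time (datetime) to its GOES peak X-ray flux in W/m^2."""
    window_end = obs_time + timedelta(hours=24)
    fluxes = [flux for t, flux in flare_peaks.items() if obs_time < t <= window_end]
    return "FL" if max(fluxes, default=0.0) >= M1_FLUX else "NF"

# e.g., an M2.5 flare 10 hours after the observation labels the magnetogram as "FL"
peaks = {datetime(2013, 5, 13, 12): 2.5e-5}
print(label_magnetogram(datetime(2013, 5, 13, 2), peaks))  # -> "FL"
```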
Our dataset includes a total of 63,649 full-disk LoS magnetogram images, where 54,649 instances belong to the NF-class and 9,000 instances (8,120 instances of M-class and 880 instances of X-class flares) to the FL-class 1. We finally create a non-chronological split of our data into four temporally non-overlapping tri-monthly partitions introduced in [27] for our cross-validation experiments. This partitioning of the dataset is created by dividing the data timeline from Dec 2010 to Dec 2018 into four partitions, where Partition-1 contains data from January to March, Partition-2 contains data from April to June, Partition-3 contains data from July to September, and finally, Partition-4 contains data from October to December as shown in Table. I. Because \(\geq\)M1.0-class flares are scarce, the data distribution exhibits a significant imbalance, with the highest imbalance occurring in Partition-2 (FL:NF \(\sim\)1:9). Overall, the imbalance ratio stands at \(\sim\)1:6 for FL to NF class.
Footnote 1: The current total count of 63,649 magnetogram observations in our dataset is lower than it should be for the period of December 2010 to December 2018. This is due to the unavailability of some instances from Helioviewer.
## IV Model
In this work, we develop two deep learning models: (i) a standard CNN model as a baseline (denoted as _M1_), and (ii) an attention-based model (denoted as _M2_) proposed in [16], and compare them on the task of solar flare prediction. The M1 model shown in Fig. 3 follows an intuition of standard CNN architecture where a global image descriptor (\(g\)) is derived from the input image from the activations of the last convolutional layer and passed through a fully connected layer to obtain class prediction probabilities. On the other hand, the attention-based full-disk model (M2) encourages the filters earlier in the CNN pipeline to learn similar mappings compatible with the one that produces a global image descriptor in the original architecture. Furthermore, it focuses on identifying salient image regions and amplifying their influence while suppressing irrelevant and potentially spurious information in other regions during training; to do so, it integrates a trainable attention estimator into the standard CNN pipeline. The architecture of our attention-based model is shown in Fig. 4. The architecture of the attention model proposed in [16] integrates the trainable attention modules in a modified VGG-16 [34] architecture. We use a simpler VGG-like architecture with a reduced number of convolutional layers, which also reduces the number of parameters. Our first convolutional layer accepts a 1-channel input magnetogram image resized to 256\(\times\)256. Each convolutional layer (except the last one) is followed by a batch normalization layer before max pooling. The final convolutional layer outputs feature maps of size 512\(\times\)1\(\times\)1 that are squeezed into a fully connected layer (FC-1) with a 512-dimensional vector, which is the global representation (\(g\)) of the input image.
Fig. 2: A visual representation of the data labeling process using hourly observations of full-disk LoS magnetograms and a prediction window of 24 hours considered to label the magnetograms. Here, ‘FL’ and ‘NF’ indicate ‘Flare’ and ‘No Flare’ for binary prediction mode (\(\geq\)M1.0-class flares).
The M2 model follows the same architecture as in M1, except it has three trainable attention modules integrated after the third, fourth, and fifth convolution blocks before the max-pool layer. The similarity between the architectures is intentional to demonstrate the impact of the attention estimators on model performance. Similarly, integrating attention modules in the middle of the network is also a deliberate design choice. As the early layers in CNN primarily focus on low-level features [38], we position the attention modules further into the pipeline to capture higher-level features. However, there is a tradeoff involved, as pushing attention to the last layers is hindered by significantly reduced spatial resolution in the feature maps. Consequently, placing attention modules in the middle strikes a balance, making it a more suitable and pragmatic approach.
In the M2 model, outputs from the convolutional blocks (denoted as \(L^{s}\)) are passed to the attention estimators. In other words, \(L^{s}\) is a set of feature vectors:
\[L^{s}=\{l_{1}^{s},l_{2}^{s},...,l_{n}^{s}\}\]
extracted at a given convolutional layer to serve as input to the \(s^{th}\) attention estimator, where \(l_{i}^{s}\) is the vector of output activations at the \(i^{th}\) of the \(n\) total spatial locations in the layer. \(g\) represents a global feature vector obtained by flattening the feature maps at the first fully connected layer, located at the end of the convolution blocks (referred to as FC-1 in Fig. 4).
The attention mechanism aims to compute a compatibility score, denoted as \(C(L^{s},g)\), utilizing the local features (\(L^{s}\)) and global feature representations (\(g\)), and replaces the final feature vector with a set of attention-weighted local features. As the compatibility scores \(C\) and \(L^{s}\) are required to have the same dimension, the dimension matching is performed by a linear mapping of vectors \(l_{i}^{s}\) to the dimension of \(g\). Then, the compatibility function \(C(L^{s},g)=\{c_{1}^{s},c_{2}^{s},...,c_{n}^{s}\}\) is a set for each vector \(l_{i}^{s}\), which is computed as an addition operation (additive attention) as follows:
\[c_{i}^{s}=(l_{i}^{s},g),\text{ for }i\in\{1,2,...,n\}.\]
The computed compatibility scores are then normalized using a softmax operation and represented as:
\[A^{s}=\{a_{1}^{s},a_{2}^{s},...,a_{n}^{s}\}.\]
Fig. 4: The architecture of our attention-based flare prediction model (M2). The model has three trainable attention modules integrated after the third, fourth, and fifth convolution blocks before the max-pool layer. Note: Each convolutional layer (except the last one) is followed by a batch normalization layer.
Fig. 3: The architecture of our baseline model (M1). Note: Each convolutional layer (except the last one) is followed by a batch normalization layer.
The normalized compatibility scores are then used to compute an element-wise weighted average, which results in a vector:
\[g_{a}^{s}=\sum_{i=1}^{n}a_{i}^{s}\cdot l_{i}^{s}\]
for each attention layer, \(s\). Finally, the individual \(g_{a}^{s}\) vectors of size 512 are concatenated to get a new attention-based global representation to perform the binary classification in the (second) fully connected layer (FC-2). This approach allows the activations from earlier layers to influence and contribute to the final global feature representation, thereby enhancing the model's ability to capture relevant spatial information.
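A minimal PyTorch sketch of a single attention estimator is given below. It is our own simplification rather than the authors' implementation, and for brevity it scores compatibilities with a dot product, whereas the paper describes an additive form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionEstimator(nn.Module):
    def __init__(self, local_channels: int, global_dim: int = 512):
        super().__init__()
        # 1x1 convolution performs the linear dimension matching of l_i^s to dim(g)
        self.project = nn.Conv2d(local_channels, global_dim, kernel_size=1)

    def forward(self, local_feats: torch.Tensor, g: torch.Tensor):
        # local_feats: (B, C_s, H, W); g: (B, 512)
        l = self.project(local_feats)              # (B, 512, H, W)
        b, d, h, w = l.shape
        l = l.view(b, d, h * w)                    # (B, 512, n) local vectors l_i^s
        scores = (l * g.unsqueeze(-1)).sum(dim=1)  # compatibility scores c_i^s, (B, n)
        attn = F.softmax(scores, dim=-1)           # normalized weights a_i^s
        g_a = (l * attn.unsqueeze(1)).sum(dim=-1)  # attention-weighted average g_a^s, (B, 512)
        return g_a, attn.view(b, h, w)
```

Concatenating the \(g_{a}^{s}\) vectors produced by the three estimators then yields the attention-based global descriptor that is passed to FC-2.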
## V Experimental Evaluation
### _Experimental Settings_
We trained both of our models (M1 and M2) with stochastic gradient descent (SGD) as an optimizer and cross-entropy as the objective function. Both models are initialized using Kaiming initialization from a uniform distribution [39], and then we use a dynamic learning rate (initialized at 0.001 and reduced by half every 3 epochs) to further train the model to 40 epochs with a batch size of 128. We regularized our models with a weight decay parameter tuned at 0.5 to prevent overfitting. As mentioned earlier in Sec. III, we are dealing with an imbalanced dataset. Therefore, we address the class imbalance problem through data augmentation and oversampling exclusively for the training set while maintaining the imbalanced nature of the test set for realistic evaluation. Firstly, we use three augmentation techniques: vertical flipping, horizontal flipping, and +5\({}^{\circ}\) to -5\({}^{\circ}\) rotations on minority class (FL-class) which decreases the imbalance from 1:6 to approximately 2:3. Finally, we randomly oversampled the minority class to match the instances of NF-class resulting in a balanced dataset. We prefer augmentation and oversampling over undersampling as the flare prediction models trained with undersampled data are shown to lead to inferior performance [40] (usually transpiring as one-sided predictions). We employed a 4-fold cross-validation schema for validating our models, using the tri-monthly partitions (described in Sec. III), where we applied three partitions for training the model and one for testing.
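The stated optimization setup can be expressed in a few lines of PyTorch; the stand-in model below is only a placeholder for the actual M1/M2 architectures, and the training loop (40 epochs, batch size 128, one scheduler step per epoch) is omitted.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(256 * 256, 2))  # placeholder for M1/M2

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=0.5)
# dynamic learning rate: initialized at 0.001 and halved every 3 epochs, as stated above
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.5)
```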
We evaluate the performance of our models using two widely-used forecast skills scores: True Skill Statistics (TSS, in Eq. 1) and Heidke Skill Score (HSS, in Eq. 2), derived from the elements of confusion matrix: True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). In the context of this paper, the "FL-class" is considered as the positive outcome, while the "NF-class" is negative.
\[TSS=\frac{TP}{TP+FN}-\frac{FP}{FP+TN} \tag{1}\]
\[HSS=2\times\frac{TP\times TN-FN\times FP}{P\times(FN+TN)+(TP+FP)\times N} \tag{2}\]
where N = TN + FP and P = TP + FN. TSS and HSS values range from -1 to 1, where 1 indicates all correct predictions, -1 represents all incorrect predictions, and 0 represents no skill. In contrast to TSS, HSS is an imbalance-aware metric, and it is common practice to use HSS for the solar flare prediction models due to the high class-imbalance ratio present in the datasets. For a balanced test dataset, these metrics are equivalent [40]. Lastly, we report the subclass and overall recall for flaring instances (M- and X-class), which is calculated as (\(\frac{TP}{TP+FN}\)), to demonstrate the prediction sensitivity.
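The two skill scores can be computed directly from the confusion-matrix counts; the small helper below is our own and simply transcribes Eq. 1 and Eq. 2.

```python
# TSS and HSS from confusion-matrix counts, following Eq. 1 and Eq. 2.
def tss(tp: int, fp: int, tn: int, fn: int) -> float:
    return tp / (tp + fn) - fp / (fp + tn)

def hss(tp: int, fp: int, tn: int, fn: int) -> float:
    p, n = tp + fn, tn + fp
    return 2 * (tp * tn - fn * fp) / (p * (fn + tn) + (tp + fp) * n)

# a perfect forecast gives TSS = HSS = 1
assert tss(10, 0, 50, 0) == 1.0 and hss(10, 0, 50, 0) == 1.0
```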
### _Evaluation_
We perform a 4-fold cross-validation using the tri-monthly separated dataset for evaluating our models. With the baseline model (M1) we obtain, on average, TSS\(\sim\)0.35\(\pm\)0.13 and HSS\(\sim\)0.30\(\pm\)0.09. The M1 model, following the standard CNN pipeline, shows fluctuations across folds and hence a high margin of error on skill scores, as reflected by the standard deviation. Model M2 improves over the performance of model M1 by \(\sim\)20% and \(\sim\)7% in terms of TSS and HSS respectively. Furthermore, it improves on the performance of [12, 13] by \(\sim\)3% in terms of TSS, shows comparable results in terms of HSS, and is more robust, as indicated by the deviations across the folds shown in Table II. Moreover, the performance of model M2 becomes even more noteworthy when considering its parameter efficiency. With only \(\sim\)7.5 million parameters, it outperforms the AlexNet-based model of [13] and the VGG16-based model of [12], which have much higher parameter counts of \(\sim\)57.25 and \(\sim\)134 million respectively, showcasing the effectiveness of attention mechanisms in achieving superior results while maintaining a significantly leaner model architecture. This highlights the potential of this approach to provide both performance gains and resource optimization. The findings of this study emphasize the significance of optimizing attention configurations to enhance model performance, taking into account both parameter complexities and the strategic combination of attention patterns for effective pattern recognition.
In addition, we evaluate our results for correctly predicted and missed flare counts for class-specific flares (X-class and M-class) in central locations (within \(\pm\)70\({}^{\circ}\)) and near-limb locations (beyond \(\pm\)70\({}^{\circ}\)) of the Sun as shown in Table III. We observe that the attention-based model (M2) shows significantly better results compared to the baseline (M1). The M2 model made correct predictions for \(\sim\)95% of the X-class flares and \(\sim\)83% of the M-class flares in central locations. Similarly, it shows a compelling performance for flares appearing on near-limb locations of the Sun, where \(\sim\)77% of the X-class and \(\sim\)51% of the M-class flares are predicted correctly. This is important because, to our knowledge, the prediction of near-limb flares is often overlooked, although vital for predicting Earth-impacting space weather events. More false negatives
in M-class are expected because of the model's inability to distinguish bordering class (C4+ to C9.9) flares from \(\geq\)M1.0-class flares as shown in Fig. 5. We observed an upward trend in the false positive rate for sub-classes (SFPR) within C-class flares when compared to other sub-classes, such as Flare-Quiet (FQ), A-class, and B-class flares. More specifically, we note that the count of false positives (FP) surpasses that of true negatives (TN) for flare classes ranging from \(\geq\)C4 to \(\leq\)C9. The prevalence of FP in \(\geq\)C4-class flares suggests a need for improved predictive capabilities between border classes.
Overall, we observed that our model predicted \(\sim\)89% of the flares in central locations and \(\sim\)64% of the flares in near-limb locations. Furthermore, class-wise analysis shows that \(\sim\)91% and \(\sim\)74% of the X-class and M-class flares, respectively, are predicted correctly by our models. To reproduce this work, the source code is available in our open-source repository [41].
## VI Discussion
In this section, we visualize the attention maps learned by the M2 model to qualitatively analyze and understand regions in input magnetogram images that are considered relevant. We applied three attention layers in our model M2, where the attention maps \((L1,L2,L3)\) have spatial dimensions \(\frac{1}{4}\), \(\frac{1}{8}\), and \(\frac{1}{16}\) of the input size, respectively. To visualize the relevant features learned by the models using attention layers, we upscale these maps to the size of the magnetogram image using bilinear interpolation and overlay the maps on top of the original image. We present the attention maps from the Attention Estimator-2 because the first attention layer focuses on lower-level features, which are scattered and do not provide a globally detailed explanation. On the other hand, the Attention Estimator-3 focuses on higher-level features, and due to the high reduction in spatial dimension (\(\frac{1}{16}\) of the original input), upscaling through interpolation results in a spatial resolution that is insufficient for generating interpretable activation maps.
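The upscale-and-overlay step can be sketched as follows; this is our own illustration with assumed tensor shapes, not the authors' plotting code.

```python
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

def overlay_attention(magnetogram: torch.Tensor, attn_map: torch.Tensor, alpha: float = 0.4):
    # magnetogram: (256, 256); attn_map: (H, W), e.g., 1/8 of the input size
    upscaled = F.interpolate(attn_map[None, None], size=magnetogram.shape,
                             mode="bilinear", align_corners=False)[0, 0]
    plt.imshow(magnetogram.numpy(), cmap="gray")          # input magnetogram
    plt.imshow(upscaled.numpy(), cmap="jet", alpha=alpha)  # attention heat map on top
    plt.axis("off")
    plt.show()
```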
As the primary focus of this study is to understand the capability of full-disk models on the near-limb flares, we showcase a near-limb (East) X3.2-class flare observed on 2013-05-14T00:00:00 UTC. Note that East and West are reversed in solar coordinates. The location of the flare is shown by a green flag in Fig. 6 (a)(i), along with the ARs (red flags). For this case-based qualitative analysis, we use an input image at 2013-05-13T06:00:00 UTC (\(\sim\)18 hours prior to the flare event), shown in Fig. 6 (a)(ii) and in Fig. 6 (a)(iii), we show the overlaid attention map, which pinpoints important regions in the input image where specific ARs are activated as relevant features, suppressing a large section of the full-disk magnetogram disk although there are 10 ARs (red flags). More specifically, the model focuses on the same AR that is responsible for initiating a flare 18 hours later. Similarly, we analyze another case of correctly predicted near-limb (West) X1.0-class flare observed on 2013-11-19T10:14:00 UTC shown in Fig. 6 (b)(i). For this, we used an input image at 2013-11-18T17:00:00 UTC (\(\sim\)17 hours prior to the flare event) shown in Fig. 6 (b)(ii). We again observed that the model focuses on the relevant AR even though other, relatively large ARs are present in the magnetogram image as shown in Fig. 6 (b)(iii).
Furthermore, we provide an example to analyze a case of false positives as well. For this, we use an example of a C7.9 flare observed on 2014-02-03T00:12:43 UTC shown in Fig.6 (c)(i), and to explain the result, we used an input magnetogram instance at 2014-02-02T23:00:00 UTC (\(\sim\)14 hours prior to the event) shown in Fig.6 (c)(ii). For the given time, there are 7 ARs indicated by the red flags, however, on interpreting this prediction with attention maps shown in Fig.6 (c)(iii), we observed that the model considers only one region as a relevant feature for the corresponding prediction, which is indeed the location of the C7.9 flare. This incorrect prediction can be attributed to interference caused by bordering C-class flares as shown earlier in Fig. 5, where we noted that among the 25,150 C-class flares observed, \(\sim\)43% (10,935) resulted in incorrect predictions, constituting \(\sim\)91% of the total false positives.
## VII Conclusion and Future Work
In this work, we presented an attention-based full-disk model to predict \(\geq\)M1.0-class flares in binary mode and compared the performance with standard CNN-based models. We observed that the trainable attention modules play a crucial role in directing the model to focus on pertinent features associated with ARs while suppressing irrelevant features in a magnetogram during training, resulting in an enhancement of model performance. Furthermore, we demonstrated, both quantitatively through recall scores and qualitatively by overlaying attention maps on input magnetogram images, that our model effectively identifies and localizes relevant AR locations, which are more likely to initiate a flare. This prediction capability extends to near-limb regions, making it crucial for operational systems. As an extension, we plan to include the temporal aspects in our dataset and create a spatiotemporal model to capture the evolution of solar activity leading to solar flares. Furthermore, we plan to extend this work by developing an automated way of analyzing the interpretation results to identify the main causes of incorrect predictions.
Fig. 5: A bar-line plot showing the true negatives (TN), false positives (FP), and false positive rate for sub-classes in NF-class (SFPR) obtained from model M2. The results are aggregated from validation sets of 4-fold experiments.
## Acknowledgments
This project is supported in part under two NSF awards #2104004 and #1931555, jointly by the Office of Advanced Cyberinfrastructure within the Directorate for Computer and Information Science and Engineering, the Division of Astronomical Sciences within the Directorate for Mathematical and Physical Sciences, and the Solar Terrestrial Physics Program and the Division of Integrative and Collaborative Education and Research within the Directorate for Geosciences. This work is also partially supported by the National Aeronautics and Space Administration (NASA) grant award #80NSSC22K0272. The data used in this study is a courtesy of NASA/SDO and the AIA, EVE, and HMI science teams, and the NOAA National Geophysical Data Center (NGDC).
|
2309.08362 | Towards Big Data Modeling and Management Systems: From DBMS to BDMS | To succeed in a Big Data strategy, you have to arm yourself with a wide range
of data skills and best practices. This strategy can result in an impressive
asset that can streamline operational costs, reduce time to market, and enable
the creation of new products. However, several Big Data challenges may take
place in enterprises when it comes to moving initiatives of boardroom
discussions to effective practices. From a broader perspective, we take on this
paper two very important challenges, namely modeling, and management. The main
context here is to highlight the importance of understanding data modeling and
knowing how to process complex data while supporting the characteristics of
each model. | Rania Mkhinini Gahar, Olfa Arfaoui, Minyar Sassi Hidri | 2023-09-15T12:40:51Z | http://arxiv.org/abs/2309.08362v1 | # Towards Big Data Modeling and Management Systems: From DBMS to BDMS
###### Abstract
To succeed in a Big Data strategy, you have to arm yourself with a wide range of data skills and best practices. This strategy can result in an impressive asset that can streamline operational costs, reduce time to market, and enable the creation of new products. However, several Big Data challenges may take place in enterprises when it comes to moving initiatives of boardroom discussions to effective practices. From a broader perspective, we take on this paper two very important challenges, namely modeling, and management. The main context here is to highlight the importance of understanding data modeling and knowing how to process complex data while supporting the characteristics of each model.
Big Data, Modeling, Management, BDMS, DBMS.
## I Introduction
In today's society, data is growing exponentially. It therefore becomes more complicated to manage such data with traditional tools. The _Big Data Analytics_ process was created to manage this mass of data and draw results from it.
While staggering amounts of data are meaningful, Big Data faces colossal challenges that must be mastered in order to keep these databases under control. Stored in multiple Data Centers, the exploitation of Big Data continues to grow, especially with the popularization of Cloud Computing (remote and online storage systems) [1, 2, 3, 4, 5, 6, 7].
Information processing is one of the main challenges of Big Data. Indeed, data arrives in droves and in all formats from the four corners of the world, at all times. Companies in charge of Data Centers must therefore set up management tools capable of monitoring the velocity of data. At the same time, the quality and relevance of the information received must also be checked.
In this context, data modeling and management are two of the most important and valuable tools for understanding business information. The Big Data modeling concept implies two terminologies, namely "Data modeling" and "Big Data". The term "Big Data" refers to all digital data produced by the use of new technologies for personal or professional purposes. This kind of data is also complex by nature, which is why it cannot be analyzed using traditional methods [8, 9].
Data with such complexity can be analyzed using high-quality data modeling methods. In this context, it should be clear that _data modeling_ refers to organizing data into visual patterns so that the data analysis process can be performed effectively. These techniques include the process of making visual representations of the whole or part of the datasets [10].
Thereby, Big Data modeling employs specific data modeling methods that differ from the traditional ones, and the process consists of organizing Big Data for the companies' use.
Big Data management, in turn, covers the organization, administration, and governance of large volumes of both structured and unstructured data. The target of Big Data management is a high level of data quality and accessibility for business intelligence and Big Data analytics applications. Many organizations, such as corporations and government agencies, adopt Big Data management strategies to help them contend with fast-growing data pools, typically involving many terabytes or even petabytes stored in a variety of file formats.
Effective Big Data management can help companies to locate valuable information in large unstructured and semi-structured datasets from various sources, including call detail records, system logs, sensors, images, and social media sites.
The remainder of this paper is organized as follows. Big Data modeling is highlighted in Section II. Some Big Data management systems are presented and subsequently described in Section III. The overall conclusion with future extension remarks is stated in Section IV.
## II Big Data Modeling
The Big Data modeling concept depends on many factors. It includes the data structure, the operations that can be performed on it, and the constraints applied to the model [11]. It is necessary to determine the data characteristics before the data can be manipulated or analyzed in a meaningful and significant way [8, 12]. Let us take for example the structure _Person_ whose characteristics are summarized as the surname, the first name, and the Date Of Birth (DOB), as shown in Fig. 1.
Likewise, the fact that we can perform arithmetic or aggregation with the DOB field, but not with the first name field, which is categorical, is also part of our understanding of the data model. These are nothing but the operations that can be performed. Let us cite the example of selecting all persons having a DOB before 2023, as described in Fig. 2.
Finally, we can know that in this society, the age corresponding to the current date minus the DOB cannot be under 18 years old. A translation of this constraint can be given by Fig. 3.
This provides a way to detect records with an obviously wrong DOB.
### _Data Models types_
#### II-A1 Relational data model
It refers to a way of structuring information in the form of matrices called tables or relations. This very simple model is by far the most widespread in Database Management Systems (DBMS), which are thus called relational DBMSs [13, 14]. A relational database, therefore, consists of a structured dataset in the form of relations. It is similar to Table. I, presented here for an employee application. However, we should pay attention to relational tables, also called relations. This array actually represents a set of tuples. In Table. I, a relational tuple is framed in red; it is represented by a row in the table. A relational tuple implies that, unless otherwise specified, its elements, such as 203 or 204, Mary, etc., are _atomic_.
The previous example describes a set of six tuples, also called records. In fact, when we talk about a collection of _distinct elements_ of the same type, it means that it is impossible to add a tuple that already exists to the relation; if we do, it will be a _duplicate_ (see Table. II).
Table. III shows another tuple that cannot be added. The latter has all the right attributes, but unfortunately, they are placed in the wrong order. In this way, we call this tuple a **dissimilar** one.
The question that arises here is: how does the system know that this tuple is different? This draws our attention to the first line, shown in Table. IV. It is part of the table schema and simply gives us information about the table name, in this case, _Employee_.
It clearly presents the names of the columns, which are also called the attributes of the relation. Each column describes its specific data type, i.e. the type constraint for each column. Given this schema, we now need to understand why the last red row does not belong to this table. The schema of a relational table can also specify constraints.
Fig. 1: The Person structure.
Fig. 2: An operation example.
Fig. 3: A constraint example.
Let us introduce a new table containing employee salary history. Employees are identified with the _EmpID_ column, but these are not new values for this table. These are the same IDs present in the ID column of the Employee table, presented previously. This is reflected in the statement made to the right.
References mean that values in one column can only exist if the same values exist in the _Employee_ table (see Fig. 4), called the _parent table_. That is why, in the relational model, the _EmpID_ column of the _EmpSalaries_ table is called a foreign key, which refers to the primary key of the _Employee_ table (see Fig. 4).
#### II-A2 Semi-structured data model
Semi-structured data is an intermediate form. They are not organized according to a complex method that makes sophisticated access and analysis possible; however, certain information may be associated with them, such as metadata tags, which allow the addressing of the elements they contain. For example, a Word document is generally considered to be a collection of unstructured data. However, you can add metadata to it in the form of keywords that represent the content of the document and make it easier to find when searching for those terms [15]. The data is then semi-structured.
#### II-A3 Non-structured data model
Unstructured data is defined as data present in absolute raw form. This data is difficult to process due to its complex organization and formatting. Unstructured data management can take data in many forms, including social media posts, chats, satellite imagery, IoT (Internet of Things) sensor data, emails, and presentations, and organize it in a logical and predefined way in data storage. In contrast, structured data is data that follows predefined patterns and is easy to analyze. Examples of structured data would include alphabetized customer names and properly organized credit card numbers [16].
Unstructured data can be anything that is not in a specific format. It can be a paragraph from a book with relevant information or a web page. Log files that are not easily separated are another example of unstructured data, as are comments and publications on social networks that must be analyzed [17].
## III Big Data Management
The data management system refers to the set of practices necessary for the construction and maintenance of a framework for the data import, storage, exploration, and archiving that are necessary for business activities. Data management is the backbone that connects the different segments of the data life cycle in the company [18]. Data management works hand-in-hand with the management process to ensure that different teams take the necessary steps to always have the cleanest and most up-to-date data. In other words, it is the process of ensuring that your employees are empowered to monitor changes and trends in real time.
For example, each data access task, such as finding employees in a department sorted by salary or finding employees in all departments sorted by start date, must be translated into a program according to the request. To do this, each request is associated with a program developed to respond to it, whether for accessing data or updating it.
Fig. 4: Join relation.
The third problem concerns constraints. Data types are a way to restrict the nature of data that can be stored in a table. For many applications, however, the constraint provided by this means is too coarse. For example, a column that contains the price of a product should only accept positive values. But there is no standard data type that only accepts positive values. Another problem can arise from wanting to constrain the data in one column relative to other columns or rows. For example, in a table containing product information, there can only be one row per product number.
For this, the Structured Query Language (SQL) allows you to define constraints on columns and tables. Constraints give as much control over table data as a user wants. If a user attempts to store data in a column in violation of a constraint, an error is thrown. This applies even if the value comes from the default value definition. Many constraints are referred to as integrity constraints. For example, say that each employee has exactly one job title [14, 19].
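As a small, self-contained illustration of such column and table constraints (our own example, using Python's built-in sqlite3 rather than a production DBMS), the CHECK constraint below rejects non-positive prices and the UNIQUE constraint rejects duplicate product numbers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE Product (
        ProductNo INTEGER UNIQUE,
        Name      TEXT NOT NULL,
        Price     REAL CHECK (Price > 0)
    )
""")
con.execute("INSERT INTO Product VALUES (1, 'Widget', 9.99)")
try:
    con.execute("INSERT INTO Product VALUES (1, 'Gadget', -5.0)")  # violates UNIQUE and CHECK
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```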
Atomicity means that database updates must be "atomic", i.e. they must be done completely or not at all. Out of 5000 rows to be modified, if one modification just failed, then the entire transaction must be rolled back. It is important to note that each modified row can be affected by the modification context of the adjacent one, and any break in that context can have disastrous consequences [20, 21, 22].
When it comes to Big Data, things change. It is clear that traditional DBMSs cannot deal with these massive-scale characteristics. That is why another concept was born, baptized BDMS for Big Data Management Systems.
_Redis - An Enhanced Key-Value Store_: it is called an in-memory data structure store: it can keep data on disk and save its state. However, it is intended to make optimal use of memory and memory-based methods to make a number of common data structures very fast for many users [23]. Redis supports a list of data structures, namely: strings, hashes, lists, sets, and sorted sets.
* Look-up problem: in the simplest case, a search requires a key-value pair where the key is a string and the value is also a string. For a look-up, we provide the key and get back the value (a minimal sketch is shown after this list).
* Partitioning and replication: they are techniques that build the foundation of using Redis as a distributed system. They will be examined as very basic building blocks. For more complex needs, there are more complex abstractions, like Redis Sentinel and Redis Cluster, that build upon these building blocks. Fig. 5 describes an example of the Master/Slave replication mode.
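The look-up case mentioned above can be sketched with the redis-py client; the example assumes a Redis server reachable on localhost:6379, and the keys and values are illustrative.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
r.set("user:42:name", "Mary")                                  # key -> value
print(r.get("user:42:name"))                                   # look-up by key -> "Mary"
r.hset("user:42", mapping={"name": "Mary", "dept": "Sales"})   # one of Redis' richer structures
print(r.hgetall("user:42"))
```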
_Aerospike: A New Generation KV Store_: it is an open-source in-memory Not only SQL (NoSQL) DBMS. It is a key-value database designed to provide sub-millisecond response times to applications [24]. Fig. 6 can further describe its architecture.
The upper layer presents several applications for real-time systems for consumers, such as travel recommendation systems, pricing engines used for stock market applications, real-time decision systems that analyze data to determine whether an investment must be made, etc.
Nowadays, all data management systems have a common need, which resides in access at any time to colossal volumes of data. The Aerospike system can interact with systems based on Hadoop, Spark, a legacy database, or even a real-time data source. It can exchange large volumes of data with any of these sources and serve quick queries and searches to the above applications. Now, this translates to very high availability and robust consistency requirements.
The storage layer uses three types of storage systems: in-memory with dynamic Random Access Memory (RAM) or Dynamic RAM (DRAM), a normally rotating disk, and a flash disk / Solid-State Drive (SSD), which is a solid-state device for fast data loading when needed. In fact, the Aerospike system has optimized its performance keeping in mind the characteristics of an SSD drive. For those who don't know what an SSD is, you can consider it a kind of storage device whose random read performance is much faster than that of a hard disk and whose write performance is a little slower.
_AsterixDB: A DBMS of Semistructured Data_: it is a shared-nothing parallel DBMS that splits data among various nodes using a hash-based partitioning mechanism. It also provides a platform for applications that are characterized by scalable storage and analysis of very large volumes of semi-structured data.
Fig. 5: Master/slave replication mode under Redis.
Fig. 6: Aerospike architecture.
Fig. 7 provides an overview of how the various software components of AsterixDB map to nodes in a shared-nothing cluster, what is called the Asterix Manager (AM) interface. It is composed of three Node Controllers (NCs) and one Cluster Controller (CC). The topmost layer of AsterixDB is a parallel DBMS, with a full, flexible AsterixDB Data Model (ADM) and AsterixDB Query Language (AQL) for describing, querying, and analyzing data. ADM and AQL support both native storage and indexing of data as well as analysis of external data (e.g., data in the Hadoop Distributed File System (HDFS)).
In AsterixDB, data is stored in datasets. Each record conforms to the datatype associated with the dataset. In fact, data is hash-partitioned (primary key) across a node set which forms the node group for a dataset and defaults to all nodes in an AsterixDB cluster [25].
_Solr - Managing Text_: it is a powerful search engine, based on Apache Lucene, integrated with Hadoop. It computes the Term Frequency (TF) and Inverse Document Frequency (IDF) of the collection. Term Frequency-Inverse Document Frequency (TF-IDF) term vectors are often used to represent text documents when performing text mining and machine learning operations.
Practically, other calculated numbers or properties associated with the terms will also be included in the index [26].
The main Solr features include the indexing of text Document (DOC), Portable Document Format (PDF), PowerPoint (PPT), or Microsoft Excel spreadsheet (XLS) documents, the indexing of a database, and the ability to perform advanced searches. These are full-text indexes where text columns are supplemented with indexes for other data types, including numeric data, dates, geographic coordinates, and fields where domains are limited to a set of emerging values. Fig. 8 shows its architecture.
_Vertica - A Columnar DBMS_: it is a relational analytical database that integrates with SQL solutions and Hadoop, Spark, or Kafka architectures, whether in the Cloud (Google, Amazon Web Service (AWS), Azure) or On-Premise. Its performance, scalability, and native high availability allow both startups and the largest global players to carry out their Business Intelligence (BI) or Data Science projects regardless of the volume handled [27]. Vertica has advanced analytical functions and Machine Learning algorithms to perform part of the in-database processing. It is a columnar data storage platform designed to handle huge volumes of data. This allows its users fast and efficient query performance while providing high availability and scalability on enterprise servers. The main features of the Vertica database are:
* Column-based storage organization;
* SQL interface with integrated analysis capabilities;
* Compression to reduce storage costs;
* Compatible with programming interfaces;
* High performance and parallel data transfer.
For the query example shown in Fig. 9, a column store reads only three columns while a row store reads all columns.
Table V presents a comparative study of the different BDMS already described above.
## IV Conclusion
Data modeling as well as management are very important tasks nowadays for the data scientist, the main reason being decision-making. In fact, data modeling is the process that enables companies to discover, design, visualize, as well as standardize and even deploy high-quality data assets through an intuitive graphical interface. A proper data model can now serve as a blueprint for designing and deploying databases, leveraging higher quality data sources to improve the application development process and make better decisions [28, 29]. Thus, among other things, data visualization also represents a challenge that we cannot ignore [30]. However, conventional visualization techniques cannot handle the enormous volume, variety, and velocity of data. To do this, several tools have emerged and are constantly evolving. So, we will be interested in Big Data visualization.
Fig. 7: Illustration of a simple YARN cluster with AsterixDB processes and their locations.
Fig. 8: Apache Solr architecture.
Fig. 9: Vertica query example.
|
2309.07051 | UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons | The automatic co-speech gesture generation draws much attention in computer
animation. Previous works designed network structures on individual datasets,
which resulted in a lack of data volume and generalizability across different
motion capture standards. In addition, it is a challenging task due to the weak
correlation between speech and gestures. To address these problems, we present
UnifiedGesture, a novel diffusion model-based speech-driven gesture synthesis
approach, trained on multiple gesture datasets with different skeletons.
Specifically, we first present a retargeting network to learn latent
homeomorphic graphs for different motion capture standards, unifying the
representations of various gestures while extending the dataset. We then
capture the correlation between speech and gestures based on a diffusion model
architecture using cross-local attention and self-attention to generate better
speech-matched and realistic gestures. To further align speech and gesture and
increase diversity, we incorporate reinforcement learning on the discrete
gesture units with a learned reward function. Extensive experiments show that
UnifiedGesture outperforms recent approaches on speech-driven gesture
generation in terms of CCA, FGD, and human-likeness. All code, pre-trained
models, databases, and demos are available to the public at
https://github.com/YoungSeng/UnifiedGesture. | Sicheng Yang, Zilin Wang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Qiaochu Huang, Lei Hao, Songcen Xu, Xiaofei Wu, changpeng yang, Zonghong Dai | 2023-09-13T16:07:25Z | http://arxiv.org/abs/2309.07051v1 | # UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons
###### Abstract
The automatic co-speech gesture generation draws much attention in computer animation. Previous works designed network structures on individual datasets, which resulted in a lack of data volume and generalizability across different motion capture standards. In addition, it is a challenging task due to the weak correlation between speech and gestures. To address these problems, we present UnifiedGesture, a novel diffusion model-based speech-driven gesture synthesis approach, trained on multiple gesture datasets with different skeletons. Specifically, we first present a retargeting network to learn latent homeomorphic graphs for different motion capture standards, unifying the representations of various gestures while extending the dataset. We then capture the correlation between speech and gestures based on a diffusion model architecture using cross-local attention and self-attention to generate better speech-matched and realistic gestures. To further align speech and gesture and increase diversity, we incorporate reinforcement learning on the discrete gesture units with a learned reward function. Extensive experiments show that UnifiedGesture outperforms recent approaches on speech-driven gesture generation in terms of CCA, FGD, and human-likeness. All code, pretrained models, databases, and demos are available to the public at [https://github.com/YoungSeng/UnifiedGesture](https://github.com/YoungSeng/UnifiedGesture).
gesture generation, neural motion processing, data-driven animation
## 1. Introduction
Nonverbal behaviors, including gestures, play key roles in conveying messages in human communication (Sundundhi et al., 2017). The automatic co-speech gesture generation is considered an enabling technology to create realistic 3D avatars in films, games, virtual social spaces, and for interaction with social robots (Sundhi et al., 2017). In the era of deep learning, existing data-driven gesture generation methods usually rely on a large dataset. Studies have shown that a larger amount of data can improve the generalization of the model and enhance its performance (Kang et al., 2019; Wang et al., 2020).
Thanks to the development of human pose estimation (Sundhi et al., 2017), it's easy to extract 3D human poses from the tremendous amount of 2D gesture data on the web, e.g., TED (Tie et al., 2018) and PATS (Beng et al., 2019), and some works (Sundhi et al., 2017; Tie et al., 2018; Tie et al., 2018) are based on 2D gesture datasets. While large in quantity, 3D poses extracted from 2D datasets are poor in quality and difficult to use, so most works (Beng et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) opt for high-quality 3D mocap datasets.
There are two main challenges when utilizing 3D datasets. First, due to the expensive cost of motion capture, the typical 3D gesture datasets (Kang et al., 2019; Wang et al., 2020; Wang et al., 2020) are relatively small, thus the generalization of the models trained on the individual dataset is limited, and the ability of the trained algorithms is also confined to the content of the individual dataset. For example, some datasets contain style information (Kang et al., 2019; Wang et al., 2020), while others do not (Kang et al., 2019; Wang et al., 2020). Second, it is not straightforward to train algorithms on mixed datasets directly, since different datasets usually have different skeletons, as they are captured with different mocap systems. Most of the current solutions use software such as Blender (Kang et al., 2019) or Maya (Beng et al., 2019) for automatic retargeting to a unified skeleton, which requires manual specification of the bone mapping and leads to unavoidable errors (Beng et al., 2019). The irregular connectivity and hierarchical structure of the skeleton joint motion cause difficulties in the large-scale application of multiple skeletons.
To tackle these challenges, we propose UnifiedGesture, a novel unified co-speech gesture synthesis model for multiple skeletons. The overview of our method is shown in Figure 2. Although the number and position of the different skeleton joints are different, they all correspond to homeomorphic (topologically equivalent) graphs (Kang et al., 2019). Unlike sign language or hand gestures, there is a weak correlation between speech and body gestures at a coarse-grained level (Kang et al., 2019; Wang et al., 2020). Specifically, we assume that the gesture details associated with speech are contained in the primal skeleton gesture. According to this assumption, we first use a data-driven deep skeleton-aware (Beng et al., 2019) framework to learn latent homeomorphic graphs for different skeletons. The different skeletons are unified and retargeted to the primal skeleton while extending the dataset. Then we introduce a denoising-diffusion-based speech-driven co-speech gesture generation model, using WavLM features (Kang et al., 2019), based on cross-local attention (Wang et al., 2020) and self-attention (Wang et al., 2020) architecture to better capture the temporal information between audio and gestures. Third, unlike speaking with the face or lips, the weak correlation between speech and gesture lacks a suitable criterion for learning the model, to refine the gesture generation model, we employ inverse reinforcement learning (IRL) on discrete gesture units to train a reward model that evaluates the generated gestures and guides the diffusion model to generate high-quality and diverse gestures aligned with speech during the reinforcement learning (RL) process. Our code, pre-trained models, and demos will be publicly available soon. The main contributions of our work are:
* We employ a skeleton-aware retargeting network to unify the different skeletons to a common primal skeleton while extending the dataset.
* We present a temporally aware attention-based diffusion model on the primal skeleton for speech-driven co-speech gesture generation. By virtue of the diffusion model, we can edit the style of the gestures, setting the initial gestures, and generating diverse gestures.
* We introduce reinforcement learning with a learned reward function to refine the generation model and make the model explore the data. The exploratory space for reinforcement learning is reduced by learning a codebook with VQVAE to summarize meaningful gesture units.
* Extensive experiments show that our model can generate human-like, speech-matched, stylized, diverse, controllable, and physically plausible gestures that significantly outperform existing gesture generation methods.
## 2. Related Work
### Motion Retargeting
Our task is to take advantage of multiple gesture datasets. There are two main challenges. First, different datasets have the same
Figure 1. Gesture examples generated by our proposed method. Different skeletons are unified to the primal skeleton. The speech-driven primal skeleton generate gestures for the specified skeleton. The character used in the paper is publicly available.
Figure 2. Gesture generation pipeline of our proposed framework. We retarget the different skeletons to the primal skeleton. Given a speech segment (with optional style and seed gesture), the output is the primal gesture obtained by VQVAE encoding of the output of the diffusion model. We introduce reinforcement learning to refine the gesture generation network. Finally, a gesture for the specified skeleton is generated, with physics guidance.
motion capture standards (e.g., Trinity (2016) and BEAT (Sanden et al., 2017) both using Vicon's suits); second, different datasets have different motion capture standards (e.g., ZEGGS (2017) and Talking With Hands (2017)). For the first case, we can select the body joints common to both datasets and unify the different skeletons by normalizing (Zeggs, 2017) them by height or arm span, etc. The latter case is more challenging. Ma et al. (2018) try to map multiple datasets to a defined skeleton, but their approach still partially relies on hand-crafting and the results are still limited to a specific motion skeleton. Some works (Zeggs, 2017; Zeggs, 2017) try to retarget the motion of different skeletons with a VAE, using standard convolution and pooling. However, unlike images or videos, different skeletons exhibit irregular connectivity. Villegas et al. (2019) propose a neural network for motion retargeting that adapts input motion to target characters, achieving state-of-the-art results and using cycle consistency for unsupervised learning. Lim et al. (2018) propose a pose-movement network for motion retargeting using a normalizing process and a novel loss function. Kim et al. (2019) present an unsupervised motion retargeting model using temporal dilated convolutions that generates realistic and stable trajectories for humanoid characters. Villegas et al. (2019) propose a motion retargeting method that preserves self-contacts and prevents interpenetration, using a recurrent network. Li et al. (2018) propose an iterative motion retargeting network for unsupervised motion retargeting. Inspired by (Beng et al., 2019), we use a deep skeleton-aware framework for data-driven motion retargeting between skeletons.
### Gesture Generation
#### 2.2.1. End-to-end Co-speech Gesture Generation
Gesture generation is a complex task that requires understanding speech, gestures, and their relationships. The present data-driven studies mainly consider four modalities: text (Kal
\(\mathcal{N}_{i}^{d}\) denotes the edges whose distance in the tree is equal to or less than \(d\) from the \(i\)-th edge.
#### 3.1.1. Reference Pose Unification
The current full-body motion capture dataset for speech-driven gestures contains mainly: Trinity (Miller et al., 2017) (244 min of audio, a male actor), ZEGGS (Stein
### Diffusion Model for Speech-driven Gesture Generation
Diffusion models (Kang et al., 2017) have made great progress in motion generation (Kang et al., 2017) due to their ability to learn to gradually denoise starting from pure noise. We unified the gestures by retargeting the skeletons of different gesture datasets to a primal skeleton, and have now obtained a multi-dataset primal skeleton gesture set \(\left[\mathbf{L}_{\mathbf{A}},\mathbf{L}_{\mathbf{B}},...\right]\) with the corresponding speech set \(\left[\mathbf{A}_{\mathbf{A}},\mathbf{A}_{\mathbf{B}},...\right]\). To generate co-speech gestures with a diffusion model, we use DiffuseStyleGesture (Kang et al., 2017), which has recently achieved strong results on a single dataset, as our backbone model. As shown in Figure 4, the diffusion model consists of two parts: the forward process (diffusion process) \(q\) and the reverse process (denoising process) \(p_{\theta}\).
We denote the generated gesture as \(\mathbf{L}\) in the diffusion process, which has the same dimension as an observation data \(\mathbf{L}_{0}\sim q\left(\mathbf{L}_{0}\right)\), \(q\left(\mathbf{L}_{0}\right)\) denotes the distribution of the real data \(\left[\mathbf{L}_{\mathbf{A}},\mathbf{L}_{\mathbf{B}},...\right]\). According to a variance schedule \(\beta_{1},\beta_{2},\dots,\beta_{T_{d}}\) (\(0<\beta_{1}<\beta_{2}<\dots<\beta_{T_{d}}<1\), \(T_{d}\) is the total time step), we add Gaussian noise
\[q\left(\mathbf{L}_{t_{d}}\mid\mathbf{L}_{t_{d}-1}\right)=\mathcal{N}\left(\mathbf{L}_{t_{d}};\sqrt{1-\beta_{t_{d}}}\mathbf{L}_{t_{d}-1},\beta_{t_{d}}\mathbf{I}\right) \tag{10}\]
In the denoising process, \(p_{\theta}\) is learned via a neural network with parameters \(\theta\). The noisy gesture \(\mathbf{L}_{t_{d}}\) at step \(t_{d}\) is used to learn \(\mu_{\theta}\), \(\Sigma_{\theta}\), then
\[p_{\theta}\left(\mathbf{L}_{t_{d}-1}\mid\mathbf{L}_{t_{d}}\right)=\mathcal{N} \left(\mathbf{L}_{t_{d}-1};\mu_{\theta}\left(\mathbf{L}_{t_{d}},t_{d}\right), \Sigma_{\theta}\left(\mathbf{L}_{t_{d}},t_{d}\right)\right) \tag{11}\]
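Although only the single-step transition (10) is written out here, the standard DDPM algebra allows \(\mathbf{L}_{t_{d}}\) to be sampled from \(\mathbf{L}_{0}\) in closed form, which is what makes training efficient. A minimal PyTorch sketch; the tensor shapes and argument names are illustrative assumptions rather than details taken from the paper:

```python
import torch

def forward_diffuse(L0, t_d, alphas_cumprod):
    """Sample L_{t_d} ~ q(L_{t_d} | L_0) in closed form.

    L0:             (batch, N, D) clean primal-gesture latents (shape is illustrative)
    t_d:            (batch,) integer noising steps, 0-indexed
    alphas_cumprod: (T_d,) cumulative products of (1 - beta_t)
    """
    a_bar = alphas_cumprod[t_d].view(-1, 1, 1)      # \bar{alpha}_{t_d}, broadcast over frames
    noise = torch.randn_like(L0)                    # epsilon ~ N(0, I)
    L_td = torch.sqrt(a_bar) * L0 + torch.sqrt(1.0 - a_bar) * noise
    return L_td, noise
```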
#### 3.2.1. Denoising Module
Our goal is to synthesize a gesture \(\mathbf{L}^{1:N}\) of length \(N\) given noising step \(t_{d}\), noisy gesture \(\mathbf{L}_{t_{d}}\) and conditions \(c\) (including audio \(a\), style \(s\), and seed gesture \(d\)).
\[\hat{\mathbf{L}}_{0}=\text{Denoise}\left(\mathbf{L}_{t_{d}},t_{d},c\right) \tag{12}\]
During training, the noising step \(t_{d}\) is sampled from a uniform distribution over \(\left\{1,2,\dots,T_{d}\right\}\), with the same position encoding as (Kang et al., 2017). The noisy gesture \(\mathbf{L}_{t_{d}}\) has the same dimension as the real gesture \(\mathbf{L}_{0}\), with noise sampled from the standard normal distribution \(\mathcal{N}(0,\mathbf{I})\). In the latent representation of the gesture we also extract the difference between two frames as latent velocity, and the difference between two frames of latent velocity as latent acceleration, therefore \(\mathbf{L}_{0}\in\mathbb{R}^{N\times(7\times C\times 3)}\). Audio features are generated from the pre-trained models of WavLM Large (Liang et al., 2017). Then we use linear interpolation to align the WavLM features and the gesture \(\mathbf{L}_{0}\) in the time dimension. The styles of gestures are represented as one-hot vectors where only the element of the selected style is nonzero. The seed gesture helps to make smooth transitions between consecutive syntheses (Kang et al., 2017). The first \(N_{seed}\) frames of a gesture clip are used as the seed gesture \(d\) and the remaining \(N\) frames are used as the real gesture \(\mathbf{L}_{0}\) to calculate the loss. Self-attention (Kang et al., 2017) and cross-local attention (Zhu et al., 2017) based on relative position encoding (RPE) (Zhu et al., 2017) are used to generate better speech-matched and more realistic gestures. Random masks (RM) are added to the pipelines of the seed gesture \(d\) and the style \(s\) feature processing for classifier-free guidance (Zhu et al., 2017). During the training process, we combine the predictions of the conditional model \(\text{Denoise}\left(\mathbf{L}_{t_{d}},t_{d},c_{1}\right),c_{1}=\left[d,s,a\right]\) and the unconditional model \(\text{Denoise}\left(\mathbf{L}_{t_{d}},t_{d},c_{2}\right),c_{2}=\left[\varnothing,\varnothing,a\right]\):
\[\hat{\mathbf{L}}_{0\gamma,c_{1},c_{2}}=\gamma\text{Denoise}\left(\mathbf{L}_{t_ {d}},t_{d},c_{1}\right)+\left(1-\gamma\right)\text{Denoise}\left(\mathbf{L}_{t _{d}},t_{d},c_{2}\right) \tag{13}\]
Then, as for style \(s\) in condition, we can generate style-controlled gestures when sampling by interpolating or even extrapolating the two variants using \(\gamma\), as \(c_{1}=\left[d,s_{1},a\right],c_{2}=\left[d,s_{2},a\right]\) in Equation (13).
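A minimal sketch of how the two denoiser evaluations in Equation (13) can be combined; `denoise` stands for the Denoising module, and the keyword-argument interface is an illustrative assumption:

```python
def guided_denoise(denoise, L_td, t_d, audio, seed, style, gamma):
    """Combine conditional and unconditional predictions as in Equation (13).

    Passing two different styles as the two conditions instead interpolates or
    extrapolates between styles with the same gamma.
    """
    cond   = denoise(L_td, t_d, seed=seed, style=style, audio=audio)  # c1 = [d, s, a]
    uncond = denoise(L_td, t_d, seed=None, style=None, audio=audio)   # c2 = [empty, empty, a]
    return gamma * cond + (1.0 - gamma) * uncond
```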
The Denoising module can be trained by optimizing the Huber loss (Kang et al., 2017) between the generated poses \(\hat{\mathbf{L}}_{0}\) and the ground truth human gestures \(\mathbf{L}_{0}\) on the training examples:
\[\mathcal{L}_{diff}=\lambda_{diff}E_{\mathbf{L}_{0}\sim q\left(\mathbf{L}_{0}\mid c\right),t_{d}\sim\left[1,T_{d}\right]}\left[\text{HuberLoss}(\mathbf{L}_{0}-\hat{\mathbf{L}}_{0})\right] \tag{14}\]
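A sketch of one training step implementing Equation (14), reusing the `forward_diffuse` helper from the sketch above; `denoise` and its argument layout are again assumed interfaces, and PyTorch's `smooth_l1_loss` is used as the Huber loss:

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(denoise, L0, cond, alphas_cumprod, T_d, lambda_diff=1.0):
    """One training step for Equation (14): noise the real gesture, predict the
    clean gesture, and apply the Huber loss between prediction and ground truth."""
    t_d = torch.randint(0, T_d, (L0.shape[0],), device=L0.device)   # uniform over steps
    L_td, _ = forward_diffuse(L0, t_d, alphas_cumprod)               # closed-form noising
    L0_hat = denoise(L_td, t_d + 1, cond)                            # 1-indexed step fed to the model
    return lambda_diff * F.smooth_l1_loss(L0_hat, L0)
```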
#### 3.2.2. Sample Module
The final co-speech gesture is given by splicing a number of clips of time duration \(T_{e}\) with frame length \(N\). The initial noisy gesture \(\mathbf{L}_{T_{d}}\) is sampled from the standard normal distribution, and each other \(\mathbf{L}_{t_{d}}(t_{d}<T_{d})\) is the result of the previous step. The seed gesture for the first clip can be generated by randomly sampling a gesture from the dataset or by setting it to the average gesture. The seed gesture for the other clips is then the last \(N_{seed}\) frames of the gesture generated in the previous clip. For every clip, in every noising step \(t_{d}\), we predict the clean gesture \(\hat{\mathbf{L}}_{0}=\text{Denoise}(\mathbf{L}_{t_{d}},t_{d},c)\) and then add noise back to obtain \(\mathbf{L}_{t_{d}-1}\) using the diffusion process of Equation (10). This process is repeated from \(t_{d}=T_{d}\) until \(\mathbf{L}_{0}\) is reached (Figure 4 bottom). Please refer to our supplementary material for training details such as network structure and implementation details.
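The sampling loop can be sketched as follows. One common way to realize "predict \(\hat{\mathbf{L}}_{0}\), then diffuse back one step" is to draw \(\mathbf{L}_{t_{d}-1}\) from the DDPM posterior \(q(\mathbf{L}_{t_{d}-1}\mid\mathbf{L}_{t_{d}},\hat{\mathbf{L}}_{0})\); whether the authors use exactly this form is an assumption on our side:

```python
import torch

def sample_clip(denoise, cond, betas, shape, device="cpu"):
    """Sampling sketch: start from pure noise at t_d = T_d, predict the clean
    gesture, then diffuse back one step until t_d = 0."""
    alphas = 1.0 - betas
    alphas_cumprod = torch.cumprod(alphas, dim=0)
    T_d = len(betas)
    L = torch.randn(shape, device=device)                      # L_{T_d} ~ N(0, I)
    for t in reversed(range(T_d)):                             # t = T_d - 1, ..., 0 (0-indexed)
        step = torch.full((shape[0],), t + 1, device=device)
        L0_hat = denoise(L, step, cond)                        # predict clean gesture
        if t == 0:
            return L0_hat
        # Gaussian posterior q(L_{t-1} | L_t, L0_hat), standard DDPM expressions
        a_bar_t, a_bar_prev, beta_t = alphas_cumprod[t], alphas_cumprod[t - 1], betas[t]
        mean = (torch.sqrt(a_bar_prev) * beta_t / (1 - a_bar_t)) * L0_hat \
             + (torch.sqrt(alphas[t]) * (1 - a_bar_prev) / (1 - a_bar_t)) * L
        var = beta_t * (1 - a_bar_prev) / (1 - a_bar_t)
        L = mean + torch.sqrt(var) * torch.randn_like(L)
    return L
```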
### Gesture Generation Refinement
#### 3.3.1. Primal Gesture VQVAE
Here we train a VQVAE to summarize meaningful gesture units and thus reduce the exploration space for the subsequent reinforcement learning. Each code represents a unique gesture. Besides, discrete spaces are more conducive to exploration in reinforcement learning (Kang et al., 2017; Wang et al., 2017). The architecture of the primal gesture VQVAE is shown in Figure 5. Given the primal gesture
Figure 4. (Top) Denoising module. A noising step \(t_{d}\) and a noisy gesture sequence \(\mathbf{L}_{t_{d}}\) at this noising step, conditioned on \(c\) (including seed gesture \(d\), style \(s\) and audio \(a\)), are fed into the model. 'RM' is short for random mask. (Bottom) Sample module. At each step \(t_{d}\), we predict \(\hat{\mathbf{L}}_{0}\) with the denoising process based on the corresponding conditions, then add noise back to obtain \(\mathbf{L}_{t_{d}-1}\) with the diffuse process. This process is repeated from \(t_{d}=T_{d}\) until \(t_{d}=0\).
sequence \(\mathbf{L}_{0}^{\text{upper}}\in\mathbb{R}^{T\times D^{\text{upper}}}\) of the upper body, where \(D^{\text{upper}}\) denotes primal gesture dimension of the upper body. We first adopt a 1D temporal convolution network \(E_{\text{sq}}\) to encode the sequence \(\mathbf{L}_{0}^{\text{upper}}\) to context-aware features \(\mathbf{u}\)
\[\mathbf{u}=E_{\text{sq}}(\mathbf{L}_{0}^{\text{upper}}) \tag{15}\]
where \(\mathbf{u}\in\mathbb{R}^{T^{\prime\prime}\times C^{\prime\prime}}\) and \(T^{\prime\prime}=T/d_{\text{sq}}\), \(d_{\text{sq}}\) is the temporal downsampling rate in VQVAE and \(C^{\prime\prime}\) is the channel dimension of features. Then we quantize \(\mathbf{u}\) by mapping each temporal feature \(\mathbf{u}_{i}\) to its closest codebook [74] element \(z_{j}\) as \(\mathbf{q}(.)\):
\[\mathbf{u}_{\mathbf{q}i}=\mathbf{q}(\mathbf{u}_{i})=\arg\min_{\mathbf{z}_{j}\in\mathcal{Z}_{u}}\left\|\mathbf{u}_{i}-\mathbf{z}_{j}\right\| \tag{16}\]
where \(\mathcal{Z}_{u}\) is a set of \(C_{b}\) codes of dimension \(n_{z}\), and each \(\mathbf{u}_{\mathbf{q}i}\) is an element of the codebook, \(\mathbf{u}_{\mathbf{q}i}\in\mathcal{Z}_{u}\). A subsequent de-convolutional decoder \(D_{\text{sq}}\) projects \(\mathbf{u}_{\mathbf{q}}\) back to the deep latent space as a primal gesture sequence \(\hat{\mathbf{L}}_{0}^{\text{upper}}\) for the upper body, which can be formulated as
\[\hat{\mathbf{L}}_{0}^{\text{upper}}=D_{\text{sq}}\left(\mathbf{u}_{q}\right) \tag{17}\]
The VQVAE can be trained by optimizing \(\mathcal{L}_{\text{sq}}\):
\[\mathcal{L}_{\text{sq}}=\left\|\hat{\mathbf{L}}_{0}^{\text{upper}}-\mathbf{L}_{0}^{\text{upper}}\right\|_{1}+\alpha_{1}\left\|\hat{\mathbf{L}}_{0}^{\text{upper}\prime}-\mathbf{L}_{0}^{\text{upper}\prime}\right\|_{1}+\alpha_{2}\left\|\hat{\mathbf{L}}_{0}^{\text{upper}\prime\prime}-\mathbf{L}_{0}^{\text{upper}\prime\prime}\right\|_{1}+\left\|\text{sg}\left[\mathbf{u}\right]-\mathbf{u}_{\mathbf{q}}\right\|+\beta_{\text{sq}}\left\|\mathbf{u}-\text{sg}\left[\mathbf{u}_{\mathbf{q}}\right]\right\| \tag{18}\]
where the first item is the reconstruction loss, and the next two items are the velocity loss and acceleration loss [86, 85], with \(\prime\) and \(\prime\prime\) denoting the first and second temporal differences. \(\text{sg}[.]\) denotes the stop-gradient operation; the term \(\left\|\text{sg}\left[\mathbf{u}\right]-\mathbf{u}_{\mathbf{q}}\right\|\) updates the codebook, and the term \(\left\|\mathbf{u}-\text{sg}\left[\mathbf{u}_{\mathbf{q}}\right]\right\|\) is the "commitment loss [74]" with weighting factor \(\beta_{\text{sq}}\).
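A compact sketch of the quantization step (16) together with the codebook and commitment terms of Equation (18); the straight-through gradient estimator and the squared-error form of these terms are standard VQ-VAE practice and are assumed here rather than stated in the paper:

```python
import torch
import torch.nn.functional as F

def vector_quantize(u, codebook):
    """Nearest-codebook lookup of Equation (16) with a straight-through estimator.

    u:        (batch, T'', C'') encoder features
    codebook: (C_b, n_z) learnable codes Z_u; C'' == n_z is assumed here
    """
    flat = u.reshape(-1, u.shape[-1])
    idx = torch.cdist(flat, codebook).argmin(dim=-1)       # nearest code per feature
    u_q = codebook[idx].view_as(u)
    codebook_loss = F.mse_loss(u_q, u.detach())            # || sg[u] - u_q ||, updates codes
    commit_loss = F.mse_loss(u, u_q.detach())              # || u - sg[u_q] ||, commitment
    u_q = u + (u_q - u).detach()                           # straight-through gradient to encoder
    return u_q, idx.view(u.shape[:-1]), codebook_loss, commit_loss
```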
#### 3.3.2. Reinforcement Learning Finetuning
To further enhance the alignment between the speech and gesture and increase the diversity of the generated gestures, we employed reinforcement learning to fine-tune the gesture generation model. The reward signal is pivotal in balancing exploration and exploitation in reinforcement learning. Previous work [45] attempts to optimize partial performance metrics of the model through hand-designed reward functions. However, in our experience, designing heuristic reward functions that comprehensively evaluate the model's performance is challenging. Reinforcement learning training is less stable than supervised learning, and if the reward function only considers specific metrics while neglecting others, the model's overall performance may deteriorate.
In this paper, we adopted Inverse Reinforcement Learning (IRL) [55] to learn a neural network model from human demonstrations to fit the true reward function and explain human behavior. Specifically, our reward model training is shown in Figure 6, similar to [11]. Firstly, we sample a speech-gesture pair from the VQVAE-encoded dataset \(\mathcal{D}\), denoted as trajectory \(\tau_{0}\). Then we randomly replace \(k\) codes in the trajectory \(\tau_{0}\) where \(k=1,\cdots,K\) to generate \(K\) trajectories \([\tau_{1},\cdots,\tau_{K}]\). We sample \(L\) tuples and thus get \(L\times K\) trajectories to form the dataset \(\mathcal{D}_{rm}\) to train the reward model. We make a weak assumption that the more codes replaced with random codes, the worse the quality of the trajectories, including alignment with speech and diversity. Then, we let the reward model \(R_{\psi}\) classify these trajectories with different qualities (may come from different human demonstrations with different speech) \(r=R_{\psi}(\tau)\) to determine which trajectory is better:
\[\mathcal{L}_{rm}=-\mathbb{E}\left[\log\left(\sigma\left((r_{i}-r_{j})\cdot \text{sgn}(j-i)\right)\right)\right], \tag{19}\]
where \(\{i,j\in[1,\cdots,K],i\neq j\}\), \(\sigma\) denotes the sigmoid function and sgn denotes the signum function:
\[\text{sgn}(x)=\begin{cases}-1,&x<0\\ 1,&x>0\end{cases}. \tag{20}\]
By learning the classification task, the reward model can learn to output a scalar reward signal \(r(\tau)=R_{\psi}(\tau)\) that makes reasonable evaluations on the quality of the trajectory \(\tau\).
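A sketch of the ranking loss (19) over one tuple of degraded trajectories; `reward_model` maps a (speech-conditioned) code trajectory to a scalar, and a smaller index is assumed to correspond to fewer replaced codes and hence a better trajectory:

```python
import itertools
import torch.nn.functional as F

def reward_ranking_loss(reward_model, trajectories):
    """Pairwise ranking loss of Equation (19) for one tuple of trajectories.

    trajectories: [tau_1, ..., tau_K], where tau_k had k codes replaced by random
    codes, so lower-index trajectories are assumed to be of higher quality.
    """
    rewards = [reward_model(tau) for tau in trajectories]          # scalar rewards r_1..r_K
    pairs = list(itertools.combinations(range(len(rewards)), 2))   # i < j, so tau_i is better
    loss = sum(-F.logsigmoid(rewards[i] - rewards[j]) for i, j in pairs)
    return loss / len(pairs)
```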
Given the reward model, we use the REINFORCE algorithm [68] to improve the model:
\[\mathcal{L}_{RL}=-\mathbb{E}_{\tau\sim\pi}\left[\log p_{\pi}(\tau)r(\tau) \right], \tag{21}\]
where \(\pi\) denotes the current policy, _i.e._, the gesture generation model, and \(p_{\pi}(\tau)\) denotes the probability of \(\tau\) given policy \(\pi\). During the fine-tuning process of the model, the reward model accurately scores the gesture
Figure 5. Structure of primal gesture VQVAE. After learning the discrete latent representation of the primal gesture of upper body, the gesture VQVAE encode and summarize meaningful gesture units.
Figure 6. Reward model training. We first sample a VQVAE-encoded speech-gesture pair, denoted as trajectory \(\tau_{0}\). Then, we randomly replace \(k\) gesture code(s) with random codes, where \(k=1,\cdots,K\), resulting in \(K\) speech-gesture trajectories with decreasing quality. Finally, we utilize the output of reward model \(r\) to classify the trajectories with different qualities and optimize the reward model with the loss function \(\mathcal{L}_{rm}\).
under the given speech, so as to improve the alignment between speech and gesture and to increase gesture diversity.
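A minimal sketch of one REINFORCE update for Equation (21); `policy.sample_with_log_probs` and `reward_model(tau)` are assumed interfaces, and the optional `baseline` is a common variance-reduction device rather than something described in the paper:

```python
import torch

def reinforce_step(policy, reward_model, speech, optimizer, baseline=0.0):
    """One REINFORCE update for Equation (21): sample a gesture-code trajectory
    from the current policy, score it with the learned reward model, and ascend
    the reward-weighted log-likelihood."""
    tau, log_prob = policy.sample_with_log_probs(speech)   # trajectory and log p_pi(tau)
    with torch.no_grad():
        r = reward_model(tau)                              # scalar reward r(tau)
    loss = -(r - baseline) * log_prob                      # L_RL = -E[log p_pi(tau) * r(tau)]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```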
#### 3.3.3. Physics Guidance
Inspired by (Zhang et al., 2017), we consider that the foot should have contact with the ground when there is a left-right acceleration or an upward acceleration of the root. Then we use standard Inverse Kinematics (IK) optimization for physics guidance. For more details please refer to the supplementary material.
## 4. Experiments
### Experiment Preparation
#### 4.1.1. Implementation Details
We perform the training and evaluation on the Trinity (Santos et al., 2017) and ZEGGS (Les et al., 2017) datasets. Even though the data are motion-captured, the hand quality is still low (Beng et al., 2016; Wang et al., 2017; Wang et al., 2017), so we currently ignore hand motion. The number of joints for the two datasets is \(J_{A}=26\) and \(J_{B}=27\), respectively. We choose the seven most typical and longest-duration styles (happy, sad, neutral, old, relaxed, angry, still) for training and validation. The Trinity dataset has no style labels, and we consider all of its styles to be 'neutral'. We divided the data into 8:1:1 for training, validation, and testing. We first resample the motion of both datasets to 30fps. All audio recordings are downsampled to 16kHz. For the retargeting network, we set \(d_{re}=4\), so the primal gesture is at 7.5 fps. We set all reference poses R to the T-pose at the origin with the foot in the Z-plane. The dimension \(C\) of each node of the primal gesture in latent space after convolution is 16. We set \(\lambda_{\text{lc}}=1\), \(\lambda_{\text{ee}}=2\) and \(\lambda_{\text{adv}}=0.25\) for Equation (9) and use the Adam (Kingma et al., 2014) optimizer with a batch size of 256 for 16000 epochs. The retargeting network trained on an NVIDIA V100 GPU takes about 3 days. While training the diffusion model and VQVAE, gesture data are cropped to a length of \(N=30\) (4 seconds). For the diffusion model, the Denoising module learns both the conditioned and the unconditioned distributions by randomly masking 10% of the samples using Bernoulli masks. The cross-local attention networks use 8 heads, 32 attention channels, 256 channels, a window size of 6 (each window attends to the window before it), and a dropout of 0.1. The self-attention networks are composed of 8 layers, 8 heads, 32 attention channels, 256 channels, and a dropout of 0.1. We use the AdamW (Kingma et al., 2014) optimizer (learning rate 3\(\times 10^{-5}\)) with a batch size of 256 for 100000 steps. Our models have been trained with \(T_{d}=1000\) noising steps and a cosine noise schedule. The diffusion model can be learned in about 3 days on one NVIDIA V100 GPU. As for the VQVAE, the size \(C_{b}\) of the codebook \(\mathcal{Z}_{u}\) is set to 512 with dimension \(n_{z}\) of 512. We set the down-sampling rate \(d_{\text{sq}}=2\), and \(\beta_{\text{sq}}=0.1\), \(\alpha_{1}=1\) and \(\alpha_{2}=1\) for Equation (18). We use the Adam optimizer (learning rate e-4, \(\beta_{1}=0.5\), \(\beta_{2}=0.98\)) with a batch size of 128 for 200 epochs. The VQVAE is learned on one NVIDIA A100 GPU in several hours. For more dataset and training details please refer to the supplementary material.
#### 4.1.2. Evaluation Metrics
Canonical correlation analysis (CCA) (Zegas et al., 2017) projects two sets of vectors into a joint subspace and then finds a sequence of linear transformations of each set of variables that maximizes the relationship between the transformed variables. CCA values can be used to measure the similarity between the generated gestures and the real ones; the closer the CCA is to 1, the better. The Fréchet gesture distance (FGD) (Wang et al., 2017) on feature space is proposed as a metric to quantify the quality of the generated gestures. To compute the FGD, we trained an autoencoder to extract the features. Lower FGD is better. Diversity (Zegas et al., 2017) in feature space is used to evaluate the diversity of the gestures. We also report average jerk, average acceleration (Zegas et al., 2017), Hellinger distance (Hellinger, 1954), and Beat Align Score (Zegas et al., 2017; Wang et al., 2017) in the supplementary material.
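For concreteness, the FGD can be computed as the Fréchet distance between Gaussians fitted to the autoencoder features of real and generated gestures, in direct analogy to FID; a sketch, assuming the features have already been extracted as NumPy arrays:

```python
import numpy as np
from scipy import linalg

def frechet_gesture_distance(real_feats, gen_feats):
    """Frechet distance between Gaussians fitted to autoencoder features of real
    and generated gestures (same formula as FID); lower is better."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)   # matrix square root
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```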
### Comparison to Existing Methods
#### 4.2.1. Objective Evaluation
We compare our proposed model with StyleGestures (Beng et al., 2016), Audio2Gestures (Zegas et al., 2017), ExampleGestures (Les et al., 2017), and DiffuseStyleGesture (Wang et al., 2017). The quantitative results are shown in Table 1. On the global CCA, our proposed model outperforms all other existing methods. The highest global CCA shows a strong coupling between the generated gestures and the ground truth gestures. CCA for each sequence is not as good as for the other methods, and we suggest that this is because, for each speech, the model learns gestures across skeletons. Our method significantly surpasses the compared state-of-the-art methods on FGD, improving by 6.64 (63%) over the best compared baseline model, ExampleGestures. This shows the high quality of the generated gestures. We can see that our model is not as good as StyleGestures in terms of Diversity. The video results show that StyleGestures produces a lot of cluttered movements, increasing diversity while decreasing human-likeness and appropriateness. However, we would like to emphasize that objective evaluation is currently not particularly relevant for assessing gesture generation (Zegas et al., 2017). Subjective evaluation remains the gold standard for comparing gesture generation models (Zegas et al., 2017; Wang et al., 2017). Current research on speech-driven gestures prefers to conduct only subjective evaluation (Beng et al., 2016; Wang et al., 2017). Please refer to the supplementary video for more comparisons.
#### 4.2.2. User Study
To understand the real visual performance of our method, we conduct a user study among the gesture sequences generated by each compared method and the ground truth motion capture data. Following the evaluation in GENEA (Gan
the gap compared to DiffuseStyleGesture is that DiffuseStyleGesture uses kinematic parameters such as the position, rotation angle, velocity, and rotation angular velocity of the root, as well as the position, rotation angle, velocity, rotation angular velocity, and gaze direction of each joint of the original motion as features of the gesture, which has a much larger dimension than the feature dimension of the primal skeleton gesture and may contain fine-grained skeletal details related to speech. According to the feedback from the participants, our generated gestures are "more semantically relevant" and "more natural", while our method has "less power" compared to Ground Truth. We suggest that this observation is due to the downsampling in the retargeting network and the VQVAE network. Smaller downsampling coefficients may result in faster and more powerful movements.
### Ablation Studies
#### 4.3.1. Objective Evaluation
Moreover, we conduct ablation studies to assess the effect of different components in the framework on performance. We performed experiments on the following components: (1) reinforcement learning, (2) VQVAE, and (3) multiple skeletons. The results of our ablation studies are summarized in Table 2. The metrics on FGD indicate that after RL fine-tuning, the generated gestures have an increased distance from the distribution of human gestures in the dataset, indicating that the model has explored some gestures that do not belong to the existing distribution of gestures in the dataset but are considered reasonable by the reward model. From the CCA and diversity metrics, it can be seen that the reward model can indeed generalize to gestures outside the dataset, allowing the model to generate more diverse and high-quality gesture movements that are not limited to the dataset. When neither RL nor VQVAE is used, both FGD and diversity decrease further, which indicates the necessity of the codebook to summarize meaningful gestures. When we use only a single dataset, we notice that both FGD and diversity decrease a lot, which indicates the essential importance of learning gesture generation on multiple datasets.
#### 4.3.2. User Study
Similarly, we conduct a user study for the ablation studies. The MOS on human-likeness and appropriateness are shown in the last two columns of Table 2. In terms of human likeness, we find that the scale of the dataset has a significant effect on the results, which demonstrates the importance of unifying the gesture datasets. For speech and gesture appropriateness, it is also found that the scale of the dataset has the largest impact on this metric. Secondly, the appropriateness also decreased without reinforcement learning, which shows the importance of data exploration. The visual comparisons of this study can also be found in the supplementary video.
### Diverse, Controllable, and Stylized Gesture Generation
* **Stylization.** We can generate stylized gestures by setting \(\gamma\) and \(s\) in Equation (13). The intensity of the stylization can be controlled by the value of \(\gamma\). As shown in Figure 7, for the same speech, different styles of gestures can be generated while preserving matching with the speech.
* **Diversity.** Due to the diffusion model architecture, different noisy gesture and different seed gesture could generate different gestures even for the same speech and style, as
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Name} & \multicolumn{4}{c}{Objective evaluation} & \multicolumn{3}{c}{Subjective evaluation} \\ \cline{2-7} & Global CCA & CCA for each sequence & FGD \(\downarrow\) & Diversity \(\uparrow\) & Human-likeness & Appropriateness \\ \hline Ground Truth & 1.000 & 1.00 \(\pm\) 0.00 & 0.0 & 10.03 & 4.22 \(\pm\) 0.11 & 4.22 \(\pm\) 0.11 \\ StyleGestures (Dong et al., 2018) & 0.978 & **0.98 \(\pm\) 0.01** & 15.89 & **13.86** & 3.56 \(\pm\) 0.12 & 3.17 \(\pm\) 0.13 \\ Audio2Gesture (Song et al., 2018) & 0.969 & 0.97 \(\pm\) 0.01 & 19.78 & 6.148 & 3.61 \(\pm\) 0.11 & 3.15 \(\pm\) 0.14 \\ ExampleGestures (Song et al., 2018) & 0.914 & **0.98 \(\pm\) 0.01** & 10.49 & 5.418 & 3.77 \(\pm\) 0.12 & 3.17 \(\pm\) 0.14 \\ DiffuseStyleGesture (Song et al., 2018) & 0.987 & 0.97 \(\pm\) 0.01 & 11.98 & 11.22 & 3.66 \(\pm\) 0.12 & **3.46 \(\pm\) 0.14** \\ Ours & **0.988** & 0.95 \(\pm\) 0.02 & **3.850** & 7.039 & **3.80 \(\pm\) 0.11** & 3.42 \(\pm\) 0.14 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Quantitative results on test set. Bold indicates the best metric. Among compared methods, StyleGestures (Dong et al., 2018), Audio2Gestures (Song et al., 2018), ExampleGestures (Song et al., 2018), and DiffuseStyleGesture (Song et al., 2018) are reproduced using officially released code with some optimized settings. Objective evaluation is recomputed using the officially updated evaluation code (Zhu et al., 2019; Wang et al., 2019). Human-likeness and appropriateness are the results of MOS with 95% confidence intervals.
Figure 7. Visualization of the stylization, controllability, and diversity of generated gestures. We randomly select a 2.67-second generated gesture clip (10 codes). Then setting \(\gamma\) and \(s\) in Equation (13) to control the style and setting noisy gesture in diffusion model to generate diverse gestures. The dashed boxes indicate that we control their code the same.
shown in Figure 7. This is the same as real human speech, which creates diverse co-speech gestures related to the initial position.
* **Controllability.** Since we use a VQVAE to generate gestures, it is easy to control the gesture or take out the code for interpretation. We can have a high level of control over speech-driven gestures at any time with the specified upper body code, as shown in the dashed box in Figure 7.
For more details please refer to the supplementary material.
## 5. Discussion and Conclusion
In this paper, we assume that the body gestures of the different skeletons are contained in the primal skeleton and present a unified gesture synthesis model for multiple skeletons. UnifiedGesture demonstrates three major strengths: 1) Benefiting from the skeleton-aware retargeting network that unifies the different skeletons while extending the dataset, the model has stronger generalization; ablation experiments on a single skeleton effectively demonstrate that a larger amount of data can improve the performance of the model. 2) Based on a diffusion model, probabilistic mapping enhances diversity while enabling the generation of high-quality, speech-matched, and style-controlled gestures. 3) The VQVAE learns a codebook to summarize meaningful gesture units, improving controllability and interpretability. Reinforcement learning with a learned reward function helps refine the gesture generation model, enabling the model to explore the data and increasing the diversity of the generated gestures. The physics-based kinematic constraints also further improve gesture generation. There is room for improvement in this research. Besides speech, more modalities (e.g. text, facial expressions) could be taken into consideration to generate more appropriate gestures. Solving the problem that the skeleton-aware encoder and decoder need to be re-trained for a new skeleton is also our future research direction.
###### Acknowledgements.
This work is supported by National Natural Science Foundation of China (62076144), Shenzhen Science and Technology Program (WDZC20220816140515001, JCYJ20220818101014030) and Shenzhen Key Laboratory of next generation interactive media innovative technology (ZDSYS20210623092001004).
|
2302.14439 | Mission Target: Exotic Multiquark Hadrons -- Sharpened Blades | Motivated by recent experimental progress in establishing the likely
existence of (variants of) exotic hadrons, predicted to be formed by the strong
interactions, various proposed concepts and ideas are compiled in an attempt to
draft a coherent picture of the achievable improvement in the theoretical
interpretation of exotic hadrons in terms of the underlying quantum field
theory of strong interactions. | Wolfgang Lucha | 2023-02-28T09:31:04Z | http://arxiv.org/abs/2302.14439v3 | [
###### Abstract
Motivated by recent experimental progress in establishing the likely existence of (variants of) exotic hadrons, predicted to be formed by the strong interactions, various proposed concepts and ideas are compiled in an attempt to draft a coherent picture of the achievable improvement in the theoretical interpretation of exotic hadrons in terms of the underlying quantum field theory of strong interactions.
Keywords: exotic hadron states; multiquark adequacy; QCD sum rule; large-\(N_{\rm c}\) limit; \(1/N_{\rm c}\) expansion

# Mission Target: Exotic Multiquark Hadrons -- Sharpened Blades

Wolfgang Lucha
## 1 Significance of Fundamental Diverseness of Ordinary Hadrons and Multiquark States
Within the framework of (relativistic) quantum field theories, all strong interactions are described -- at fundamental level -- by quantum chromodynamics (QCD), a renormalizable gauge theory, invariant under local transformations forming a representation of the compact non-Abelian Lie group SU(3). Two sorts of particles constitute the (basic) dynamical degrees of freedom of QCD: massless vector gauge bosons labelled gluons, transforming (inevitably) according to the eight-dimensional adjoint representation \(\mathbf{8}\) of SU(3), and spin-\(\frac{1}{2}\) fermions \(q_{\rm a}\), labelled quarks, each distinguished from all others by some quark flavour degree of freedom
\[a\in\{u,d,s,c,b,t(\ldots?)\} \tag{1}\]
and transforming according to the three-dimensional fundamental representation \(\mathbf{3}\) of SU(3). The (few) fundamental parameters characterizing QCD are the masses \(m_{a}\) of the quarks \(q_{a}\) as well as the strong coupling \(g_{\rm s}\), frequently adopted in form of a strong fine-structure coupling
\[\alpha_{\rm s}\equiv\frac{g_{\rm s}^{2}}{4\pi}. \tag{2}\]
This designation as quantum _chromodynamics_ derives from the fact that the quark and gluon degree of freedom affected by their gauge-group transformation is referred to as their colour.
Among others, QCD features the phenomenon of colour _confinement_: not the (coloured) quarks and gluons but _exclusively_ their colour-singlet _hadron_ bound states [1] invariant under the action of the QCD gauge group are, in form of isolated states, experimentally observable. Closer inspection reveals that the hadron states _have to be_ divided into two _disjoint_ categories:
* **Conventional** (ordinary) hadrons include all mesons that consist of only a pair of quark and antiquark, as well as all baryons that consist of three quarks or of three antiquarks.
* **Exotic** hadrons are characterized by _non-conventional_ quark and/or gluon compositions comprising _multiquark_ states (tetraquarks, pentaquarks, hexaquarks, heptaquarks, etc.), "hybrid" quark-gluon bound states, or pure-gluon bound states (nick)named glueballs.
There is a (crucial) _fundamental_ difference between conventional hadrons and exotic hadrons, based on a (more or less) trivial observation: any colour-singlet multiquark arrangement of a number of quarks and/or antiquarks may be decomposed (in one or more ways) into a set of states that are also colour singlets but consist of lesser numbers of quarks and/or antiquarks.
Therefore, an (initially) tightly bound, "compact" _multiquark_ hadron may reconfigure to molecular-type clusters of (ultimately) _conventional_ hadrons, loosely bound by some residual
forces [2; 3]. In view of this, trustworthy attempts to describe exotic hadrons should (struggle to) take into account, too, the potential mixing of these two "phases" of multiquark hadrons.
The present note _recalls_ a collection of recently proposed procedures and considerations the application of which _might facilitate_ gaining theoretical understanding of (experimentally established) multiquark states. Both origin and prospects of these tools are illustrated for the hopefully easiest case: the kind of tetraquarks presumably least plagued by complications of technical nature given by (compact) bound states of two quarks and two antiquarks carrying four unequal flavour quantum numbers. (These tools' transfer to other cases seems evident.) In particular, a brief glance at the related present _experimental_ situation [4; 5; 6; 7; 8; 9; 10; 11] (Section 2) will be followed by a recapitulation of insights gained upon basing the strong interactions' gauge symmetry tentatively on special unitary groups of _higher_ dimension [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26] (Section 4) and a sketch of the advantages of trimming a popular technique for the nonperturbative analytical discussion of QCD bound states to fit to the needs of multiquark hadrons [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39] (Section 5).
## 2 Tetraquark Mesons -- the Example of Multiquark Exotic Hadron States Par Excellence
All tetraquark mesons \(T\) are bound states of two antiquarks \(\overline{q}_{a},\overline{q}_{c}\) and two quarks \(q_{b},q_{d}\),
\[T=[\overline{q}_{a}\,q_{b}\,\overline{q}_{c}\,q_{d}]\,\qquad a,b,c,d\in\{u,d,s,c,b\}\, \tag{3}\]
henceforth calling the masses of the four (anti-) quarks constituting such state \(m_{a},m_{b},m_{c},m_{d}\). On group-theoretical grounds, the presence of these mesons in the hadron spectrum without coming into conflict with _confinement_ of colour is rendered possible by the appearance of two SU(3) singlet representations \(\mathbf{1}\) in the (appropriate) tensor product of two fundamental SU(3) representations \(\mathbf{3}\) and two (complex-conjugate) fundamental SU(3) representations \(\mathbf{\overline{3}}\)[24; 36], as this product's decomposition into irreducible SU(3) representations \(\mathbf{1},\mathbf{8},\mathbf{10},\mathbf{\overline{10}},\mathbf{27}\) reveals:
\[q_{b}\,q_{d}\,\overline{q}_{a}\,\overline{q}_{c}\sim\mathbf{3}\otimes\mathbf{3}\otimes\mathbf{\overline{3}}\otimes\mathbf{\overline{3}}=\mathbf{1}\oplus\mathbf{1}\oplus\mathbf{8}\oplus\mathbf{8}\oplus\mathbf{8}\oplus\mathbf{8}\oplus\mathbf{10}\oplus\mathbf{\overline{10}}\oplus\mathbf{27}. \tag{4}\]
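A trivial bookkeeping check of the dimensions appearing in the decomposition (4), counting the \(3^{4}=81\) states of the product on the left-hand side:

```python
# Dimension bookkeeping for the decomposition in Equation (4):
# 3 x 3 x 3bar x 3bar contains 3**4 = 81 states, matching the sum of the
# dimensions of the irreducible representations on the right-hand side.
product_dim = 3 ** 4
irrep_dims = [1, 1, 8, 8, 8, 8, 10, 10, 27]
assert product_dim == sum(irrep_dims) == 81
```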
As far as its _flavour_ degrees of freedom are concerned, the four quark constituents of any tetraquark state (3) may contribute, at most, four different quark flavours and, trivially, carry at least one, the _same_ for all the four (anti-) quarks. Owing to such simultaneous involvement of both quarks and antiquarks, however, the latters' hadron bound states need not feature all of the available quark flavours. Table 1 presents the listing [21] of conceivable quark-flavour arrangements in the tetraquark state (3), with respect to both the number of _different_ flavours \(a\neq b\neq c\neq d\) provided by two quarks and two antiquarks as well as the number of flavours exhibited by the related _hadron_, which might differ from the former number either because of mutual flavour-antiflavour compensations or because of quark-flavour double occurrences.
Needless to say, at least from the experimental point of view it may be more satisfactory if the exotic nature of a (suspected) multiquark is established already by its observed content of quark flavours. The corresponding species of multiquarks may be told apart by relying on
Definition 1: A multiquark hadron is termed flavour-exotic if it exhibits more open quark flavours than the corresponding category of conventional hadrons does, which means at least three open quark flavours in the case of mesonic states or at least four open quark flavours in the case of baryonic states. By contrast, a multiquark hadron is called flavour-cryptoexotic if it does not meet this requirement.
For the quark-flavour arrangements of tetraquarks, Table 1 offers several options to meet the requirement of being considered flavour-exotic: of course, there can exist merely one flavour arrangement that incorporates four mutually different quark flavours; however, there exist a few self-evident options for flavour-exotic tetraquarks to comprise not more than two or three different quark flavours by involving one or even two double appearances of a given flavour.
Quite recently, various candidates for tetraquark states that are manifestly flavour-exotic by exhibiting (in accordance with Definition 1) four open quark flavours have been observed by experiment. Regarding the flavour compositions of these candidates, there are states each encompassing exactly one of all four lightest quarks [6; 7; 10; 11] and "doubly flavoured" ones containing only three different flavours but one of these twice [8; 9] (see summary of Table 2).
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Number of Different** & **Quark Composition** & **Number of Open** \\
**Quark Flavours Involved** & \(\overline{q}_{\Box}\,q_{\Box}\,\,\overline{q}_{\Box}\,\,\overline{q}_{\Box}\) & **Quark Flavours Involved** \\ \hline
4 & \(\overline{q}_{a}\,q_{b}\,\,\overline{q}_{c}\,q_{d}\) & 4 \\ \hline
3 & \(\overline{q}_{a}\,q_{b}\,\,\overline{q}_{c}\,q_{b}\) & 4 \\ & \(\overline{q}_{a}\,q_{b}\,\,\overline{q}_{a}\,q_{c}\) & 4 \\ & \(\overline{q}_{a}\,q_{b}\,\,\overline{q}_{b}\,q_{c}\) & 2 \\ \hline
2 & \(\overline{q}_{a}\,q_{b}\,\,\overline{q}_{a}\,q_{b}\) & 4 \\ & \(\overline{q}_{a}\,q_{a}\,\,\overline{q}_{a}\,q_{b}\) & 2 \\ & \(\overline{q}_{a}\,q_{a}\,\,\overline{q}_{b}\,q_{a}\) & 2 \\ & \(\overline{q}_{a}\,q_{b}\,\,\overline{q}_{b}\,q_{a}\) & 0 \\ & \(\overline{q}_{a}\,q_{a}\,\,\overline{q}_{b}\,q_{b}\) & 0 \\ \hline
1 & \(\overline{q}_{a}\,q_{a}\,\,\overline{q}_{a}\,q_{a}\) & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Tetraquark states (3): Classification by different vs. open quark-flavour content \(a\neq b\neq c\neq d\), open-flavour number referring to all flavours not counterbalanced by their antiflavours. (From Ref. [21].)
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Candidate Tetraquark Meson** & **(Minimal) Quark-Flavour Content** & **References** \\ \hline \(T_{\text{cs0}}(2900)^{0}\) & \(c\overline{d}\,\overline{s}\,\overline{u}\) & [6; 7] \\ \(T_{\text{cs1}}(2900)^{0}\) & \(c\overline{d}\,\overline{s}\,\overline{u}\) & [6; 7] \\ \(T_{\text{cc}}(3875)^{+}\) & \(cc\overline{u}\,\overline{d}\,\) & [8; 9] \\ \(T_{\text{cs0}}^{*}(2900)^{0}\) & \(c\overline{s}\,\overline{d}\,\overline{u}\) & [10; 11] \\ \(T_{\text{cs0}}^{*}(2900)^{++}\) & \(c\overline{s}\,\overline{u}\,\overline{d}\,\) & [10; 11] \\ \hline \hline \end{tabular}
\end{table}
Table 2: Flavour-exotic tetraquark states: Experimental candidates, in naming convention of LHCb [5].
## 3 Correlation Functions of Hadron Interpolating Operators: Application to Multiquarks
For descriptions of hadronic states in terms of QCD, a pivotal contact point between the realm of QCD and the realm of hadrons is established by the concept of hadron interpolating operators. For a fixed hadron \(H\) under consideration, its -- not necessarily unique -- hadron interpolating operator, generically called \({\cal O}\), is a gauge-invariant local operator composed of the QCD dynamical degrees of freedom, the quark and gluon field operators, that betrays its nonzero overlap with the hadron \(|H\rangle\) by the nonvanishing matrix element emerging from its getting sandwiched between the hadronic state \(|H\rangle\) and the QCD vacuum \(|0\rangle\): \(\langle 0|{\cal O}|H\rangle\neq 0\). In all subsequent implementations of hadron interpolating operators, features such as parity or spin degrees of freedom can be safely ignored; they get therefore notationally suppressed.
For a _conventional_ meson consisting of a quark of flavour \(b\) and an antiquark of flavour \(a\), the most evident option for its interpolating operator is the quark-antiquark bilinear current
\[j_{\bar{a}b}(x)\equiv\overline{q}_{a}(x)\,q_{b}(x). \tag{5}\]
For _exotic_ hadrons belonging to the subset of _tetraquark_ mesons characterized in Equation (3), the search for appropriate tetraquark interpolating operators, specifically named \(\theta\), is greatly facilitated by the observation [40] that (by means of suitable Fierz transformations [41]) _every_ colour-singlet operator that is composed of two quarks and two antiquarks can be expressed by a linear combination of only two different products of colour-singlet conventional-meson interpolating operators of quark-bilinear-current shape (5). Thus, this "operator basis" reads
\[\theta_{\bar{a}b\bar{c}d}(x)\equiv j_{\bar{a}b}(x)\,j_{\bar{c}d}(x)\,\qquad\theta_{\bar{a}d\bar{c}b}(x)\equiv j_{\bar{a}d}(x)\,j_{\bar{c}b}(x). \tag{6}\]
Moreover, taking into account some useful identities recalled, for instance, by Equations (32) and (36) of Reference [26] or Equations (1) and (2) of Reference [37]_may_ be regarded either as a kind of shortcut to or as explicit verification of these findings. The tetraquark interpolating operators (6) will provide some kind of playground for (most of) the ensuing considerations.
That pleasing observation [40] points out a promising route how to reasonably proceed. Namely, the enabled basic two-current structure (6) of the tetraquark interpolating operators \(\theta\) suggests to start (envisaged) analyses of tetraquarks from _correlation functions_ -- in general, defined by vacuum expectation values of time-ordered products, symbolized by T, of chosen field operators -- of four quark-bilinear operators (5). If tolerated by the involved dynamics, in appropriate four-point correlation functions of such kind tetraquark states should become manifest by their contributions in form of intermediate-state poles. Momentarily focusing to only essential aspects, all these four-current correlation functions are of the general structure
\[\left\langle{\rm T}\!\left(j(y)\,j(y^{\prime})\,j^{\dagger}(x)\,j^{\dagger}(x ^{\prime})\right)\right\rangle\,. \tag{7}\]
Upon application of well-understood procedures, the correlation functions (7) entail also the amplitudes encoding scatterings of two conventional mesons into two conventional mesons. Because of the two-current structure (6), contact with tetraquark states, in form of correlation functions involving tetraquark interpolating operators \(\theta\), can be established by identification or contraction of configuration-space coordinates of _proper_ quark-bilinear currents \(j\), forming
* _twice_ configuration-space contracted _two-point_ correlation functions of _two_ operators (6) \[\left\langle{\rm T}\!\left(\theta(y)\,\theta^{\dagger}(x)\right)\right\rangle =\lim_{\begin{subarray}{c}x^{\prime}\to x\\ y^{\prime}\to y\end{subarray}}\!\left\langle{\rm T}\!\left(j(y)\,j(y^{ \prime})\,j^{\dagger}(x)\,j^{\dagger}(x^{\prime})\right)\right\rangle\,;\] (8)
* _once_ contracted _three-point_ correlation functions of _one_ operator (6) and _two_ operators (5)
\[\Big{\langle}\mathsf{T}\Big{(}j(y)\,j(y^{\prime})\,\theta^{\dagger}(x)\Big{)} \Big{\rangle}=\lim_{x^{\prime}\to x}\Big{\langle}\mathsf{T}\Big{(}j(y)\,j(y^{ \prime})\,j^{\dagger}(x)\,j^{\dagger}(x^{\prime})\Big{)}\Big{\rangle}. \tag{9}\]
An immediate implication of the mere conceptual nature of unconventional multiquark states is, as already stressed in Section 1, their potential to undergo _clustering_ without getting into conflict with colour confinement [2]. For the correlation-function underpinned analyses of tetraquark properties, this finding should be regarded as a strong hint that, presumably or even very likely, not all QCD-level contributions to some correlation function are, in general, of relevance for such formation of a tetraquark pole. It appears opportune to distinguish any contribution that may play a role in tetraquark studies even by nomenclature; this is done in
Definition 2: A QCD contribution to a correlation function (7) is termed **tetraquark-phile** [18; 23] if it is (potentially) capable of supporting the formation of a tetraquark-related intermediate-state pole.
As a guidance through the process of filtering all of the QCD-level contributions as implicitly requested by Definition 2, a self-evident, easy to implement criterion may be devised [17; 19]:
Proposition 1: _For a given four-point correlation function (7) with external momenta in initial state \(p_{1}\), \(p_{2}\) and external momenta in final state \(q_{1}\), \(q_{2}\), considered as a function of the Mandelstam variable_
\[s\equiv(p_{1}+p_{2})^{2}=(q_{1}+q_{2})^{2}\, \tag{10}\]
_a QCD-level contribution is supposed to be tetraquark-phile if it exhibits a nonpolynomial dependence on \(s\) and if it develops an intermediate-state four-quark-related branch cut starting at the branch point_
\[\hat{s}\equiv(m_{a}+m_{b}+m_{c}+m_{d})^{2}. \tag{11}\]
For any contribution to a correlation function, the capability of supporting the formation of a tetraquark pole by satisfying all requirements in Proposition 1 may be straightforwardly and unambiguously decided by consulting the related Landau equations [42]: the existence of an appropriate solution to (the relevant set of) those Landau equations indicates the presence of an expected branch cut. References [19; 26; 37] show some examples worked out in all details.
As announced in Section 1, the benefit of implementing such programme is exemplified for the meanwhile even experimentally observed [6; 7; 10; 11]_subset_ of all those flavour-exotic tetraquarks that exhibit not less than (the feasible maximum of) four unequal quark flavours:
Definition 3: The quark-flavour composition of a tetraquark (3) is called **definitely flavour-exotic** if it comprises four mutually different quark flavours \(a\neq b\neq c\neq d\), that is, if this state is of the kind
\[T=[\overline{q}_{a}\,q_{b}\,\overline{q}_{c}\,q_{d}]\,\qquad a,b,c,d\in\{u,d,s,c,b\}\, \qquad a\neq b\neq c\neq d. \tag{12}\]
At least for the case of the definitely flavour-exotic tetraquarks (12), there exist two definitely distinguishable quark-flavour distributions in (from the point of view of intermediate states) incoming and outgoing states of a correlation function (7): its quark-flavour arrangements in initial and final state might be either identical or different. These two possibilities got names:
Definition 4: A definitely flavour-exotic correlation function (7) of four interpolating currents (5) is
* _flavour-preserving_[20] for equal quark-flavour distributions of incoming and outgoing states,
* _flavour-rearranging_[20] for unlike incoming- and outgoing-state quark-flavour distributions.
For the two categories of correlation functions (7), it is straightforward yet worthwhile (since instructive) to investigate their contributions of lowest orders to the perturbative expansions in powers of the strong fine-structure constant (2). Representative examples of contributions are given, for flavour-preserving cases, in Figures 1 and 2 and, for flavour-rearranging cases, in Figures 3 and 4. (In the plots, internal gluon exchanges are depicted in form of curly lines.) Expectably, such considerations disclose differences in analyses but similarities in outcomes:
* For flavour-preserving correlation functions, the line of argument proves to be, more or less, evident. _All_ the contributions of the type of Figure 1(a) or of the type of Figure 1(b), involving at most one gluon exchange, are doubtlessly disconnected. The contributions that involve a single gluon exchange between their two (otherwise disconnected) quark loops _vanish_ identically, due to the vanishing of the sum over colour degrees of freedom of each of the two quark loops. Phrased _slightly_ more technically, this can be traced back to the tracelessness of all generators of a special unitary group, governing the couplings of quarks and gluons. Consequently, _exclusively_ contributions that involve, at least, two gluon exchanges of an appropriate topology may be viewed as tetraquark-phile. These insights get, of course, corroborated by identifying these tetraquark-phile contributions according to Proposition 1 by explicit inspection [17] by way of their Landau equations. Replacing any double contraction (8) in Figure 1 by a single contraction (9) confirms the tetraquark-phile nature of contributions of the type of Figure 2 or related higher orders.
Figure 4: Definitely flavour-exotic four-current correlation function (7) of flavour-_rearranging_ type (left) and (right) contraction (9) to a correlation function of one tetraquark interpolating operator (6) and two quark-bilinear currents (5) [37]: typical contribution of _lowest tetraquark-phile_ perturbative order \(O(\alpha_{\rm s}^{2})\).
Figure 3: Definitely flavour-exotic four-current correlation function (7) of flavour-_rearranging_ type (left) and (right) its contraction (8) to two-point correlation function of tetraquark interpolating operators (6) [35; 37]. Representative contributions of lowest perturbative orders: (**a**) \(O(\alpha_{\rm s}^{0})\), (**b**) \(O(\alpha_{\rm s})\) and (**c**) \(O(\alpha_{\rm s}^{2})\).
* For flavour-rearranging correlation functions, a simple optical guidance in this analysis is, beyond doubt, hardly imaginable: already the lowest-order contributions turn out to be connected. Rather, one has to gladly accept any assistance offered by that tool called Landau equations. For the three lowest-order contributions exemplified in Figure 3, the usage of this formalism is demonstrated, in full detail, in Appendix A of Reference [19], in the Appendix of Reference [37], as well as in Section 4 of Reference [26]. For this kind of analysis, it might prove advantageous to recast the encountered plots into box shape, by "unfolding" all these plots [15; 19; 26; 37]. These efforts' outcome is that contributions of the type of Figure 3(a) or of the type of Figure 3(b), being characterized by no or only one internal gluon exchange, do not incorporate the requested four-quark singularities. Involvement of this feature starts not before the level of two gluon exchanges of _suitable_ positioning, which then holds, of course, also for the single contractions (9) in Figure 4.
As an overall summary of the two classes of definitely flavour-exotic correlation functions (7) identified by Definition 4, the systematic scrutiny of their lowest-order contributions betrays that tetraquark-phile contributions (an essential ingredient, since providing the singularities that, upon summation, _may_ support the development of intermediate-state tetraquark poles) will not emerge before the next-to-next-to-lowest order in a series expansion in powers of the strong fine-structure constant (2); that is, in terms of \(\alpha_{\rm s}\), they have to be at least of order \(O(\alpha_{\rm s}^{2})\).
## 4 Number of Colour Degrees of Freedom, Unfixed: Large-\(N_{\rm c}\) Limit and \(1/N_{\rm c}\) Expansion
Quite generally, first insights, even if only of qualitative nature, may be gained from the reduction of the complexity of QCD, enacted by the increase of the number of colour degrees of freedom and, in parallel, the decrease of the strength of the strong-interaction coupling \(g_{\rm s}\). In some more detail, that simplification of QCD [12; 13] proceeds along the following moves:
* Generalize QCD to the gauge theories invariant under a non-Abelian Lie group \(\mathrm{SU}(N_{\rm c})\). The dynamical degrees of freedom of each of the latter quantum field theories hence are its gauge bosons, still retaining their designation as gluons and transforming according to the \((N_{\rm c}^{2}-1)\)-dimensional, adjoint representation of \(\mathrm{SU}(N_{\rm c})\), and its fermionic quarks that transform according to the \(N_{\rm c}\)-dimensional, fundamental representation of \(\mathrm{SU}(N_{\rm c})\).
* Allow the number of colour degrees of freedom, \(N_{\rm c}\), to increase from \(N_{\rm c}=3\) to infinity: \[N_{\rm c}\to\infty\;.\] (13)
* For the strong coupling strength \(g_{\rm s}\), demand the _related_ decrease, with rising \(N_{\rm c}\), to zero: \[g_{\rm s}\propto\frac{1}{\sqrt{N_{\rm c}}}=O(N_{\rm c}^{-1/2})\xrightarrow[N_{ \rm c}\to\infty]{}0\;.\] (14) Clearly, for the strong fine-structure coupling \(\alpha_{\rm s}\) this requirement implies the behaviour \[\alpha_{\rm s}\propto\frac{1}{N_{\rm c}}=O(N_{\rm c}^{-1})\xrightarrow[N_{\rm c }\to\infty]{}0\;.\] (15)
Therefore, in the large-\(N_{\rm c}\) limit, the product \(N_{\rm c}\,\alpha_{\rm s}\) approaches a meaningful finite value. Only by establishing a careful balance between the growth of \(N_{\rm c}\) and the vanishing of \(\alpha_{\rm s}\), the latter requirement allows for both reasonable generalization of QCD to its large-\(N_{\rm c}\) limit and exploitation of any corresponding \(1/N_{\rm c}\) expansion, that is, the expansion in powers of \(1/N_{\rm c}\).
According to the above characterization of large-\(N_{\rm c}\) QCD, for each QCD contribution to a correlation function its behaviour in the large-\(N_{\rm c}\) limit gets determined by two ingredients:
* the number of _closed_ loops of the colour degrees of freedom carried by quarks or gluons,
* the number of either the strong couplings (14) or the strong fine-structure constants (15).
Keeping this in mind, the large-\(N_{\rm c}\) behaviour of arbitrary correlation functions will be found. In particular, for the tetraquark-phile (and therefore tetraquark-pole relevant) contributions, indicated by the subscript "tp", to definitely flavour-exotic correlation functions (7), one gets

* for any flavour-preserving contribution of the type employed by Figure 1(c) or Figure 2, \[\left\langle{\rm T}\!\left(j_{\bar{a}b}(y)\,j_{\bar{c}d}(y^{\prime})\,j_{\bar{a}b}^{\dagger}(x)\,j_{\bar{c}d}^{\dagger}(x^{\prime})\right)\right\rangle_{\rm tp}=O(N_{\rm c}^{2}\,\alpha_{\rm s}^{2})=O(N_{\rm c}^{0})\, \tag{16}\] \[\left\langle{\rm T}\!\left(j_{\bar{a}d}(y)\,j_{\bar{c}b}(y^{\prime})\,j_{\bar{a}d}^{\dagger}(x)\,j_{\bar{c}b}^{\dagger}(x^{\prime})\right)\right\rangle_{\rm tp}=O(N_{\rm c}^{2}\,\alpha_{\rm s}^{2})=O(N_{\rm c}^{0})\, \tag{17}\]
* for each flavour-rearranging contribution of the kind adopted by Figure 3(c) or Figure 4, \[\left\langle{\rm T}\!\left(j_{\bar{a}b}(y)\,j_{\bar{c}d}(y^{\prime})\,j_{\bar{a}d}^{\dagger}(x)\,j_{\bar{c}b}^{\dagger}(x^{\prime})\right)\right\rangle_{\rm tp}=O(N_{\rm c}\,\alpha_{\rm s}^{2})=O(N_{\rm c}^{-1})\. \tag{18}\]
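The power counting behind Equations (16)-(18) can be made explicit in a two-line helper: each closed colour loop contributes a factor \(N_{\rm c}\), and each strong coupling a factor \(N_{\rm c}^{-1/2}\) by Equation (14). The loop counts used below (two closed colour loops for the flavour-preserving case, one for the flavour-rearranging case) are read off from contributions like those in Figures 2 and 4 and are consistent with Equations (16)-(18):

```python
from fractions import Fraction

def large_Nc_power(colour_loops, couplings):
    """Leading power of N_c: each closed colour loop contributes N_c, and each
    strong coupling g_s contributes N_c^(-1/2) according to Equation (14)."""
    return Fraction(colour_loops) - Fraction(couplings, 2)

# Tetraquark-phile contributions carry two gluon exchanges, i.e. four powers of g_s:
print(large_Nc_power(colour_loops=2, couplings=4))   # flavour-preserving:  0 -> O(N_c^0)
print(large_Nc_power(colour_loops=1, couplings=4))   # flavour-rearranging: -1 -> O(N_c^-1)
```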
This general discrepancy between the large-\(N_{\rm c}\) behaviour of the flavour-preserving and of the flavour-rearranging four-point correlation functions expressed, for all contributions of any tetraquark-phile type, by Equations (16) and (17), on the one hand, and by Equation (18), on the other hand, has a startling or even disturbing implication for the spectra of tetraquark mesons to be expected in the large-\(N_{\rm c}\) limit. In the scattering of a pair of _conventional_ mesons,
\[M_{\bar{a}b}=[\overline{q}_{a}\,q_{b}]\,\qquad a,b\in\{u,d,s,c,b,t(\ldots?)\}\, \tag{19}\]
a tetraquark \(T\) betrays its existence by contributing in the form of an intermediate-state pole. Its couplings to conventional mesons are governed by _transition amplitudes_ \(A(T\longleftrightarrow M_{\bar{a}b}\,M_{\bar{c}d})\). Given the discrepancy between those classes of contributions for large \(N_{\rm c}\), consistency in the large-\(N_{\rm c}\) limit turns out [17; 19] to impose constraints on any involved transition amplitudes.
The QCD predictions for the large-\(N_{\rm c}\) behaviour of the correlation functions introduced in Section 3 cannot be matched, at hadron level, by the presence of merely a single tetraquark state [22]. Rather, fulfillment of the large-\(N_{\rm c}\) behaviour requested by Equations (16), (17) and (18) by the tetraquark-pole contributions necessitates the pairwise occurrence of tetraquarks, that is to say, of a minimum of two (corresponding) tetraquarks [17; 19]. The two tetraquarks, generically denoted by \(T_{A}\) and \(T_{B}\), have to exhibit _unequal_\(N_{\rm c}\) dependences of their transition amplitudes to the two possible quark-flavour divisions among the two conventional mesons in initial and final states; their dominant decay channels, however, exhibit the same large-\(N_{\rm c}\) behaviour. Thus, in the large-\(N_{\rm c}\) limit their total decay widths, \(\Gamma\), behave in a similar fashion,
\[\Gamma(T_{A})=O(N_{\rm c}^{-2})=\Gamma(T_{B})\, \tag{20}\]
and the large-\(N_{\rm c}\) interrelationships of the four involved transition amplitudes are of the kind
\[A(T_{A}\longleftrightarrow M_{\bar{a}b}\,M_{\bar{c}d})=O(N_{\rm c}^{-1})\,\quad A(T_{A}\longleftrightarrow M_{\bar{a}d}\,M_{\bar{c}b})=O(N_{\rm c}^{-2})\quad\Longrightarrow\quad\Gamma(T_{A})=O(N_{\rm c}^{-2})\, \tag{21}\] \[A(T_{B}\longleftrightarrow M_{\bar{a}b}\,M_{\bar{c}d})=O(N_{\rm c}^{-2})\,\quad A(T_{B}\longleftrightarrow M_{\bar{a}d}\,M_{\bar{c}b})=O(N_{\rm c}^{-1})\quad\Longrightarrow\quad\Gamma(T_{B})=O(N_{\rm c}^{-2})\. \tag{22}\]
Table 3 compares several available _expectations_ for the large-\(N_{\rm c}\) dependence of the total decay rates \(\Gamma\) of definitely exotic and cryptoexotic tetraquarks, indicating a few discrepancies likely resulting from differences in underlying assumptions or contributions considered as crucial.
## 5 Multiquark-Adequate QCD Sum Rules Recognizing "Peculiarities" of Exotic Hadrons
From a mainly theoretical point of view, the description of any hadronic bound states of the fundamental degrees of freedom of QCD in a thoroughly analytical fashion appears to be most favourable; a promising approach complying with this intention, well-grounded in the framework of relativistic quantum field theories, is realized by the QCD sum rule formalism.
In the version originally devised by Shifman, Vainshtein, Zakharov [27] and others [28], a QCD sum rule embodies an analytical relationship between, on the one hand, properties of the hadron state (formed by the strong interactions) in the focus of one's current interest and, on the other hand, the (few) _basic_ parameters of their underlying quantum field theory, QCD. In principle, every _routine_ derivation of a QCD sum rule follows meanwhile well-established procedures [29]. The starting point of the construction of a QCD sum rule is the evaluation of an appropriate correlation function -- which clearly has to involve an operator interpolating the hadron under investigation -- in parallel both at the phenomenological hadron level and at the fundamental QCD level, followed (of course) by equating both evaluations' outcomes:
* In the course of QCD-level evaluation, Wilson's _operator product expansion_[30] (enabling conversion of a nonlocal product of operators into a series of local operators) is invoked to separate nonperturbative and (to some extent calculable) perturbative contributions.
* The _perturbative_ contributions, identical to the lowest term of this operator product expansion, can be inferred in form of a series in powers of the strong coupling (2).
* The _nonperturbative_ contributions involve, apart from derivable prefactors, _vacuum condensates_, i.e., the vacuum expectation values of products of quark and/or gluon field operators, which may be interpreted as a kind of effective parameters of QCD.
* In the course of hadron-level evaluation, the insertion of a complete set of hadron states guarantees that the hadron under study shows up by way of its intermediate-state pole.
By application of dispersion relations (and, if necessary, a sufficient number of subtractions), both perturbative QCD-level evaluation and hadron-level evaluation can be reexpressed (for the sake of convenience) in the form of dispersion integrals of appropriate spectral densities.
The predictive value and therefore usefulness of the QCD-hadron relations constructed in this manner is perceptibly increased by taking consecutively both the following measures:
1. Subject both sides of such a relation to a Borel transformation to another variable called Borel parameter \(\tau\). This results in the _entire_ removal of any subtraction term introduced and the suppression of the hadron-level contributions above the hadronic ground state. Under a Borel transformation, all _vacuum condensates_ in the nonperturbative QCD-level contributions get multiplied by powers of \(\tau\). So, these terms are called _power corrections_.
2. Rely on the assumption of quark-hadron duality, which postulates a (needless to stress, approximately realized) cancellation of all perturbative QCD-level contributions above suitably defined effective thresholds, \(s_{\rm eff}\), against all higher hadron-level contributions,
Table 3: Tetraquark total decay widths: expected upper bounds on large-\(N_{\rm c}\) behaviour (from Ref. [21]).

| **Author Collective** | **Decay Width \(\Gamma\): Definitely Exotic Tetraquarks** | **Decay Width \(\Gamma\): Cryptoexotic Tetraquarks** | **References** |
| --- | --- | --- | --- |
| Knecht, Peris | \(O(1/N_{\rm c}^{2})\) | \(O(1/N_{\rm c})\) | [14] |
| Cohen, Lebed | \(O(1/N_{\rm c}^{2})\) | — | [15] |
| Maiani, Polosa, Riquer | \(O(1/N_{\rm c}^{3})\) | \(O(1/N_{\rm c}^{3})\) | [16] |
| Lucha, Melikhov, Sazdjian | \(O(1/N_{\rm c}^{2})\) | \(O(1/N_{\rm c}^{2})\) | [17; 19] |
consisting of hadron excitations and hadron continuum. In implementing this concept, the problem of pinning down the nature of \(s_{\text{eff}}\) may be dealt with in two different ways:
* Without knowing better, just a guessed _fixed_ value of the parameter \(s_{\text{eff}}\) is adopted: \[s_{\text{eff}}=\text{const}\;.\] (23)
* In contrast, slipping in _limited_ information about a targeted hadron state opens the possibility [31; 32; 33; 34] to work out the _expected_\(s_{\text{eff}}\) dependence on Borel parameters \(\tau\): \[s_{\text{eff}}=s_{\text{eff}}(\tau)\;.\] (24)
The roadmap for the construction of QCD sum rules sketched above has originally been drafted for analyses of _conventional_ hadrons. Its unreflected application (in unchanged form) also to multiquark states seems, in view of the far-reaching discrepancies between the exotic and the conventional categories of hadrons, to be either too optimistic or a little bit too naive. Rather, one should be open to (potentially favourable) modifications of the customary QCD sum-rule approach, modifications that might be capable of improving the achieved accuracy of the predictions of QCD sum rules for the class of multiquark exotic hadrons. In particular, upon performing necessary evaluations of correlation functions at QCD level one might find it advantageous to take into account the QCD contributions' feature of being tetraquark-phile, in Definition 2 implied to be desirable and by Proposition 1 given its precise meaning, or not. With respect to the power corrections, in any QCD sum-rule derivation indispensable for its QCD-level evaluation, the problem of whether a given nonperturbative vacuum-condensate contribution is tetraquark-phile or not may be analyzed along the lines indicated in Section 3 (as has been demonstrated for the example of definitely flavour-exotic tetraquarks [25; 35; 37]).
Targeting _definitely flavour-exotic tetraquarks_ (12), the versions of correlation functions (7) indicated in Definition 4 have to be discriminated and hence subjected to separate treatment.
* In the flavour-preserving case, one has to start from the four-point correlation functions \[\left\langle{\rm T}\!\left(j_{\bar{a}b}(y)\,j_{\bar{c}d}(y^{\prime})\,j_{\bar{a}b}^{\dagger}(x)\,j_{\bar{c}d}^{\dagger}(x^{\prime})\right)\right\rangle\,,\qquad\left\langle{\rm T}\!\left(j_{\bar{a}d}(y)\,j_{\bar{c}b}(y^{\prime})\,j_{\bar{a}d}^{\dagger}(x)\,j_{\bar{c}b}^{\dagger}(x^{\prime})\right)\right\rangle\,.\] (25)
Applying the _traditional_ QCD sum-rule manipulations to twofold contractions (8) of the correlation functions (25) yields as outcome of this enterprise a relationship, depicted in Figure 5, that incorporates a (vast) multitude of QCD-level and hadron-level quantities.
Figure 5: Aggregation of a pair of _unconnected_ conventional-meson QCD sum rules of the kind recalled by Figure 6 (top row, separated by a red dot-dashed line) and (bottom row) the _tetraquark-adequate_ QCD sum rule of generic structure as in Figure 7, potentially supporting tetraquark intermediate-state poles: outcome of uncritical evaluation of correlation functions (25) still awaiting its disentanglement [35; 38].
However, a more in-depth analysis [35] reveals that, already on diagrammatic grounds, this conglomerate decomposes, in fact, into two QCD sum rules for _conventional_ mesons (Figure 6) and one further QCD sum rule that, potentially, supports the development of _tetraquark poles_ and rightly deserves the label of being "tetraquark-adequate" (Figure 7). In the course of its QCD-level evaluation, this latter QCD sum rule receives, exclusively, tetraquark-phile contributions, in the sense of Proposition 1; all the perturbative among these enter in form of dispersion integrals of tetraquark-adequate spectral densities, \(\rho_{\rm p}\). An analogous reflection for single contractions (9) of the correlation functions (25) leads to similar QCD sum-rule findings, all perturbative tetraquark-phile QCD contributions being encoded, in dispersive formulation, in tetraquark-adequate spectral densities \(\Delta_{\rm p}\).
* In the flavour-rearranging case, one has to deal with the four-point correlation function \[\left\langle{\rm T}\!\left(j_{\bar{a}b}(y)\,j_{\bar{c}d}(y^{\prime})\,j_{\bar{a}d}^{\dagger}(x)\,j_{\bar{c}b}^{\dagger}(x^{\prime})\right)\right\rangle\,.\] (26)
Here, irrespective of (ultimately necessary) spatial contractions (8) and (9) of four-point correlation functions (7), the analysis is unfortunately not as straightforward as in the flavour-preserving case: Within QCD-level evaluation, all _tetraquark-phile_ contributions (defined by requiring them to satisfy the constraint formulated in Proposition 1) may be identified, case by case, by inspection of the solutions of the relevant Landau equations. Within hadron-level evaluation, that QCD-level characteristic of being tetraquark-phile or not is mirrored by the ability of any contributions at hadron level to accommodate, in their \(s\) channel, two-meson intermediate states or not, in addition to a possible presence of tetraquark intermediate-state poles [37]. Hardly surprisingly, these insights translate the outcome of the QCD sum-rule formalism based on the correlation function (26) into a quark-hadron relation of (expected) two-component structure symbolically shown in Figure 8. All _perturbative_ tetraquark-phile QCD-level contributions find their way into a tetraquark-adequate QCD sum rule arising from a _precursor_ as in Figure 8(b) by spectral densities \(\rho_{\rm r}\) in the double-contractions case (8) and \(\Delta_{\rm r}\) in the single-contraction case (9).
Figure 6: Schematical composition of QCD sum rules for conventional mesons (blue dashed lines) [35].
Figure 7: Schematical composition of a tetraquark-adequate QCD sum rule of flavour-preserving type: tetraquark-phile contributions at QCD level, at hadron level counterbalanced by non-separable meson contributions (blue dashed lines), and perhaps those of tetraquark poles (blue dashed double line) [35].
For a definitely flavour-exotic tetraquark (12), the properties of foremost interest are

* its mass \(M\),
* decay constants \(f_{\bar{a}b\bar{c}d}\) and \(f_{\bar{a}d\bar{c}b}\), arising from the vacuum-tetraquark matrix elements of the two distinct operators (6) interpolating any definitely flavour-exotic tetraquark (12), \[f_{\bar{a}b\bar{c}d}\equiv\langle 0|\theta_{\bar{a}b\bar{c}d}|T\rangle\,\qquad f_{\bar{a}d\bar{c}b}\equiv\langle 0|\theta_{\bar{a}d\bar{c}b}|T\rangle\ ;\] (27)
* momentum-space amplitudes \(A(T\to j_{\bar{a}b}\,j_{\bar{c}d})\) and \(A(T\to j_{\bar{a}d}\,j_{\bar{c}b})\), Fourier-transformed vacuum-tetraquark matrix elements of appropriate pairs of quark bilinear currents (5), \[\begin{array}{l}\langle 0|{\rm T}[j_{\bar{a}b}(y)\,j_{\bar{c}d}(y^{\prime})]|T\rangle\ \xrightarrow{\rm Fourier\ transformation}\ A(T\to j_{\bar{a}b}\,j_{\bar{c}d})\,\\ \langle 0|{\rm T}[j_{\bar{a}d}(y)\,j_{\bar{c}b}(y^{\prime})]|T\rangle\ \xrightarrow{\rm Fourier\ transformation}\ A(T\to j_{\bar{a}d}\,j_{\bar{c}b})\.\end{array}\] (28)
Figure 8: Outcome of application of established QCD sum-rule techniques to correlation functions (26), consisting of two uncorrelated quark–hadron relationships: (**a**) one equating the non-tetraquark-phile QCD contributions with hadron contributions not involving any two-meson \(s\)-channel cuts (subsumed by hatched rectangle); (**b**) the precursor of a tetraquark-adequate QCD sum rule, involving two-meson \(s\)-channel cuts (subsumed by filled rectangle) and _maybe_ tetraquark poles (blue horizontal bar) too [37].
In terms of these hadronic properties, all effective-threshold improved multiquark-adequate QCD sum rules resulting from (once or twice) contracted four-point correlation functions (7) assume, for the example of definitely flavour-exotic tetraquarks, _symbolically_ the form [35; 37]
\[(f_{\bar{a}b\bar{c}d})^{2}\exp(-M^{2}\,\tau)=\int\limits_{\hat{s}}^{s_{\rm eff}(\tau)}\mathrm{d}s\exp(-s\,\tau)\,\rho_{\rm p}(s)+\text{Borel-transformed power corrections}\, \tag{29}\] \[f_{\bar{a}b\bar{c}d}\,A(T\to j_{\bar{a}b}\,j_{\bar{c}d})\exp(-M^{2}\,\tau)=\int\limits_{\hat{s}}^{s_{\rm eff}(\tau)}\mathrm{d}s\exp(-s\,\tau)\,\Delta_{\rm p}(s)+\text{Borel-transformed power corrections}\,\] (30) \[f_{\bar{a}b\bar{c}d}\,f_{\bar{a}d\bar{c}b}\exp(-M^{2}\,\tau)=\int\limits_{\hat{s}}^{s_{\rm eff}(\tau)}\mathrm{d}s\exp(-s\,\tau)\,\rho_{\rm r}(s)+\text{Borel-transformed power corrections}\,\] (31) \[f_{\bar{a}d\bar{c}b}\,A(T\to j_{\bar{a}b}\,j_{\bar{c}d})\exp(-M^{2}\,\tau)=\int\limits_{\hat{s}}^{s_{\rm eff}(\tau)}\mathrm{d}s\exp(-s\,\tau)\,\Delta_{\rm r}(s)+\text{Borel-transformed power corrections}. \tag{32}\]
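To illustrate how relations of the type of Equation (29) are actually used in practice, the following minimal numerical sketch (added for illustration) evaluates the perturbative dispersion integral for an assumed toy spectral density and a fixed effective threshold, and extracts the ground-state mass squared from the logarithmic Borel-parameter derivative; all numerical inputs are illustrative placeholders, and power corrections are ignored.

```python
import numpy as np

# purely illustrative inputs: toy spectral density, physical and effective thresholds (GeV^2)
s_hat, s_eff = 1.0, 6.0
rho_p = lambda s: 3e-3 * s**2          # assumed tetraquark-adequate perturbative density

tau = np.linspace(0.1, 1.0, 200)       # Borel parameters (GeV^-2)
s = np.linspace(s_hat, s_eff, 4000)
ds = s[1] - s[0]

# right-hand side of Eq. (29) without power corrections, for every Borel parameter
rhs = np.array([np.sum(np.exp(-s * t) * rho_p(s)) * ds for t in tau])

# since the left-hand side behaves as (f)^2 exp(-M^2 tau), the mass follows from -d/dtau log(RHS)
M2 = -np.gradient(np.log(rhs), tau)
print(M2.min(), M2.max())              # estimates lie between s_hat and s_eff
```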
The general lesson to be learned from the above for both _perturbative and nonperturbative_ QCD contributions to QCD sum-rule approaches applied to _any_ type of multiquark hadrons: paying attention to deploy exclusively spectral densities and power corrections computed in multiquark-phile manner should avoid or, at least, diminish the "contamination" of inferred QCD sum-rule predictions by input not related at all to the multiquark hadrons under study.
## 6 Summary, Conclusion and Outlook -- Multiquark-Instigated Theoretical Adaptations
The multiquark states among the conceivable exotic hadrons feature a characteristic not shared by conventional hadrons, namely, cluster reducibility [2; 39], that is to say, their ability to fragment into _colour-singlet_ bound states of lesser numbers of constituents, eventually into a set of conventional hadrons. A promising implication for various theoretical approaches to multiquarks is the advantage gained by pertinent modification of one's favoured formalism.
Here, such improvements have been illustrated for the set of flavour-exotic tetraquarks. An analogous contemplation can be (and has been) done for the class of flavour-cryptoexotic tetraquarks [17; 18; 19; 20; 21; 26]. It goes without saying that there one gets confronted with additional complications: the potential mixing of these tetraquark states with conventional mesons that carry precisely the quantum numbers of those tetraquarks. Mutatis mutandis, these findings should be straightforwardly transferable to any other multiquark states, such as the likewise established [4] pentaquark baryons. The numerical impact of proposed changes may only be quantified by confronting (definite) multiquark predictions with experimental counterparts.
## Funding
This research received no external funding.
Data Availability Statement: Data sharing not applicable.
Acknowledgments: The author would like to thank both Dmitri I. Melikhov and Hagop Sazdjian for a particularly pleasurable, enjoyable, and inspiring collaboration on various of the topics covered above.
Conflicts of Interest: The author declares no conflict of interest.
## Abbreviations
The following abbreviations are used in this manuscript:
\begin{tabular}{l l} LHCb & Large Hadron Collider beauty \\ OPE & operator product expansion \\ QCD & quantum chromodynamics \\ \end{tabular}
|
2301.13835 | Multidimensional Quantum Fourier Transformation | The Quantum Fourier Transformation (QFT) is a well-known subroutine for
algorithms on qubit-based universal quantum computers. In this work, the known
QFT circuit is used to derive an efficient circuit for the multidimensional
QFT. The complexity of the algorithm is $\mathcal{O}( \log^2(M)/d )$ for an
array with $M=(2^n)^d$ elements $(n \in \mathbb{N})$ equally separated along
$d$ dimensions. Relevant properties for application are discussed. An example
on current hardware is depicted by a 6 qubit 2D-QFT with an IBM quantum
computer. | Philipp Pfeffer | 2023-01-31T18:25:40Z | http://arxiv.org/abs/2301.13835v1 | # Multidimensional Quantum Fourier Transformation
###### Abstract
The Quantum Fourier Transformation (QFT) is a well-known subroutine for algorithms on qubit-based universal quantum computers. In this work, the known QFT circuit is used to derive an efficient circuit for the multidimensional QFT. The complexity of the algorithm is \(\mathcal{O}(\log^{2}(M)/d)\) for an array with \(M=(2^{n})^{d}\) elements (\(n\in\mathbb{N}\)) equally separated along \(d\) dimensions. Relevant properties for application are discussed. An example on current hardware is depicted by a 6 qubit 2D-QFT with an IBM quantum computer.
The Quantum Fourier Transformation [1] (QFT) is a key subroutine in quantum information processing, most prominently used within the quantum phase estimation [2] and the factoring algorithm of Shor [3]. Comparing to its classical counterpart, the fast fourier transformation [4] (FFT) solves the same problem with effort \(\mathcal{O}(N\log(N))\) while the QFT needs \(\mathcal{O}(\log^{2}(N))\) operations [5] for a vector with \(N=2^{n}\) elements (\(n\in\mathbb{N}\)). Though this speed-up does not lead to the broad replacement of the FFT, the aspect of quantum parallelism is important for the construction of further quantum algorithms. Single operations acting on a large quantum state, such that the whole state is affected, enable the speed-up of the QFT and can be utilized even better for a multidimensional QFT.
The idea of this dimensional extension is not new, as the 2-dimensional QFT (2D-QFT) is of crucial use in the field of quantum image processing [6; 7; 8]. There, it can be used for edge detection [9], watermarking [10] and for the implementation of a discrete cosine transform [11], which is useful for interpolation [12]. The motivation here is to extend the concept to more dimensions and give an easily understandable summary of the constructed circuit and its complexity. Furthermore, this work aims at readers new to the field in order to help them understand the concept of quantum parallelism as well as the consequences hindering its success.
The exact task is to construct a quantum circuit which calculates the d-dimensional discrete Fourier transformation of a d-dimensional array A where each dimension \(i\) spreads over \(N_{i}=2^{n_{i}}\) elements (\(n_{i}\in\mathbb{N}\)). The transformation is given by the formula
\[\tilde{a}_{\delta_{1},...,\delta_{d}}=\sum_{k_{d}=0}^{N_{d}-1}\omega_{N_{d}}^{k_{d}\delta_{d}}\sum_{k_{d-1}=0}^{N_{d-1}-1}\cdots\left(\sum_{k_{1}=0}^{N_{1}-1}\omega_{N_{1}}^{k_{1}\delta_{1}}a_{k_{1},...,k_{d}}\right) \tag{1}\]
with \(\omega_{N_{i}}=e^{j2\pi/N_{i}}\) and \(a_{k_{1},...,k_{d}}\) as the indexed elements of A. For simplification, indices are abbreviated by
\[N_{1}\cdot...\cdot N_{i-1}=:N_{i_{\downarrow}}\,\qquad N_{i^{\uparrow}}:=N_{i+1}\cdot...\cdot N_{d}\]
and the number of all array elements is \(M=N_{1}\cdot...\cdot N_{d}\).
The 1-dimensional FFT has a matrix representation given by the Vandermonde-matrix
\[V_{N_{i}}=\begin{pmatrix}\omega_{N_{i}}^{0}&\omega_{N_{i}}^{0}&\omega_{N_{i}}^{0}&\cdots&\omega_{N_{i}}^{0}\\ \omega_{N_{i}}^{0}&\omega_{N_{i}}&\omega_{N_{i}}^{2}&\cdots&\omega_{N_{i}}^{N_{i}-1}\\ \omega_{N_{i}}^{0}&\omega_{N_{i}}^{2}&\omega_{N_{i}}^{4}&\cdots&\omega_{N_{i}}^{2(N_{i}-1)}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \omega_{N_{i}}^{0}&\omega_{N_{i}}^{N_{i}-1}&\omega_{N_{i}}^{2(N_{i}-1)}&\cdots&\omega_{N_{i}}^{(N_{i}-1)^{2}}\end{pmatrix}.\]
The QFT in terms of a matrix is the rescaled version of this matrix, namely
\[QFT_{N_{i}}=\frac{1}{\sqrt{N_{i}}}V_{N_{i}}.\]
Thereby the matrix is unitary. The circuit implementing this matrix (see [5]) will be called \(QFT_{n_{i}}\). Note that the prefactor of \(QFT_{N_{i}}\) matters for comparing the results with the classical algorithm, as done in Fig. 2.
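As a quick numerical illustration (not part of the original text), the matrix \(QFT_{N_{i}}\) can be built directly from its definition and checked for unitarity. Note that with the convention \(\omega_{N_{i}}=e^{j2\pi/N_{i}}\) it coincides, up to the factor \(\sqrt{N_{i}}\), with the inverse FFT of common numerical libraries, whose forward transform uses the opposite sign in the exponent.

```python
import numpy as np

def qft_matrix(N):
    """(1/sqrt(N)) times the Vandermonde matrix with omega = exp(+2j*pi/N)."""
    k = np.arange(N)
    return np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

N = 8
Q = qft_matrix(N)

# unitarity: Q^dagger Q = I
assert np.allclose(Q.conj().T @ Q, np.eye(N))

# relation to the classical transform: Q @ v = sqrt(N) * ifft(v) for this sign convention
v = np.random.randn(N) + 1j * np.random.randn(N)
assert np.allclose(Q @ v, np.sqrt(N) * np.fft.ifft(v))
```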
In classical computing, multidimensional FFT algorithms utilize the FFT recursively [13]. For the solution, one can start by applying the FFT to the last sum of Eq. (1) to get an interim array where every entry is transformed along the first dimension. This array is then the input for the FFT along the next dimension, which can be repeated until every sum is evaluated. The \(d=2\) case gives this procedure the name row-column algorithm, since there one transforms the array first along its rows or columns in order to transform along the other next. This idea is essential for the following algorithm.
The quantum circuit for the multidimensional QFT is shown in Fig. 1 and works as follows. The elements of \(A\) can be aligned into a vector
\[\vec{v}=(a_{1,1,...,1}\,\...\,\ a_{N_{1},1,...,1}\,\ a_{1,2,...,1}\,\...\,\ a_{N_{1},N_{2}...,N_{d}})^{T}.\]
This vector has to be reformulated into a quantum state \(|v\rangle\), which requires scaling the vector by
\[|v\rangle=\frac{\vec{v}}{\sqrt{\sum_{k=1}^{N_{1}\cdot...\cdot N_{d}}|v_{k}|^{2}}}. \tag{2}\]
With that \(|v\rangle\) can be initialized on a quantum computer by a state loading procedure[14; 15]. As pointed out later on, it is more favourable if the input construction already is an efficient algorithm for a specific problem.
The formalism for circuit-to-matrix conversion is that the upper qubit is the least significant bit, or in other words the last bit of the bit string. Thereby, if the action of \(QFT_{n_{1}}\) in Fig. 1 is of interest, the corresponding matrix acting on the input state is by definition [5]
\[U_{1}=\mathcal{I}_{N_{1^{\uparrow}}}\otimes QFT_{N_{1}}=\begin{pmatrix}QFT_{N_{1}}&&\mathbb{O}\\ &\ddots&\\ \mathbb{O}&&QFT_{N_{1}}\end{pmatrix} \tag{3}\]
where \(\mathcal{I}_{N}\) denotes the N\(\times\)N identity matrix, \(\mathbb{O}\) represents all zeros besides the block diagonal and \(\otimes\) is the tensor product. The tensor product is the mathematical representation of quantum parallelism since it duplicates the operation to the size of the quantum state it acts on. As a result, this circuit gives
\[U_{1}|v\rangle=\begin{pmatrix}QFT_{N_{1}}\,(v_{1}\,,\,...\,,\,v_{N_{1}})^{T}\\ QFT_{N_{1}}\,(v_{N_{1}+1},...,v_{2N_{1}})^{T}\\ \vdots\\ QFT_{N_{1}}\,(v_{M-N_{1}+1},...,v_{M})^{T}\end{pmatrix}\]
which creates the first interim matrix comparable to the classical approach. In other words, a single QFT finishes the first dimension globally. In general for any dimension i, the large unitary matrix corresponding to the quantum circuit and its position in Fig. 1 is
\[U_{i}=\mathcal{I}_{N_{i^{\uparrow}}}\otimes\underbrace{QFT_{N_{i}}\otimes \mathcal{I}_{N_{i_{\downarrow}}}}_{\widetilde{U_{i}}}. \tag{4}\]
For edge cases, \(\mathcal{I}_{N_{1_{\downarrow}}}=\mathcal{I}_{N_{d^{\uparrow}}}=1\) is the empty product convention. The arrangement of \(A\) as a vector in combination with Eq. (4) yields the transformation along dimension \(i\). To further clarify this, writing out \(\tilde{U}_{i}\) of Eq. (4) by the tensor product definition gives
\[\tilde{U}_{i}=\frac{1}{\sqrt{N_{i}}}\begin{pmatrix}\omega_{N_{i}}^{0}\ \mathcal{I}_{N_{i_{ \downarrow}}}&\omega_{N_{i}}^{0}\mathcal{I}_{N_{i_{\downarrow}}}&...&\omega_{ N_{i}}^{0}\mathcal{I}_{N_{i_{\downarrow}}}\\ \omega_{N_{i}}^{0}\mathcal{I}_{N_{i_{\downarrow}}}&\omega_{N_{i}}^{1} \mathcal{I}_{N_{i_{\downarrow}}}&...&\omega_{N_{i}}^{N_{i}-1}\mathcal{I}_{N_{ i_{\downarrow}}}\\ \vdots&\vdots&\ddots&\vdots\\ \omega_{N_{i}}^{0}\mathcal{I}_{N_{i_{\downarrow}}}&\omega_{N_{i}}^{N_{i}-1} \mathcal{I}_{N_{i_{\downarrow}}}&...&\omega_{N_{i}}^{(N_{i}-1)^{2}}\mathcal{I}_ {N_{i_{\downarrow}}}\end{pmatrix}. \tag{5}\]
The overall working principle of the algorithm can be explained by observing Eq. (4) with the help of Eq. (3) and Eq. (5). The position of \(QFT_{n_{i}}\) in the circuit structure of Fig 1 forces the tensor product of Eq. (4). The evaluation of the latter tensor product, namely for \(\tilde{U}_{i}\), creates a matrix as shown in Eq. (5). Here, the elements of the 1-dimensional QFT are separated such that, combined with the vector form of the original array, they act only along the dimension i. The first tensor product, similar to Eq. (3), then duplicates this correctly spaced matrix to the necessary size in a block diagonal structure, which assures the action on all array elements. All QFTs act on separate qubits, which means that all \(U_{i}\) of Eq. (4) commute with each other [5]. Thereby, one can choose that they act in ascending order, i.e. they evaluate the sums as done in Eq. (1), dimension by dimension, similar to the classical algorithm. To summarize, each QFT transforms the array along the corresponding dimension with a single QFT while all QFTs are executed simultaneously.
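The block structure of Eq. (4) can be verified numerically. The sketch below (an illustration added here, with arbitrary small dimensions) flattens a 3-dimensional array so that the first dimension varies fastest, builds \(U_{2}=\mathcal{I}_{N_{3}}\otimes QFT_{N_{2}}\otimes\mathcal{I}_{N_{1}}\) explicitly, and compares the result with a classical transform along the second dimension that uses the same sign convention.

```python
import numpy as np

def qft_matrix(N):
    k = np.arange(N)
    return np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

N1, N2, N3 = 2, 4, 8                                   # assumed toy sizes, d = 3
A = np.random.randn(N1, N2, N3) + 1j * np.random.randn(N1, N2, N3)

v = A.flatten(order='F')                               # first index varies fastest

# U_2 of Eq. (4): identity on the upstream dimensions, QFT on dimension 2, identity downstream
U2 = np.kron(np.eye(N3), np.kron(qft_matrix(N2), np.eye(N1)))
lhs = (U2 @ v).reshape((N1, N2, N3), order='F')

# classical reference with matching sign convention and normalization
rhs = np.sqrt(N2) * np.fft.ifft(A, axis=1)
assert np.allclose(lhs, rhs)
```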
The difference of computational complexity is thereby significant. For simplicity, let A be a d-dimensional array equally sized along every dimension i such that every dimension has \(N_{i}=N=2^{n}\) elements and \(M=N^{d}\). It is known for the classical multidimensional FFT that it has computational effort \(\mathcal{O}(M\log(M))\)[13], similar to the 1-dimensional case. For the quantum algorithm, a single QFT has a known complexity of \(\mathcal{O}(n^{2})\), and the algorithm needs d QFTs in total, so \(\mathcal{O}(dn^{2})\) operations. In terms of \(M\), one can deduce that \(n=\log_{2}(\sqrt[4]{M})\) which means for a direct comparison that
FFT: \[\mathcal{O}(M\log(M))\leftrightarrow\text{QFT: }\mathcal{O}(\log^{2}(M)/d).\]
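For concreteness, the following small sketch (added for illustration) tabulates both operation counts for arrays with \(n=10\) qubits per dimension, ignoring the constant factors hidden in the \(\mathcal{O}\)-notation.

```python
import math

n = 10                                    # qubits per dimension, N = 2**n
for d in (1, 2, 3):
    M = (2 ** n) ** d                     # total number of array elements
    fft_ops = M * math.log2(M)            # classical multidimensional FFT, O(M log M)
    qft_gates = d * n ** 2                # d QFT circuits with O(n^2) gates each = O(log^2(M)/d)
    print(d, M, int(fft_ops), qft_gates)
```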
The applicability of this speed up remains to be shown since it is already known for the \(d=1\) case that this algorithm will not replace all applications of the FFT [5]. The computational effort of rescaling arrays according to Eq. (2), then initializing them on a quantum computer [14; 15] in order to approximate the spectrum with measurements will almost always scale at least linear in terms of array elements \(M\), which makes the computational advantage either shallow or non-existing. Additionally, the approximation of the final spectrum by measurements will return \(|\hat{v}_{i}|^{2}\), which will erase both the sign as well as the phase information of the complex Fourier coefficient \(\hat{v}_{i}\). These points have to be addressed in competition with the classical multidimensional FFT,
Figure 1: Quantum circuit to calculate the d-dimensional FFT. \(|v\rangle\) represents the input array and \(|\hat{v}\rangle\) the QFT of this array, both as quantum states.
which is efficiently executable with parallel computing approaches [13; 16].
In favor of the proposed algorithm speak the existing applications of QFTs, as is already the case in the field of quantum image processing [9; 10; 11; 12]. Another application could use the multidimensional QFT to solve Laplace operators by variational quantum algorithms (see [17]) with more dimensions. Furthermore, it is worth noting that the algorithm itself does not scale with the amount of parallel initialized arrays. In detail, if one manages to initialize not only a single array \(M\) as \(|v\rangle\) done by Eq. (2) but multiple ones aligned one after another, then the circuit of Fig. 1 can be appended where only the initialization spreads over more qubits. In the herein notation, these qubits are added below the illustrated circuit, which then creates a structure similar to Eq. (3) whereas the 1-dimensional QFT is replaced by the d-dimensional QFT.
Concluding, an optimal use-case for this algorithm would be that multiple input arrays need to be calculated first, and it exists an efficient quantum algorithm to do so. The result state after the QFT should then either be strongly localized or further accessible on a quantum computer, for instance if the difference of multiple Fourier transformed arrays is of interest.
As a final example, a real quantum device of IBM (_'ibm_lagos'_) is used to calculate the Fourier coefficients of an 8\(\times\)8 picture (\(d=2\) and \(n_{x}=n_{y}=3\)), where two simplifications are used. First, the final swap operations in the QFTs (see [5]) are avoided by measuring the qubits in a swapped order. Further circuit optimization has been inhibited rigorously. Second, the input picture is generated by
\[f(x,y)=\sin(\pi x/2)\cos(\pi y/2) \tag{6}\]
where \(x,y\in\{0,1,...7\}\). This is easy to initialize, as Fig. 2.a shows. The image itself is illustrated in Fig. 2.b. For this, four distinct peaks are expected in the Fourier spectrum, as depicted in Fig. 2.c. The result of the real quantum computer, shown in Fig. 2.d, has its largest peaks at the correct locations. Here, the expectable influence of noisy quantum computers corrupts the result such that the peaks are not equally sized and additional peaks rise. The lowest correct and highest incorrect peak still show significant difference.
Closing with acknowledgments, I would like to thank Theo Kaufer for bringing up the fundamental motivation for this work. I thank Darius Becher and Clara Stolzenberg for helpful comments. During this work, I was supported by the project no. P2018-02-001 "DeepTurb - Deep Learning in and of Turbulence" of the Carl Zeiss Foundation and the Deutsche Forschungsgemeinschaft under grant no. DFG-SPP 1881.
I acknowledge the use of IBM Quantum services for this work. The views expressed are those of the author, and do not reflect the official policy or position of IBM or the IBM Quantum team. In this paper _'ibm_lagos'_ was used, which is an IBM Quantum Falcon Processor.
|
2305.00400 | On LinDistFlow Model Congestion Pricing: Bounding the Changes in Power
Tariffs | The optimal power flow (OPF) problem is an important mathematical program
that aims at obtaining the best operating point of an electric power grid. The
optimization problem typically minimizes the total generation cost subject to
certain physical constraints of the system. The so-called linearized
distribution flow (LinDistFlow) model leverages a set of linear equations to
approximate the nonlinear AC power flows. In this paper, we consider an OPF
problem based on the LinDistFlow model for a single-phase radial power network.
We derive closed-form solutions to the marginal values of both real and
reactive power demands. We also derive upper bounds on the congestion price
(a.k.a. `shadow price'), which denotes the change in marginal demand prices
when the apparent power flow limits of certain lines are binding at optimum.
Various cases of our result are discussed while simulations are carried out on
a $141$-bus radial power network. | Shourya Bose, Kejun Chen, Yu Zhang | 2023-04-30T05:53:44Z | http://arxiv.org/abs/2305.00400v1 | # On LinDistFlow Model Congestion Pricing: Bounding the Changes in Power Tariffs
###### Abstract
The optimal power flow (OPF) problem is an important mathematical program that aims at obtaining the best operating point of an electric power grid. The optimization problem typically minimizes the total generation cost subject to certain physical constraints of the system. The so-called linearized distribution flow (LinDistFlow) model leverages a set of linear equations to approximate the nonlinear AC power flows. In this paper, we consider an OPF problem based on the LinDistFlow model for a single-phase radial power network. We derive closed-form solutions to the marginal values of both real and reactive power demands. We also derive upper bounds on the congestion price (a.k.a.'shadow price'), which denotes the change in marginal demand prices when the apparent power flow limits of certain lines are binding at optimum. Various cases of our result are discussed while simulations are carried out on a \(141\)-bus radial power network.
## I Introduction
Radially energized power networks are prevalent in grid-scale power systems such as utility distribution networks and microgrids [1]. They are defined as power distribution networks wherein the energized section assumes a topology of a _connected tree_. Traditionally, the only generation source in a radial network is a sub-station as the upstream to the network, which is interfaced with some high-voltage transmission network using power conversion devices. With the advent of consumer-level generation devices such as photovoltaic panels, wind turbines, and microturbines, radial networks can now accommodate _prosumers_, i.e. agents on the network which can either inject/withdraw power in/from the network. Desirable set points of generation power can be determined by solving for set points which are optimal with respect to some generation cost function, subject to physical and operational constraints. This optimization problem, well known as _Optimal Power Flow_ (OPF) was first introduced in literature by Carpentier [2, 3]. Depending on the nature of constraints and the cost function, OPF can be variously categorized as DC-OPF, AC-OPF, security constrained OPF, etc [4]. The former two are different ways of modeling physics of the network, while the latter adds security or contingency constraints meant to ensure robust operation of the network. It is important to note that in this article we only consider OPF problems wherein the network operator and prosumers seek to optimize the same cost function. This is as opposed to scenarios wherein the prosumers may seek to optimize a cost function different from that of the network operator [5, 6].
The DC-OPF, which models the physics of the network using a set of linear equations, has been widely researched and used in practice for transmission networks, where the topology may be meshed. Combined with administrative constraints, DC-OPF provides reasonably accurate set points _vis-a-vis_ the optimal AC-OPF solution [7]. However, the DC-OPF linearization does not model reactive power injection, which poses a problem in analyzing radial distribution networks with devices such as PVs and WTs having controllable inverters. To counter this drawback, there has been significant recent research on the _LinDistFlow_ equation [8]. First introduced by Baran and Wu [9], this is a set of linearized equations which describes the physics of the network (possibly multiphase with high R/X ratios). The underlying condition of LinDistFlow is that there are no power line losses, which allows for linearization of the non-convex _DistFlow_ equations from which LinDistFlow is derived [10].
_Related Work:_ For a review of marginal demand costs in traditional OPF models including DC-OPF, the reader may consult the textbook [11]. Khatami et al. provide a detailed description of various components constituting nodal prices [12]. Biegel et al. consider congestion management through shadow prices [13]. Bai et al. consider marginal pricing of real and reactive power demand under various markets for the nonlinear DistFlow model [14]. Xu et al. design a deregulated power market mechanism, which uses the idea of marginal pricing at its core [15]. A comprehensive review of pricing mechanisms in transmission and distribution markets, including reserves, may be found in [16]. Finally, similar in nature to the current paper, Winnicki et al. consider marginal pricing in the DistFlow model, but without consideration of congestion. [17]
_Contribution:_ In this paper, we formulate an OPF problem with the LinDistFlow model, named as LDF-OPF. We consider load satisfaction, generation bounds, voltage bounds, and conic branch flows. Our proposed framework can handle more generalized formulations with linear and conic constraints. We first show a closed-form expression for the marginal price when there is no flow congestion at optimality of LDF-OPF. Then, we derive an upper bound on the variation of demand marginal costs. The proposed upper bound is function of terms involving marginal costs associated with binding of aforementioned conic constrains representing branch flow limits, and network topology factors. This result provides a useful tool for network operators to estimate the change in demand marginal prices as a function of their choice of branch
flow marginal costs.
_Notation:_\(\mathbb{R}\) and \(\mathbb{N}\) denote the set of real numbers and integers, respectively. Vectors and matrices are denoted with boldface. For a vector \(\mathbf{a}\in\mathbb{R}^{n}\), \(\mathbf{a}(j)\) is its \(j^{\text{th}}\) element while \(\left\|\mathbf{a}\right\|_{2}\) denotes its 2-norm. \(\mathbf{1}_{n}\in\mathbb{R}^{n}\) is the all-ones vector. For a positive \(n\in\mathbb{N}\), \([n]\) denotes the set \(\{1,\cdots,n\}\). For a finite set \(\mathcal{S}\), \(\left|\mathcal{S}\right|\) denotes its cardinality. For a directed graph \(\mathcal{G}=(\mathcal{N},\mathcal{L})\) where \(\mathcal{N}\) is the set of nodes and \(\mathcal{L}\) the set of _directed_ edges, and for any node \(i\in\mathcal{N}\), \(\mathcal{H}_{i}\) denotes the _inclusive downstream set_ of \(i\), i.e. \(\mathcal{H}_{i}\overset{\Delta}{=}\{j\in\mathcal{N}|\exists\text{ directed path from }i\text{ to }j\text{ in }\mathcal{L}\}\cup\{i\}\).
## II Problem Formulation
### _Background_
Consider a radial power network with a slack bus. The buses are labeled as \(\mathcal{N}\overset{\Delta}{=}\{0,1,\cdots,n\}\), with \(0\) denoting the slack bus. The network topology is represented by a directed graph \(\mathcal{G}\overset{\Delta}{=}(\mathcal{N},\mathcal{L})\), where \(\mathcal{L}\) denotes the set of branches. Without loss of generality, \(\mathcal{L}\) can be constructed such that the directed branches point _away_ from the slack bus. All non-slack buses are classified as the set of _load_ buses \(\mathcal{N}^{l}\) and the set of _generator_ buses \(\mathcal{N}^{g}\) such that \(\mathcal{N}=\{0\}\cup\mathcal{N}^{g}\cup\mathcal{N}^{l}\). Let \(n_{g}\overset{\Delta}{=}|\mathcal{N}^{g}|\) and \(n_{l}\overset{\Delta}{=}|\mathcal{N}^{l}|\) denote the number of generator and load buses, respectively. Noting that the number of branches equals that of non-slack buses, each branch may be uniquely assigned the index of the bus to which it is upstream.
A concrete way of analyzing the physics of power flows is through the _DistFlow_ equations. For \(i\in\mathcal{N}\), let \(s_{i}\overset{\Delta}{=}p_{i}+\mathbf{i}q_{i}\) be the complex power injection at bus \(i\), \(S_{i}\overset{\Delta}{=}P_{i}+\mathbf{i}Q_{i}\) be the complex power flow on branch \(i\), and \(v_{i}\) and \(l_{i}\) denote the squared voltage magnitude at bus \(i\), and squared current magnitude of branch \(i\), respectively. The DistFlow equations that hold for all \(i\in\mathcal{N}\) are given as [18]
\[s_{i} =\sum_{j\in\text{child}(i)}S_{j}-S_{i}+l_{i}z_{i}, \tag{1a}\] \[v_{i} =v_{\text{parent}(i)}-2\text{Re}[z_{i}^{*}S_{i}]+l_{i}|z_{i}|^{2},\] (1b) \[|S_{i}|^{2} =v_{\text{parent}(i)}l_{i}. \tag{1c}\]
Assuming no power line losses in the network, the DistFlow equations (1) can be linearized into the so-called _LinDistFlow_ model whose compact form is given as [18]
\[\mathbf{v}=\mathbf{R}\mathbf{p}+\mathbf{X}\mathbf{q}+v_{0}\mathbf{1}_{n}, \tag{2}\]
where \(\mathbf{p}\overset{\Delta}{=}[\mathbf{p}(1),\cdots,\mathbf{p}(n)]\), \(\mathbf{q}\overset{\Delta}{=}[\mathbf{q}(1),\cdots,\mathbf{q}(n)]\), and \(\mathbf{v}\overset{\Delta}{=}[\mathbf{v}(1),\cdots,\mathbf{v}(n)]\) denote the real and reactive power injections and squared voltage magnitude of all non-slack buses, respectively. \(v_{0}\) is the fixed voltage of the slack bus. Positive semi-definite matrices \(\mathbf{R},\mathbf{X}\in\mathbb{R}^{n\times n}\) encode branch resistance and reactance, as well as the topology of \(\mathcal{G}\). Since the vector \(\mathbf{p}\) (similarly \(\mathbf{q}\) and \(\mathbf{v}\)) contains various indices corresponding to generators and loads, we define the matrices \(\mathbf{A}_{g}\in\{0,1\}^{n_{g}\times n}\) and \(\mathbf{A}_{l}\in\{0,1\}^{n_{l}\times n}\) which help us separate generation and load indices from \(p\) as
\[\mathbf{p}_{g}=\mathbf{A}_{g}\mathbf{p},\quad\mathbf{q}_{g}=\mathbf{A}_{g} \mathbf{q},\quad\mathbf{p}_{l}=\mathbf{A}_{l}\mathbf{p},\quad\mathbf{q}_{l}= \mathbf{A}_{l}\mathbf{q}.\]
Thanks to zero line losses, the slack bus real and reactive injections become \(p_{s}=-\mathbf{1}_{n}^{\top}\mathbf{p}\) and \(q_{s}=-\mathbf{1}_{n}^{\top}\mathbf{q}\), respectively.
Let \(\mathbf{f}^{p}\overset{\Delta}{=}[\mathbf{f}^{p}(1),\cdots,\mathbf{f}^{p}(n)]\) and \(\mathbf{f}^{q}\overset{\Delta}{=}[\mathbf{f}^{q}(1),\cdots,\mathbf{f}^{q}(n)]\). The branch flows are given as \(\mathbf{f}^{p}=\mathbf{F}\mathbf{p}\) and \(\mathbf{f}^{q}=\mathbf{F}\mathbf{q}\), where \(\mathbf{F}\) is derived from the _signed branch-bus incidence matrix_ \(\tilde{\mathbf{A}}\in\mathbb{R}^{n\times(n+1)}\) by deleting its first column, and invert-transposing it.
**Lemma 1** (Properties of matrix \(\tilde{\mathbf{A}}\)).: _The matrix \(\tilde{\mathbf{A}}\) is defined as_
\[\tilde{\mathbf{A}}(i,j)=\begin{cases}1,&\text{if branch $i$ starts at bus $j-1$}\\ -1,&\text{if branch $i$ terminates at bus $j-1$}\\ 0,&\text{otherwise.}\end{cases}\]
_Let \(\mathbf{A}\) be the square matrix derived by deleting the first column of \(\tilde{\mathbf{A}}\). \(\mathbf{A}^{-1}\) exists [18], and \(\mathbf{F}\overset{\Delta}{=}\mathbf{A}^{-\top}\)._
### _Optimal Power Flow Problem_
A general OPF problem based on the described power network characteristics is given as follows.
\[\min_{p,q,v,p_{s},q_{s}} \mathbf{c}_{g}^{\top}(\mathbf{A}_{g}\mathbf{p})+c_{s}p_{s}\] (3) s.t. \[\mathbf{v}=\mathbf{R}\mathbf{p}+\mathbf{X}\mathbf{q}+v_{0}\mathbf{1}_{n} \tag{3a}\] \[\underline{\mathbf{v}}\leq\mathbf{v}\leq\bar{\mathbf{v}}\] (3b) \[\mathbf{A}_{l}\mathbf{p}=\hat{\mathbf{p}}_{l},\quad\mathbf{A}_{l}\mathbf{q}=\hat{\mathbf{q}}_{l}\] (3c) \[\underline{\mathbf{p}}_{g}\leq\mathbf{A}_{g}\mathbf{p}\leq\bar{\mathbf{p}}_{g},\quad\underline{\mathbf{q}}_{g}\leq\mathbf{A}_{g}\mathbf{q}\leq\bar{\mathbf{q}}_{g}\] (3d) \[p_{s}=-\mathbf{1}_{n}^{\top}\mathbf{p},\quad q_{s}=-\mathbf{1}_{n}^{\top}\mathbf{q}\] (3e) \[\underline{p}_{s}\leq p_{s}\leq\bar{p}_{s},\quad\underline{q}_{s}\leq q_{s}\leq\bar{q}_{s}\] (3f) \[\mathbf{f}^{p}=\mathbf{F}\mathbf{p},\quad\mathbf{f}^{q}=\mathbf{F}\mathbf{q}\] (3g) \[\left\|[\mathbf{f}^{p}(i),\mathbf{f}^{q}(i)]^{\top}\right\|_{2}\leq\bar{f}_{i},\quad\forall i\in[n]\] (3h)
The objective function calculates the cost of generation at generator buses (given by \(\mathbf{c}_{g}^{\top}(\mathbf{A}_{g}\mathbf{p})=\mathbf{c}_{g}^{\top}\mathbf{p}_{g}\), where \(\mathbf{c}_{g}\in\mathbb{R}^{n_{g}}\) is the generation cost vectors) and at the slack bus (given by \(c_{s}p_{s}\), where \(c_{s}\) is the slack generation cost). (3a) is the LinDistFlow equation, while (3b) limits voltage values to within operational limits. (3c) stipulates that the demanded real and reactive power amounts are \(\hat{\mathbf{p}}_{l}\) and \(\hat{\mathbf{q}}_{l}\) respectively. (3d), (3e), and (3f) place limits on generation. Finally, (3g) and (3h) together describe the conic line flow for each branch.
The OPF (3) contains redundancy in its decision variables and constraints. To that end, a much simplified and operatorized version of (3) may be written as follows.
\[\mathcal{J}(\boldsymbol{\ell},\bar{\mathbf{f}})\overset{\Delta}{=}\min_{\mathbf{p}_{g},\mathbf{q}_{g}} \tilde{\mathbf{c}}_{g}^{\top}\mathbf{p}_{g}\] (4) s.t. \[\mathbf{M}_{p}\mathbf{p}_{g}+\mathbf{M}_{q}\mathbf{q}_{g}\leq \mathbf{G}\boldsymbol{\ell}+\mathbf{h}\] (4a) \[\underline{\mathbf{p}}_{g}\leq\mathbf{p}_{g}\leq\bar{\mathbf{p}}_{g},\quad\underline{\mathbf{q}}_{g}\leq\mathbf{q}_{g}\leq\bar{\mathbf{q}}_{g}\] (4b) \[\left\|\begin{bmatrix}\mathbf{r}_{i}^{\top}\mathbf{p}_{g}+\mathbf{s}_{i}^{\top}\boldsymbol{\ell}\\ \mathbf{r}_{i}^{\top}\mathbf{q}_{g}+\mathbf{t}_{i}^{\top}\boldsymbol{\ell}\end{bmatrix}\right\|_{2}\leq\bar{f}_{i},\quad\forall i\in[n]\] (4c)

where \(\boldsymbol{\ell}\overset{\Delta}{=}[\hat{\mathbf{p}}_{l}^{\top},\hat{\mathbf{q}}_{l}^{\top}]^{\top}\) collects the real and reactive load demands, and \(\mathcal{J}(\boldsymbol{\ell},\bar{\mathbf{f}})\) denotes the resulting optimal generation cost, which is a scalar. It can be shown that \(\mathcal{J}(\boldsymbol{\ell},\bar{\mathbf{f}})\) is jointly convex in \(\boldsymbol{\ell}\) and \(\bar{\mathbf{f}}\); see [19].
**Remark 1** (Equivalence of (3) and (4)).: _To establish the equivalence of (3) and (4), first \(\mathbf{v}\), \(p_{s}\), and \(q_{s}\) can be eliminated as decision variables from (3) by replacing all of their occurrences with \(\mathbf{R}\mathbf{p}+\mathbf{X}\mathbf{q}+v_{0}\mathbf{1}_{n}\), \(-\mathbf{1}_{n}^{\top}\mathbf{p}\), and \(-\mathbf{1}_{n}^{\top}\mathbf{q}\) respectively. Further, the indices of \(\mathbf{p}\) and \(\mathbf{q}\) corresponding to load buses may be written as linear combinations of elements of \(\boldsymbol{\ell}\) using (3c). Thus, the only effective decision variables in (3) are \(\mathbf{p}_{g},\mathbf{q}_{g}\in\mathbb{R}^{n_{g}}\). Further, any expression in (3) which contains any of the decision variables \(\mathbf{v},\mathbf{p},\mathbf{q},p_{s}\), or \(q_{s}\) is linear in \(\mathbf{p}_{g}\), \(\mathbf{q}_{g}\) or \(\boldsymbol{\ell}\). Therefore, the objective of (3) is equivalent to the objective of (4). Constraints (3a)-(3e) are equivalent to (4a), (3f) and (4b), and finally (3g)-(3h) are equivalent to (4c)._
### _Illustrative Example_
Consider a 4-bus system shown in Figure 1, which is derived from case4ba in MATPOWER [20]. The buses are indexed such that the slack bus has index 0, the generator bus has index 1, and the two load buses have indices 2 and 3, respectively. Each branch is uniquely numbered based on its downstream bus. The impedance (in p.u.) of each branch is \(0.003+j0.006\,\Omega\). Thus, equation (3a) then becomes
\[\begin{bmatrix}\mathbf{v}(1)\\ \mathbf{v}(2)\\ \mathbf{v}(3)\end{bmatrix} =\begin{bmatrix}0.012&0.006&0\\ 0.006&0.006&0\\ 0&0&0.006\end{bmatrix}\begin{bmatrix}\mathbf{p}(1)\\ \mathbf{p}(2)\\ \mathbf{p}(3)\end{bmatrix}\] \[+\begin{bmatrix}0.024&0.012&0\\ 0.012&0.012&0\\ 0&0&0.012\end{bmatrix}\begin{bmatrix}\mathbf{q}(1)\\ \mathbf{q}(2)\\ \mathbf{q}(3)\end{bmatrix}+\begin{bmatrix}v_{0}\\ v_{0}\\ v_{0}\end{bmatrix}.\]
The voltage of the generator bus and slack bus is fixed at \(1.05\) p.u., while the voltage levels of the load buses vary in \([0.95,1.05]\) p.u.. Recognizing that \(\boldsymbol{\ell}=[\mathbf{p}(2),\mathbf{p}(3),\mathbf{q}(2),\mathbf{q}(3)]\), \(p_{g}=\mathbf{p}(1)\) and \(q_{g}=\mathbf{q}(1)\), the above equation may be written as
\[\mathbf{v}(1)=0.012p_{g}+0.024q_{g}+\begin{bmatrix}0.006&0&0.012&0\end{bmatrix}\boldsymbol{\ell}+v_{0}\] \[\begin{bmatrix}\mathbf{v}(2)\\ \mathbf{v}(3)\end{bmatrix}=\begin{bmatrix}0.006\\ 0\end{bmatrix}p_{g}+\begin{bmatrix}0.012\\ 0\end{bmatrix}q_{g}+\begin{bmatrix}0.006&0&0.012&0\\ 0&0.006&0&0.012\end{bmatrix}\boldsymbol{\ell}+\begin{bmatrix}v_{0}\\ v_{0}\end{bmatrix}\]
Letting \(\mathbf{v}(1)=1.05\) (or equivalently, \(1.05\leq\mathbf{v}(1)\leq 1.05\)) and \(0.95\leq\mathbf{v}(i)\leq 1.05\) for \(i=2,3\) recovers (4a). Now suppose the slack bus provides a maximum of 1 p.u. real power to the system; i.e., \(p_{s}\leq 1\). This can be equivalently written in the form of (4a) as \(-\mathbf{p}_{g}-\begin{bmatrix}1&1&0&0\end{bmatrix}\boldsymbol{\ell}\leq 1\). Finally, we demonstrate a branch flow. The matrix \(\mathbf{F}\) is given as
\[\mathbf{F}=\begin{bmatrix}-1&0&0\\ -1&-1&0\\ 0&0&-1\end{bmatrix},\]
and correspondingly
\[\begin{split}\mathbf{f}^{\mathbf{p}}(1)&=-\mathbf{p}(1),& \mathbf{f}^{\mathbf{p}}(2)=-\mathbf{p}(1)-\mathbf{p}(2),&\mathbf{f}^{ \mathbf{p}}(3)=-\mathbf{p}(3)\\ \mathbf{f}^{\mathbf{q}}(1)&=-\mathbf{q}(1),&\mathbf{f}^{ \mathbf{q}}(2)=-\mathbf{q}(1)-\mathbf{q}(2),&\mathbf{f}^{\mathbf{q}}(3)=- \mathbf{q}(3).\end{split}\]
Suppose the branch flow limit of branch 2 is 3 p.u. It can be expressed as, equivalently,
\[\left\|\begin{bmatrix}-p_{g}+\begin{bmatrix}-1&0&0&0\end{bmatrix}\boldsymbol{\ell}\\ -q_{g}+\begin{bmatrix}0&0&-1&0\end{bmatrix}\boldsymbol{\ell}\end{bmatrix}\right\|_{2}\leq 3.\]
In other words, in the form of (4c) we have
\[\mathbf{r}_{2}=[-1],\mathbf{s}_{2}^{\top} =\begin{bmatrix}-1&0&0&0\end{bmatrix},\mathbf{t}_{2}^{\top}= \begin{bmatrix}0&0&-1&0\end{bmatrix}.\]
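To make the formulation concrete, the following sketch solves problem (3) for this 4-bus network with a generic conic solver. The network matrices are those given above; the costs, demands, and generation limits are assumed placeholder values introduced only for illustration.

```python
import cvxpy as cp
import numpy as np

# network data from the example; the costs, demands, and limits below are assumed placeholders
R = np.array([[0.012, 0.006, 0.0], [0.006, 0.006, 0.0], [0.0, 0.0, 0.006]])
X = 2.0 * R
F = np.array([[-1.0, 0.0, 0.0], [-1.0, -1.0, 0.0], [0.0, 0.0, -1.0]])
v0, vmin, vmax = 1.05, 0.95, 1.05
p_hat, q_hat = np.array([-0.3, -0.2]), np.array([-0.1, -0.05])   # assumed load injections (p.u.)
c_g, c_s = 2.0, 5.0                                              # assumed generation costs
f_bar = np.array([3.0, 3.0, 3.0])                                # branch apparent-flow limits

p, q = cp.Variable(3), cp.Variable(3)
v = R @ p + X @ q + v0                       # LinDistFlow voltages, Eq. (3a)
fp, fq = F @ p, F @ q                        # branch flows, Eq. (3g)
p_s, q_s = -cp.sum(p), -cp.sum(q)            # slack injections, Eq. (3e)

cons = [v >= vmin, v <= vmax, v[0] == 1.05,  # generator-bus voltage held at 1.05 p.u.
        p[1:] == p_hat, q[1:] == q_hat,      # load satisfaction, Eq. (3c)
        p[0] >= 0.0, p[0] <= 1.5, cp.abs(q[0]) <= 1.0,
        p_s <= 1.0]                          # slack real-power limit, Eq. (3f)
cons += [cp.norm(cp.hstack([fp[i], fq[i]])) <= f_bar[i] for i in range(3)]   # Eq. (3h)

prob = cp.Problem(cp.Minimize(c_g * p[0] + c_s * p_s), cons)
prob.solve()
print(prob.value, p.value)
```

When a branch-flow constraint is binding at the optimum, the dual multiplier reported by the solver for that constraint plays the role of \(\mu_{j}^{*}\) in the marginal analysis of the next section.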
For a general branch \(j\), we provide the following closed-form expressions for the norm values of \(\mathbf{r}_{j}\), \(\mathbf{s}_{j}\) and \(\mathbf{t}_{j}\).
**Lemma 2** (Norm values of \(\mathbf{r}_{j}\), \(\mathbf{s}_{j}\) and \(\mathbf{t}_{j}\)).: _We have, for all branches \(j\),_
\[\left\|\mathbf{r}_{j}\right\|_{2}=\sqrt{\left|\mathcal{H}_{j}\cap\mathcal{N}^{g}\right|},\quad\left\|\mathbf{s}_{j}\right\|_{2}=\sqrt{\left|\mathcal{H}_{j}\cap\mathcal{N}^{l}\right|},\quad\left\|\mathbf{t}_{j}\right\|_{2}=\sqrt{\left|\mathcal{H}_{j}\cap\mathcal{N}^{l}\right|}.\]
Proof.: Recall that branch \(j\) shares its index with its downstream bus \(j\). We use the properties of matrix \(\mathbf{F}\) in this proof. From Lemma 1, \(\mathbf{F}=\mathbf{A}^{-\top}\). It is known [21] that the \((i,j)^{\text{th}}\) element of matrix \(\mathbf{A}^{-1}\) is +1 if branch \(j\) is directed _along_ path from \(i\) to slack bus, -1 if it is directed _against_, and 0 otherwise. By the construction of \(\mathcal{G}\), \(\mathbf{A}^{-1}\) only has entries -1 and 0. Thus, \(\mathbf{A}^{-\top}\) has \((i,j)^{\text{th}}\) element -1 if branch \(i\) is falls in the path from bus \(j\) to the origin and 0 otherwise. Therefore, the \(i^{\text{th}}\) row of \(\mathbf{A}^{-\top}\) collects all buses in \(\mathcal{H}_{i}\). The result follows by separating generator buses and placing their coefficients in \(\mathbf{r}_{j}\), and load buses in \(\mathbf{s}_{j}\) and \(\mathbf{t}_{j}\).
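A quick numerical check of Lemma 2 on the 4-bus example (added for illustration): the nonzero entries of row \(j\) of \(\mathbf{F}\) mark the buses downstream of branch \(j\), so splitting them into generator and load columns reproduces the stated norms. The reactive-load coefficients in \(\mathbf{s}_{j}\) and \(\mathbf{t}_{j}\) are zero and therefore do not affect the norms.

```python
import numpy as np

F = np.array([[-1, 0, 0], [-1, -1, 0], [0, 0, -1]])   # 4-bus example
gen_cols, load_cols = [0], [1, 2]                     # bus 1 is the generator; buses 2, 3 are loads

for j in range(3):
    r_norm = np.linalg.norm(F[j, gen_cols])           # ||r_j||_2
    s_norm = np.linalg.norm(F[j, load_cols])          # ||s_j||_2 = ||t_j||_2
    # Lemma 2: these equal sqrt(# downstream generator buses) and sqrt(# downstream load buses)
    print(f"branch {j + 1}: ||r|| = {r_norm:.0f}, ||s|| = {s_norm:.0f}")
```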
## III Marginal Analysis
The analysis of marginal pricing starts from the dual problem of (4) that is given as follows.
**Lemma 3**.: _The dual problem of (4) is given as_
\[\max_{\begin{subarray}{c}\boldsymbol{\lambda},\boldsymbol{\alpha}_{lb},\boldsymbol{\alpha}_{ub},\boldsymbol{\beta}_{lb},\\ \boldsymbol{\beta}_{ub},\theta_{i},\phi_{i},\mu_{i}\end{subarray}} -(\mathbf{G}\boldsymbol{\ell}+\mathbf{h})^{\top}\boldsymbol{\lambda}+\boldsymbol{\alpha}_{lb}^{\top}\underline{\mathbf{p}}_{g}-\boldsymbol{\alpha}_{ub}^{\top}\bar{\mathbf{p}}_{g}+\boldsymbol{\beta}_{lb}^{\top}\underline{\mathbf{q}}_{g}\] \[-\boldsymbol{\beta}_{ub}^{\top}\bar{\mathbf{q}}_{g}+\sum_{i=1}^{n}\left[-(\theta_{i}\mathbf{s}_{i}+\phi_{i}\mathbf{t}_{i})^{\top}\boldsymbol{\ell}-\mu_{i}\bar{f}_{i}\right] \tag{5}\]
_s.t._ \[\tilde{\mathbf{c}}_{g}+\mathbf{M}_{p}^{\top}\boldsymbol{\lambda}+\boldsymbol{\alpha}_{ub}-\boldsymbol{\alpha}_{lb}-\sum_{i=1}^{n}\theta_{i}\mathbf{r}_{i}=0\] (5a) \[\mathbf{M}_{q}^{\top}\boldsymbol{\lambda}+\boldsymbol{\beta}_{ub}-\boldsymbol{\beta}_{lb}-\sum_{i=1}^{n}\phi_{i}\mathbf{r}_{i}=0\] (5b) \[\big{\|}[\theta_{i},\phi_{i}]^{\top}\big{\|}_{2}\leq\mu_{i},\quad\forall i\in[n]\] (5c) \[\boldsymbol{\lambda},\boldsymbol{\alpha}_{lb},\boldsymbol{\alpha}_{ub},\boldsymbol{\beta}_{lb},\boldsymbol{\beta}_{ub},\{\mu_{i}\}\geq 0\] (5d)
Fig. 1: Illustrative example of a 4-bus radial network.
Proof.: We augment problem (4) by adding auxiliary variables \(y_{i}\stackrel{{\Delta}}{{=}}\mathbf{r}_{i}^{\top}\mathbf{p}_{g}+\mathbf{s}_{i}^{\top}\boldsymbol{\ell}\) and \(z_{i}\stackrel{{\Delta}}{{=}}\mathbf{r}_{i}^{\top}\mathbf{q}_{g}+\mathbf{t}_{i}^{\top}\boldsymbol{\ell}\), and letting \(\theta_{i}\) and \(\phi_{i}\) denote the dual variables for the same. Constraint (4c) can then be written as \(\big{\|}[y_{i},z_{i}]^{\top}\big{\|}_{2}\leq\bar{f}_{i},\forall i\in[n]\). The Lagrangian of the augmented problem is given as
\[\mathcal{L}\stackrel{{\Delta}}{{=}} \tilde{\mathbf{c}}_{g}^{\top}\mathbf{p}_{g}+\boldsymbol{\lambda}^{\top}(\mathbf{M}_{p}\mathbf{p}_{g}+\mathbf{M}_{q}\mathbf{q}_{g}-\mathbf{G}\boldsymbol{\ell}-\mathbf{h})+\boldsymbol{\alpha}_{lb}^{\top}(\underline{\mathbf{p}}_{g}-\mathbf{p}_{g})\] \[+\boldsymbol{\alpha}_{ub}^{\top}(\mathbf{p}_{g}-\bar{\mathbf{p}}_{g})+\boldsymbol{\beta}_{lb}^{\top}(\underline{\mathbf{q}}_{g}-\mathbf{q}_{g})+\boldsymbol{\beta}_{ub}^{\top}(\mathbf{q}_{g}-\bar{\mathbf{q}}_{g})\] \[+\sum_{i=1}^{n}\bigg{[}\theta_{i}(y_{i}-\mathbf{r}_{i}^{\top}\mathbf{p}_{g}-\mathbf{s}_{i}^{\top}\boldsymbol{\ell})+\phi_{i}(z_{i}-\mathbf{r}_{i}^{\top}\mathbf{q}_{g}-\mathbf{t}_{i}^{\top}\boldsymbol{\ell})+\mu_{i}\left(\big{\|}[y_{i},z_{i}]^{\top}\big{\|}_{2}-\bar{f}_{i}\right)\bigg{]}.\]
The dual objective function consists of all the terms in \(\mathcal{L}\) which are not functions of any primal variables. Since \(\mathcal{L}\) is linear in \(\mathbf{p}_{g}\) and \(\mathbf{q}_{g}\), their respective coefficients must be zero such that \(\mathcal{L}\) is bounded from below. This gives rise to (5a) and (5b). Finally, note that for any \(y,z,\theta,\phi\in\mathbb{R}\) and \(\mu>0\), it holds that
\[\inf_{y,z}\ \left[\theta,\phi\right]\begin{bmatrix}y\\ z\end{bmatrix}+\mu\left\|\begin{bmatrix}y\\ z\end{bmatrix}\right\|_{2}=\begin{cases}0,&\text{if}\ \left\|[\theta,\phi]^{\top}\right\|_{2}\leq\mu,\\ -\infty,&\text{if}\ \left\|[\theta,\phi]^{\top}\right\|_{2}>\mu.\end{cases}\]
Applying this property to find the infimum of all terms consisting of \(y_{i}\) and \(z_{i}\) in \(\mathcal{L}\), we recover (5c). Finally, non-negativity of dual variables yields (5d).
We assume that there exist optimal primal and dual solutions for (4) and (5) such that Karush-Kuhn-Tucker (KKT) conditions hold [22, Section 5.5.3].
**Assumption 1** (KKT conditions).: _Let \(\Gamma\stackrel{{\Delta}}{{=}}\{\boldsymbol{\lambda},\boldsymbol{ \alpha}_{lb},\boldsymbol{\alpha}_{ub},\boldsymbol{\beta}_{lb},\boldsymbol{ \beta}_{ub},\theta_{i},\phi_{i},\mu_{i}\}\) denote the set of dual variables in (5). Then, there exist optimal solutions \(\mathbf{p}_{g}^{*}\) and \(\mathbf{q}_{g}^{*}\) for primal problem (4) and \(\Gamma^{*}\) for dual problem (5) which satisfy the following KKT conditions:_
1. _stationarity of_ \(\mathcal{L}\) _in primal variables_ \(\mathbf{p}_{g}\) _and_ \(\mathbf{q}_{g}\)_._
2. _constraints (_4a)-(_4c_) hold for primal variables._
3. _constraints (_5a)-(_5d_) hold for dual variables._ \[\boldsymbol{\lambda}^{\top}(\mathbf{M}_{p}\mathbf{p}_{g}+ \mathbf{M}_{q}\mathbf{q}_{g}-\mathbf{G}\boldsymbol{\ell}-\mathbf{h})=0\] \[\boldsymbol{\alpha}_{lb}^{\top}(\underline{\mathbf{p}}_{g}- \mathbf{p}_{g})=0,\quad\boldsymbol{\alpha}_{ub}^{\top}(\mathbf{p}_{g}-\bar{ \mathbf{p}}_{g})=0\] \[\boldsymbol{\beta}_{lb}^{\top}(\underline{\mathbf{q}}_{g}-\mathbf{q }_{g})=0,\quad\boldsymbol{\beta}_{ub}^{\top}(\mathbf{q}_{g}-\bar{\mathbf{q}}_{g})=0.\]
We now introduce the _dual value function_\(\mathcal{D}(\Gamma,\boldsymbol{\ell},\bar{\mathbf{f}})\), which is defined as the objective function of dual problem (5) as a function of _any_ values of dual variables \(\Gamma\), demands \(\boldsymbol{\ell}\), and flow limits \(\bar{\mathbf{f}}\). Assumption 1 allows us to exploit duality theory in order to equate the dual value function to the operator \(\mathcal{J}(\boldsymbol{\ell},\bar{\mathbf{f}})\).
**Remark 2** (Strong duality).: _Since the primal problem (4) is convex in the decision variables, Assumption 1 ensures that strong duality holds [22, Section 5.5.3], i.e. \(\mathcal{J}(\boldsymbol{\ell},\bar{\mathbf{f}})=\mathcal{D}(\Gamma^{*}, \boldsymbol{\ell},\bar{\mathbf{f}})\)._
We now provide a closed form of the _flow marginal costs_ as a function of the optimum dual variables.
**Lemma 4**.: _The flow marginal cost for line flow limits \(\bar{\mathbf{f}}\), denoted as \(C_{\bar{\mathbf{f}}}^{flow}\) is given as_
\[C_{\bar{\mathbf{f}}}^{flow}(j)\stackrel{{\Delta}}{{=}}\nabla_{\bar{ \mathbf{f}}(j)}\mathcal{J}(\boldsymbol{\ell},\bar{\mathbf{f}})=-\mu_{j}^{*}, \quad\forall j\in[n] \tag{6}\]
_Moreover, if \(\mathbf{p}_{g1}^{*}\) is the optimum real generation with flow limits \(\bar{\mathbf{f}}_{1}\) and \(\mathbf{p}_{g2}^{*}\) with \(\bar{\mathbf{f}}_{2}\) where \(\bar{\mathbf{f}}_{1}\succeq\bar{\mathbf{f}}_{2}\), then \(\tilde{\mathbf{c}_{g}}^{\top}\mathbf{p}_{g1}^{*}\leq\tilde{\mathbf{c}_{g}}^{ \top}\mathbf{p}_{g2}^{*}\)._
Proof.: Due to strong duality, it follows that \(\mathcal{J}(\boldsymbol{\ell},\bar{\mathbf{f}})=\mathcal{D}(\Gamma^{*},\boldsymbol{\ell},\bar{\mathbf{f}})\). Taking the derivative of \(\mathcal{D}(\Gamma^{*},\boldsymbol{\ell},\bar{\mathbf{f}})\) with respect to each of the elements in \(\bar{\mathbf{f}}\) recovers the closed form of \(C_{\bar{\mathbf{f}}}^{flow}\). The second part of the result follows from observing that the feasible space of (4) is smaller under \(\bar{\mathbf{f}}_{2}\) than under \(\bar{\mathbf{f}}_{1}\), leading to the same or a higher objective value.
We now present the main result.
**Theorem 1** (Bounding marginal prices).: _Suppose at optimality of problem (4), \(\mathcal{I}(\bar{\mathbf{f}}_{1})\subseteq[n]\) is the nonempty index set collecting the binding constraints in (4c) for flow limits \(\bar{\mathbf{f}}_{1}\). Then, the congested marginal cost of load demand is given as_
\[C_{\bar{\mathbf{f}}_{1}}^{\text{load}}\stackrel{{\Delta}}{{=}} \nabla_{\boldsymbol{\ell}}\mathcal{J}(\boldsymbol{\ell},\bar{\mathbf{f}}_{1})=- \boldsymbol{\lambda}^{*\top}\mathbf{G}-\sum_{i\in\mathcal{I}(\bar{\mathbf{f}}_{1} )}(\theta_{i}^{*}\mathbf{s}_{i}+\phi_{i}^{*}\mathbf{t}_{i})^{\top}. \tag{7}\]
_Furthermore, let \(\mathcal{I}(\bar{\mathbf{f}}_{2})\) be empty for flow limits \(\bar{\mathbf{f}}_{2}\), denoting the case wherein the network is uncongested. We have,_
\[|C_{\bar{\mathbf{f}}_{1}}^{\text{load}}(i)-C_{\bar{\mathbf{f}}_{2}}^{\text{load}}(i)|\leq K_{\bar{\mathbf{f}}_{1}}\sum_{j\in\mathcal{I}(\bar{\mathbf{f}}_{1})}|C_{\bar{\mathbf{f}}_{1}}^{\text{flow}}(j)|,\]
_for both the real and reactive entries \(i\) of the load marginal cost, where \(K_{\bar{\mathbf{f}}_{1}}\) is a constant whose closed form follows from Lemma 2._
Proof.: Here \(\mathbf{z}_{j}^{s}\) and \(\mathbf{z}_{j}^{t}\) are vectors which have value \(1\) at the indices where \(\mathbf{s}_{j}\) and \(\mathbf{t}_{j}\) are nonzero respectively, and \(0\) otherwise. Step \([a]\) is due to the Cauchy-Schwarz inequality, while \([b]\) follows from dual constraint (5c) and from uniformly upper bounding the term \(\left\|[\bar{\mathbf{s}}_{j}^{\top},\bar{\mathbf{t}}_{j}^{\top}]^{\top}\right\|_{2}\) for all \(j\in\mathcal{I}(\bar{\mathbf{f}}_{1})\). Observing that \(\nabla_{\bar{f}_{j}}\mathcal{J}(\boldsymbol{\ell},\bar{\mathbf{f}})=-\mu_{j}^{*}\) and \(\mu_{j}^{*}\geq 0\) concludes the proof.
Lemma 2 may be used to find the closed form of \(K_{\bar{\mathbf{f}}}\) for different \(\bar{\mathbf{f}}\). In the following remark, we highlight some use cases of Theorem 1.
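Before moving to the use cases, the following minimal Python sketch (ours) illustrates how Lemma 4 and the bound of Theorem 1 might be evaluated from solver output. The array `mu_star`, the index set `binding`, and the constant `K` are hypothetical placeholders standing in for the optimal duals of the flow-limit constraints, the set \(\mathcal{I}(\bar{\mathbf{f}}_{1})\), and the constant \(K_{\bar{\mathbf{f}}_{1}}\) obtained via Lemma 2; they are not produced here.

```python
import numpy as np

def flow_marginal_costs(mu_star):
    """Lemma 4: C^flow(j) = -mu_j^* for every line j."""
    return -np.asarray(mu_star, dtype=float)

def congestion_price_bound(mu_star, binding, K):
    """Theorem 1 upper bound: K * sum over binding lines of |C^flow(j)|."""
    c_flow = flow_marginal_costs(mu_star)
    return K * np.abs(c_flow[list(binding)]).sum()

if __name__ == "__main__":
    # Hypothetical solver output: duals of the flow-limit constraints.
    mu_star = [0.0, 0.8, 0.0, 1.3]       # only lines 1 and 3 are congested
    binding = [1, 3]                      # index set I(f_bar_1)
    K = 2.5                               # placeholder constant from Lemma 2
    print(flow_marginal_costs(mu_star))              # [-0. -0.8 -0. -1.3]
    print(congestion_price_bound(mu_star, binding, K))  # 2.5 * (0.8 + 1.3)
```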
**Remark 3** (Use cases of Theorem 1).: _Note that system operators often possess datasets of the form \(\{C_{\bar{\mathbf{f}}_{k}}^{\text{flow}},\bar{\mathbf{f}}_{k},p_{k}\}\), where \(p_{k}\in[0,1]\) is the probability of sample \(k\) being representative of a desired scenario. Such a probability may be derived empirically by observing the frequencies of similar data points in the dataset._
* _The bounds may be used to quickly approximate the worst-case changes in power tariffs as a function of implemented line flow limits_ \(\bar{\mathbf{f}}\) _using existing datasets, without re-running the full OPF problem for different values of_ \(\bar{\mathbf{f}}\)_._
* _The capability to compute the marginal prices_ \(C^{\text{flow}}\) _and_ \(C^{\text{load}}\) _as a function of_ \(\boldsymbol{\ell}\) _and_ \(\bar{\mathbf{f}}\) _may be used to train novel sensitivity informed deep learning architectures_ _[_23_]__. Bounds based on sensitivity of the OPF solution to input parameters can also be used to construct triggering conditions in distributed optimization using the principle of event-triggered communication_ _[_24_]__._
* _If the power demand_ \(\boldsymbol{\ell}\) _is a random variable drawn from a known distribution, then Theorem_ 1 _can provide a method to quantify the statistical properties of demand marginal costs. Such analyses generally fall under the domain of probabilistic (optimal) power flow, which is a topic of significant research due to its possible applications in deep-learning based architectures for power systems_ _[_25_]__._
## IV Simulation Results
In this section, we experimentally validate the bounds derived in Theorem 1 by carrying out simulations on a 141-bus single-phase radial power network derived from case141 in MATPOWER [20], as shown in Figure 2. Due to case141 originally containing only a single generator at the slack bus,
Fig. 3: Results from simultaneously perturbing two branch flow limits from Figure 2. We choose branches 16 and 18 to perturb since branch 18 is downstream to branch 16, making for easier interpretability of results. The first two figures indicate marginal costs of real and reactive power respectively, while the third figure is the proposed upper bound on the quantities derived in both the first and second figure. As we see, the upper bound is valid for both the quantities.
Fig. 2: 141-bus distribution system, adapted from MATPOWER case141. We retain the topology and real/reactive load demands from case141, but randomly add 25 distributed generation buses. The real power generation of each distributed generator is limited to \(p_{g}\in[0,0.0654]pu\), while the reactive power generation follows limits \(q_{g}\in[-0.0270,0.0270]pu\).
we modify the same to add distributed generation. This is done according to the following steps.
* First, 25 buses are randomly selected to host distributed generation. Then, per-generator bounds are chosen such that the total upper bound of real power generation approximately equals the total real power demand at the load buses. The real power costs of the distributed generator buses are chosen uniformly-at-random in the range \([0,1]\), while the slack bus generation is chosen to be more expensive than any distributed generator.
* Following introduction of distributed generators, we run the OPF without any branch flow limits to generate optimal generation and branch flow setpoints. This is done using CVX [26] and MATLAB. Once the optimal flow setpoints are generated, the branch flow limit of the line upstream to any distributed generator is set to its corresponding flow setpoint.
* Two branches, _viz._ 16 and 18, are selected, and their branch flow limits are simultaneously reduced, down to 75% of their original values. For each step in the reduction, the perturbation of the real and reactive marginal cost of load demand at bus 20 (a load bus) is recorded, and presented in Figure 3. The proposed upper bound is also presented in the same figure.
As seen in Figure 3, the proposed bounds are respected by the actual perturbation in marginal costs. Here, both the marginal cost perturbations and the upper bound are derived from the formulations presented in Theorem 1. It should be noted that the results in Theorem 1 use optimal dual variables in their formulation, which are returned by most modern optimization solvers. Another interesting observation in Figure 3 is that the marginal cost of reactive power is within numerical error of zero. This is because the objective function of the OPF (3) does not penalize generation/consumption of reactive power. Were an objective term concerning reactive power to be added to the OPF, we would observe non-zero marginal costs for reactive power alongside real power demand.
## V Conclusion
In this paper, we formulated the LDF-OPF problem and derived upper bounds on the marginal prices of real and reactive power demand for all load buses. We also presented approaches showing how the main result in Theorem 1 can be utilized for OPF-based planning problems. Future work will build on these results to accelerate the solve times of the LDF-OPF.
## Acknowledgements
The authors would like to thank Professor Yihsu Chen at the University of California, Santa Cruz for helpful discussions on this work and its future directions.
|
2309.08389 | Augmented quantization: a general approach to mixture models | The investigation of mixture models is key to understanding and visualizing the
distribution of multivariate data. Most mixture model approaches are based on
likelihoods and are not adapted to distributions with finite support or without
a well-defined density function. This study proposes the Augmented Quantization
method, which is a reformulation of the classical quantization problem but
which uses the p-Wasserstein distance. This metric can be computed in very
general distribution spaces, in particular with varying supports. The
clustering interpretation of quantization is revisited in a more general
framework. The performance of Augmented Quantization is first demonstrated
through analytical toy problems. Subsequently, it is applied to a practical
case study involving river flooding, wherein mixtures of Dirac and Uniform
distributions are built in the input space, enabling the identification of the
most influential variables. | Charlie Sire, Didier Rullière, Rodolphe Le Riche, Jérémy Rohmer, Yann Richet, Lucie Pheulpin | 2023-09-15T13:31:27Z | http://arxiv.org/abs/2309.08389v3 | # Augmented quantization: a general approach to mixture models
###### Abstract
The estimation of mixture models is a key to understanding and visualizing the distribution of multivariate data. Most approaches to learning mixture models involve likelihoods and are not adapted to distributions with finite support or without a well-defined density function.
This study proposes the Augmented Quantization method, which is a reformulation of the classical quantization problem but which uses the \(\mathbf{p}\)-Wasserstein distance. This metric can be computed in very general distribution spaces, in particular with varying supports. The clustering interpretation of quantization is revisited with distances between distributions. The Augmented Quantization algorithm can handle mixtures of discrete and continuous measures, with finite or infinite supports. Each step of the algorithm is supported by local minimizations of the generalized quantization error. The performance of Augmented Quantization is first demonstrated through analytical toy problems. Subsequently, it is applied to a practical case study involving river flooding, wherein mixtures of Dirac and uniform distributions are built in the input space, enabling the identification of the most influential variables.
**Keywords:** Mixture models, Quantization, Wasserstein distance, Flooding
## 1 Introduction
Quantization methods classically provide discrete representations of continuous distributions (Gray et al., 1998). Quantization is a key component of digitalization in signal processing (Pages, 2014) and, more generally, it can be employed as a clustering technique to approximate continuous random variables with a Dirac mixture (Pollard, 1982). This representation is useful when describing continuous phenomena with a finite number of prototype elements. Discrete approximations to random variables are classically obtained by minimizing a Maximum Mean Discrepancy (Teymur et al., 2020) or as centroids of Voronoi cells in the K-means clustering (Levrard, 2018).
However, when approximating a random variable, it may be desirable to come up with a
representation that is more complex than a discrete distribution. Mixture models are relevant in this case. They aim to identify subpopulations in a sample that arise from a common distribution but with different parameters (Aitkin et al., 1985). Gaussian mixture models are probably the most popular and highlight normally distributed subgroups. Mixture models, which can be seen as clustering methods, expand conventional quantization by allowing the use of any distribution in place of the Dirac measure. The mixture is learned with the Expectation-Maximization (EM) algorithm (Dellaert, 2003), which performs a probabilistic assignment of each point to a cluster by the computation of likelihoods, followed by the estimation of the cluster parameters by likelihood maximization. In certain applications, it is necessary to study specific mixes of distributions. For example, in the flooding case study (cf. Section 7), where the objective is to gain insights into the influence of variables in relation to specific flood spatial patterns, we investigate subdistributions that consist either of a Dirac measure at a specific value or of a uniform distribution characterized by its support. Here, larger supports denote less influential variables.
Learning very general distributions is not easy. Distributions such as the uniform or the Dirac measure assign no mass outside their supports, and processing them through likelihoods and similar direct functionals of their density will fail. Appendix A illustrates this problem with uniform distributions whose supports must include all the points, even the obvious outliers. The density is not even defined when it comes to Dirac mixtures or to distributions with singular components. One possible remedy is to work with zero-inflated beta distributions (Burch et al., 2020), but they remain less general and harder to interpret than the well-known uniform, Dirac, or Gaussian distributions.
Thus, one objective of this article is to establish a framework for learning mixtures of general distributions. Our approach is based on computing the \(p\)-Wasserstein distance between two probability measures, \(\mu\) and \(\nu\), on a space \(\mathcal{X}\), without strong restrictions on them (Villani, 2016). In particular, the \(p\)-Wasserstein distance is defined between measures that do not have the same support or when one is discrete and the other continuous. The \(p\)-Wasserstein distance is defined as follows:
\[\mathcal{W}_{p}(\mu,\nu)=\inf_{\pi\in\mathbf{\Pi}(\mu,\nu)}\left(\int_{\mathcal{X}\times\mathcal{X}}\|x-x^{\prime}\|^{p}\pi(dx,dx^{\prime})\right)^{\frac{1}{p}},\]
where \(\mathbf{\Pi}(\mu,\nu)\) is the set of all the joint probability measures on \(\mathcal{X}\times\mathcal{X}\) whose marginals are \(\mu\) and \(\nu\) on the first and second factors, respectively. This metric represents the optimal transport cost between the two measures and appears highly relevant in our context of approximation of a set of points by a general mixture.
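In one dimension and for two empirical measures with the same number of equally weighted atoms, this optimal transport cost is obtained by matching sorted samples. The following minimal numpy sketch (ours, not the implementation used in the paper) computes \(\mathcal{W}_{p}\) in that special case.

```python
import numpy as np

def wasserstein_p_1d(x, y, p=2):
    """p-Wasserstein distance between two 1D empirical measures with the
    same number of equally weighted atoms: sort both samples and match
    order statistics (the optimal coupling in one dimension)."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "equal sample sizes assumed in this sketch"
    return (np.mean(np.abs(x - y) ** p)) ** (1.0 / p)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.uniform(0.0, 1.0, 500)
    b = rng.uniform(0.3, 0.6, 500)
    print(wasserstein_p_1d(a, b, p=2))  # transport cost between the samples
```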
However, finding the best mixture of \(\ell\) different distributions (for a given \(\ell\in\mathbb{N}\)) is not straightforward, as calculating the weights and parameters of the distributions implies minimizing \(p\)-Wasserstein distances, which leads to non-convex optimization problems (Merigot et al., 2021). In addition, learning mixtures of distributions generates problems with many variables. For instance, a mixture of 4 Dirac measures in 4 dimensions is described by 20 variables.
Clustering approaches like K-means are popular in such high-dimensional situations. They reduce the size of the optimization problem by decomposing it cluster by cluster. Clustering also facilitates the interpretation of the results. Our method, which we call Augmented Quantization (AQ), is based on the classical K-means, generalized to handle various types of distributions using the \(p\)-Wasserstein distance.
The paper is structured as follows. The problem is formulated and a path towards Augmented Quantization is sketched in Section 2. Elements of analysis of Augmented Quantization are given in Section 3 followed, in Section 4, by a description of the implementation steps of the algorithm. In Section 5, the method is applied to several toy problems and compared to existing approaches, while in Section 7, it is applied to a study of real floodings. Finally, Section 8 summarizes the main results and proposes extensions to the method.
## 2 Augmented quantization
### Problem formulation
The objective of the study conducted here is to build a very general mixture model that approximates the (unknown) underlying distribution of a (known) sample \((x_{i})_{i=1}^{n}\in\mathcal{X}^{n}\) with \(\mathcal{X}\subset\mathbb{R}^{m}\).
The components of the mixture, called representatives, will be seen as probability measures on \(\mathcal{X}\). They belong to a given family of probability measures which is not necessarily parametric. More precisely, we consider \(\ell\in\mathbb{N}^{*}\) representatives named through their tag taken in \(\mathcal{J}=\{1,\ldots,\ell\}\). Let \(\mathcal{R}\) be a family of probability measures; the objective is to approximate the distribution of \((x_{i})_{i=1}^{n}\) by the mixture \(R_{J}\) such that
* \(R_{J}=\sum_{j\in\mathcal{J}}\mathbb{P}(J=j)R_{j}\),
* \(R_{j}\in\mathcal{R},\quad j\in\mathcal{J}\),
* \(J\) a discrete random variable independent of \((R_{1},\ldots,R_{\ell})\) with weights \(p_{j}=P(J=j)\), \(j\in\mathcal{J}\).
It is important to note that this representation is not necessarily unique; the identifiability of the problem depends on the family \(\mathcal{R}\) (Yakowitz et al. 1968). For instance, a mixture of the measures associated to \(\mathcal{U}(0.5,1)\) and \(\mathcal{U}(0,1)\) with weights \(0.5\) on one hand, and a mixture of the measures associated to \(\mathcal{U}(0,0.5)\) and \(\mathcal{U}(0.5,1)\) with respective weights \(0.25\) and \(0.75\) on the other hand, are identical.
Augmented quantization, as the name indicates, generalizes the traditional quantization method and the accompanying K-means clustering. Let us first recall the basics of K-means. In K-means, the representatives are Dirac measures located at \(\gamma\), \(\mathcal{R}=\{\delta_{\gamma},\gamma\in\mathcal{X}\}\). For a given set of representatives \(\mathbf{R}=(R_{1},\ldots,R_{\ell})=(\delta_{\gamma_{1}},\ldots,\delta_{\gamma_{ \ell}})\in\mathcal{R}^{\ell}\), the associated clusters are \(\mathbf{C}=(C_{1},\ldots,C_{\ell})\) with \(C_{j}=\{x\in(x_{i})_{i=1}^{n}:j=\underset{i\in\mathcal{J}}{\arg\min}\|x-\gamma _{i}\|\},j\in\mathcal{J}\). The objective is to minimize the _quantization error_,
\[\mathcal{E}_{p}(\gamma_{1},\ldots,\gamma_{\ell}):=\left(\frac{1}{n}\sum_{i=1}^{n}\Big{\lVert} x_{i}-\underset{\gamma\in\Gamma_{\ell}}{\arg\min}\lVert x_{i}-\gamma\rVert\Big{\rVert}^{p}\right)^{\frac{1}{p}}, \tag{1}\]
where \(\Gamma_{\ell}:=\{\gamma_{1},\ldots,\gamma_{\ell}\}\).
We now generalize the quantization error by replacing the Euclidean distance with a \(p\)-Wasserstein metric which, in turn, allows us to rewrite the quantization error for any representatives \(\mathbf{R}\) and associated clusters \(\mathbf{C}\):
\[\mathcal{E}_{p}(\mathbf{C},\mathbf{R}):=\left(\sum_{j=1}^{\ell}\frac{\operatorname{ card}\left(C_{j}\right)}{n}\mathcal{W}_{p}(C_{j},R_{j})^{p}\right)^{\frac{1}{p}}, \tag{2}\]
where \(\mathcal{W}_{p}(C_{j},R_{j})\) is the \(p\)-Wasserstein distance (Rueschendorf 1985) between the empirical probability measure associated to \(C_{j}\) and the measure \(R_{j}\).
The traditional quantization error (1) is recovered from the general error (2) by limiting the representatives to Dirac measures and defining the clusters as nearest neighbors to the Dirac locations \(\gamma_{i}\,\ i=1,\ldots,\ell\). The general clusters \(C_{j}\,\ j=1,\ldots,\ell\) of Equation (2) form a partition of the samples \((x_{i})_{i=1}^{n}\). We will soon see how to define them.
### From K-means to Augmented Quantization
Lloyd's algorithm is one of the most popular implementations of K-means quantization (Du et al. 2006). The method can be written as follows:
```
Input:\((\gamma_{1},\ldots,\gamma_{\ell})\in\mathcal{X}^{\ell}\), sample \((x_{i})_{i=1}^{n}\) while stopping criterion not met do FindC : update the clusters, \[\forall j\in\mathcal{J}\,\ C_{j}=\{x\in(x_{i})_{i=1}^{n}\,:\,j= \underset{j^{\prime}\in\mathcal{J}}{\arg\min}\|x-\gamma_{j^{\prime}}\|\}\] FindR : update the representatives, \[\forall j\in\mathcal{J}\,\ \gamma_{j}=\tfrac{1}{\operatorname{card}\left(C_{j} \right)}\sum_{x\in C_{j}}x\quad,\quad R_{j}=\delta_{\gamma_{j}}\] endwhile Output:
* the membership discrete random variable \(J\) with \(\mathbb{P}(J=j)=\frac{\operatorname{card}\left(C_{j}\right)}{n},\quad j\in \mathcal{J}\)
* the mixture \(R_{J}=\sum_{j\in\mathcal{J}}\mathbb{P}(J=j)\delta_{\gamma_{j}}\)
```
**Algorithm 1** Lloyd's algorithm
Algorithm 1 is not the standard description of Lloyd's algorithm but an interpretation paving the way towards augmented quantization and the estimation of mixture distributions: the centroids \((\gamma_{1},\ldots,\gamma_{\ell})\) are seen as representatives \((R_{1},\ldots,R_{\ell})\) through the Dirac measures \(\delta_{\gamma}\); the produced Voronoi cells are translated into a mixture distribution.
At each iteration, new clusters \(\boldsymbol{C}=(C_{1},\ldots,C_{\ell})\) are determined from the previously updated representatives (centroids in this case), and then new representatives \(\boldsymbol{R}=(R_{1},\ldots,R_{\ell})\) are computed from the new clusters. These operations reduce the quantization error. We refer to these two steps as \(FindC\) and \(FindR\). The stopping criterion is typically a sufficiently small difference between the calculated representatives and those from the previous iteration.
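As a point of reference for the generalization introduced next, a minimal numpy transcription of Algorithm 1 could read as follows. This is our own sketch, not the reference implementation; the stopping criterion used here is a convergence check on the centroids within a fixed iteration budget.

```python
import numpy as np

def lloyd(x, centers, n_iter=100):
    """Algorithm 1: alternate FindC (nearest-centroid assignment) and
    FindR (cluster means), starting from the given Dirac locations.

    x: (n, m) sample, centers: (ell, m) initial centroids."""
    x, centers = np.asarray(x, float), np.array(centers, dtype=float)
    for _ in range(n_iter):
        # FindC: assign each point to the closest representative.
        labels = np.linalg.norm(x[:, None, :] - centers[None, :, :],
                                axis=2).argmin(axis=1)
        # FindR: move each representative to the mean of its cluster.
        new_centers = np.array([x[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(len(centers))])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    weights = np.bincount(labels, minlength=len(centers)) / len(x)
    return centers, labels, weights   # Dirac locations and P(J = j)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sample = rng.uniform(size=(200, 2))
    gammas, labels, w = lloyd(sample, sample[:3])   # 3 representatives
    print(gammas, w)
```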
The usual quantization is augmented by allowing representatives that belong to a predefined set of probability measures \(\mathcal{R}\). In Augmented Quantization, \(\mathcal{R}\) can still contain Dirac measures but it will typically include other measures. Each iteration alternates between updating the clusters \(\boldsymbol{C}\) during \(FindC\) and updating the mixture defined by \((\boldsymbol{R},J)\) during \(FindR\). \(FindC\) is now a function which partitions \((x_{i})_{i=1}^{n}\) into clusters based on a set of representatives and the random membership variable \(J\). \(FindR\) is a function which generates a set of representatives from a partition of \((x_{i})_{i=1}^{n}\). Algorithm 2 is the skeleton of the Augmented Quantization algorithm.
Lloyd's algorithm can be seen as a descent algorithm applied to the minimization of the quantization error (Equation (1)). It converges to a stationary point which may not even be a local optimum (Selim et al. 1984). Although Lloyd's algorithm generally converges to stationary points that have a satisfactory quantization error, Appendix B illustrates that its generalization to continuous distributions does not lead to a quantization error sufficiently close to the global optimum. To overcome this limitation, an additional mechanism for exploring the space of mixture parameters is needed. To this aim, we propose a perturbation of the clusters called \(Perturb()\), which takes place between \(FindC\) and \(FindR\).
```
Input:\(\boldsymbol{R}=(R_{1},\ldots,R_{\ell})\in\mathcal{R}^{\ell}\), samples \((x_{i})_{i=1}^{n}\) \(J\in\mathcal{J}\) r.v. with \(\mathbb{P}(J=j)=\frac{1}{\ell}\) \((R^{\star},C^{\star},\mathcal{E}^{\star})\leftarrow(\emptyset,\emptyset,+\infty)\) while stopping criterion not met do Update clusters: \(\boldsymbol{C}\gets FindC(\boldsymbol{R},J)\) Perturb clusters: \(\boldsymbol{C}\gets Perturb(\boldsymbol{C})\) Update mixture: \(\boldsymbol{R}\gets FindR(\boldsymbol{C})\), \(J\) r.v. with \(\mathbb{P}(J=j)=\frac{\mathrm{card}\left(C_{j}\right)}{n}\), \(j\in\mathcal{J}\) Update the best configuration: if\(\mathcal{E}_{p}(\boldsymbol{C},\boldsymbol{R})<\mathcal{E}^{\star}\)then\(\mathcal{E}^{\star}\leftarrow\mathcal{E}_{p}(\boldsymbol{C},\boldsymbol{R})\), \(\boldsymbol{C}^{\star}\gets\boldsymbol{C}\), \(\boldsymbol{R}^{\star}\leftarrow\boldsymbol{R}\), \(J^{\star}\gets J\) endwhile Output:
* the membership discrete random variable \(J^{\star}\) with \(\mathbb{P}(J^{\star}=j)=\frac{\mathrm{card}\left(C_{j}^{\star}\right)}{n},\;j \in\mathcal{J}\)
* the mixture \(R_{J^{\star}}^{\star}\).
```
**Algorithm 2** Augmented Quantization algorithm
## 3 Definition and properties of quantization errors
Before describing in detail the \(FindC\), \(Perturb\) and \(FindR\) steps, some theoretical elements are provided that will be useful to explain our implementation choices.
### Quantization and global errors
Let \(\boldsymbol{R}=(R_{1},\ldots,R_{\ell})\) be \(\ell\) probability measures and \(\boldsymbol{C}=(C_{1},\ldots,C_{\ell})\) be \(\ell\) disjoint clusters of points in \(\mathcal{X}\subset\mathbb{R}^{m}\). Let us denote \(n_{j}=\mathrm{card}\left(C_{j}\right)\) for \(j\in\mathcal{J}\), and \(n=\sum_{j=1}^{\ell}n_{j}\).
The _quantization error_ between \(\boldsymbol{C}\) and \(\boldsymbol{R}\) is defined by
\[\mathcal{E}_{p}(\boldsymbol{C},\boldsymbol{R}):=\left(\sum_{j=1}^{\ell}\frac{n_{j}}{n}\mathcal{W}_{p}(C_{j},R_{j})^{p}\right)^{1/p}. \tag{3}\]
It should not be mistaken with the _global error_ between \(\mathbf{C}\) and \(\mathbf{R}\), which is defined by
\[\epsilon_{p}(\mathbf{C},\mathbf{R}):=\mathcal{W}_{p}\left(\bigcup_{j=1}^{\ell}C_{j},R_{J }\right), \tag{4}\]
where \(J\in\mathcal{J}\) is a random variable such that \(\mathbb{P}(J=j)=\frac{n_{j}}{n}\). The quantization error aggregates the local errors between the clusters and the representatives, while the global error characterizes the overall mixture.
The quantization error is a natural measure of clustering performance and clustering is a way to decompose the minimization of the global error. Such a decomposition is justified by Proposition 1, which shows that a low quantization error guarantees a low global error.
**Proposition 1** (Global and quantization errors).: _The global error between a clustering \(\mathbf{C}\) and a set of representatives \(\mathbf{R}\) is lower than the quantization error between them:_
\[\epsilon_{p}(\mathbf{C},\mathbf{R}) =\mathcal{W}_{p}\left(\bigcup_{j=1}^{\ell}C_{j},R_{J}\right)\] \[\leq\left(\sum_{j=1}^{\ell}\frac{n_{j}}{n}\mathcal{W}_{p}(C_{j},R_{j})^{p}\right)^{1/p}=\mathcal{E}_{p}(\mathbf{C},\mathbf{R})\.\]
The proof is provided in Appendix C.
### Clustering error
For a given clustering \(\mathbf{C}=(C_{1},\ldots,C_{\ell})\), the optimal representatives can be calculated individually for each cluster. This optimization makes it possible to introduce the notion of quantization error of a clustering.
**Proposition 2** (Quantization error of a clustering).: _Let \(\mathbf{C}=(C_{1},\ldots,C_{\ell})\) be \(\ell\) clusters of points in \(\mathcal{X}\) and the associated optimal representatives_
\[\mathbf{R}^{\star}(\mathbf{C}):=\operatorname*{arg\,min}_{\mathbf{R}\in\mathcal{R}^{\ell }}\mathcal{E}_{p}(\mathbf{C},\mathbf{R})\.\]
_Then:_
1. _The optimal representatives can be optimized independently for each cluster,_ \[\mathbf{R}^{\star}(\mathbf{C})=(R_{1}^{\star}(C_{1}),\ldots,R_{\ell}^{ \star}(C_{\ell}))\] \[\text{where }R_{j}^{\star}(C_{j}):=\operatorname*{arg\,min}_{r\in \mathcal{R}}\mathcal{W}_{p}(C_{j},r),\,j\in\mathcal{J}.\] _This results trivially from the fact that_ \(\mathcal{E}_{p}\) _is a monotonic transformation of a sum of independent components._
2. _The local error of a cluster_ \(C_{j}\) _is defined as the_ \(p\)_-Wasserstein distance between_ \(C_{j}\) _and its optimal representative_ \(R_{j}^{\star}(C_{j})\)_,_ \[w_{p}(C_{j}):=\mathcal{W}_{p}(C_{j},R_{j}^{\star}(C_{j}))=\min_{r\in\mathcal{R}}\mathcal{W}_{p}(C_{j},r)\.\]
3. _The quantization error of a clustering_ \(\mathbf{C}\) _is the quantization error between_ \(\mathbf{C}\) _and_ \(\mathbf{R}^{\star}(\mathbf{C})\)_, its associated optimal representatives,_ \[\mathcal{E}_{p}(\mathbf{C}):=\mathcal{E}_{p}(\mathbf{C},\mathbf{R}^{\star}(\mathbf{C}))=\min_{ \mathbf{R}\in\mathcal{R}^{\ell}}\mathcal{E}_{p}(\mathbf{C},\mathbf{R})\.\]
4. _The global error associated to a clustering_ \(\mathbf{C}\) _is lower than its quantization error,_ \[\epsilon_{p}(\mathbf{C})\leq\mathcal{E}_{p}(\mathbf{C})\] _with_ \(\epsilon_{p}(\mathbf{C})=\epsilon_{p}(\mathbf{C},\mathbf{R}^{\star}(\mathbf{C}))=\mathcal{W}_ {p}(\bigcup_{j=1}^{\ell}C_{j},R_{J}^{\star}(C_{j}))\) _and_ \(\mathbb{P}(J=j)=\frac{n_{j}}{n},\,j\in\mathcal{J}\)_. This follows directly from Proposition_ 1_._
The above clustering error is based on a \(p\)-Wasserstein distance minimization in the space of probability measures \(\mathcal{R}\). This problem is further addressed in Section 4.3 when seeking representatives from given clusters.
## 4 Algorithm steps
The steps of the Augmented Quantization algorithm can now be presented in details. We will explain how they contribute to reducing the quantization error which was discussed in Section 3.
### Finding clusters from representatives
At this step, a mixture distribution \(R_{J}\) is given through its \(\ell\) representatives \(\mathbf{R}=(R_{1},\ldots,R_{\ell})\) and the associated membership random variable \(J\) such that \(P(J=j)=p_{j}\,\ j=1,\ldots,\ell\). Clustering
is performed by, first, creating \(N\) samples from the mixture distribution: the membership variable \(j\) is sampled from \(J\), and each point is drawn from \(R_{j}\). Second, the data points are assigned to the cluster of the closest of the \(N\) samples. Before returning to the algorithm, we provide a theoretical justification for it.
Let \((X_{i})_{i=1}^{n}\in\mathcal{X}^{n}\) be a random sample of the above \(R_{J}\) mixture distribution. \((J_{i})_{i=1}^{N}\) are i.i.d. samples with the same distribution as \(J\), and \((Y_{i})_{i=1}^{N}\) i.i.d. samples with \(Y_{i}\sim R_{J_{i}}\), \(i=1,\ldots,N\). We define the following clustering,
\[\boldsymbol{C}^{\star}(R,J,n,N):=\left(C_{1}^{\star}(R,J,n,N),\ldots,C_{\ell} ^{\star}(R,J,n,N)\right)\]
with
\[C_{j}^{\star}(R,J,n,N):=\{X_{k}\;s.t.\;J_{I_{N}(X_{k})}=j\;,\;1\leq k\leq n\},\]
and \(\quad I_{N}(x):=\underset{i=1,\ldots,N}{\arg\min}||\;x-Y_{i}\;||\;.\)
We work with general distributions that are combinations of continuous and discrete distributions,
\(\mathcal{R}_{s}:=\{\beta_{c}R_{\mathrm{c}}+\beta_{\mathrm{disc}}R_{\mathrm{ disc}},R_{\mathrm{c}}\in\mathcal{R}_{\mathrm{c}},R_{\mathrm{disc}}\in \mathcal{R}_{\mathrm{disc}},\beta_{\mathrm{c}}+\beta_{\mathrm{disc}}=1\}.\)
\(\mathcal{R}_{\mathrm{c}}\) and \(\mathcal{R}_{\mathrm{disc}}\) are the set of measures associated to almost everywhere continuous distributions with finite support, and, the set of measures associated to discrete distributions with finite support, respectively.
In the family of probability measures \(R_{s}\), the above clustering is asymptotically consistent: the _quantization error_ associated to this clustering is expected to tend to zero as \(n\) and \(N\) increase.
**Proposition 3**.: _If \(R_{j}\in\mathcal{R}_{s}\) for \(j\in\mathcal{J}\) and \(X_{i}\sim R_{J_{i}^{X}}\), \(i=1,\ldots,n\) with \((J_{i}^{X})_{i=1}^{n}\) i.i.d. sample with same distribution as \(J\), then_
\[\underset{n,N\rightarrow+\infty}{lim}\mathbb{E}\left(\mathcal{W}_{p}(C_{j}^{ \star}(R,J,n,N),R_{j})\right)=0,\quad j\in\mathcal{J}.\]
_As a consequence,_
\[\underset{n,N\rightarrow+\infty}{lim}\mathbb{E}\left(\mathcal{E}_{p}( \boldsymbol{C}^{\star}(R,J,n,N))\right)=0.\]
The proof along with further details about \(\mathcal{R}_{s}\) are given in Appendix D.
The objective of the \(FindC\) procedure is to associate a partition of a sample \((x_{i})_{i=1}^{n}\) to the representatives \(\boldsymbol{R}\) with probabilistic weights defined by the random variable \(J\). It is described in Algorithm 3.
**Input:** Sample \((x_{i})_{i=1}^{n}\), \(\boldsymbol{R}=(R_{1},\ldots,R_{\ell})\), \(N\), \(J\) r.v. \(\in\mathcal{J}\)
\(C_{j}=\emptyset\), \(j\in\mathcal{J}\)
\((j_{i})_{i=1}^{N}\) N independent realizations of \(J\)
\((y_{i})_{i=1}^{N}\) N independent realizations, \(y_{i}\) sampled with associated measure \(R_{j_{i}}\)
**for \(x\in(x_{i})_{i=1}^{n}\)do**
\(I(x)\leftarrow\underset{i=1,\ldots,N}{\arg\min}||\;x-y_{i}\;||\)
\(C_{j_{I(x)}}\gets C_{j_{I(x)}}\cup x\)
**endfor**
**Output:** Partition \(\boldsymbol{C}=(C_{1},\ldots,C_{\ell})\)
**Algorithm 3**\(FindC\)
The \(FindC\) algorithm is consistent in the sense of the above Proposition 3.
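A minimal Python sketch of Algorithm 3 is given below. The representatives are passed as sampling functions and the weights as a probability vector; this interface, as well as the function names, are choices of ours made for the illustration, not part of the paper's implementation.

```python
import numpy as np

def find_c(x, samplers, weights, N, rng=None):
    """Algorithm 3 (FindC): draw N points from the mixture, then assign each
    data point to the cluster of its nearest mixture sample.

    x: (n, m) data; samplers: list of callables rng -> point in R^m;
    weights: probabilities P(J = j)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_2d(np.asarray(x, float))
    # Sample the memberships j_i and the points y_i ~ R_{j_i}.
    j = rng.choice(len(samplers), size=N, p=weights)
    y = np.array([samplers[ji](rng) for ji in j], dtype=float)
    # Nearest mixture sample for every data point.
    nearest = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2).argmin(axis=1)
    labels = j[nearest]
    return [x[labels == k] for k in range(len(samplers))]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    data = rng.uniform(size=(300, 1))
    reps = [lambda g: g.uniform(0.0, 1.0, size=1),   # R_1 = U(0, 1)
            lambda g: g.uniform(0.3, 0.6, size=1)]   # R_2 = U(0.3, 0.6)
    clusters = find_c(data, reps, weights=[1 / 3, 2 / 3], N=1000, rng=rng)
    print([len(c) for c in clusters])
```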
### Perturb clusters
Once clusters are associated to representatives, the \(Perturb\) step is required to explore the space of partitions of \((x_{i})_{i=1}^{n}\). Appendix B illustrates why such a step is important through the example of a mixture of two uniforms that, without perturbation, cannot be identified from a given, seemingly reasonable, starting clustering. Thus, a relevant cluster perturbation should be sufficiently exploratory. To increase convergence speed, we make it greedy by imposing a systematic decrease in quantization error.
**Proposition 4** (Greedy cluster perturbation).: _Let \(\boldsymbol{C}=(C_{1},\ldots,C_{\ell})\) be a clustering, and \(G(\boldsymbol{C})\) a set of perturbations of this clustering such that \(\boldsymbol{C}\subset G(\boldsymbol{C})\)._
_The greedy perturbation, \(Perturb()\), is a function that yields a new clustering \(\boldsymbol{C}^{\star}=Perturb(\boldsymbol{C})\) through \(\boldsymbol{C}^{\star}:=\underset{\boldsymbol{C}^{\prime}\in G(\boldsymbol{C})} {\operatorname{arg\min}}\mathcal{E}_{p}(\boldsymbol{C}^{\prime})\)._
_Trivially, \(\mathcal{E}_{p}(\boldsymbol{C}^{\star})\leq\mathcal{E}_{p}(\boldsymbol{C})\)._
The quantization error decrease comes from the inclusion of the current clustering in the set of perturbations. Here, we choose to perturb the clusters through the identification of the points contributing the most to the quantization error.
The need for a perturbation has also been described in classical K-means methods where it
takes the form of a tuning of the initial centroids when restarting the algorithm (Capo et al., 2022).
Our cluster perturbation consists of, first, identifying the elements to move (the \(split\) phase) and, second, reassigning them to other clusters (the \(merge\) phase).
#### \(split\) phase
The clusters with the \(\ell_{\text{bin}}\) highest local errors \(w_{p}\) will be split. Their indices form the list indexes\({}_{\text{bin}}\). During the \(split\) phase, for each cluster \(C_{j},j\in\text{ indexes}_{\text{bin}}\), a proportion \(p_{\text{bin}}\) of its points is sequentially removed and put in a sister "bin" cluster. At each step, the point \(x^{\star}\) that is moved is the one for which the clustering composed of the cluster after removal and the bin cluster has the lowest quantization error.
The values of \(\ell_{\text{bin}}\) and \(p_{\text{bin}}\) determine the magnitude of the perturbations. Their values will be discussed after the \(merge\) procedure is presented. Algorithm 4 sums up the \(split\) procedure.
```
Input: a sample \((x_{i})_{i=1}^{n}\), a partition \(\mathbf{C}=(C_{1},\dots,C_{\ell})\), \(p_{\text{bin}}\in[0,1]\), indexes\({}_{\text{bin}}=\{j_{1},\dots,j_{\ell_{\text{bin}}}\}\) for\(j\in\text{ indexes}_{\text{bin}}\)do \(C_{j}^{\text{bin}}\leftarrow\emptyset\) \(n_{\text{bin}}\gets p_{\text{bin}}\text{card}\left(C_{j}\right)\) while\(\text{card}\left(C_{j}^{\text{bin}}\right)<n_{\text{bin}}\)do \(x^{\star}\leftarrow\arg\min_{x\in C_{j}}\mathcal{E}_{p}(\mathbf{C}_{j}^{split}(x))\) where \(\mathbf{C}_{j}^{split}(x)=(C_{j}\setminus x,C_{j}^{\text{bin}}\cup x)\) \(C_{j}^{\text{bin}}\gets C_{j}^{\text{bin}}\cup x^{\star}\), \(C_{j}\gets C_{j}\setminus x^{\star}\) endwhile endfor Output: a partition \(\hat{\mathbf{C}}=split(\mathbf{C})=(C_{1},\dots,C_{\ell},C_{j_{1}}^{\text{bin}},\dots, C_{j_{\ell_{\text{bin}}}}^{\text{bin}})\)
```
**Algorithm 4**\(split\) procedure
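The sketch below mimics Algorithm 4 for a single cluster. It is our own illustration: it assumes a `local_error` callable returning \(w_{p}(C)\), and for the demonstration we use the Dirac case, where the optimal representative is the centroid and the local error is the root mean squared distance to it.

```python
import numpy as np

def dirac_local_error(cluster):
    """w_2(C) for Dirac representatives: the best location is the centroid,
    and the local error is the root mean squared distance to it."""
    c = np.atleast_2d(cluster)
    return float(np.sqrt(np.mean(np.sum((c - c.mean(axis=0)) ** 2, axis=1))))

def quantization_error(clusters, local_error, p=2):
    """Quantization error of a clustering (Proposition 2, item 3)."""
    n = sum(len(c) for c in clusters)
    return sum(len(c) / n * local_error(c) ** p for c in clusters if len(c)) ** (1 / p)

def split_one_cluster(cluster, p_bin, local_error, p=2):
    """Algorithm 4 for one cluster: points are moved one at a time to a 'bin'
    cluster, each time choosing the point whose removal gives the lowest
    quantization error of the (cluster, bin) pair."""
    cluster = np.atleast_2d(np.asarray(cluster, float))
    keep, binned = list(range(len(cluster))), []
    n_bin = int(p_bin * len(cluster))
    while len(binned) < n_bin:
        candidates = [quantization_error(
            [cluster[[j for j in keep if j != i]], cluster[binned + [i]]],
            local_error, p) for i in keep]
        i_star = keep[int(np.argmin(candidates))]
        keep.remove(i_star)
        binned.append(i_star)
    return cluster[keep], cluster[binned]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    c = np.concatenate([rng.normal(0.0, 0.1, (40, 1)), rng.normal(2.0, 0.1, (10, 1))])
    kept, binned = split_one_cluster(c, p_bin=0.2, local_error=dirac_local_error)
    print(np.sort(binned.ravel()))   # mostly the points near 2.0
```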
#### \(merge\) phase
The \(merge\) procedure goes back to \(\ell\) clusters by combining some of the \(\ell+\ell_{\text{bin}}\) clusters together. The approach here simply consists in testing all the possible mergings to go from \(\ell+\ell_{\text{bin}}\) to \(\ell\) groups, and in keeping the one with the lowest quantization error. The full algorithm is provided in Appendix E.
If \(\mathscr{P}\) is the set of all partitions of \(\{1,\dots,\ell+\ell_{\text{bin}}\}\) into \(\ell\) groups, the number of possible mergings, which is the cardinality of \(\mathscr{P}\), is equal to the Stirling number of the second kind \(S(\ell+\ell_{\text{bin}},\ell)\) (Chan et al., 2009). This number is reasonable if \(\ell_{\text{bin}}=2\). For instance, \(S(4,2)=7\), \(S(5,3)=25\), \(S(6,4)=65\), \(S(7,5)=140\). Thus, we select that value of \(\ell_{\text{bin}}\) for the applications.
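The Stirling numbers quoted above can be reproduced with the classical recurrence \(S(n,k)=k\,S(n-1,k)+S(n-1,k-1)\); the short Python sketch below (ours) checks the counts of candidate mergings for \(\ell_{\text{bin}}=2\).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: number of ways to partition n
    labelled clusters into k non-empty groups,
    via S(n, k) = k*S(n-1, k) + S(n-1, k-1)."""
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# Number of candidate mergings when ell_bin = 2:
print([stirling2(ell + 2, ell) for ell in range(2, 6)])   # [7, 25, 65, 140]
```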
It is important to note that the clustering before splitting, \(\mathbf{C}\), is described by one of the partitions of \(\mathscr{P}\), that is \(\mathbf{C}\in G(\mathbf{C})\). Therefore, the best possible merge can return to the clustering before the perturbation step, which guarantees that \(Perturb\) does not increase the quantization error (Proposition 4).
### Perturbation intensity
In our implementation, the clustering perturbation intensity is set to decrease with time. Looking at the algorithm as a minimizer of the quantization error, this means that the search will be more exploratory at the beginning than at the end, as it is customary in stochastic, global, optimization methods such as simulated annealing. The perturbation intensity is controlled by \(\ell_{\text{bin}}\) and \(p_{\text{bin}}\). \(\ell_{\text{bin}}\) is set equal to 2 to keep the computational complexity of \(merge\) low enough. \(p_{\text{bin}}\) decreases with an a priori schedule made of 3 epochs where \(p_{\text{bin}}=0.4\) then 0.2 and 0.1. These values were found by trial and error. Within each epoch, several iterations of (\(FindC\),\(Perturb\),\(FindR\)) are performed. Before explaining the stopping criterion, we need to describe the last step, \(FindR\).
### Finding representatives from clusters
Given a clustering \(\mathbf{C}=(C_{1},\dots,C_{\ell})\), the \(FindR\) step searches for the associated representatives \(\mathbf{R}^{\star}(\mathbf{C})=(R_{1}^{\star}(C_{1}),\dots,R_{\ell}^{\star}(C_{\ell}))\) that are optimal in the sense that \(R_{j}^{\star}(C_{j}):=\underset{r\in\mathscr{R}}{\arg\min}\mathcal{W}_{p}(C_{j },r)\). In practice, \(\mathcal{R}\) is a parametric family \(\{r(\underline{\eta}),\underline{\eta}\in\mathbb{R}^{d}\}\). The above minimization is approximated by replacing the distance between the multidimensional distributions \(\mathcal{W}_{p}(C_{j},r)\) by the sum over the \(m\) dimensions of the \(p\)-Wasserstein distances between the marginals. Such an approximation is numerically efficient because the Wasserstein distance in
1D can be easily expressed analytically for two probability measures \(\mu_{1}\) and \(\mu_{2}\)(Panaretos et al., 2019):
\[\mathcal{W}_{p}(\mu_{1},\mu_{2}):=\left(\int_{0}^{1}\mid F_{1}^{-1}(q)-F_{2}^{-1 }(q)\mid^{p}dq\right)^{\frac{1}{p}}, \tag{5}\]
where \(F_{1}\) and \(F_{2}\) are the cumulative distribution functions. Detailed examples of the \(FindR\) function with Dirac, normal and uniform distributions are described in Section 5. In these examples, the analytical minimization of the \(p\)-Wasserstein distance has only a single local optimum, which is inherently the solution to the problem. Situations with multiple local optima may happen, but the analytical expression of the distance is, in practice, a strong asset in favor of the numerical tractability of \(FindR\).
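A practical consequence of Equation (5) is that the distance between an empirical cluster and any representative whose quantile function is known can be approximated on a grid of quantile levels. The sketch below is our own illustration, using a midpoint quantile grid and a uniform representative; the function names and the grid size are choices made for the example.

```python
import numpy as np

def wasserstein_p_to_representative(cluster_1d, quantile_fn, p=2, n_grid=10000):
    """Approximate Eq. (5): W_p between the empirical measure of a 1D cluster
    and a representative given by its quantile function, evaluated on
    midpoint quantile levels q = (i - 0.5) / n_grid."""
    q = (np.arange(n_grid) + 0.5) / n_grid
    emp_quantiles = np.quantile(np.asarray(cluster_1d, float), q)
    rep_quantiles = quantile_fn(q)
    return (np.mean(np.abs(emp_quantiles - rep_quantiles) ** p)) ** (1.0 / p)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    cluster = rng.uniform(0.3, 0.6, 200)
    # Quantile function of U(a, b): q -> a + (b - a) * q
    uniform_quantile = lambda q, a=0.3, b=0.6: a + (b - a) * q
    print(wasserstein_p_to_representative(cluster, uniform_quantile, p=2))
```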
### Implementation aspects
A stopping criterion is implemented in the form of either a minimal change in representatives, or a maximal number of iterations. The minimal change in representatives is measured by the sum over the clusters \(j\) of the Euclidean norm between the parameters (\(\eta\)) of the previous representative \(j\) and the new one.
For each value of \(p_{\text{bin}}\), multiple iterations are performed until convergence is achieved, as explained above. The detailed Augmented quantization algorithm is presented in Appendix F.
## 5 Toy problems with homogeneous mixtures
In this section, mixtures will be estimated from samples that are generated with quasi-Monte Carlo methods in order to represent mixtures \(R_{J_{\text{true}}}^{\text{true}}\) of known distributions, referred to as the "true mixtures". The objective will be to best represent the distribution of the samples as measured by the quantization error (Equation 2) or the global error (Equation 4). In contrast, the objective of AQ is not to minimize an underlying distance to \(R_{J_{\text{true}}}^{\text{true}}\) because it is typically not known in practice. Nevertheless, the errors between the samples and the true mixtures will be calculated as they quantify the sampling error. They are called _true errors_.
### 1D uniform example
We consider the sample \(S_{u}=(x_{i})_{i=1}^{300}\in\mathbb{R}^{300}\), obtained with a quasi-Monte Carlo method (a Sobol sequence, Joe et al., 2008) to represent the mixture \(R_{J_{\text{true}}}^{\text{true}}\) of density \(f_{\text{true}}=\frac{1}{3}\mathds{1}_{[0,1]}+\frac{2}{3}\frac{\mathds{1}_{[0.3,0.6]}}{0.3}\). The distribution of \(S_{u}\) is shown in Figure 1 and the mixture is defined in Table 1.
We seek to identify two representatives in \(\mathcal{R}=\{R_{U}(a,b),(a,b)\in\mathbb{R}^{2},a\leq b\}\), where \(R_{U}(a,b)\) is the measure associated to \(\mathcal{U}(a,b)\). In all the applications presented in this article, the \(p\)-Wasserstein distance has \(p=2\).
The quantization algorithms start from the two representatives \(R_{1}=R_{U}(0,0.5)\) and \(R_{2}=R_{U}(0.5,1)\). Regarding the \(FindR\) step, which provides representatives from clusters, we investigate for each cluster \(C_{j}\) and for each dimension \(k=1,\dots,m\) (\(m=1\) here) the best parameters \((a_{j}^{k},b_{j}^{k})\in\mathbb{R}^{2}\) minimizing \(\mathcal{W}_{2}(C_{j}^{k},R_{U}(a_{j}^{k},b_{j}^{k}))\), where \(C_{j}^{k}=\{x^{k}\ :\ (x^{1},\dots,x^{m})\in C_{j}\}\). Denoting \(Q_{j}^{k}\) the empirical quantile function of \(C_{j}^{k}\), the optimal uniform representative can be explicitly calculated as
\[\left\{\begin{aligned} a_{j}^{k}&=\int_{0}^{1}Q_{j}^{ k}(q)(-6q+4)dq\\ b_{j}^{k}&=\int_{0}^{1}Q_{j}^{k}(q)(6q-2)dq\end{aligned}\right.\]
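Since the empirical quantile function \(Q_{j}^{k}\) is piecewise constant, these two integrals can be evaluated exactly from the order statistics. The following minimal numpy sketch (our own discretization of the formula) returns the optimal \((a,b)\) for a one-dimensional cluster.

```python
import numpy as np

def best_uniform(cluster_1d):
    """Closed-form FindR for a uniform representative: evaluates both
    integrals exactly, using the fact that the empirical quantile function
    equals the i-th order statistic on ((i-1)/n, i/n]."""
    x = np.sort(np.asarray(cluster_1d, float))
    n = len(x)
    i = np.arange(1, n + 1)
    # Exact integrals of (-6q+4) and (6q-2) over ((i-1)/n, i/n]:
    w_a = 4.0 / n - 3.0 * (2 * i - 1) / n ** 2
    w_b = 3.0 * (2 * i - 1) / n ** 2 - 2.0 / n
    return float(np.sum(x * w_a)), float(np.sum(x * w_b))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    a, b = best_uniform(rng.uniform(0.3, 0.6, 500))
    print(a, b)   # close to 0.3 and 0.6
```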
The first \(FindC\) step of Algorithm 2 yields the two clusters shown in Figure 2. The \(Perturb\) step is illustrated in Figure 3 with \(p_{\text{bin}}=0.4\). Then the \(FindR\) function provides two new representatives, \(R_{1}=R_{U}(0.31,0.60)\) and \(R_{2}=R_{U}(0.03,0.93)\). This single iteration is enough to get close to the representatives of the investigated mixture. At the completion of the other iterations, the estimated
Figure 1: Distribution of \(S_{u}\).
mixture \(R_{J^{\star}}^{\star}\) reported in Table 1 is found. This mixture is represented in Figure 4(b), with the optimal clusters illustrated in Figure 4(a). The optimal quantization error is \(\mathcal{E}_{2}(\mathbf{C}^{\star},\mathbf{R}^{\star})=4.4\times 10^{-3}\) and the global error is \(\epsilon_{2}(\mathbf{C}^{\star},\mathbf{R}^{\star})=3.0\times 10^{-3}\). As shown in Table 1, this global error is slightly lower than that between the true mixture and the sample. Thus, the algorithm finds a solution whose accuracy is of the order of the sampling error.
To test the robustness of the method, the previous experiment is repeated with 15 other samples made of 300 points obtained from Sobol sequences that represent mixtures of respective densities \(f_{i}=p_{i}\frac{\mathds{1}_{[a_{i},b_{i}]}}{b_{i}-a_{i}}+(1-p_{i})\frac{\mathds{1}_{[c_{i},d_{i}]}}{d_{i}-c_{i}}\), with \(a_{i},b_{i},c_{i},d_{i},p_{i}\in[0,1]\) sampled such that \(b_{i}>a_{i}\) and \(d_{i}>c_{i}\). The resulting distributions of the quantization errors and the global errors, and the comparison with the errors of the true mixtures, are shown in Figure 5. The errors are low throughout the repetitions. For example, the median quantization error, which falls below \(2\times 10^{-3}\), is smaller than the one achieved with the true mixtures. The distribution of the global errors has a similar pattern. The evolution of the quantization error throughout the iterations is described in Appendix G.
### A mixture of Dirac example
We now assume that the representatives are Dirac measures, which corresponds to the classical quantization problem: we have \(\mathcal{R}=\{\delta_{\gamma},\gamma\in\mathcal{X}\}\), where \(\delta_{\gamma}\) is the Dirac measure at the point \(\gamma\).
The \(FindC\) procedure described in Section 4.1 is equivalent to the computation of the Voronoi cells in Lloyd's algorithm. From \(\mathbf{R}=(\delta_{\gamma_{1}},\dots,\delta_{\gamma_{\ell}})\), it provides \(\mathbf{C}=(C_{1},\dots,C_{\ell})\) with \(C_{j}=\{x\,:\,j=\operatorname*{arg\,min}_{j^{\prime}}\lVert x-\gamma_{j^{ \prime}}\rVert\}\).
The \(FindR\) step of Section 4.3 amounts to finding, for all clusters \(j\), \(\gamma_{j}\) minimizing
\[\mathcal{W}_{2}(\gamma,C_{j})^{2}=\frac{1}{\operatorname{card}\big{(}C_{j} \big{)}}\sum_{x\in C_{j}}\lVert x-\gamma\rVert^{2}.\]
The solution is, for \(j\in\mathcal{J}\),
\[\gamma_{j}=\frac{1}{\operatorname{card}\big{(}C_{j}\big{)}}\sum_{x\in C_{j}}x.\]
With Dirac representatives, like for \(FindC\), \(FindR\) degenerates into a step of Lloyd's algorithm. Note that it is equivalent to optimize the sum of the Wasserstein distances on each marginal, as proposed in Section 4.3.
In the Dirac case, the Augmented Quantization is identical to the classical K-means quantization, but with an additional step of clusters
\begin{table}
\begin{tabular}{|l|l|} \hline
**True mixture** & **Estimated mixture** \\ \hline \(R_{1}^{\text{true}}=R_{U}(0.01,1.00)\) & \(R_{1}^{\star}=R_{U}(0.01,0.99)\) \\ \(R_{2}^{\text{true}}=R_{U}(0.30,0.60)\) & \(R_{2}^{\star}=R_{U}(0.30,0.60)\) \\ \(\mathbb{P}(J_{\text{true}}=1)=1/3\) & \(\mathbb{P}(J^{\star}=1)=0.33\) \\ \(\mathbb{P}(J_{\text{true}}=2)=2/3\) & \(\mathbb{P}(J^{\star}=2)=0.67\) \\ \hline \(\mathcal{W}_{2}(S_{u},R_{J_{\text{true}}}^{\text{true}})=3.6\times 10^{-3}\) & \(\mathcal{W}_{2}(S_{u},R_{J^{\star}}^{\star})=3.0\times 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 1: True and estimated mixtures in the uniform 1D test case. \(\mathcal{W}_{2}(S_{u},R_{J_{\text{true}}^{\text{true}}}^{\text{true}})= \epsilon_{2}(\mathbf{C}^{\text{true}},R_{J_{\text{true}}^{\text{true}}}^{\text{true}})\) is the global error between the true clustering and the true mixture. Similarly, \(\mathcal{W}_{2}(S_{u},R_{J^{\star}}^{\star})=\epsilon_{2}(\mathbf{C}^{\star},R_{J^{ \star}}^{\star})\) is the global error between the estimated clustering and the estimated mixture.
Figure 2: Distribution of two clusters provided by the \(FindC\) step started from the representatives \(R_{1}=R_{U}(0,0.5)\) and \(R_{2}=R_{U}(0.5,1)\).
perturbation. We now investigate the effect of this operation.
Lloyd's algorithm and AQ are compared on 500 different samples of 20 points, with 20 starting cluster centers tested for each sample, which means a total of \(2\times 10^{4}\) runs. The samples are i.i.d. realizations of a uniform distribution in \([0,1]^{2}\), and the quantization is performed with two representatives. Figure 6 shows one example of the results. For each of these tests, the quantization errors (Equation (1)) are computed for the two methods, and are denoted \(\mathcal{E}_{\text{Lloyd}}\) and \(\mathcal{E}_{\text{AQ}}\) for Lloyd's algorithm and AQ, respectively.
The distribution of the relative difference, \(\frac{\mathcal{E}_{\text{AQ}}-\mathcal{E}_{\text{Lloyd}}}{\mathcal{E}_{\text{Lloyd}}}\times 100\), is provided in Figure 7. In 44% of the tests, both methods have the same quantization error. AQ outperforms Lloyd's algorithm (as characterized by a negative relative difference) in 50% of the cases.
The median of the improvement of AQ over Lloyd is 3.1%, and the third quartile is 7.7%. Thanks to the cluster perturbation, the Augmented Quantization produces significantly better results than K-means in the Dirac case.
### A Gaussian mixture example
Gaussian mixture models, denoted GMM and estimated with the EM algorithm, are very popular when it comes to identifying Gaussian representatives. In the following example, we compare the Augmented Quantization with this GMM in one dimension. The representatives belong to
Figure 4: (a) The optimal clusters estimated by AQ. (b) The optimal representatives and their weights. Each color is associated to a representative, and the supports of the uniform distributions are plotted as vertical bars.
Figure 3: \(Perturb\) step with \(p_{\text{bin}}=0.4\). Figure 3(a) shows the \(split\) into 4 clusters, and Figure 3(b) is the result of the \(merge\) phase: clusters 1 and 2 are grouped, and clusters 1bin and 2bin are grouped too. Clusters 1bin and 2bin are the elements removed from clusters 1 and 2, respectively.
\(\{R_{N}(\mu,\sigma^{2}),(\mu,\sigma)\in\mathbb{R}^{2}\}\) where \(R_{N}\) is the measure associated to \(\mathcal{N}(\mu,\sigma^{2})\).
The \(FindR\) step is the minimization of \(\mathcal{W}_{2}(C_{j}^{k},R_{N}(\mu_{j}^{k},(\sigma_{j}^{k})^{2}))\), for each cluster \(C_{j}\) and each dimension \(k=1,\ldots,m\) (\(m=1\) here). It leads to
\[\left\{\begin{array}{ll}\mu_{j}^{k}&=\int_{0}^{1}Q_{j}^{k}(q)dq=\frac{1}{ \mathrm{card}\big{(}C_{j}^{k}\big{)}}\sum_{x\in C_{j}^{k}}x\\ \sigma_{j}^{k}&=\frac{\int_{0}^{1}Q_{j}^{k}(q)\mathrm{erf}^{-1}(2q-1)dq}{\sqrt {2}\int_{0}^{1}\left(\mathrm{erf}^{-1}(2q-1)\right)^{2}dq}\end{array}\right.\]
where \(\mathrm{erf}\) is the error function defined by \(\mathrm{erf}(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}\exp{(-t^{2})}dt\).
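A minimal sketch of this closed form (ours) is given below, using scipy's `erfinv` and a midpoint quantile grid to approximate both integrals; the denominator \(\sqrt{2}\int_{0}^{1}(\mathrm{erf}^{-1}(2q-1))^{2}dq\) actually equals \(1/\sqrt{2}\), but the formula is kept as written.

```python
import numpy as np
from scipy.special import erfinv

def best_gaussian(cluster_1d, n_grid=100000):
    """Closed-form FindR for a Gaussian representative: mu is the cluster
    mean and sigma follows the erfinv formula, both integrals being
    approximated on midpoint quantile levels."""
    x = np.asarray(cluster_1d, float)
    q = (np.arange(n_grid) + 0.5) / n_grid
    quantiles = np.quantile(x, q)           # empirical quantile function
    z = erfinv(2.0 * q - 1.0)
    mu = float(np.mean(x))
    sigma = float(np.mean(quantiles * z) / (np.sqrt(2.0) * np.mean(z ** 2)))
    return mu, sigma

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    sample = rng.normal(0.6, 0.2, 2000)
    print(best_gaussian(sample))   # close to (0.6, 0.2)
```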
It is important to note that this \(FindR\) procedure leads to representatives with independent marginals, while GMM can describe a covariance structure.
We consider a family of samples \((S_{g}^{i})_{i=1}^{15}\), made of \(400\) points obtained with Gaussian transformations of Sobol sequences. They represent mixtures of density \(f_{i}(x)=p_{i}\frac{1}{\sigma_{i}^{1}\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{x-\mu_{i}^{1}}{\sigma_{i}^{1}}\right)^{2}\right)+(1-p_{i})\frac{1}{\sigma_{i}^{2}\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{x-\mu_{i}^{2}}{\sigma_{i}^{2}}\right)^{2}\right)\), with \(\mu_{i}^{1}\), \(\sigma_{i}^{1}\), \(\mu_{i}^{2}\), \(\sigma_{i}^{2}\), \(p_{i}\) randomly sampled in \([0,1]\).
Figure 5: Distributions of the errors in the uniform test case.
Figure 6: An example of the optimal clusters (red and blue dots) and their centroids (black crosses) obtained with Lloyd’s algorithm (left) and AQ (right). Both algorithms were started with the same representatives (i.e., here, cluster centers). The quantization error is \(0.28\) with Lloyd’s algorithm and \(0.25\) with AQ.
Figure 7: Distribution of the relative difference (in %) between the quantization errors of Lloyd’s algorithm and AQ. Negative values correspond to a better performance of AQ.
The distributions of the quantization and global errors are provided in Figure 8. The quantization error cannot be computed for GMM because the elements of the sample are not assigned to a cluster but belong to clusters in probability. In contrast, the global error (formula (4)) of GMM can be calculated. The median of the distribution of AQ's quantization errors is very close to that of the true mixtures, with a lower dispersion. The distributions of the AQ and true global errors compare identically: the median of the true mixture is slightly lower but its spread slightly higher. This also shows that the additional error introduced by AQ is small compared to the sampling error. The GMM scheme, on the other hand, identifies mixtures with slightly better global errors. Note that these errors are one order of magnitude higher than those observed in the uniform test case. Indeed, the samples obtained by quasi-Monte Carlo methods are closer to their targeted distribution in the uniform case than in the Gaussian one.
Figure 9(a) displays the distribution of the sample \(S_{g}^{t}\) that corresponds to the median global error obtained with AQ. \(S_{g}^{t}\) is chosen as a typical sample of the Gaussian test case. Table 2 summarizes, for this median sample \(S_{g}^{t}\), the mixtures identified by the various methods along with their global errors. Even though GMM finds a mixture closer to the sample than AQ does, Figure 9(c) illustrates that both distributions are close to each other. Figure 9(b) further shows that the clusters identified by AQ closely match the distributions of their representatives.
To conclude about the experiments made on the Gaussian test case, both GMM and AQ identify relevant sub-distributions, but GMM, which is specifically designed to handle Gaussian mixtures, remains slightly better. Of course, GMM is less general than AQ as it cannot mix distributions of different types such as Gaussian, uniform or Dirac measures.
## 6 A hybrid mixture test case
To illustrate the possibilities of augmented quantization compared to existing schemes such as GMM, we now cover the case of a hybrid mixture made of a Gaussian and a uniform representative.
We consider a sample \(S_{\text{hyb}}\) of 350 points distributed to represent a mixture of density \(f_{\text{hyb}}=\frac{1}{3}\frac{\mathds{1}_{[0.2,0.5]}}{0.3}+\frac{2}{3}\frac{1}{0.2\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{x-0.6}{0.2}\right)^{2}\right)\). The sample is displayed in Figure 10 (left). The results of AQ are shown in Figure 10 (right). The two representatives are well identified by AQ, leading to a quantization error of \(8.3\times 10^{-3}\) and a global error equal to \(7.6\times 10^{-3}\).
To investigate the robustness of AQ, a family of samples \((S_{\text{hyb}}^{i})_{i=1}^{15}\) of size 350 is considered. A comparison between the true mixtures and the mixtures estimated by AQ is provided in Figure 11. These boxplots illustrate that the identified mixtures represent the samples well, with distributions of errors close to the distributions of the errors obtained with the true mixtures. The additional error of AQ is very small compared to the sampling error. It is attributed to the limited size of the samples.
## 7 Application to river flooding
Thanks to its ability to mix uniform and Dirac measures, Augmented Quantization can be applied to the sensitivity analysis of numerical simulators of real systems. The broad-based goal of a sensitivity analysis is to assess the impact of the uncertainty of the inputs on the uncertainty of the output. A review of the recent advances can be found in Da Veiga et al. (2021).
Our proposal for an AQ-based sensitivity analysis starts by estimating a mixture to represent the distribution of the inputs that lead to a specific output regime. The idea is to bring information about the causes of this regime. We compare the overall input distribution to the conditional distribution of the inputs given that the output belongs to the investigated output regime. Similar approaches have already been conducted with the Hilbert-Schmidt Independence Criterion in Spagnol et al. (2019). If an input variable has the same marginal distribution in the identified mixture as in the original distribution, then this variable has no direct influence on the triggering of the output regime. Conversely, if the support of the marginal distribution of an input of the identified mixture is narrower than its original support, then its value contributes to the phenomenon under scrutiny. Although the sensitivity analysis of groups of variables could follow the same principle, it would require extensions that are out of the scope of this paper.
As a specific example, we investigate the probabilistic distribution of flood maps. The inputs are features of the environment that are possible causes of the floodings. The case study focuses on a section of the Loire River near Orleans, France, which is flanked by levees on both banks and has a history of significant flooding in the 19th century. The area studied, including historical levee breaches (Maurin et al., 2013), is illustrated in Figure 12.
To simulate the river flow of the Loire River between Gien and Jargeau over a distance of 50 km, the Institute for Radiological Protection and Nuclear Safety (IRSN) has built a hydraulic model using the open-source TELEMAC-2D simulator (Pheulpin et al., 2022). The model incorporates an upstream hydrograph and a calibration curve as boundary conditions and has been calibrated using well-known flood events by adjusting the roughness coefficients (Strickler coefficients). Breaches are considered as well, leading to the study of four variables, considered as independent:
* The maximum flow rate (\(Q_{\text{max}}\)) which follows a Generalized Extreme Value distribution, established using the Loire daily flow rate at Gien.
* Two roughness coefficients (\(K_{\text{s3}}\) and \(K_{\text{s4}}\)) of specific zones illustrated in Figure 12 that are calibration parameters. These coefficients are calculation artifacts and are not observed in real-world data sets. Therefore, triangular distributions are employed and the mode of these distributions corresponds to the calibration values defined for each roughness area.
* The erosion rate (\(er\)), describing the vertical extent of the breach during the simulated time, which follows a uniform distribution.
Due to the large size of the area that is modeled, our focus is limited to the left bank sector of the river, specifically Sully-sur-Loire. There, we have simulated seven breaches whose sites and lengths correspond to those of the actual history.
Figure 8: Distributions of the errors in the Gaussian test case.
Figure 9: Illustration of AQ and GMM with the sample \(S_{g}^{t}\).
Figure 11: Distributions of the errors in the hybrid test case.
Figure 12: Study area of the flooding test case, including the levees and the historical breaches (OpenStreetMap contributors 2017). The zones of the roughness coefficients (\(K_{\text{s3}}\) and \(K_{\text{s4}}\)) are delimited with the green lines.
Figure 10: Hybrid case of a Gaussian and a uniform mixture. Data (left) and distribution of the clusters identified by AQ (right).
Through quantization, we want to perform a sensitivity analysis of the inputs leading to the most severe floodings.
As a preliminary step, a classical quantization is performed in the output space of flooding maps (expressed as water heights outside of the river domain) with the approach detailed in Sire et al. (2023). 1000 flooding maps are simulated with the hydraulic model and form a training dataset to build the metamodel. This metamodel is generative in the sense that it is based on Gaussian processes. \(10^{6}\) maps are then sampled from the metamodel and quantized with a mixture of 5 Dirac distributions thanks to the R package FunQuant (Sire, 2023). This yields the 5 prototype maps, the 5 clusters of floodings and the corresponding probabilities provided in Appendix H.
We specifically focus on analyzing the input distribution associated with the most severe pattern, which has a probability of \(5.8\times 10^{-4}\). The inputs are mapped through the empirical cumulative distribution function of each marginal over all the floodings, which amounts to using the rank statistics of the inputs instead of their actual values. It is indeed well-known that the distribution of these normalized inputs is uniform on each marginal. This transformation facilitates the comparison of the overall input distribution, which is thus uniform on the whole support by definition, with that of the inputs leading to a specific regime.
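A minimal sketch of this rank normalisation, assuming the inputs are stored column-wise in a NumPy array (the original study relies on R):

```python
import numpy as np
from scipy.stats import rankdata

def to_rank_scale(X):
    """Map every column of X through its empirical CDF so that each
    marginal of the returned array is (approximately) uniform on [0, 1]."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    return np.column_stack([rankdata(X[:, j]) / n for j in range(X.shape[1])])

# Example with 4 inputs (Q_max, K_s3, K_s4, er) and 1000 simulated floodings
U = to_rank_scale(np.random.rand(1000, 4))
```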
To accomplish this, Augmented Quantization is executed within the cluster of the most severe floodings with a sample of 500 elements: maps are generated with the metamodel following the overall distribution of the inputs until 500 maps are associated with the investigated regime.
The identified mixtures contain 3 representatives that can be Dirac or uniform distributions with a support of width 0.25, 0.5 or 1. This makes it possible to highlight influential variables: the distribution of a variable that has no impact on the extreme floodings should remain uniform between 0 and 1, while values concentrated around specific locations reveal a directly explanatory variable. The family of measures considered here is written \(\mathcal{R}=\{R(\alpha_{1},\ldots,\alpha_{4},a_{1},\ldots,a_{4},\sigma_{1},\ldots,\sigma_{4})\}\) where \(R=R_{1}\times\cdots\times R_{4}\) with
\[R_{i}([x_{1},x_{2}])=\bigg[\alpha_{i}\,\frac{\min(x_{2},a_{i}+\frac{\sigma_{i}}{2})-\max(x_{1},a_{i}-\frac{\sigma_{i}}{2})}{\sigma_{i}}+(1-\alpha_{i})\,\mathds{1}_{[x_{1},x_{2}]}(a_{i})\bigg]\mathds{1}_{[a_{i}-\frac{\sigma_{i}}{2},+\infty[}(x_{2})\]
with \((\alpha_{1},\ldots,\alpha_{4},a_{1},\ldots,a_{4},\sigma_{1},\ldots,\sigma_{4})\in\{0,1\}^{4}\times[0,1]^{4}\times\{0.25,0.5,1\}^{4}\).
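As an illustration, the following Python sketch (the paper's released code is in R) evaluates one marginal representative \(R_i\) on an interval \([x_1,x_2]\); the overlap term is clamped at zero when the interval does not intersect the uniform support, which is implicit in the formula above.

```python
def marginal_measure(x1, x2, alpha, a, sigma):
    """Mass assigned to [x1, x2] by one marginal representative R_i:
    a uniform of width sigma centred at a when alpha = 1,
    a Dirac located at a when alpha = 0 (see the equation above)."""
    lo, hi = a - sigma / 2.0, a + sigma / 2.0
    if x2 < lo:                       # trailing indicator in the formula
        return 0.0
    overlap = max(0.0, min(x2, hi) - max(x1, lo))
    uniform_part = overlap / sigma
    dirac_part = 1.0 if x1 <= a <= x2 else 0.0
    return alpha * uniform_part + (1.0 - alpha) * dirac_part

# Example: uniform of width 0.25 centred at 0.1, evaluated on [0, 0.25]
print(marginal_measure(0.0, 0.25, alpha=1, a=0.1, sigma=0.25))
```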
Figure 13 presents the representatives estimated with AQ to approximate the distribution of inputs leading to the most severe floodings. Each of the 3 representatives is associated with a color (blue, green and red). A vertical bar marks the support of a uniform distribution. A triangle indicates the position of a Dirac. This study shows the major influence of the maximum flow rate, \(Q_{\text{max}}\), as only Dirac distributions are identified at ranks very close to 1, meaning that only very high values of \(Q_{\text{max}}\) yield these extreme events.
The impact of the roughness in the zone associated with \(K_{\text{s4}}\) is also evident. Lower values of \(K_{\text{s4}}\) appear to promote intense floodings, as evidenced by the main blue representative of probability 60% that exhibits a uniform distribution of width 0.25 between \(K_{\text{s4}}=0\) and \(K_{\text{s4}}=0.25\). On the other hand, the influence of the low values of the roughness in the zone of \(K_{\text{s3}}\) is relatively
weaker, its main representative (in blue) being a uniform between 0 and 1. However, it has a joint influence with \(K_{\mathrm{s4}}\), as observed in the blue and green representatives. This combination accounts for a combined probability mass of approximately 87%, where either \(K_{\mathrm{s3}}\) or \(K_{\mathrm{s4}}\) is lower than 0.5. These results can be interpreted by remembering that \(K_{\mathrm{s4}}\) is a roughness in a section of the river downstream of the section of \(K_{\mathrm{s3}}\).
Interestingly, the high erosion rates, \(er\), that describe important breaches, do not seem to have a significant impact on the most severe flooding events: only 13% of the probability mass is represented by a small uniform, the red representative that has a support between 0.42 and 0.92. The rest of the mass is a uniform between 0 and 1. This can be attributed to the extremely high flow rates \(Q_{\mathrm{max}}\) involved, which render the breach impact almost negligible since water flows over the levees anyway.
## 8 Summary and perspectives
This work proposes an Augmented Quantization (AQ) scheme to build mixture models from a sample \((x_{i})_{i=1}^{n}\). It extends the classical Lloyd's algorithm, which generates Dirac mixtures by minimizing the quantization error of Equation (1) through repeated clustering and centroid calculations. The quantization error can be reformulated with the Wasserstein distance to extend it to all types of distributions, discrete or continuous, with finite or infinite support. AQ is then more general than the classical mixture modeling approaches based on the Expectation-Maximization algorithm. AQ minimizes a Wasserstein-based quantization error over a space of probability measures called representatives. AQ proceeds by repeating clustering, clustering perturbation and representatives estimation calculations. The decomposition of the quantization error minimization into clustering and representatives estimation reduces the numerical complexity of the method compared to a direct optimization of the mixture parameters. For example, the number of parameters needed to optimize three uniform representatives in 3D is 20, but it reduces to 6 parameters per cluster. With respect to K-means algorithms, the cluster perturbation is also new. It is added to make the approach more robust to the initial choice of representatives.
We have provided some theoretical guarantees about AQ. The cluster perturbation and the representatives estimation locally minimize the quantization errors. The clustering scheme is asymptotically consistent with respect to the current mixture.
Tests with homogeneous mixtures of Dirac, uniform and Gaussian unidimensional distributions have been performed. They are completed by a four-dimensional application to river floodings where Dirac and uniform representatives are mixed. The results of these tests are promising, yet several areas of improvement are worth exploring.
_Tuning of the cluster perturbation intensity._
In our implementation of AQ, the cluster perturbation intensity is tuned through the parameters \(\ell_{\mathrm{bin}}\) and \(p_{\mathrm{bin}}\). While \(\ell_{\mathrm{bin}}\), the number of clusters to split, is fixed by the numerical complexity of the algorithm, \(p_{\mathrm{bin}}\), the proportion of points to remove from clusters that are split, was tuned manually.
An improved version of the method could adapt online the proportion \(p_{\mathrm{bin}}\). An ingredient towards such adaptation is to detect a cut-off point in clustering error after splitting.
_Active learning of the number of representatives._
The AQ method described here maintains a constant number of clusters (i.e., representatives). This number is chosen a priori, which assumes some knowledge of the data structure. In a more general scenario, cluster perturbation could modify the number of clusters based on the Wasserstein distances between the clusters (including the bins) and their representatives. For example, some merges could significantly increase the quantization error, indicating that the number of clusters should increase. While such ideas should be explored, they are likely to add computational complexity to the method.

Figure 13: Mixture of Dirac and uniform distributions estimated by AQ, approximating the distribution of the parameters associated with the most severe floodings. Each color corresponds to one of the three representatives. A vertical bar is a uniform distribution, and a triangle indicates the position of a Dirac.
_Computational cost._
The computational cost of the current AQ algorithm restricts the sample size to a few thousand points because of the greedy algorithms involved. The split operation has the largest complexity, as the selection of the point to transfer from the cluster to the bin necessitates computing Wasserstein distances for all possible transfers. To alleviate the computational burden, it may be worthwhile to investigate batch and incomplete transfer approaches, although this may reduce precision. As the overall complexity depends on the complexity of the Wasserstein distance, it may be opportune to approximate it by the sliced Wasserstein distance (Nietert et al., 2022).
## Acknowledgement
This research was conducted with the support of IRSN and BRGM, through the consortium in Applied Mathematics CIROQUO ([https://doi.org/10.5281/zenodo.6581217](https://doi.org/10.5281/zenodo.6581217)), gathering partners in technology and academia in the development of advanced methods for Computer Experiments.
## Statements and Declarations
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
**SUPPLEMENTARY MATERIAL**
_Codes related to the uniform, Dirac and Gaussian test cases:_ Git repository containing R notebooks to reproduce all the experiments related to the test cases described in the article. ([https://github.com/charliesire/augmented_quantization.git](https://github.com/charliesire/augmented_quantization.git))
|
2307.07516 | Voting-based Multimodal Automatic Deception Detection | Automatic Deception Detection has been a hot research topic for a long time,
using machine learning and deep learning to automatically detect deception,
brings new light to this old field. In this paper, we proposed a voting-based
method for automatic deception detection from videos using audio, visual and
lexical features. Experiments were done on two datasets, the Real-life trial
dataset by Michigan University and the Miami University deception detection
dataset. Video samples were split into frames of images, audio, and
manuscripts. Our Voting-based Multimodal proposed solution consists of three
models. The first model is CNN for detecting deception from images, the second
model is Support Vector Machine (SVM) on Mel spectrograms for detecting
deception from audio and the third model is Word2Vec on Support Vector Machine
(SVM) for detecting deception from manuscripts. Our proposed solution
outperforms state of the art. Best results achieved on images, audio and text
were 97%, 96%, 92% respectively on Real-Life Trial Dataset, and 97%, 82%, 73%
on video, audio and text respectively on Miami University Deception Detection. | Lana Touma, Mohammad Al Horani, Manar Tailouni, Anas Dahabiah, Khloud Al Jallad | 2023-06-30T17:05:11Z | http://arxiv.org/abs/2307.07516v3 | # Voting-based Multimodal Automatic Deception Detection
###### Abstract
Automatic Deception Detection has been a hot research topic for a long time, using machine learning and deep learning to automatically detect deception, brings new light to this old field. In this paper, we proposed a voting-based method for automatic deception detection from videos using audio, visual and lexical features. Experiments were done on two datasets, the Real-life trial dataset by Michigan University and the Miami University deception detection dataset. Video samples were split into frames of images, audio, and manuscripts. Our Voting-based Multimodal proposed solution consists of three models. The first model is CNN for detecting deception from images, the second model is Support Vector Machine (SVM) on Mel spectrograms for detecting deception from audio and the third model is Word2Vec on Support Vector Machine (SVM) for detecting deception from manuscripts. Our proposed solution outperforms state of the art. Best results achieved on images, audio and text were 97%, 96%, 92% respectively on Real-Life Trial Dataset, and 97%, 82%, 73% on video, audio and text respectively on Miami University Deception Detection
deception detection, trustworthiness, lie detection, Mu3d dataset, real life trial dataset
## 1 Introduction:
In recent years, many research works on automated deception detection have suggested that it may be an efficient solution for different problems such as deception in job interviews and courtroom trials.
Lying has a huge effect on our day-to-day lives: in court trials it can lead to falsely accusing the innocent and freeing the guilty, and in job interviews hiring the wrong employees can prove detrimental to a company's success. This is why it is important to reach an accurate decision on whether a person is telling the truth in such situations.
Traditional methods for deception detection include analyzing heart beats, shifts in posture, gaze aversion and limb movements. A study conducted in 2003 [1] shows that liars tell far fewer interesting stories than truth-tellers, that they make worse impressions and that their demeanor is less calm; in general, the stories they tell seem more perfect and often contain unrealistic situations.
One of the most popular ways of detecting deception is the polygraph, or lie-detector machine, which monitors heartbeat and physical cues. An article published by the co-inventor of the modern polygraph, L. Keeler [2], mentions that the device consists of three units: one recording continuously and quantitatively the subject's blood pressure and pulse, one giving a duplicate blood-pressure pulse curve taken from some other part of the subject's body, and the third recording respiration. However, the device's success in revealing deception and guilt in criminal suspects is largely due to the psychological impact of such tests, with an estimated 75% of convicted suspects who were tested confessing their crimes. With that being said, this approach is impractical in most cases because it requires the use of skin-contact devices and a human expert's opinion to obtain accurate measurements and interpretations.
Considering the drawbacks of traditional methods of deception detection, automating the process of deception detection has been a hot research topic in recent years.
An article published in 2019 with the title "Can a Robot Catch You Lying? A Machine Learning System to Detect Lies During Interactions" [3] discusses the potential for robots to autonomously detect deception and aid in human interactions. The study involved showing participants videos of robberies and then interrogating them about what they saw, with half of their responses being true and half being false. The study found that there were strong similarities in participants' behavior when interacting with a human and a humanoid robot, and that certain behavioral variables could be used as markers of deception. The results suggest that robots could effectively detect lies in human-robot interactions using these markers. The article does not provide a detailed list of all the markers of deception that were used in the study. However, it mentions that behavioral variables such as eye movements, time to respond, and eloquence were measured during the task and were found to be valid markers of deception in both human-human and human-robot interactions. Other potential markers of deception could include changes in vocal pitch, facial expressions, and body language.
A well-known book by Paul Ekman [4], a pioneer in deception detection research, covers clues for detecting lies based on verbal, vocal and facial cues. The book is titled "Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage", and its main takeaways were:
* Brief, involuntary facial expressions can reveal when a person is lying or experiencing a negative emotion.
* Baseline statements are useful to compare against changes in a person's vocal and facial cues when they are being deceptive.
* Multiple clues from verbal, vocal and facial cues together are more reliable indicators of deception than any single cue alone.
Overall, the use of automated deception detection could provide a more accurate and practical solution for detecting lies in different situations. By extracting various features from data, including visual features such as hand movements and facial expressions, acoustic features such as tone and pitch, and lexical features obtained by analyzing the spoken text, and then passing those features through different machine learning models, researchers have concluded that it is possible to automatically detect deception from videos and obtain accurate results.
## 2 Related works:
Automatic deception detection is still a new research domain, as the first research paper on automatic deception detection from videos using data science was published in 2015. There are two basic types of features that researchers extract from videos in this domain: verbal features (text and audio) and non-verbal features (images). Deep learning and machine learning models were applied to each type of feature. Moreover, studies on multi-modal approaches have shown that using features from multiple modalities enhances the detection of deceptive behaviors to a significant degree when compared to using only one modality at a time. [5]
Table 1 compares the state-of-the-art approaches.
| Ref | Year | Dataset(s) | Verbal features | Non-verbal features | Models | Results |
| --- | --- | --- | --- | --- | --- | --- |
| [6] | 2015 | Real-life trial | Lexical: unigrams and bigrams derived from the bag-of-words representation of the video transcripts | Manually annotated gestures in two broad categories (facial displays, hand gestures) | Decision Tree, Random Forest | Accuracies in the range of 60-75% |
| [7] | 2019 | Real-life trial | Acoustic: basic features such as Mel-frequency coefficients, harmonics-to-noise ratio and jitter, extracted using openSMILE | Movement of facial features | Decision Tree, Support Vector Machine (SVM) | 78.95% accuracy |
By reviewing the previous works in Table 1, we see that Veronica et al. [6] presented a novel dataset consisting of 121 deceptive and truthful clips from real court trial videos. They used unigrams and bigrams derived from the bag-of-words representation of the video transcripts, and manually annotated the videos for several gestures that were then used to extract non-verbal features such as facial displays and hand gestures. They then built classifiers relying on individual or combined sets of verbal and non-verbal features, achieving accuracies in the range of 60-75% on the real-life trial dataset. This is stated to be the first work to automatically detect deception using both verbal and non-verbal features extracted from real trial recordings.
Jaiswal et al. [7] analyzed both the movement of facial features and the acoustic patterns of the witness and performed a lexical analysis on the spoken words. They improved on the previous study by using a Support Vector Machine (SVM) model and achieved a higher accuracy of 78.95% on the real-life trial dataset.
Gogate et al. [8] showed that a deep learning approach improved results. They achieved 96.42% accuracy on the real-life trial dataset using early fusion, and accuracies of 87.5%, 83.78% and 78.57% on audio, text and video respectively. This was also stated to be the first use of audio cues for deception detection.
M. Umut Sen et al. [9] did the most recent study. They experimented with linguistic features derived from the text transcripts that have previously been found to correlate with deception cues, extracting unigrams from the bag-of-words representation of each transcript and features derived from the Linguistic Inquiry and Word Count (LIWC) lexicon. They also extracted a set of visual features consisting of assessments of several facial movements described as Facial Action Units; these features denote the presence of facial muscle movements that are commonly used for describing and classifying expressions. The OpenFace library was used with the default multi-person detection model to obtain 18 binary indicators of Action Units (AUs) for each frame in the videos. Finally, for acoustic features they used pitch, estimated by obtaining the fundamental frequency (f0) of the defendants' speech using the STRAIGHT toolbox, plus silence and speech histograms obtained by running a voice activity detection algorithm.
Results showed that the best accuracy is 72.88% on the real-life trial dataset, obtained with the score-level combination and the NN classifier. They also present a human deception detection study in which they evaluate the human capability of detecting deception. Results show that the system they built outperforms the average human capability of identifying deceit.
## 3 Datasets:
We conducted our experiments on two datasets: the real-life trial dataset by the University of Michigan [6] and the Miami University Deception Detection Dataset (MU3D) [10]. In this section we describe both of them.
### Real-life trial dataset
To the best of our knowledge, this dataset is used as a baseline for deception detection in real-life videos which is why we chose it. The dataset consists of 121 videos including 61 deceptive and 60 truthful clips taken from various real-life trial videos where some restrictions were imposed for instance the witness must be clearly identified in the video and their face has to be sufficiently visible for most of the clip. Also, the visual quality has to be clear enough to discern the facial expressions. Lastly, the voice quality should be clear enough to hear the voice and understand what the person is saying. All the video clips were transcribed via crowd sourcing using Amazon Mechanical Turk. The transcribers were asked to insert repetitive words or fillers such as "um", "ah", "uh" and to indicate deliberate silence using ellipsis. Incoming transcriptions were manually checked to avoid spam and ensure quality. The final transcription set consisted of 8,055 words, with an average of 66 words per transcription.
### Miami University Deception Detection Dataset (MU3D)
A dataset resource published by Miami University and available for free, featuring people telling truthful and deceptive stories. Transcriptions were done by trained research assistants and assessed by naive raters; they include all words and filler sounds such as 'um' and 'uh', but they do not contain things like coughs, laughs or throat-clearing sounds.
Researchers can find additional information related to each video (trustworthiness, anxiety ratings, video length, video transcriptions, etc.), as well as information regarding the individuals featured in the video clips (attractiveness, age, race, etc.). As the Miami University Deception Detection Dataset (MU3D) was unlabeled, we labeled it automatically by making use of the information provided in the codebook. After various experiments with different equations and thresholds, we found that the highest accuracies were achieved using a threshold of 70% for the parameter 'TruthProp' (the proportion of raters who judged the video as truthful). Videos that scored a TruthProp of over 70% were labeled as truthful and the remaining ones were labeled as deceptive.
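A minimal sketch of this labeling rule, assuming the codebook has been exported to a CSV file with a `TruthProp` column expressed as a proportion (the file and column names are hypothetical):

```python
import pandas as pd

codebook = pd.read_csv("mu3d_codebook.csv")   # hypothetical file name
codebook["label"] = codebook["TruthProp"].apply(
    lambda p: "truthful" if p > 0.70 else "deceptive"
)
```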
## 4 Proposed Solution:
We proposed a system that incorporates three key components: visual features, acoustic features and lexical features. For each component, various machine learning experiments were conducted such as Decision trees [11], Naive Bayes classifiers [12], Support vector machines [13], Gradient Boosting [14], Random forests [15] and Neural networks [16].
### 4.1 Lexical component:
Several deep learning-based and machine learning-based experiments were conducted.
#### 4.1.1 Preprocessing:
First, normalization was done by turning all letters to lowercase. Second, all English stop words were removed. Third, lemmatization and POS tagging were applied.
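One possible implementation of this preprocessing with NLTK (the paper does not name the library used, so the calls below are one reasonable choice):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

for pkg in ("punkt", "stopwords", "wordnet", "averaged_perceptron_tagger"):
    nltk.download(pkg, quiet=True)

STOPS = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def preprocess(transcript):
    """Lowercase, remove English stop words, lemmatize and POS-tag."""
    tokens = nltk.word_tokenize(transcript.lower())
    tokens = [t for t in tokens if t.isalpha() and t not in STOPS]
    lemmas = [LEMMATIZER.lemmatize(t) for t in tokens]
    return nltk.pos_tag(lemmas)

print(preprocess("Um, I was not at the scene that night."))
```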
#### 4.1.2 Deep learning Model:
A BERT embedding layer followed by a dropout layer and then a dense layer with a sigmoid activation function, trained with the Adam optimizer.
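A sketch of such a classifier using the Hugging Face transformers library in PyTorch; the dropout rate and learning rate are assumptions, since the text only specifies the layer sequence and the Adam optimizer:

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class TextDeceptionNet(nn.Module):
    """BERT embedding -> dropout -> dense layer with sigmoid output."""
    def __init__(self, dropout=0.3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.drop = nn.Dropout(dropout)
        self.dense = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).pooler_output
        return torch.sigmoid(self.dense(self.drop(pooled)))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TextDeceptionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)  # Adam, as in the text
```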
Figure 1: Our Proposed Solution
#### 4.1.3 Support Vector Machine (SVM) Model:
We proposed using Word2Vec with TF-IDF weighting and a Support Vector Machine (SVM) classifier (regularization parameter C = 2, coefficient = 9 and degree = 3).
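One way to realize this model with gensim and scikit-learn is sketched below; "Word2Vec TF-IDF" is interpreted here as a TF-IDF-weighted average of word vectors, and the polynomial kernel is an assumption inferred from the degree and coefficient values quoted above:

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

def tfidf_weighted_w2v(transcripts, vector_size=100):
    """TF-IDF-weighted average of Word2Vec vectors for each transcript."""
    tokenized = [t.lower().split() for t in transcripts]
    w2v = Word2Vec(tokenized, vector_size=vector_size, min_count=1)
    tfidf = TfidfVectorizer()
    weights = tfidf.fit_transform(transcripts)
    col = {w: j for j, w in enumerate(tfidf.get_feature_names_out())}
    feats = np.zeros((len(transcripts), vector_size))
    for i, doc in enumerate(tokenized):
        row = weights[i].toarray().ravel()
        num, den = np.zeros(vector_size), 0.0
        for w in doc:
            if w in w2v.wv and w in col:
                num += row[col[w]] * w2v.wv[w]
                den += row[col[w]]
        feats[i] = num / den if den > 0 else num
    return feats

clf = SVC(C=2, kernel="poly", coef0=9, degree=3)  # hyperparameters from the text
```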
Figure 3: Proposed Lexical Model
Figure 2: CNN for text classification
### 4.2 Acoustic component:
#### 4.2.1 Preprocessing:
For the deep learning model, the audio was clipped into one-second chunks. The clips were then converted to the same sample rate so that all arrays had equal dimensions, and silence was padded to bring every clip to the same length. The next step was data augmentation with time shifting, followed by one more round of augmentation applied to the Mel spectrogram rather than the original audio.
For the Support Vector Machine (SVM) model, the clips were resized into four-second-long frames. Then 25 features were extracted from the audio clips using the Librosa library [17], including chroma STFT, zero-crossing rate, RMS, Mel spectrogram, spectral roll-off and audio bandwidth.
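A sketch of this feature extraction with Librosa, computing a handful of the 25 named features and mean-pooling each over time (the exact feature list and pooling used in the paper are not fully specified):

```python
import numpy as np
import librosa

def audio_features(path, clip_seconds=4.0):
    """Mean-pooled spectral descriptors of a four-second clip."""
    y, sr = librosa.load(path, duration=clip_seconds)
    return np.array([
        librosa.feature.chroma_stft(y=y, sr=sr).mean(),
        librosa.feature.zero_crossing_rate(y).mean(),
        librosa.feature.rms(y=y).mean(),
        librosa.feature.melspectrogram(y=y, sr=sr).mean(),
        librosa.feature.spectral_rolloff(y=y, sr=sr).mean(),
        librosa.feature.spectral_bandwidth(y=y, sr=sr).mean(),
    ])
```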
#### 4.2.2 Deep learning Model:
A custom data loader was defined and the data was fed into a model containing 8 convolution layers with ReLU activation functions, 5 adaptive layers and a linear layer, trained with a learning rate of 0.5.
#### 4.2.3 Support Vector Machine (SVM) Model:
After extracting the features, their values were normalized and fed to a support vector machine classifier with a regularization parameter (C) of 2 and an RBF kernel with a coefficient of 6 and a degree of 3.
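A corresponding classifier sketch with scikit-learn; interpreting the quoted "coefficient" as the kernel gamma is an assumption (the degree parameter is ignored by an RBF kernel):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

audio_clf = make_pipeline(
    StandardScaler(),                 # normalize the extracted features
    SVC(C=2, kernel="rbf", gamma=6),  # C and RBF kernel as stated in the text
)
```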
Figure 4: CNN for audio classification
### 4.3 Visual Component:
#### 4.3.1 Preprocessing:
The proposed solution focuses mainly on the target's facial expressions. Every 0.1 second of each video was turned into a frame in order to get as many samples as possible. The frames were then resized to have the same dimensions.
Face detection was performed using the MTCNN face detection algorithm. We noticed that many frames contained people other than the defendant being analyzed (the judge, the security, the audience, etc.), so all images containing more than one face were filtered out.
Face detection, however, was not necessary when dealing with MU3D, as the quality of the videos was much better and only one individual appeared in each frame, so we were able to obtain good results simply by using the entire frame.
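A sketch of the frame extraction and single-face filtering using OpenCV and the mtcnn package; the frame size is an assumption:

```python
import cv2
from mtcnn import MTCNN

detector = MTCNN()

def defendant_frames(video_path, step_s=0.1, size=(128, 128)):
    """Grab a frame every 0.1 s, resize it, and keep it only if exactly
    one face is detected (frames with judge/audience faces are dropped)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    step = max(1, int(round(fps * step_s)))
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            if len(detector.detect_faces(rgb)) == 1:
                frames.append(cv2.resize(rgb, size))
        i += 1
    cap.release()
    return frames
```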
#### 4.3.2 Deep learning Models:
For our first experiment, we used all of the frames regardless of whether they contained one or several faces. We fed them to a CNN consisting of 4 convolution layers with ReLU activation functions, followed by a dense layer with a ReLU activation function and another dense layer with a sigmoid activation function, trained with the Adam optimizer.
For our second experiment, we focused only on the defendant by detecting faces only in frames that contain a single face and feeding them to the same model, which achieved better results than the previous experiment.
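A sketch of this architecture in Keras; the filter counts, kernel sizes, pooling layers and input resolution are assumptions, since the text only fixes the layer types, activations and optimizer:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```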
Figure 5: Acoustic Model Proposed Solution
Figure 6: Video Model Proposed Solution
## 5 Results and Discussion:
We compare our results with the previous state-of-the-art in Tables 2, 3, 4 and 5, and then discuss our experimental results in detail.
### Text Model Results
On the Miami University Deception Detection Dataset (MU3D), the best accuracy was 73%, also obtained using Multinomial Naive Bayes with default parameters, and an accuracy of 68.7% was achieved using a Support Vector Machine (SVM) model (C=1, Gamma=9).
The deep learning results were less than ideal, achieving only 50% using the CNN shown in Figure 2 with the Adam optimizer.
### Acoustic component results:
Out of all the experiments done on audio, the best results were achieved using the Support Vector Machine (SVM) model (C=2, Gamma=1), which reached an accuracy of 96% on the real-life trial dataset. The random forest model showed an accuracy of 84% when the max depth was set to 4; any depth over 4 resulted in overfitting. Finally, the best accuracy with the gradient boosting model was 88% (number of estimators = 50, learning rate = 1, max depth = 1, gamma = 4).
The best accuracy using the deep learning model was 61%, obtained with the CNN (batch size 32, learning rate 0.01) on the real-life trial dataset.
The best result on the Miami University Deception Detection Dataset (MU3D) was an accuracy of 82% using the gradient boosting model (number of estimators = 5, learning rate = 0.5, max depth = 1).
The best accuracy using the deep learning model was 60%, with a high loss, using the CNN shown in Figure 4 on the Miami University Deception Detection Dataset (MU3D).
### Visual component results:
The best results were obtained using the feature extraction step that filters out any irrelevant faces that do not belong to the defendant, combined with the 6-layer convolutional neural network shown above and the Adam optimizer.
Results on the Miami University Deception Detection Dataset (MU3D) were 97% using the full pictures. Face detection was not needed for MU3D as the quality of the videos was much better than in the real-life trial dataset and only one individual appeared in each frame, so we were able to obtain good results simply by using the entire frame.
## 6 Conclusion:
We proposed a voting-based method for automatic deception detection using verbal and non-verbal features, machine learning and deep learning. We implemented voting over the results of lexical, acoustic and visual models on datasets of videos in order to achieve the best accuracies. Our proposed solution outperforms previous state-of-the-art models. Our voting-based multimodal solution consists of three models. The first model is a CNN for detecting deception from images, the second model is a Support Vector Machine (SVM) on Mel spectrograms for detecting deception from audio, and the third model is Word2Vec with a Support Vector Machine (SVM) for detecting deception from manuscripts. Experiments were conducted on the real-life trial dataset and the Miami University Deception Detection Dataset (MU3D). Best results achieved on images, audio and text were 97%, 96% and 92% respectively on the real-life trial dataset, and 97%, 82% and 73% on video, audio and text respectively on MU3D. Using the fusion equation (audio model results + image model results + text model results), we achieved an overall accuracy of around 90% with all 3 models on the real-life trial dataset and 77% on MU3D.
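A minimal sketch of the voting step, reading the fusion of the three per-video decisions as a majority vote (the exact combination rule beyond the sum of model results is not detailed in the text):

```python
import numpy as np

def fuse(audio_pred, image_pred, text_pred):
    """Majority vote over three per-video binary predictions
    (0 = truthful, 1 = deceptive)."""
    votes = np.asarray(audio_pred) + np.asarray(image_pred) + np.asarray(text_pred)
    return (votes >= 2).astype(int)
```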
## 7 Declarations:
### Availability of data and materials
All datasets in this survey are available online, you can find links in references.
### Abbreviations
**CNN**: Convolutional Neural Network
**SVM**: Support Vector Machine
**MU3D**: Miami University Deception Detection Dataset
Table 11: Video model results with and without filtering unrelated faces (committee and audience faces).

| Dataset | With filtering: Train accuracy | With filtering: Test accuracy | Without filtering: Train accuracy | Without filtering: Test accuracy |
| --- | --- | --- | --- | --- |
| Trial dataset | 97% | 95% | 98% | 97% |
### Acknowledgements
This paper and the research behind it would not have been possible without the exceptional support of our supervisors. We would like to express our deep gratitude to Professor Khloud Al Jallad for her support and guidance throughout this project. She was without a doubt the reason we were able to finish this work, through her constant encouragement and willingness to give her time so generously. We also thank Professor Anas Dahabiah for his patient guidance, enthusiastic encouragement and useful critiques of this research work. We are thankful for their comments on earlier versions of the manuscript.
Many thanks to everyone at Arab International University, staff and professors for their incredible support and kind guidance during our time there, we also extend a thanks to all of our classmates for their encouragement and moral support.
### Funding
The authors declare that they have no funding.
### Author information
### Authors and Affiliations
Faculty of Information Technology, Arab International University. Daraa, Syria.
### Contributions
Lana Touma took on the main role for the text models, so she performed the literature review, conducted the experiments and wrote the manuscript. Mohammad Al Horani took on the main role for the image models, so he performed the literature review and conducted the experiments, as well as helping with the audio experiments. Manar Tailouni took on the main role for the audio models, so he performed the literature review and conducted the experiments. Anas Dahabiah and Khloud Al Jallad took on a supervisory role; they made contributions to the conception and analysis of the work and oversaw its completion.
All authors read and approved the final manuscript.
### Ethics declarations
### Ethics approval and consent to participate
The authors declare ethics approval and consent to participate.
### Consent for publication
The authors consent for publication.
## Competing interests
The authors declare that they have no competing interests.
|
2309.14978 | Salinity-Dependent Interfacial Phenomena Towards Hydrovoltaic Device
Optimization | Evaporation-driven fluid flow in porous or nanostructured materials has
recently opened a new paradigm for renewable energy generation. Despite recent
progress, major fundamental questions remain regarding the interfacial
phenomena governing these so-called hydrovoltaic (HV) devices. Together with
the lack of modelling tools, this limits the performance and application range
of this emerging technology. By leveraging ordered arrays of Silicon
nanopillars (NP) and developing a quantitative multiphysics model to study
their HV response across a wide parameter space, this work reveals the complex
interplay of surface-charge, liquid properties, and geometrical parameters,
including previously unexplored electrokinetic interactions. Notably, we find
that ion-concentration-dependent surface charge, together with ion mobility,
dictates multiple local maxima in open circuit voltage, with optimal conditions
deviating from conventional low-concentration expectations. Additionally,
assessing the HV response up to molar concentrations, we provide unique
evidence of ion adsorption and charge inversion for a number of monovalent
cations. This effect interestingly enables the operation of HV devices even at
such high concentrations. Finally, we highlight that, beyond electrokinetic
parameters, geometrical asymmetries in the device structure generate an
electrostatic potential that augments HV performance. Overall, our work, which
lies in between single nanochannel studies and macro-scale porous system
characterization, demonstrates that evaporation-driven HV devices can operate
across a wide range of salinities, with optimal operating conditions being
dictated by distinct interfacial phenomena. Thus it offers crucial insight and
a design tool for enhancing the performance of evaporation-driven HV devices
and enables their broader applicability across the salinity scale of natural
and processed waters. | Tarique Anwar, Giulia Tagliabue | 2023-09-26T14:54:11Z | http://arxiv.org/abs/2309.14978v1 | # Salinity-Dependent Interfacial Phenomena Towards Hydroovoltaic Device Optimization
###### Abstract
Evaporation-driven fluid flow in porous or nanostructured materials has recently opened a new paradigm for renewable energy generation by converting thermal energy into electrical energy via an electrokinetic pathway. Despite recent progress, major fundamental questions remain regarding the interfacial phenomena governing these so-called photovoltaic (HV) devices. Together with the lack of modelling tools, this limits the performance and application range of this emerging technology. By leveraging ordered arrays of Silicon nanopillars (NP) and developing a quantitative multiphysics model to study their HV response across a wide parameter space, this work reveals the complex interplay of surface-charge, liquid properties, and geometrical parameters, including previously unexplored electrokinetic interactions. Notably, we find that ion-concentration-dependent surface charge, together with ion mobility, dictates multiple local maxima in open circuit voltage, with optimal conditions deviating from conventional low-concentration expectations. Additionally, assessing the HV response up to molar concentrations, we provide unique evidence of ion adsorption and charge inversion for a number of monovalent cations. This effect interestingly enables the operation of HV devices even at such high concentrations. Finally, we highlight that, beyond electrokinetic parameters, geometrical asymmetries in the device structure generate an electrostatic potential that augments HV performance. Overall, our work, which lies in between single nanochannel studies and macro-scale porous system characterization, demonstrates that evaporation-driven HV devices can operate across a wide range of salinities, with optimal operating conditions being dictated by distinct interfacial phenomena. Thus it offers crucial insight and a design tool for enhancing the performance of evaporation-driven HV devices and paves the way to their broader applicability across the salinity scale of natural and processed waters.
**Keywords**: hydrovoltaic devices, evaporation, electrokinetic effects, nanofluidics, interfacial phenomena
## Introduction
Water evaporation is ubiquitous in nature, occurring spontaneously even without solar illumination. It enables continuous energy exchange through the water cycle and is a currently untapped renewable energy source [1, 2, 3, 4]. As an example, the total power generation potential of natural evaporation from lakes and reservoirs in the contiguous United States was estimated at 325 gigawatts, > 69% of the US electric energy production rate in 2015[3]. Various evaporation-driven devices have been explored which could be broadly classified as tandem devices[5, 6], hybrid systems[7, 8], and self-powered generators[9, 10, 11, 12, 13]. In particular, in 2017, among the self-powered generators, it was demonstrated that evaporation-driven flow through a functionalized porous carbon film could reliably generate sustained voltages up to 1 V and 100 nA current at ambient conditions[11]. Following this, various approaches such as a decrease of fluidic impedance, electrical resistance[14], and surface functionalization[15] of the material structure have been demonstrated to improve the electrical power from 100 nW to \(\sim\) 10 uW from a cm-sized device. More recently, the use of Silicon has been shown to improve the current density significantly via ion-electron coupling through coulombic interaction, resulting in the enhancement in power density by two orders of magnitude[12, 16, 17, 18]. Overall, these results have attracted growing interest in the field of evaporation-driven HV devices[1, 19, 20, 21, 12, 13]. However, due to the complex micro- and nanostructure of the devices investigated so far, there remains a lack of fundamental understanding related to the underlying interfacial phenomena. Furthermore, the concentration-dependent performance of HV systems remains largely unknown. Interestingly, though, a recent study has shown that exploiting the synergistic effect of a nanofluidic diode and redox reactions[22] a substantial amount of energy can be harvested in high salinity conditions. Thus, it would be extremely appealing to explore the possibility of designing HV devices that can be operated optimally across salinity scales of natural water (fresh and seawater) and processed water (brine).
On the other hand, pressure-driven flow in functionalized nanochannels has been widely studied for electrical energy conversion via the streaming current pathway[23, 24, 25, 26]. In particular, ion transport in nanopores with a diameter spanning from a few microns to sub-nanometers has been carefully studied in the iontronic community[27, 28, 29, 30, 31]. This has shown that surface charge, pore geometry, ionic mobility, etc., are critical for the device output. In contrast, almost all the
previous reports on HV used bulk porous material[6, 9, 11, 12], which limits the understanding of the role of geometrical parameters of the nanostructured material on energy conversion. Additionally, previous studies on HV devices focused on a narrow parameter space, which could not highlight the relevance of many interesting phenomena observed in other electrokinetic devices, such as regime of EDL overlap, concentration dependent surface charge[25, 27], non-linear electrostatic screening[32], effect of ionic mobility, and charge inversion[33, 34]. Therefore, there remains a need to understand the fundamental role of a variety of electrokinetic phenomena in evaporation-driven HV devices. In particular, the interplay of structural, interfacial, and fluidic properties must be clarified in order to identify opportunities and guidelines for future device design and performance optimization.
Here we present a controlled study of evaporation-driven HV devices that reveals distinct interfacial phenomena across salinity scales and quantifies their impact on performance. By using a nanostructuring approach applied to arrays of Silicon nanopillars (Si NPs), we systematically change the solid-liquid interfacial area, the liquid confinement size as well as the ion concentration and type. We then correlate their effect to the open circuit voltage (V\({}_{\text{OC}}\)) and power output (P\({}_{\text{max}}\)) of the device. Importantly, by developing a quantitative Multiphysics model to interpret the experimental results, we provide deeper insights into the dominant solid-liquid interfacial phenomena at different regimes. For the first time, we show that chemical equilibrium at the interface plays a critical role in all tested conditions. In fact, it controls the dissociation of surface groups, leading to an electrolyte concentration dependent surface charge[35]. Additionally, we highlight that intrinsic geometrical asymmetries in the device structure boost the voltage output independently from electrokinetic modulation. In terms of concentration-dependent performance, we confirm that, at low concentrations (< 1 mM), geometry must be optimized to exploit electrical double layer overlap. Most interestingly, at intermediate concentrations (< 0.1 M), we demonstrate that, due to the interplay of concentration dependent surface-charge and ion mobility, multiple V\({}_{\text{OC}}\) local maxima exist, allowing for optimal operating conditions that deviate from low-concentration expectations. Additionally, by assessing the HV device response at high concentrations (up to 4M), we uniquely show evidence of ion adsorption and charge inversion for several monovalent cations (Li\({}^{+}\) and K\({}^{+}\)). We also report viable power levels under
these extreme operating conditions. Overall, our controlled nanostructuring approach, which lies in between nanofluidic studies and macro-scale porous system characterization, highlights the importance of controlling electrokinetic and geometrical parameters in HV systems. Our results also show that, by leveraging distinct interfacial phenomena, evaporation-driven HV devices can be developed for a broader range of concentrations, and hence applications, than previously thought. Therefore, in combination with the presented predictive model, this work opens new avenues for the engineering of evaporation-driven HV devices with improved performance and expanded applicability across the entire salinity scale of natural and processed water, from freshwater to brine.
Figure 1: **Mechanism of the HV device made of Silicon NPs array.****A)** Schematic of the electrical measurement setup for the Silicon NPs hydrovoltaic device using a single Ag/AgCl electrode. **B)** The cross-section of the nanoconfinement formed due to the separation between adjacent NPs. The surface contains a negative surface charge (yellow circles) that forms an EDL. The evaporation-driven flow causes the streaming of ions, \(I_{\text{str}}\), and the inherent asymmetry causes a difference in EDL potential between the top and bottom surface even when \(I_{\text{str}}\) = 0. **C)** (Top) SEM image showing a top view of the fabricated array of silicon NPs. The red line shows the hexagonal arrangement of the NPs, while the blue line represents the triangular unit cell used for simulation. (Bottom) Cross-sectional view. **D)** (Left) 3D schematic of the triangular unit cell of the hexagonally arranged NPs showing the geometrical parameters of the nanostructures (pillar diameter, D\({}_{\text{np}}\), pillar length, L\({}_{\text{np}}\), and mean pore diameter, D\({}_{\text{p}}\)). (Right) Annular cylindrical nanopore geometry used for simulations, including the calculated electrical potential distribution (see also panel (F)). **E)** Equivalent electrical circuit in which the streaming of ions and the asymmetrical EDL potential between top and bottom are represented by a current source, and a capacitor in parallel with a resistor, respectively. The ionic resistance and external load are represented by the appropriate resistances. **F)** Vertical cut-plane of the simulated cylindrical nanopore in panel (D) showing the counter-ion concentration distribution, with ion flux (left), and the electrical potential distribution, with electric field lines (right). The bulk ionic concentration is 10 \(\upmu\)M KCl for the given simulation results.
## Results and discussion
Understanding the fundamental mechanisms of voltage and current generation in evaporation-driven HV devices requires control over the solid-liquid interface properties and the liquid nanoconfinement, i.e., nanochannel geometry. Our devices consist of cm-scale regular arrays of Si NPs etched in a p-type Si wafer (Figure 1**A**). Using a combination of colloidal lithography and metal-assisted chemical etching (see Methods), we fix the pitch (p = 600 nm) of the hexagonal array of NPs (see red line in Figure 1**C**) while varying their diameter (D\({}_{\mathrm{np}}\)) and length (L\({}_{\mathrm{np}}\)) in the range 420 nm - 560 nm and 1.23 \(\upmu\)m - 4.4 \(\upmu\)m respectively. The corresponding mean pore diameter D\({}_{\mathrm{p}}\) ranges between 140 nm and 280 nm (Figure 1**D** and Figure S1). Therefore, changing the Si NPs dimensions directly controls the nanochannel geometry and the solid-liquid surface area (A\({}_{\mathrm{S-L}}\)), as needed.
For hydrovoltaic testing, the sample is placed inside an HV cell and then wetted with 150 \(\upmu\)L of deionized water containing different salts (KCl, LiCl, CsCl) of varying concentrations (from 1 \(\upmu\)M to 4 M) while being placed in ambient conditions (T = 22-24\({}^{\circ}\)C and humidity = 25-30%). Thus, evaporation occurs naturally (Figure S2). The electrical response is measured using an Ag/AgCl electrode, placed in the liquid right above the Si NPs, and an aluminum contact, previously deposited on the back surface of the Si wafer (Figure 1**A** and Methods). During the electrical measurements, the Ag/AgCl electrode and silicon substrate do not contact each other, and the electrical circuit is complete as soon as the liquid is dispensed; thereafter voltage and current can be measured. Figure 1**B** shows a zoom-in view of a single nanochannel within our device. Upon wetting, the native silicon oxide layer on the surface of the Si NPs [36] will dissociate, resulting in a net negative surface charge \(\sigma\) (yellow circles), for pH above the isoelectric point [35]. Concurrently, positive ions will adsorb on the surface (red circles) and an electrical double layer (EDL) will develop within the liquid. The distribution of ions in the confined liquid is governed by the Nernst-Planck-Poisson-Boltzmann equations (see Methods, eqns. (M1)-(M3)). The steady-state concentration distribution of the ionic species depends on different modes of ion transport, namely (i) conduction along an electrostatic potential gradient, (ii) diffusion due to a concentration gradient, and (iii) advection induced by the evaporation-driven flows (convection). Any resulting imbalance in the ion distribution along the nanochannel leads to the measured
electrical potential difference (Figure 1F). Overall, the total electrochemical potential of the i-th ionic component along the nanochannel length depends on the electrical potential, the ion concentration, and the chemical potential (see Methods eqn. (M4)). Interestingly, we observe that, due to the closed bottom surface of the nanochannel (z = 0), the studied geometry presents an intrinsic asymmetry in the surface-charge distribution. As a result, even in the absence of an evaporation-induced flow, an electrostatic potential, and therefore a Voc can be measured between the top and bottom surfaces of the nanochannel. Non-zero convection changes the distribution of ions and, depending on the flow profile, it can enhance or suppress the Voc. Overall, our device structure presents three main advantages: (i) control over the liquid nanoconfinement, (ii) geometrical asymmetry within the nanochannel, and (iii) easy electrical contacting. Upon wetting, the Voc evolves with time starting from V(t\({}^{0}\)), which is zero (negative value) for moderate (high) concentrations and attains a stable steady state value V(t\({}^{\infty}\)) (Figure S3). The Voc as well as the current-voltage curves (I-V) of the device (Methods and Figure S3) were recorded as a function of the physical and chemical properties of the system and will be discussed extensively later.
### Modelling Approaches
The development of a suitable model for these evaporation-driven HV devices is essential to interpret the experimental results and unravel the complex electrokinetic interactions. Based on the aforementioned observations, the studied system can be described with the equivalent electrical circuit shown in Figure 1E. It consists of three parts: 1) a current source, representing the evaporation-driven streaming current \(I_{str}\), and the related ionic resistance, \(R_{ionic}\), 2) a capacitor with a resistor in parallel, representing the EDL capacitance, \(C_{edl}\), and the associated charge transfer resistance, \(R_{ct}\), due to the geometrical asymmetry, and 3) an external load resistance, \(R_{L}\). One can express the measured voltage at a given load resistance as follows:
\[V_{L}=\frac{\left[I_{str}+\frac{\sigma}{R_{ct}C_{edl}}\right]R_{ionic}}{1+ \frac{R_{ionic}}{R_{L}}} \tag{1}\]
For the open circuit condition (\(R_{L}\rightarrow\infty\)), we thus obtain:
\[V_{OC}=\left[I_{str}+\frac{\sigma}{R_{ct}C_{edl}}\right]R_{ionic} \tag{2}\]
Concurrently, the power delivered to an external load can be calculated as \(P_{L}=V_{L}^{2}/R_{L}\), and the maximum power output, obtained when \(dP_{L}/dR_{L}=0\), is thus:
\[P_{max}=\frac{V_{OC}^{2}}{4R_{ionic}} \tag{3}\]
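A small sketch evaluating Eqs. (1)-(3); the \(\sigma\) appearing in the circuit equations is passed in as `q_edl` and interpreted here as the charge stored on the EDL capacitor, which is an assumption:

```python
def v_load(i_str, q_edl, r_ct, c_edl, r_ionic, r_load):
    """Voltage across the load, Eq. (1)."""
    return (i_str + q_edl / (r_ct * c_edl)) * r_ionic / (1.0 + r_ionic / r_load)

def v_open_circuit(i_str, q_edl, r_ct, c_edl, r_ionic):
    """Open-circuit voltage, Eq. (2), i.e. the R_L -> infinity limit of Eq. (1)."""
    return (i_str + q_edl / (r_ct * c_edl)) * r_ionic

def p_max(v_oc, r_ionic):
    """Maximum deliverable power, Eq. (3), reached when R_L = R_ionic."""
    return v_oc ** 2 / (4.0 * r_ionic)
```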
The streaming current and the ionic resistance can be calculated from the ion concentration and the flow velocity (see Methods, eqns. (M5) and (M6)). However, no analytical expression is known for the EDL capacitance for the geometry shown in Figure 1B. Therefore, we also developed a 3D numerical model (COMSOL) to solve the Nernst-Planck-Poisson equation to determine the equilibrium distribution of ions and the resulting electrostatic potentials (see Methods). To perform these calculations, however, we first need to identify an equivalent simplified geometry. Considering a top view of the Si NPs array (Figure 1C), we observe that it is possible to define a hydraulic diameter (D\({}_{h}\)) for the resulting nanochannels, with D\({}_{min}\) as the minimum separation between adjacent NPs, as well as A\({}_{S-L}\), the solid-liquid interface area normalized by the sample area, as:
\[D_{h}=\frac{2\sqrt{3}p^{2}}{\pi D_{np}}-D_{np} \tag{4A}\]

\[A_{S-L}=1+\frac{\pi D_{np}L_{np}}{\sqrt{3}p^{2}} \tag{4B}\]

\[D_{min}=p-D_{np} \tag{4C}\]
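A short sketch of Eqs. (4A)-(4C) for the array geometry; the example values are illustrative and lie within the fabricated ranges quoted earlier:

```python
import math

def array_geometry(pitch, d_np, l_np):
    """Hydraulic diameter, normalized solid-liquid area and minimum gap
    of the hexagonal nanopillar array (Eqs. 4A-4C); lengths in meters."""
    d_h = 2.0 * math.sqrt(3.0) * pitch**2 / (math.pi * d_np) - d_np   # (4A)
    a_sl = 1.0 + math.pi * d_np * l_np / (math.sqrt(3.0) * pitch**2)  # (4B)
    d_min = pitch - d_np                                              # (4C)
    return d_h, a_sl, d_min

# pitch 600 nm, pillar diameter 500 nm, pillar length 3 um (illustrative)
print(array_geometry(600e-9, 500e-9, 3e-6))
```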
The 3D array of nanochannels can thus be reduced to an equivalent annular nanochannel geometry of inner and outer diameters R\({}_{1}\) and R\({}_{2}\), respectively, such that D\({}_{h}\) and A\({}_{S-L}\) are equal to those of the original array (Figure 1C, D, and S1). A spatially uniform surface charge is used as a boundary condition. Importantly, as described by the Grahame equation, its magnitude is governed by the diffuse layer potential and therefore by the chemical equilibrium between the silica surface and the electrolyte[35] (see Methods or Figure S4). This means that \(\sigma\) depends on the EDL thickness and therefore the bulk electrolyte concentration. The evaporative flux was instead used to define the mass flow rate in the channel. The computed potential difference between
the top and bottom of the nanochannel is equal to the V\({}_{\text{OC}}\) and can be directly compared to experimental results (Figure 1D, F).
### Role of surface-charge and ion transport at low-to-moderate concentrations
We first measure V\({}_{\text{OC}}\) as a function of ion concentration by using KCl solutions ranging from 1 uM to 0.1 M (Figure 2A). Indeed, this parameter plays a critical role both on the physical extension of the EDL and the chemical equilibrium at the interface. Regardless of the geometrical properties of the sample, our measurements clearly show a non-monotonic V\({}_{\text{OC}}\) trend as a function of electrolyte concentration. In particular, we observe a V\({}_{\text{OC}}\) peak for intermediate concentration values (0.1 - 1 mM), which can be more or less pronounced depending on the sample. This result is in sharp contrast to the monotonic decrease in V\({}_{\text{OC}}\) predicted by the mean-field Poisson-Boltzmann theory with a constant surface charge condition [25] (Figure S5). Instead, using our COMSOL model that accounts for the chemical equilibrium at the solid/liquid interface, it is possible to correctly capture the experimental trend (Figure 2B). In particular, for a fixed value of \(\Gamma\) in the chemical equilibrium equation, we retrieve the appearance of the V\({}_{\text{OC}}\) peak in the intermediate range of concentrations. Interestingly, the model reveals that the V\({}_{\text{OC}}\) peak magnitude increases for increasing the length of the Si NPs and for decreasing Si NPs size (i.e., increasing nanochannel size), consistent with experimental results. Additionally, increasing the value of \(\Gamma\) from 8 to 12 results in a more pronounced V\({}_{\text{OC}}\) peak. This is due to the increase in the number of available surface sites that can readily dissociate to enhance the surface charge density as schematically shown in Figure 2C. Overall, the observed trend for V\({}_{\text{OC}}\) is explained based on the interplay between the magnitude of the surface charge density and the extent of electrostatic screening (Figure 2C), quantified by the Debye length (\(\lambda_{D}\)). This is defined as \(\lambda_{\text{D}}^{2}=\epsilon\text{k}_{\text{B}}\text{T}/4\pi\text{e}^{2}C_{ \text{b}}\) and is inversely proportional to the square root of the bulk ionic concentration (\(C_{b}\)). Thus, it monotonically decreases with increasing concentration. On the other hand, because of the chemical equilibrium at the interface, the magnitude of surface charge density and the diffuse layer potential both increase with concentration (Figure S4). Hence, a relatively high surface charge density and a moderate electrostatic screening give rise to an optimum condition and a peak in V\({}_{\text{OC}}\) at intermediate concentrations. To elucidate the significance of surface charge, we measured V\({}_{\text{OC}}\) for a single device at 1 uM concentration of KCl but varying the pH of the
electrolyte across the isoelectric point by adding different concentrations of HCl (see time trace in Figure S6). As shown in Figure 2D, we observe a decrease in the magnitude of V\({}_{\text{OC}}\) with decreasing pH and then a change in sign below the isoelectric point. This highlights that the V\({}_{\text{OC}}\) is directly correlated to the surface charge, as the silanol groups at the surface can exist in one of the forms _Si-O\({}^{-}\)_, _Si-OH_, or _Si-OH\({}_{2}^{+}\)_ depending on the pH, thus modulating the surface charge density and sign.
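To make the screening argument above concrete, the short sketch below evaluates the Debye length over the concentration range probed in Figure 2A, using the equivalent SI expression for a monovalent salt. The temperature and dielectric constant are taken from the simulation parameter list (S4); the snippet is purely illustrative and is not part of the COMSOL model.

```python
# Minimal sketch (illustrative, not part of the model): Debye length of a 1:1
# electrolyte, lambda_D = sqrt(eps0*eps_r*kB*T / (2*NA*e^2*c)), showing the monotonic
# decrease of the EDL thickness with bulk concentration discussed in the text.
import numpy as np

e, kB, NA, eps0 = 1.602e-19, 1.381e-23, 6.022e23, 8.854e-12
eps_r, T = 78.5, 300.0          # values assumed from the simulation parameters (S4)

def debye_length(c_molar):
    """Debye length in metres for a monovalent salt at molar concentration c."""
    c = c_molar * 1e3 * NA      # number density of each ionic species (1/m^3)
    return np.sqrt(eps0 * eps_r * kB * T / (2.0 * e**2 * c))

for c in [1e-6, 1e-4, 1e-3, 1e-1]:      # 1 uM ... 0.1 M, the range of Figure 2A
    print(f"{c:8.0e} M  ->  lambda_D = {debye_length(c) * 1e9:6.1f} nm")
```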
Figure 2: **Effect of Low to Moderate Electrolyte Concentrations.****A)** Dependence of the measured \(V_{\text{OC}}\) on the electrolyte concentration for a series of samples with different diameters (D\({}_{\text{np}}\)) and lengths (L\({}_{\text{np}}\)) of the NP. Lines are only a guide to the eye. **B)** Dependence of the simulated \(V_{\text{OC}}\) on the electrolyte concentration. Two different values of the free parameter gamma (the number of available sites on the silica surface) are used to show the importance of surface charge modulation (solid lines \(\Gamma\) = 8, dashed lines \(\Gamma\) = 12). Lines are only a guide to the eye. **C)** Schematic of the individual NP, showing a qualitative difference in surface charge density for \(\Gamma\) = 8 and \(\Gamma\) = 12. The middle graph shows a qualitative difference in the surface potential \(\psi\) and decay of the EDL potential due to a change in surface charge density. **D)** Measured V\({}_{\text{OC}}\) at 1 \(\upmu\)M concentration for different pH values. The flipping of the sign of the surface charge across the isoelectric point is reflected in the sign of the measured V\({}_{\text{OC}}\). Depending on the pH, the magnitude of \(\Gamma\) and the populations of the different silanol groups change, which modulates the sign and magnitude of the surface charge.
The values of V\({}_{\rm{OC}}\) for deionized (DI) water are also shown in Figure 2A (left-most point in the graph). DI water in standard conditions has a hydronium ion concentration of 10\({}^{\text{-}7}\) M and therefore the Debye length is large compared to that of 10\({}^{\text{-}6}\) M KCl. Thus, from the perspective of EDL thickness, we might expect that DI water should give the highest V\({}_{\rm{OC}}\). Interestingly, we observe that, in the case of DI water, V\({}_{\rm{OC}}\) is much lower than for the 1 \(\upmu\)M electrolyte. This observation is consistent with previous reports, although an accurate explanation has not been provided yet. From our equivalent circuit model, we note that the open circuit voltage is directly proportional to the ionic resistance (Eqn. (2)). R\({}_{\rm{ionic}}\), in turn, is inversely proportional to the mobility of the ions. Thus, higher (lower) mobility ions are expected to result in lower (higher) open circuit voltages. In particular, for the studied system with a negative surface charge, we anticipate a stronger dependence on the cation mobility. In the case of DI water, hydronium ions dominate the ionic transport; owing to the Grotthuss hopping mechanism[37], they have a much higher mobility than the potassium cations used in the electrolyte, which results in a much lower ionic resistance and a sharp decrease in V\({}_{\rm{OC}}\).
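The equivalent-circuit argument can be illustrated with a toy calculation. The exact form of Eqn. (2) is not reproduced here; the sketch simply assumes \(V_{OC}=I_{str}R_{ionic}\) with a bulk-like \(R_{ionic}\) set by the dominant cation, and the channel geometry, concentration, and streaming current are arbitrary placeholder values. The Li\({}^{+}\), K\({}^{+}\), and Cs\({}^{+}\) mobilities are those quoted in the next paragraph, while the hydronium mobility is a literature value included for comparison.

```python
# Toy equivalent-circuit estimate (assumed forms and placeholder numbers): for a fixed
# streaming current, a higher mobility of the dominant cation lowers R_ionic and hence
# the open-circuit voltage, as argued for DI water vs. 1 uM KCl above.
e = 1.602e-19
mobilities = {                       # cation mobilities in m^2/(V s)
    "Li+": 3.6e-8, "K+": 7.32e-8, "Cs+": 8.2e-8,
    "H3O+ (Grotthuss)": 3.6e-7,      # literature value, assumed for comparison
}
c = 1e-3 * 1e3 * 6.022e23            # 1 mM expressed as ions per m^3
L, A = 2.3e-6, 1e-12                 # assumed channel length (m) and cross-section (m^2)
I_str = 1e-12                        # assumed fixed streaming current (A)

for ion, mu in mobilities.items():
    R_ionic = L / (A * e * c * mu)   # bulk-like conduction carried by the cation only
    V_oc = I_str * R_ionic           # equivalent-circuit assumption, V_OC ~ I_str * R_ionic
    print(f"{ion:18s} R_ionic = {R_ionic:10.3e} Ohm   V_OC = {V_oc * 1e3:7.3f} mV")
```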
To gain a deeper understanding of the effect of ionic mobility, we selected three chloride salts, namely LiCl, KCl, and CsCl, whose cations have increasing mobilities (\(Li^{+}<K^{+}<Cs^{+}\)) of 3.6, 7.32, 8.2 x10\({}^{\text{-}8}\) m\({}^{2}\)/Vs in aqueous medium[23]. For each salt, we varied the concentration from 1\(\upmu\)M to 0.1M and recorded the V\({}_{\rm{OC}}\) using the same device (Figure 3A). Both our experimental data and numerical modelling results (Figure 3B), which are in excellent agreement, clearly show that V\({}_{\rm{OC}}\) increases with decreasing ion mobility, for the studied range of concentrations. Interestingly, we observe that the ionic mobility also influences the V\({}_{\rm{OC}}\) peak at intermediate concentrations, which we previously related to the concentration dependent surface charge. We want to further highlight an interesting observation that connects the dependence of ionic mobility and the length of NP. The increase in length of NP and decrease in ionic mobility, both result in more pronounced intermediate peaks, which increase with surface charge, as shown in Figure 2A, B and Figure 3A, B. In particular, the lowest mobility cation (\(Li^{+}\)) exhibits the strongest V\({}_{\rm{OC}}\) peak. Next, to quantify the difference in ion mobility as well as the contribution of cations and anions to R\({}_{\rm{ionic}}\), we employed electrochemical impedance spectroscopy measurements for 1mM concentration of LiCl, KCl, and CsCl as well as KCl, KBr, KI. Figure 3C shows the diffusivity, and
therefore the ionic mobility, determined by fitting the Nyquist plots using an appropriate electrical circuit (Figure S7). Interestingly, the large variation in diffusivity for different cations compared to the case of different anions confirms that cations dominate the ion transport in the studied system, as expected for a system with negative surface charge.
Next, we measure the I-V characteristic of the device (Figure S8) and determine the peak power (P\({}_{\text{max}}\)) at each concentration (Figure 3D). We observe a non-monotonic power generation dependence on concentration, with a minimum at around 0.1 mM. Based on the previous discussion, the increase in P\({}_{\text{max}}\) at higher (above 1 mM) concentrations is attributed to the decrease in R\({}_{\text{ionic}}\), which dominates over the decrease in V\({}_{\text{OC}}\) as expressed in Eqn. (3). As a reference, we measured the bulk electrolyte conductivity for 0.01 mM, 1 mM, and 10 mM as 1.64 \(\upmu\)S, 95.4 \(\upmu\)S, and 0.89 mS, respectively. Importantly, we note that R\({}_{\text{ionic}}\) has contributions from both bulk and surface conduction, which leads to a non-monotonic concentration-dependent change in the net ionic conductivity. This leads to an increase in power generation at low concentrations, which is attributed to the EDL overlap condition, where R\({}_{\text{ionic}}\) decreases due to an increase in surface conductivity. The relative contribution of the bulk and surface conductivity is governed by the concentration and the size of the liquid confinement, which will be discussed in more detail in the last section.
### Effect of high concentration (>1 M) on surface charge and ion transport
The Poisson-Boltzmann equation is based on mean field theory where one can ignore ion correlations as well as the adsorption of ions to the surface[38, 39]. It thus holds for low
Figure 3: **Effect of ionic mobility on HV device performance at low-to-moderate concentration.****A)** Measured concentration-varying \(V_{OC}\) and its dependence on the counter-ion mobility. The studied electrolytes are LiCl (black curve), KCl (blue curve), and CsCl (green curve) in DI water. **B)** Calculated \(V_{OC}\) as a function of electrolyte concentration for different ionic mobilities, following the order \(Li^{+}<K^{+}<Cs^{+}\). Two different values of the free parameter \(\Gamma\) are used to show the variation with respect to the surface charge density. **C)** Diffusivity/mobility (related by the Nernst-Einstein relation) of salts with different anions and cations determined from electrochemical impedance spectroscopy measurements for a constant concentration of 1 mM. **D)** Measured maximum power output as a function of electrolyte concentration (KCl) for different diameters and lengths of the NPs. The colors correspond to the same devices as in Figures 2A and 4B.
electrolyte concentrations (up to \(\sim\)0.1 M), when \(l_{B}^{3}C_{b}\ll 1\), where \(l_{B}=e^{2}/\varepsilon k_{B}T\) is the Bjerrum length of the solution, \(C_{b}\) is the bulk ion concentration, and \(\varepsilon\) is the dielectric constant of the electrolyte. However, for higher concentrations, the effect of ion-ion correlation becomes important. As a result, contrary to the Debye theory discussed above, the screening length (\(\kappa^{-1}\)) increases with concentration and becomes dependent on the size of the ions. In particular, for concentrated electrolytes, this dependence scales as[32] \(\kappa^{-1}\sim l_{B}C_{b}a^{3}\), where \(a\) is the diameter of the counter-ion. Furthermore, at concentrations above \(\sim\)1 M, the influence of ion adsorption becomes relevant, leading to a reduction in the effective surface charge. This effect therefore reverses the trend discussed earlier of increasing surface charge with increasing concentration. In particular, in the case of multivalent ions (Z \(\geq\) 2), screening by counterions not only reduces the effective surface charge but can also reverse its sign.
This counterintuitive phenomenon is called _surface charge inversion_ and arises due to the counterions forming a strongly correlated liquid (SCL)[34, 40, 41], where the electrostatic interactions are much larger than thermal fluctuations (\(k_{B}T\)). Surface charge inversion has been already observed for \(Z=3\)_and_ 4[34]. Using pressure-driven streaming current measurements, it was also reported for \(Z=2\). However, there was an order of magnitude deviation from SCL theory in terms of the predicted minimum required ion concentration[33]. This deviation can be attributed to the omission of screening effects, which significantly enhances charge inversion[42]. Recently, direct measurements of surface excess charge using fluorescein particles have shown that at very high concentrations (>1M) even monovalent salts (Z=1) in silica confinement can give rise to charge inversion[43]. We thus explored experimentally the HV response of our devices for high electrolyte concentrations, including chloride salts of different-sized monovalent cations (Li\({}^{+}\), K\({}^{+}\), and Cs\({}^{+}\)).
We show in Figure 4A the time traces of \(V_{OC}\) for a few representative samples tested with 1 M KCl (purple and yellow curves), LiCl (green curve), and CsCl (pink curve). Upon wetting, we observe that _all_ the curves exhibit an initial negative \(V_{OC}^{0}\) value, where \(V_{OC}^{0}=V_{OC}(t=0)\), which was not observed for lower concentrations (see the gray curve for 1 mM KCl concentration, and Figure S3). Interestingly, \(V_{OC}^{0}\) becomes more negative with increasing salt concentration (from 0.5 M up to 4 M) as well as for decreasing length of the Si NPs and increasing liquid nanoconfinement (Figure 4B, left). This can be directly related to the effect of
counter-ion adsorption close to the Stern-plane leading to a reduction, or even inversion, of the surface charge. We note here that, based on Eqn. 14, a change in V\({}_{\text{oc}}\) sign must be related to surface charge inversion. Yet, a positive V\({}_{\text{oc}}\) cannot exclude such an effect due to the additional voltage term introduced by the asymmetrical device structures.
Considering a fixed surface charge of 1 e/nm\({}^{2}\), which was measured previously for silicon HV structures[12], we can estimate the magnitude of the short-range interaction, which can give rise to charge inversion, as 2.7, 4.5, 8.8, and 14.3 k\({}_{\text{B}}\)T for ion valence Z = 1, 2, 3, and 4 respectively[41]. This suggests that for monovalent cations, the high-concentration regions near the surface are mobile and are not expected to sustain charge inversion when equilibrium is achieved. Indeed, for some samples, the measured \(V_{\text{OC}}\) changes over time, eventually stabilizing on a positive \(V_{\text{OC}}^{\infty}\) value even at very high concentrations (Figure 4A, purple and pink curves, and Figure 4B (right)). In these cases, we can quantify the effective voltage generated by each sample as the difference between the initial and final V\({}_{\text{OC}}\) values, i.e. \(V_{\text{OC}}^{eff}=~{}V_{\text{OC}}^{\infty}-~{}V_{\text{OC}}^{0}\). As a function of concentration, we observe a small but positive increase in \(V_{\text{OC}}^{eff}\) (Figure S9), consistent with the increase in screening length at high concentration. Interestingly, we also repeatedly observed steady-state sign inversion of \(V_{\text{OC}}\), which we postulate is related to charge inversion at high concentrations (Figure 4A, green and yellow curves).
In order to overcome sample-to-sample variability and verify this observation (i.e., Figure 4A, yellow and purple curves), we first naively apply the SCL theory to calculate the critical concentration \(C_{0}\)[34]. For Z = 1 and \(\sigma\sim\)1 e/nm\({}^{2}\) we obtain \(C_{0}\sim\) 1 \(M\). Based on this, we measured \(V_{\text{OC}}^{\infty}\) for 4 different samples (S1-S4) at 1 M electrolyte concentration using different cations like Li\({}^{+}\), K\({}^{+}\), and Cs\({}^{+}\) (Figure 4C). We observed that negative \(V_{\text{OC}}^{\infty}\), and therefore charge inversion, is most likely in the case of small cations like \(Li^{+}\), while no flipping in the \(V_{\text{OC}}^{\infty}\) sign was observed for big cations like \(Cs^{+}\). This result is consistent with previous direct measurements of the surface excess of fluorescein[43], which was positive for LiCl and NaCl (implying charge inversion of the silica surface), and with the corresponding decay length at high concentrations. The dependence of the ion size effect on charge reversal can indeed be attributed to the EDL potential decay length at high concentrations, which scales as \(\kappa^{-1}\sim l_{B}C_{b}a^{3}\), and to the specific adsorption that leads to excess cationic surface charge.
Figure 4: **Effect of electrolyte concentration on the measured \(V_{OC}\) at high concentration.****A)** Time-trace of \(V_{OC}\) for various concentrations and geometrical parameters of the NP. In contrast to low concentration, the voltage at t=0 is negative instead of zero. **B)** Measured \(V_{OC}\) at t=0 (\(V_{OC}^{0}\), left) and steady-state \(V_{OC}\) (\(V_{OC}^{\infty}\), right) as a function of KCl concentration and for different samples (lines are a guide to the eye). **C)** Measured \(V_{OC}^{\infty}\) for 4 different samples (S1-S4) with 1.2 \(\upmu\)m length of the NPs and varying diameter of the NPs, at 1 M concentration for different monovalent cations. Charge inversion (i.e. negative \(V_{OC}^{\infty}\)) is more likely to happen for small ions like _Li\({}^{+}\)_.
Overall, in terms of HV device performance for operation in saline conditions, like seawater or brine, these results suggest that specific electrokinetic interactions should be accounted for while engineering surface charge and device geometry. For example, based on the composition of the natural water[41], it would be important to engineer the interface properties to limit the surface charge density such that the critical concentration, C\({}_{0}\) is much higher than the concentration of ions in water. Furthermore, together with the result of the previous section, these measurements confirm that effective operation in high concentration solutions (i.e. brine) would also be possible.
### Effect of geometry and liquid nanoconfinement
The impact of changes in surface charge and screening length on the device performance are strongly dependent on the size of liquid nanoconfinement, and therefore the geometry of the device. Although we have shown some selected results in the previous sections, we now analyze more in detail the effect of the mean nanopore diameter, D\({}_{\text{p}}\), and length, L\({}_{\text{np}}\). Firstly, we observe that the effect of geometrical parameters on measured V\({}_{\text{OC}}\) and P\({}_{\text{max}}\) is complex and nonlinear (Figure 5). Indeed, changes in D\({}_{\text{p}}\) and L\({}_{\text{np}}\) modify the interfacial area (A\({}_{\text{S-L}}\)). This, in turn, determines the net surface charge, the streaming current (I\({}_{\text{str}}\)), and the associated ionic resistance (R\({}_{\text{ionic}}\), see Eqns. M4-5). In particular, based on Eqns. 4A-C, A\({}_{\text{S-L}}\) decreases with D\({}_{\text{p}}\) and increases with L\({}_{\text{np}}\), leading to an overall increase in I\({}_{\text{str}}\). Additionally, we observe that R\({}_{\text{ionic}}\) can be also expressed as: R\({}_{\text{ionic}}\equiv\) 1/ (1/R\({}_{\text{B}}\) + 1/R\({}_{\text{S}}\)), where R\({}_{\text{B}}\) = L\({}_{\text{np}}\) /(\(\pi\)D\({}_{\text{p}}^{2}\) \(\sigma\)\({}_{\text{B}}\)) is the bulk ionic resistance and \(\sigma\)\({}_{\text{B}}\) is the bulk conductivity, while R\({}_{\text{S}}\) is the surface resistance caused by the presence of the EDL and its distinct ionic conductivity. At small D\({}_{\text{p}}\)(or D\({}_{\text{min}}\)), (Eqn. 4C), the EDL overlap increases, resulting in a higher space charge density and therefore a lower surface resistance contribution to R\({}_{\text{ionic}}\), as well as a higher streaming current.
Overall, the effect of surface resistance is quantified by a non-dimensional _Dukhin number_, Du = \(\sigma_{S}/(\sigma_{B}D_{p})\), which compares the surface conductivity \(\sigma_{S}\) to the bulk conductivity \(\sigma_{B}\) for a given confinement size. With increasing nanopore diameter, the bulk resistance
decreases, while the surface resistance can increase significantly, and therefore the relative contribution of the two is primarily determined by the diameter of the nanopore and bulk ionic concentration. From Figure 5A, B, we identify three different regimes with respect to change in the size of the nanoconfinement (i.e. D\({}_{\text{p}}\)) in the following order: 1) EDL overlapping regime, where, for larger D\({}_{\text{p}}\) values, the space charge decreases leading to a lower \(V_{OC}\) (blue shaded area). 2) Streaming current dominated regime, where an increase in _D\({}_{P}\)_ leads to higher streaming current (red shaded area). 3) Ionic resistance dominated regime, where a decrease in ionic resistance limits \(V_{OC}\) (grey shaded area) which can level off or even decrease. The concurrent variation in P\({}_{\text{max}}\) is due to the interplay between the ionic resistance and Voc as shown in Eqn. (4). It can be noted from our measurements that not all samples/electrolyte concentrations show the existence of all regimes. This is due to the limited number of samples with different geometrical parameters used for measurements and some intrinsic variability in the initial surface charge.
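As a rough numerical illustration of this bookkeeping, the sketch below combines the bulk and surface resistances in parallel, evaluates the Dukhin number, and estimates the output power under a matched-load assumption. The expression for \(R_{B}\) mirrors the one given above, while the surface conductivity, bulk conductivity, streaming current, and the matched-load form \(P_{max}=V_{OC}^{2}/(4R_{ionic})\) are assumptions made for illustration and are not taken from Eqns. (2)-(4).

```python
# Rough sketch (assumed forms and placeholder numbers): bulk and surface conduction in
# parallel, R_ionic = 1/(1/R_B + 1/R_S), with R_B = L/(pi*D^2*sigma_B) as given above
# and R_S = L/(pi*D*kappa_s); Du = kappa_s/(sigma_B*D) weighs the two contributions.
import numpy as np

L = 2.3e-6          # channel length (m), as in Fig. 5A
sigma_B = 1.5e-2    # assumed bulk conductivity (S/m), of the order of 1 mM KCl
kappa_s = 1e-9      # assumed surface conductivity (S)
I_str = 1e-12       # assumed streaming current (A)

for D in [50e-9, 200e-9, 500e-9]:                 # representative mean pore diameters
    R_B = L / (np.pi * D**2 * sigma_B)            # bulk resistance
    R_S = L / (np.pi * D * kappa_s)               # surface (EDL) resistance
    R_ionic = 1.0 / (1.0 / R_B + 1.0 / R_S)
    Du = kappa_s / (sigma_B * D)
    V_oc = I_str * R_ionic                        # equivalent-circuit assumption
    P_max = V_oc**2 / (4.0 * R_ionic)             # matched-load assumption
    print(f"D = {D*1e9:5.0f} nm: Du = {Du:6.2f}, R_ionic = {R_ionic:9.3e} Ohm, "
          f"V_OC = {V_oc*1e6:8.3f} uV, P_max = {P_max:9.3e} W")
```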
Finally, with an increase in the length of the NP, the ionic resistance increases, and therefore \(V_{OC}\) is expected to increase too. However, for a very long etching time, the surface composition can change, with a reduction of the available surface sites on the NP due to HF removal of the native oxide layer on silicon. As a result, we observe a V\({}_{\text{OC}}\) saturation for large etching times, i.e., for longer Si NPs, as shown in Figure 5C. We would furthermore draw the reader's attention to the fact that other factors, such as a change in the fluidic flow pattern and sticking of long NPs due to surface forces, can also affect the electrokinetic response. However, the discussion of those aspects is beyond the scope of this manuscript. Interestingly, if we consider the peak power per unit solid-liquid interface area (Figure 5D), we observe that it decreases with an increase in interface area. This means that an optimum value of peak power exists for a given pitch and diameter of the NPs. Overall, this shows that the geometrical parameters (D\({}_{\text{p}}\), A\({}_{\text{S-L}}\), and L\({}_{\text{NP}}\)) play a critical role in controlling the performance metrics of the HV device, which have a complex interplay with the surface charge. In particular, for optimizing the device performance one has to carefully identify the regime of operation for a given set of geometrical parameters and the variability of the surface charge depending on the fabrication process.
## Conclusion
To summarize, we found that the ion-concentration-dependent surface charge, together with the ion mobility, entails optimal HV operating conditions that deviate from conventional low-concentration expectations. Additionally, we observed that HV devices can successfully operate at concentrations exceeding the critical value (\(C_{0}\)), with charge inversion affecting the sign and magnitude of the steady-state V\({}_{\text{OC}}\). This leads us to conclude that to-date studies of evaporation-driven HV devices have been unnecessarily limited to deionized water or low ion concentrations (regime of EDL overlap for \(\sim\)10-100 nm liquid confinement size). Instead, our results show that
Figure 5: **Dependence of geometrical parameters on the measured electrical outputs.****A)** Measured _V\({}_{\text{oc}}\)_ as a function of the mean pore diameter (\(D_{p}\)) obtained by varying the diameters of the Si NP. Data are for a fixed length of the NP equal to 2.3 \(\upmu\)m. **B)** Measured peak power as a function of the mean pore diameter for the same length of NP and various concentrations. The different colored regions represent the different regimes of ion transport as described in the text. **C)** Measured _V\({}_{\text{oc}}\)_ as a function of the normalized solid-liquid interface area, \(A_{\text{S-L}}\). This is varied by changing the length of the NP (\(L_{np}\)), as shown on the top axis, for a fixed NP diameter equal to 480 nm. **D)** Measured power normalized by solid-liquid interface area. The respective lengths of NPs are shown on the top axis.
there is ample margin for improving their performance and extending their scope within the wide range of salinity conditions available in natural and processed water. More broadly, our controlled experiments and unique quantitative modeling clarify the complex interplay of surface charge, liquid properties, and geometrical constraints in evaporation-driven HV systems. They indeed uncover how different combinations of solid and liquid physical properties define distinct limiting regimes of operation. In this regard, Figure 6A highlights four main controlling quantities we have identified, namely ion concentration (C), surface charge (\(\sigma\)), solid-liquid interface (A\({}_{\text{solid-liquid}}\)), and liquid nanoconfinement (D\({}_{\text{p}}\)), the latter two depending in our devices on the length (L\({}_{\text{np}}\)) and diameter (D\({}_{\text{np}}\)) of the Si NPs. It also shows how changes in the magnitude of these parameters impact V\({}_{\text{OC}}\) and P\({}_{\text{max}}\). Importantly, within this complex and multidimensional parameter space, it is possible to take advantage of different interfacial phenomena and identify suitable operating conditions for different salinity values. We found that for fresh water conditions (\(\sim\)1 mM), EDL overlap is necessary. This can be realized, as expected, under low total surface charge and small nanochannel size (high confinement). Yet, by increasing the surface area or the surface charge, larger nanochannel sizes become viable. Furthermore, under seawater conditions (from \(\sim\)10 mM up to \(\sim\)100 mM), thanks to the chemical equilibrium at the interface, by controlling the surface charge (i.e., the \(\Gamma\) value in Figure 2B) an optimum can be engineered at large D\({}_{\text{p}}\) values (\(>\) 100 nm). This implies that nm-scale geometrical confinement can be avoided, simplifying the scalability of HV devices. Finally, at high salinity levels (above 1 M) charge inversion can be leveraged by minimizing the solid-liquid interface and the initial surface charge. Yet, long-term operation at very high concentrations can be challenging due to ion adsorption and salt crystallization, which directly affect the surface properties and geometry of the nanostructure (Figure S10). Thus, it will require further investigation.
From the perspective of device geometry engineering, our results confirmed that structural asymmetries lead to an open circuit voltage contribution that is entirely due to electrostatic effects. Importantly, this term is what generates a sizable V\({}_{\text{OC}}\) (\(>\)0.1 V) in our system, despite very low evaporation rates (Figure S2). This could be further enhanced by engineering a spatially non-homogeneous surface charge distribution through chemical or physical processing[45, 46]. Interestingly, taking advantage of our comprehensive model for predicting the performance of
HV devices, we expect that the V\({}_{\text{OC}}\) performance metric can be further augmented by improving the rate of evaporation. Figure 6B shows that, depending on the surface charge and the geometry of the nanoconfinement, V\({}_{\text{OC}}\) can be doubled by a five-fold increase of the evaporation rate compared to ambient conditions, which means that the power can be increased up to four times. This is largely due to the enhancement in streaming current; confirming it will require a more in-depth understanding of the fluid dynamics in HV devices. Overall, based on these guidelines, by knowing the detailed composition of ions in the used water, it becomes possible to optimize the geometrical and interfacial properties of evaporation-driven HV devices. Thus, our work, which lies between nanofluidic studies of individual nanochannels and macro-scale porous device testing, offers critical insight into how to enhance the performance of evaporation-driven HV devices and points towards broader application opportunities for these self-powered systems.
Figure 6: **Strategies towards boosting HV device performance.****A)** Contour plots of the measured V\({}_{\text{OC}}\) and P\({}_{\text{max}}\) across a wide range of salinity conditions available in natural and processed water. At low salinity, optimal performance can be achieved by leveraging the EDL overlap. At intermediate salinity, optimal operation can be achieved by engineering the surface charge and geometrical parameters, which is also highlighted by the intermediate peaks in Figure 2A-B. At very high salinity, one can leverage the charge inversion regime for optimal operation of the device. **B)** Simulated open circuit voltage for a series of different geometrical parameters as a function of evaporative flux for two values of surface charge. This shows that the performance can be further augmented by fluid dynamic considerations and by enhancing the rate of evaporation from the system.
## Methods
### Fabrication of Silicon nanopillars array
Metal-assisted chemical etching (MACE) of crystalline silicon combined with colloidal lithography was used for the fabrication of cm-scale arrays of silicon NPs[47, 48] (Figure S11). It involves the self-assembly of polystyrene (PS) nanospheres at the water-air interface. The non-close-packed assembly of PS nanospheres was then compressed to a pressure of approximately 25-30 N/m\({}^{2}\) using the Langmuir-Blodgett system, which resulted in a homogeneous close-packed hexagonal lattice of PS nanospheres[49]. The close-packed monolayer was then transferred to Piranha-cleaned silicon substrates which were diced into 2 cm \(\times\) 2 cm chips. Plasma etching was used to reduce the diameter of the PS nanospheres with an initial diameter of \(d=600nm\). After gold sputtering deposition of thickness \(20nm\) and lift-off, a gold nanomesh is formed which is used as an etching mask for MACE. Prior to gold sputtering, 3-5 nm of Ti is sputtered as an adhesion layer. This forms a stable contact between gold and the substrate to avoid delamination during MACE. The lift-off was done by putting the substrate in toluene and ultrasonicating it at moderate power for 3-5 minutes at room temperature. Finally, MACE was performed by putting the substrate in an aqueous HF/H\({}_{2}\)O\({}_{2}\) solution with volumetric percentages of HF and H\({}_{2}\)O\({}_{2}\) of 10% and 2%, respectively. The diameter of the NPs is controlled by changing the time of plasma trimming of the polystyrene nanospheres, while the length of the NPs is controlled by changing the MACE time. The obtained sample was then treated with oxygen plasma (60 sec, 1000 W) to improve the hydrophilicity and surface charge.
### Electrical Measurements
The electrical measurements configuration is shown in Figure 1A, in which silicon NP is the active substrate and the Al layer acts as a back electrode while Ag/AgCl is used as a top electrode. For comparison, we tested the planar silicon device and NP device with Ag/AgCl electrodes and different porous top electrodes (Figure S12). The planar silicon gives almost zero \(V_{OC}\). We used an Ag/AgCl electrode as it is considered fully reversible, which ensures that the charges accumulated in the electrode EDL are entirely consumed by the electrodes at overpotentials,
which are practically zero, and there is no unwanted potential difference that would induce a conduction current. The open circuit voltage, I-V, and EIS measurements were done using a CHI bipotentiostat. During the EIS measurements, in order to cover the entire area of the active silicon NPs, a porous graphite electrode was used instead of Ag/AgCl. I-V characteristics were obtained using linear sweep voltammetry (-0.5 V to 0.5 V) at a scan rate of 0.01 V/s. The EIS measurements were done at zero applied DC voltage with an amplitude of 10 mV in the frequency range of 100 Hz-1 MHz.
### Modelling and simulation
\[\nabla\cdot J_{i}+U\cdot\nabla c_{i}=0 (\textbf{{M1}})\] \[J_{i}=-D_{i}\nabla c_{i}-z_{i}u_{i}^{m}Fc_{i}\nabla\Psi (\textbf{{M2}})\] \[\nabla^{2}\Psi=-\frac{1}{\epsilon_{0}\epsilon_{r}}\sum_{i}Fz_{i} c_{i}exp\left(-\frac{ez_{i}\Psi}{k_{B}T}\right) (\textbf{{M3}})\]
For modeling the evaporation-driven electrokinetic conversion phenomenon in an array of vertical NPs, we considered different modes of ion transport using the Nernst-Planck equation for the transport of dilute species, coupled with the Poisson-Boltzmann equation for the equilibrium distribution of ions. The details of the simulation, such as the package used and the boundary conditions, are given in S13. The considered transport modes are diffusion (which becomes relevant because only the bottom part of the nanopores carries a surface charge), streaming of ions due to the evaporation-driven flow, and migration of ions due to the EDL potential. Based on the potential and charge distribution, it is also possible to compute the streaming current and ionic resistance to be used in the equivalent electrical circuit model:
\[I_{str}=\int_{A}\sum_{i}ez_{i}n_{i}(r)U(r)dA (\textbf{{M4}})\] \[\frac{1}{R_{ionic}}=\int_{A}\left[\frac{1}{L}\sum_{i}e\mu_{i}^{m}n_{i}(r)+\frac{\epsilon_{0}\epsilon_{r}}{\eta L}(\Psi(r)-\Psi_{0})\right]dA (\textbf{{M5}})\]
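The post-processing in Eqns. (M4)-(M5) amounts to integrating the computed radial profiles over the annular cross-section. The sketch below shows this step with placeholder profiles; in the actual workflow the ion densities, velocity, and potential are taken from the COMSOL solution, and the mobility, radii, and wall potential used here are assumed values.

```python
# Sketch of the post-processing in Eqns. (M4)-(M5): integrate radial profiles over the
# annular cross-section R1 < r < R2 to obtain the streaming current and the ionic
# conductance.  All profiles and parameters below are placeholders.
import numpy as np

e, z, mu = 1.602e-19, 1, 7.6e-8           # monovalent counter-ion, assumed mobility (m^2/Vs)
eps0, eps_r, eta = 8.854e-12, 78.5, 1e-3  # permittivity and water viscosity (Pa s)
R1, R2, L = 50e-9, 300e-9, 2.3e-6         # assumed annulus radii and channel length (m)
Psi0 = 0.0                                # reference potential (V)

r = np.linspace(R1, R2, 4000)
dr = r[1] - r[0]
n = 6.0e23 * (1.0 + 5.0 * np.exp(-(R2 - r) / 10e-9))    # placeholder counter-ion density (1/m^3)
U = 1e-4 * (r - R1) * (R2 - r) / ((R2 - R1) / 2) ** 2   # placeholder axial velocity (m/s)
Psi = -0.05 * np.exp(-(R2 - r) / 10e-9)                 # placeholder EDL potential (V)

dA = 2.0 * np.pi * r * dr                 # annular area element
I_str = np.sum(e * z * n * U * dA)                                               # Eqn. (M4)
G_ion = np.sum((e * mu * n / L + eps0 * eps_r * (Psi - Psi0) / (eta * L)) * dA)  # Eqn. (M5)
print(f"I_str ~ {I_str:.3e} A,   R_ionic ~ {1.0 / G_ion:.3e} Ohm")
```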
Performing a full 3D simulation for a hexagonal lattice is computationally expensive, so we simplified our simulation by transforming it to an equivalent annular cylindrical geometry. During this transformation, the two important parameters, the hydraulic diameter and the solid-liquid interfacial area, were kept fixed, which gives unique inner and outer radii \(R_{1}\) and
\(R_{2}\) respectively. An evaporative flux is used to impose the flow rate condition (Figure S2). A spatially uniform surface charge was used as a boundary condition.
\[\Psi_{d}(\sigma) =\frac{k_{B}T}{e}\left[ln\left(\frac{-\sigma}{\sigma+e\Gamma} \right)-\ln(10)(pH-pK)\right]-\frac{\sigma}{C} (\emph{M6})\] \[\sigma\left(\psi_{d}\right) =\frac{2\epsilon_{0}\epsilon\kappa k_{B}T}{e}\left[\sinh\frac{e \psi_{d}}{2k_{B}T}+\frac{2}{\kappa r}\tanh\frac{e\psi_{d}}{4k_{B}T}\right] (\emph{M7})\]
The magnitude of the surface charge is governed by the equilibrium between the silica surface and the electrolyte (M6). In the case of an isolated surface with a curvature or a non-overlapping double layer, the charge density on the surface satisfies the Grahame equation[35] (M7), and by solving this equation coupled with M6, we can obtain the surface charge as a function of the electrolyte concentration. In our model, we use \(\Gamma\) as a free parameter as it can vary largely depending on the surface preparation. Here, we use different values of \(\Gamma\) based on the available literature to show how the number of surface sites available for deprotonation affects the surface charge, and therefore the \(V_{OC}\). We did not attempt to retrieve the experimental curve using our simulation, because we used the Grahame equation together with chemical equilibrium boundary conditions that hold for isolated surfaces. No analytical solution is available for a non-isolated surface; in that case, one has to sweep \(\sigma\) over a range of surface potential values and then find the intersection point with the chemical equilibrium condition.
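The closure described above can be illustrated numerically: for each bulk concentration, one searches for the surface charge at which the diffuse-layer potential from the chemical-equilibrium relation (M6) is consistent with the curvature-corrected Grahame equation (M7). In the sketch below, the parameters follow those listed in S4, \(\Gamma\) = 8 sites/nm\(^{2}\) is assumed in line with the solid curves of Figure 2B, and the code is an illustrative reimplementation rather than the boundary condition used in COMSOL.

```python
# Sketch of the charge-regulation closure: find sigma such that the diffuse-layer
# potential from Eqn. (M6) also satisfies the Grahame equation (M7).
import numpy as np
from scipy.optimize import brentq

e, kB, NA, eps0 = 1.602e-19, 1.381e-23, 6.022e23, 8.854e-12
eps_r, T, pH, pK = 78.5, 300.0, 7.0, 7.2
C_stern, Gamma, r_c = 0.29, 8e18, 300e-9     # F/m^2, sites/m^2 (assumed), m
kT_e = kB * T / e

def psi_d_equilibrium(sigma):                # Eqn. (M6)
    return kT_e * (np.log(-sigma / (sigma + e * Gamma))
                   - np.log(10) * (pH - pK)) - sigma / C_stern

def sigma_grahame(psi_d, kappa):             # Eqn. (M7)
    return (2 * eps0 * eps_r * kappa * kB * T / e) * (
        np.sinh(e * psi_d / (2 * kB * T))
        + (2 / (kappa * r_c)) * np.tanh(e * psi_d / (4 * kB * T)))

for c_molar in [1e-6, 1e-4, 1e-2, 1e-1]:
    n = c_molar * 1e3 * NA
    kappa = np.sqrt(2 * e**2 * n / (eps0 * eps_r * kB * T))   # inverse Debye length
    f = lambda s: sigma_grahame(psi_d_equilibrium(s), kappa) - s
    sigma = brentq(f, -1.2, -1e-6)           # bracket inside (-e*Gamma, 0)
    print(f"{c_molar:8.0e} M: sigma = {sigma*1e3:8.2f} mC/m^2, "
          f"psi_d = {psi_d_equilibrium(sigma)*1e3:7.1f} mV")
```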
To give a theoretical description for geometrical asymmetry-induced potential difference we define the total electrochemical potential of the i-th ionic component, \(\varPhi_{i}\) along the nanochannel length (\(z\)-direction), which is given by:
\[\Phi_{i}(z)=\Phi_{i}^{0}+k_{B}T\ln\left(\frac{c_{i}(z)}{c_{i}^{0}}\right)+ez_{ i}\Psi_{i}(z)+\mu_{i,ex} (\emph{M8})\]
where \(\Psi_{i}\) is the electrical potential, determined by the overall charge distribution, and \(\Phi^{0}_{i}\), \(c^{0}_{i}\), \(z_{i}\), and \(\mu_{i,ex}\) are the standard chemical potential, standard concentration, valence, and excess chemical potential[50] of the i-th component, respectively. At equilibrium, the total
electrochemical potential of the system must be the same, at all spatial locations, so \(\sum_{i}\Phi_{i}(z=0)=\sum_{i}\Phi_{i}(z=L)\). The resulting potential difference between the top and bottom of the nanoconfinement is equal to the \(V_{OC}\) due to the difference in the distribution of ionic concentration (Figure S14).
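In the simplest reading of Eqn. (M8), if the excess term is neglected and a single ionic species dominates, equating its electrochemical potential at the two ends yields a Nernst-like estimate of the voltage generated by the concentration asymmetry; the concentration ratios used below are illustrative only and are not simulation output.

```python
# Simplified, single-ion reading of Eqn. (M8): delta_V = (kB*T/(z*e)) * ln(c_bottom/c_top).
# The concentration ratios are assumed placeholders, not simulation results.
import numpy as np

kB, T, e, z = 1.381e-23, 300.0, 1.602e-19, 1
for c_bottom, c_top in [(1.5e-3, 1e-3), (5e-3, 1e-3)]:   # mol/L, assumed
    dV = (kB * T / (z * e)) * np.log(c_bottom / c_top)
    print(f"c_bottom/c_top = {c_bottom/c_top:4.1f}  ->  delta_V = {dV*1e3:6.1f} mV")
```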
## Acknowledgment
The authors acknowledge the support of the Swiss National Science Foundation (SNSF) through the Korean-Swiss Science and Technology Cooperation Fund (Grant No. IZKSZ2_188341), and the Swiss Government Excellence fellowship. The authors also acknowledge the support of the following experimental facilities at EPFL: Center of MicroNanoTechnology (CMi) and Interdisciplinary Centre for Electron Microscopy (CIME).
## Supporting Information Available
All supplemental details related to fabrication, experimental data, calculations, and simulations can be found in the Supporting Information file. In addition, a supplementary video is available showing the HV testing.
## References
* [1] Zhang, Z. _et al._ Emerging photovoltaic technology. _Nature Nanotech_**13**, 1109-1119 (2018).
* [2] Yin, J., Zhou, J., Fang, S. & Guo, W. Hydrovoltaic Energy on the Way. _Joule_**4**, 1852-1855 (2020).
* [3] Cavusoglu, A.-H., Chen, X., Gentine, P. & Sahin, O. Potential for natural evaporation as a reliable renewable energy resource. _Nature Communications_**8**, 617 (2017).
* [4] Cavusoglu, A.-H. A Theory of Renewable Energy from Natural Evaporation. (Columbia University, 2017). doi:10.7916/D8DJ5SX5.
* [5] Gao, F., Li, W., Wang, X., Fang, X. & Ma, M. A self-sustaining pyroelectric nanogenerator driven by water vapor. _Nano Energy_**22**, 19-26 (2016).
* [6] Zhu, L., Gao, M., Peh, C. K. N., Wang, X. & Ho, G. W. Self-Contained Monolithic Carbon Sponges for Solar-Driven Interfacial Water Evaporation Distillation and Electricity Generation. _Advanced Energy Materials_**8**, 1702149 (2018).
* [7] Zhang, X. _et al._ Conversion of solar power to chemical energy based on carbon nanoparticle modified photomreelectric generator and electrochemical water splitting system. _Nano Energy_**48**, 481-488 (2018).
* [8] Zhu, L., Ding, T., Gao, M., Peh, C. K. N. & Ho, G. W. Shape Conformal and Thermal Insulative Organic Solar Absorber Sponge for Photothermal Water Evaporation and Thermoelectric Power Generation. _Advanced Energy Materials_**9**, 1900250 (2019).
* [9] Das, S. S., Pedireddi, V. M., Bandopadhyay, A., Saha, P. & Chakraborty, S. Electrical Power Generation from Wet Textile Mediated by Spontaneous Nanoscale Evaporation. _Nano Lett._**19**, 7191-7200 (2019).
* [10] Shuvra Das, S., Kar, S., Anwar, T., Saha, P. & Chakraborty, S. Hydroelectric power plant on a paper strip. _Lab on a Chip_**18**, 1560-1568 (2018).
* [11] Xue, G. _et al._ Water-evaporation-induced electricity with nanostructured carbon materials. _Nature Nanotech_**12**, 317-321 (2017).
* [12] Qin, Y. _et al._ Constant Electricity Generation in Nanostructured Silicon by Evaporation-Driven Water Flow. _Angewandte Chemie_**132**, 10706-10712 (2020).
* [13] Lu, J., Ren, G., Hu, Q., Rensing, C. & Zhou, S. Microbial biofilm-based photovoltaic technology. _Trends in Biotechnology_**41**, 1155-1167 (2023).
* [14] Sun, J. _et al._ Electricity generation from a Ni-Al layered double hydroxide-based flexible generator driven by natural water evaporation. _Nano Energy_**57**, 269-278 (2019).
* [15] Li, J. _et al._ Surface functional modification boosts the output of an evaporation-driven water flow nanogenerator. _Nano Energy_**58**, 797-802 (2019).
* [16] Shao, B. _et al._ Electron-Selective Passivation Contacts for High-Efficiency Nanostructured Silicon Hydrovoltaic Devices. _Advanced Materials Interfaces_**8**, 2101213 (2021).
* [17] Shao, B. _et al._ Bioinspired Hierarchical Nanofabric Electrode for Silicon Hydrovoltaic Device with Record Power Output. _ACS Nano_**15**, 7472-7481 (2021).
* [18] Shao, B. _et al._ Boosting electrical output of nanostructured silicon hydrovoltaic device via cobalt oxide enabled electrode surface contact. _Nano Energy_**106**, 108081 (2023).
* [19] Liu, C. _et al._ Hydrovoltaic energy harvesting from moisture flow using an ionic polymer-hydrogel-carbon composite. _Energy Environ. Sci._**15**, 2489-2498 (2022).
* [20] Xin, X. _et al._ Hydrovoltaic effect-enhanced photocatalysis by polyacrylic acid/cobaltous oxide-nitrogen doped carbon system for efficient photocatalytic water splitting. _Nat Commun_**14**, 1759 (2023).
* [21] Dao, V.-D., Vu, N. H., Thi Dang, H.-L. & Yun, S. Recent advances and challenges for water evaporation-induced electricity toward applications. _Nano Energy_**85**, 105979 (2021).
* [22] Jiang, Z. _et al._ Simultaneous electricity generation and steam production from a wide range of salinity by using unique nanofluidic diode. _Nano Energy_**108**, 108220 (2023).
* [23] van der Heyden, F. H. J., Bonthuis, D. J., Stein, D., Meyer, C. & Dekker, C. Electrokinetic Energy Conversion Efficiency in Nanofluidic Channels. _Nano Lett._**6**, 2232-2237 (2006).
* [24] Ren, Y. & Stein, D. Slip-enhanced electrokinetic energy conversion in nanofluidic channels. _Nanotechnology_**19**, 195707 (2008).
* [25] van der Heyden, F. H. J., Stein, D. & Dekker, C. Streaming Currents in a Single Nanofluidic Channel. _Phys. Rev. Lett._**95**, 116104 (2005).
* [26] van der Heyden, F. H. J., Bonthuis, D. J., Stein, D., Meyer, C. & Dekker, C. Power Generation by Pressure-Driven Transport of Ions in Nanofluidic Channels. _Nano Lett._**7**, 1022-1025 (2007).
* [27] Stein, D., Kruithof, M. & Dekker, C. Surface-Charge-Governed Ion Transport in Nanofluidic Channels. _Phys. Rev. Lett._**93**, 035901 (2004).
* [28] Feng, J. _et al._ Single-layer MoS2 nanopores as nanopower generators. _Nature_**536**, 197-200 (2016).
* [29] ScienceDirect. [https://www.sciencedirect.com/science/article/pii/S2542435119302090](https://www.sciencedirect.com/science/article/pii/S2542435119302090).
* [30] Xiao, K., Jiang, L. & Antonietti, M. Ion Transport in Nanofluidic Devices for Energy Harvesting. _Joule_**3**, 2364-2380 (2019).
* [31] Sparreboom, W., van den Berg, A. & Eijkel, J. C. T. Principles and applications of nanofluidic transport. _Nature Nanotech_**4**, 713-720 (2009).
* [32] Lee, A. A., Perez-Martinez, C. S., Smith, A. M. & Perkin, S. Scaling Analysis of the Screening Length in Concentrated Electrolytes. _Phys. Rev. Lett._**119**, 026002 (2017).
* [33] van der Heyden, F. H. J., Stein, D., Besteman, K., Lemay, S. G. & Dekker, C. Charge Inversion at High Ionic Strength Studied by Streaming Currents. _Phys. Rev. Lett._**96**, 224502 (2006).
* [34] Besteman, K., Zevenbergen, M. A. G. & Lemay, S. G. Charge inversion by multivalent ions: Dependence on dielectric constant and surface-charge density. _Phys. Rev. E_**72**, 061501 (2005).
* [35] Behrens, S. H. & Grier, D. G. The charge of glass and silica surfaces. _The Journal of Chemical Physics_**115**, 6716-6721 (2001).
* [36] Morita, M., Ohmi, T., Hasegawa, E., Kawakami, M. & Ohwada, M. Growth of native oxide on a silicon surface. _Journal of Applied Physics_**68**, 1272-1281 (1990).
* [37] Agmon, N. The Grotthuss mechanism. _Chemical Physics Letters_**244**, 456-462 (1995).
* [38] Borukhov, I., Andelman, D. & Orland, H. Steric Effects in Electrolytes: A Modified Poisson-Boltzmann Equation. _Phys. Rev. Lett._**79**, 435-438 (1997).
* [39] Borukhov, I., Andelman, D. & Orland, H. Adsorption of large ions from an electrolyte solution: a modified Poisson-Boltzmann equation. _Electrochimica Acta_**46**, 221-229 (2000).
* [40] Besteman, K., Zevenbergen, M. A. G., Heering, H. A. & Lemay, S. G. Direct Observation of Charge Inversion by Multivalent Ions as a Universal Electrostatic Phenomenon. _Phys. Rev. Lett._**93**, 170802 (2004).
* [41] Shklovskii, B. I. Screening of a macroion by multivalent ions: Correlation-induced inversion of charge. _Phys. Rev. E_**60**, 5802-5811 (1999).
* [42] Nguyen, T. T., Grosberg, A. Yu. & Shklovskii, B. I. Macroions in Salty Water with Multivalent Ions: Giant Inversion of Charge. _Phys. Rev. Lett._**85**, 1568-1571 (2000).
* [43] Gaddam, P. & Ducker, W. Electrostatic Screening Length in Concentrated Salt Solutions. _Langmuir_**35**, 5719-5727 (2019).
* [44] Sarkadi, Z., Fertig, D., Valisko, M. & Boda, D. The Dukhin number as a scaling parameter for selectivity in the infinitely long nanopore limit: Extension to multivalent electrolytes. _Journal of Molecular Liquids_**357**, 119072 (2022).
* [45] Vlassiouk, I. & Siwy, Z. S. Nanofluidic Diode. _Nano Lett._**7**, 552-556 (2007). doi:10.1021/nl062924b.
* [46] Constantin, D. & Siwy, Z. S. Poisson-Nernst-Planck model of ion current rectification through a nanofluidic diode. _Phys. Rev. E_**76**, 041202 (2007).
* [47] Wendisch, F. J., Rey, M., Vogel, N. & Bourret, G. R. Large-Scale Synthesis of Highly Uniform Silicon Nanowire Arrays Using Metal-Assisted Chemical Etching. _Chem. Mater._**32**, 9425-9434 (2020).
* [48] Kheyraddini Mousavi, B. _et al._ Metal-assisted chemical etching of silicon and achieving pore sizes as small as 30 nm by altering gold thickness. _Journal of Vacuum Science & Technology A_**37**, 061402 (2019).
* [49] Thangamuthu, M., Santschi, C. & Martin, O. J. F. Reliable Langmuir Blodgett colloidal masks for large area nanostructure realization. _Thin Solid Films_**709**, 138195 (2020).
* [50] Bazant, M. Z., Kilic, M. S., Storey, B. D. & Ajdari, A. Towards an understanding of induced-charge electrokinetics at large applied voltages in concentrated solutions. _Advances in Colloid and Interface Science_**152**, 48-88 (2009).
Supplementary for: Salinity-Dependent Interfacial Phenomena Towards Hydrovoltaic Device Optimization
Tarique Anwar and Giulia Tagliabue*
_Laboratory of Nanoscience for Energy Technologies (INET), STI, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne 1015, Switzerland_
*Email: [email protected]
20 minutes. The length of the pillars increased almost linearly with MACE time. For MACE times of 5, 10, and 20 minutes, the lengths of the pillars were _1.23\(\mu\)m_, _2.3\(\mu\)m_, and _4.4\(\mu\)m_, respectively. A longer etching time not only increases the length of the NP but also affects the surface composition, such as the roughness, surface activity, or surface charge regulation effects. This could have a significant effect on the \(V_{OC}\): as can be seen from the simulation in **Fig. 2B**, a small change in the free parameter \(\Gamma\) can affect the surface charge and give a much different \(V_{OC}\).
### S2: Measured evaporative flux and the corresponding mass flow rate imposed in the simulation
**Figure S2:** The change in mass due to evaporation in an ambient environment was recorded using a microbalance. Three successive measurements were done on the same sample after thorough drying. The average value of the ambient evaporative flux was estimated to be around \(2\times 10^{-5}\) g cm\(^{-2}\) s\(^{-1}\), or \(\sim\)1 kg m\(^{-2}\) h\(^{-1}\). The flow rate imposed in the simulation was normalized with respect to the porosity of the sample in the range of 0.1-0.3, as shown in **Fig. S1**. In the simulation, a parabolic flow profile (for pressure-driven flow) and a constant flow profile (limiting case of electroosmotic flow for a low Debye length) were considered, and the resultant V\({}_{OC}\) was almost invariant with respect to the chosen profile over the range of evaporative flux used.
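As a quick check of the numbers quoted in the caption, the snippet below converts the measured flux to SI units and divides it by the porosity range of Figure S1, which is one plausible reading of the normalization mentioned above.

```python
# Unit-conversion check: 2e-5 g cm^-2 s^-1 expressed in kg m^-2 h^-1, then divided by
# the porosity (assumed interpretation of the normalization) to get the pore-scale flux.
flux_cgs = 2e-5                           # g cm^-2 s^-1 (measured)
flux_si = flux_cgs * 1e-3 * 1e4 * 3600    # -> kg m^-2 h^-1 (~0.72, i.e. ~1 as quoted)
for porosity in (0.1, 0.3):               # porosity range from Fig. S1
    print(f"{flux_si:.2f} kg m^-2 h^-1; normalized by porosity {porosity}: "
          f"{flux_si / porosity:.1f} kg m^-2 h^-1")
```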
**Figure S3:****a)** Open circuit voltage versus time at low concentrations, showing that the initial voltage is zero and stabilizes to a steady positive value. Each curve corresponds to a different set of parameters, namely the concentration of the electrolyte and the length and diameter of the nanopillars. **b)** Open circuit voltage at higher concentrations, showing that the initial voltage is negative and that its magnitude also increases with concentration. The final steady value measured was positive or negative depending on the set of parameters, particularly at the highest concentration. The electrolyte here is KCl in DI water.
\[\Psi_{d}(\sigma) =\frac{k_{B}T}{e}\left[ln\left(\frac{-\sigma}{\sigma+e\Gamma} \right)-\ln(10)(pH-pK)\right]-\frac{\sigma}{C} \tag{1}\] \[\sigma\left(\psi_{d}\right) =\frac{2\epsilon_{0}\epsilon\kappa k_{B}T}{e}\left[\sinh\frac{e \psi_{d}}{2k_{B}T}+\frac{2}{\kappa r}\tanh\frac{e\psi_{d}}{4k_{B}T}\right] \tag{2}\]
The equations were solved as a function of concentration at pH = 7, pK = 7.2, C = 0.29 F m\(^{-2}\), \(\epsilon\) = 78.5, T = 300 K, and an average radius of curvature r = 300 nm.
Figure S5: To elucidate the importance of the concentration-dependent surface charge, we performed a simulation with a fixed surface charge of -10 mC/m\({}^{2}\). The resulting V\({}_{OC}\), which decreases monotonically with concentration, is in sharp contrast with the experimental trends. The monotonic decrease is due to the increase in surface charge screening, characterized by the Debye length, while the surface charge density is fixed.
Figure S6: Time trace of the measured open circuit voltage for different pH values at 1 \(\upmu\)M KCl concentration. Different pH values of the electrolyte were obtained by adding varying concentrations of HCl to the original electrolyte (KCl in DI water). By changing the pH across the isoelectric point (pH = 2-3), we observed a change in both the sign and the magnitude of the open circuit voltage.
Figure S7: Nyquist plot for a single device at 1 mM electrolyte concentration for different monovalent salts. The inset shows the equivalent electrical circuit used to fit the results and obtain the required circuit elements. W2 is the Warburg impedance, which can be used to calculate the diffusivities.
## S8: IV characteristics and power
**Figure S8: a)** IV characteristics were obtained using Linear Sweep Voltammetry (LSV) by sweeping the voltage from -0.5 V to 0.5 V. Using the IV data, the maximum power output was determined. **b)** The IV characteristics for different concentrations. Direct contact means that the Ag/AgCl wire is contacted directly with the silicon substrate, while in the case of electrochemical contact, the Ag/AgCl electrode was put in the electrolyte film on top of the silicon substrate. In the dry condition, the device behaves as a diode with zero current at zero applied voltage. In the presence of electrolyte, the curve shifts downwards, having a negative short circuit current and a positive open circuit voltage. For 1 mM, we intentionally made a direct contact of the Ag/AgCl wire with the Si substrate, and we see a diode-like behavior but with a negative short-circuit current. The magnitude of the current in this case is consistent with the increasing trend with concentration.
\[V_{0C}^{eff}=V_{0C}^{\infty}-V_{0C}^{0} \tag{4}\]
## S10: Silicon nanopillars after long-time testing
## S11: Fabrication methodology
| Step | Process description | Cross-section after process |
| --- | --- | --- |
| 01 | Substrate: P-type Si. Self-assembly of polystyrene (PS) particles (600 nm); thickness: monolayer assembly | — |
| 02 | Oxygen plasma etching; oxygen flow rate: 5-10 ml/min; power: 50 W; time: 5-15 min | — |
| 03 | Material: Ti; thickness: 5 nm | — |
Figure S11: Fabrication of the silicon NPs using a combination of colloidal lithography and metal-assisted chemical etching[3]. The SEM images show the sample after each step. **a)** Monolayer assembly of polystyrene nanospheres after step 2. **b)** Gold nanomesh after step 5. **c)** Silicon nanopillars after MACE in step 6.
## S12: Open circuit voltage measurement with different top electrodes and planar silicon
**Figure S12:****a)** Measurement of the open circuit voltage with a top porous electrode. **b)** The voltage measured in a single silicon nanopillar device with and without a top porous electrode. Three different electrodes were used, which varied in sheet resistance and hydrophilicity, and therefore different open circuit voltages were obtained with the same device[4]. The voltage was also measured with a top electrode on planar silicon and it showed an almost zero open circuit voltage.
The simulation was performed using COMSOL 6.0 with the transport of dilute species and electrostatics packages, which were coupled. The simulation was performed in a 2D axisymmetric domain as shown in the figure above. The normal flux of ions at the wall was set to zero, while at the top a concentration boundary condition was used. A concentration-dependent surface charge was used as the boundary condition at the solid surface, as described in S4, while the potential was set equal to zero at the top surface. The computed potential difference between the top and the bottom is equal to V\({}_{\mathrm{OC}}\). The meshing was finer near the wall due to the large gradients, while a slightly coarser mesh was used near the bulk to optimize the simulation time.
## S14: Ionic concentration distribution for two representative electrolyte concentrations
2310.20200 | Multi-Domain Polarization for Enhancing the Physical Layer Security of
MIMO Systems | A novel Physical Layer Security (PLS) framework is conceived for enhancing
the security of the wireless communication systems by exploiting multi-domain
polarization in Multiple-Input Multiple-Output (MIMO) systems. We design a
sophisticated key generation scheme based on multi-domain polarization, and the
corresponding receivers. An in-depth analysis of the system's secrecy rate is
provided, demonstrating the confidentiality of our approach in the presence of
eavesdroppers having strong computational capabilities. More explicitly, our
simulation results and theoretical analysis corroborate the advantages of the
proposed scheme in terms of its bit error rate (BER), block error rate (BLER),
and maximum achievable secrecy rate. Our findings indicate that the innovative
PLS framework effectively enhances the security and reliability of wireless
communication systems. For instance, in a $4\times4$ MIMO setup, the proposed
PLS strategy exhibits an improvement of $2$dB compared to conventional MIMO,
systems at a BLER of $2\cdot 10^{-5}$ while the eavesdropper's BLER reaches
$1$. | Luping Xiang, Yao Zeng, Jie Hu, Kun Yang, Lajos Hanzo | 2023-10-31T05:50:24Z | http://arxiv.org/abs/2310.20200v1 | # Multi-Domain Polarization for Enhancing the Physical Layer Security of MIMO Systems
###### Abstract
A novel Physical Layer Security (PLS) framework is conceived for enhancing the security of the wireless communication systems by exploiting multi-domain polarization in Multiple-Input Multiple-Output (MIMO) systems. We design a sophisticated key generation scheme based on multi-domain polarization, and the corresponding receivers. An in-depth analysis of the system's secrecy rate is provided, demonstrating the confidentiality of our approach in the presence of eavesdroppers having strong computational capabilities. More explicitly, our simulation results and theoretical analysis corroborate the advantages of the proposed scheme in terms of its bit error rate (BER), block error rate (BLER), and maximum achievable secrecy rate. Our findings indicate that the innovative PLS framework effectively enhances the security and reliability of wireless communication systems. For instance, in a \(4\times 4\) MIMO setup, the proposed PLS strategy exhibits an improvement of 2dB compared to conventional MIMO systems at a BLER of \(2\cdot 10^{-5}\) while the eavesdropper's BLER reaches 1.
Physical layer security (PLS), multi-domain polarization, MIMO, secrecy code construction
## I Introduction
To enhance the security of wireless communication systems, traditional approaches have primarily relied on secret key based encryption techniques at the network layer. However, the high computational burden of these methods has prompted researchers to explore secure transmission methods at the physical layer (PHY) [1, 2]. Physical layer security (PLS) based mechanisms can be broadly categorized into two groups: keyless PLS transmission techniques based on Wyner's theory [3] and key-based PLS transmission techniques rooted in Maurer's theory [4]. By appropriately integrating these techniques with modulation schemes and channel coding, the security of the system can be improved, while maintaining communication efficiency.
Keyless PLS techniques by definition operate without the need for a key, utilizing sophisticated signal processing methods to degrade the eavesdropper's (E) channel state, while simultaneously enhancing the quality of the legitimate communication channel. The concept of constructive interference, introduced in [5], relies on the transmission of directional artificial noise (AN) to interfere with E. In [6], symbol-level transmit pre-encoders (TPC) are employed for reducing the transmitter's energy consumption and for enhancing the system's overall performance while jamming E. Considering angular errors, Hu _et al._[7] derive a closed-form expression for the AN projection matrix, assuming realistic directional angular estimation errors obeying a uniform distribution within a practical range. Xu _et al._[8] design an effective Artificial Noise Assisted Security Scheme (ANAS), relying on two phases of transmission: in Phase 1, the legitimate parties send two independent artificial noise sequences (ANs), while in Phase 2, the transmitter superimposes the ANs received in Phase 1 onto the signal and transmits the resultant mixed signal. Secure communication is achieved since the ANs superimposed on the legitimate signal in Phase 2 can be effectively cancelled by the legitimate receiver while still interfering with the eavesdropper. Shu _et al._[9] present a robust, AN-based multi-beam broadcast system capable of improving both the security and the rate. Although AN-based keyless designs succeed in increasing the secure transmission rates, this is achieved at the cost of increased complexity and peak-to-average power ratio (PAPR).
The family of key-based PLS transmission techniques has also garnered interest from numerous researchers [10, 11]. Key generation methods exploit the random physical layer attributes of the channel [12] to prevent E from gleaning confidential information from the legitimate links [13, 14, 15]. The legitimate user employs traditional channel estimation techniques for acquiring the channel state information (CSI) of the legitimate link and subsequently generates the physical layer key [16, 17]. By contrast, E is unable to access the CSI of the legitimate link and the associated key. However, CSI-based key generation schemes are challenging to implement in practice due to biases introduced by channel estimation. This issue has been mitigated through the development of high-performance secure channel coding techniques [18].
In conventional communication systems, coding and encryption are treated as separate processes, where physical layer coding is harnessed for enhancing the reliability [25], while upper layer encryption is used for ensuring security [26]. For circumventing the weaknesses of upper layer encryption, researchers have embarked on investigating the joint design of coding and encryption at the physical layer [27]. This
approach is eminently suitable for wireless channels upon using appropriate coding schemes, for simultaneously improving the legitimate link and for preventing E from accessing any confidential information. Powerful low-density parity-check (LDPC) codes are particularly suitable for secure channel coding design. In this context, Li _et al._[22] propose an LDPC-based McEliece secrecy coding scheme for enhancing the information reliability of legitimate users and the information security against E. Motamedi _et al._[28] examine the 'perfect-security' physical layer authentication problem of wireless networks using LDPC codes and hash functions, achieving high authentication rates in the presence of an E having high computational power.
Additionally, the integration of polar codes [29] and physical layer security has garnered widespread scholarly attention [30, 31]. Polar codes, conceived by Arikan [32], achieve symmetric capacity for binary input memoryless channels (BMCs). In [23], a concatenated coding scheme combining polar codes and fountain codes is proposed by Yang and Zhuang for memoryless erasure binary eavesdropping channel models, while relying on finite code lengths for ensuring security. Hao _et al._[33] discuss a secure transmission scheme employing two-dimensional polar codes designed for block fading eavesdropping channels, in the face of instantaneous secrecy capacity fluctuations. Bao _et al._[24] combine polar codes with artificial noise to derive upper and lower bounds of the symmetric capacity for polarized bit channels, which benefit the legitimate receiver but not the eavesdropper.
The core of polar code construction lies in the so-called channel polarization processing detailed in [34]. As the coding space dimension approaches infinity, all sub-channels become fully polarized. However, under practical finite code lengths, many sub-channels remain partially polarized, hence impacting the system's secrecy rate. To address this issue, we explore the introduction of multi-domain polarization into physical layer security research. Dai _et al._[35], guided by the concept of generalized polarization, propose a polarization-coded MIMO model that significantly enhances the benefits of polarization. Explicitly, they demonstrate that multi-domain polarization is eminently suitable for PLS-enhancement.
In this context, we jointly design multi-domain polarization and encryption. On one hand, MIMO detection schemes apply different processing methods and detection orders for the individual spatial layers, resulting in varying signal reliability. Based on this, we design a random detection order based multi-domain polarization model that prevents eavesdroppers from inferring the legitimate link's MIMO detection mode or multi-domain polarization process, leading to extremely high eavesdropper decoding error rates. On the other hand, since the time-division duplex (TDD) systems' channel reciprocity prevents eavesdroppers from obtaining the legitimate link's instantaneous gain, we partition the gain range into multiple contiguous but non-overlapping intervals. Based on this, we design an instantaneous channel gain mapping based polarization scheme for increasing the randomness of the secret key, hence enhancing the overall system performance, as detailed below.
The key innovations of this scheme are boldly contrasted to the state-of-the-art in Table I, which are further detailed as follows:
* We propose a novel PLS architecture based on a MIMO scheme, modulation, and multi-domain polarization. This scheme integrates the multi-domain polarization structure with the classic binary polarization coding structure for enhancing the overall system's polarization effect; as a benefit, our solution achieves significant performance improvements over conventional MIMO transmissions. Exploiting the randomness of the MIMO detection order as our secret physical layer key, distinct polarization designs are derived based on different detection orders, yielding unique coding constructions. Since E cannot infer the legitimate link's detection order, it also fails to acquire the corresponding coding construction. This approach enhances the legitimate link's decoding performance and simultaneously it degrades the E link's quality, hence improving the security.
* We conceive an instantaneous channel gain based mapping and coding structure. To further enhance the PLS, this method partitions the legitimate link's instantaneous gain into multiple contiguous but non-overlapping intervals, each mapping to a distinct coding construction. By employing the Gaussian approximation (GA) algorithm to match the subchannel reliability, which uses the noise variance of the channel as input to select the most reliable bits, the secret key may be obtained without incurring any additional overhead. Even if E has powerful computational capabilities, it fails to perform accurate decoding. Again, partitioning the legitimate link's gain improves the legitimate link's error correction capability, while degrading the decoding capability of E.
* To validate the proposed scheme's confidentiality in the presence of eavesdroppers, we analyze the maximum achievable secrecy rate from various perspectives. Our numerical results confirm the scheme's confidentiality. Furthermore, we evaluate the performance of this ap
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline Contributions & ours & [1, 2] & [3] & [4, 19, 20, 21] & [5, 6] & [7, 9] & [12] & [16, 17] & [22] & [23] & [24] \\ \hline Multiple mapping patterns & \multicolumn{1}{c}{✓} & & & & & & & & & & \\ \hline Physical layer security (PLS) & \multicolumn{1}{c}{✓} & & & & & & & & & & \\ \hline Reduce receiver latency & \multicolumn{1}{c}{✓} & & & & & & & & & & & \\ \hline Secrecy rate analysis & \multicolumn{1}{c}{✓} & & & & & & & & & & & \\ \hline MIMO polarization & \multicolumn{1}{c}{✓} & & & & & & & & & & \\ \hline Detection of sequential mapping coding construction & \multicolumn{1}{c}{✓} & & & & & & & & & & & \\ \hline \end{tabular}
\end{table} TABLE I: Boldly contrasting our novelty to the literature
proach in terms of both its bit error rate (BER) and block error rate (BLER). Our simulation results demonstrate that even when in possession of formidable computing power, eavesdroppers cannot correctly decode a complete data frame. For example, within a \(4\times 4\) MIMO configuration, the proposed PLS approach attains an SNR enhancement of 2dB in comparison to conventional MIMO, while the eavesdropper's BLER approaches 100% and the legitimate user's BLER is as low as \(10^{-5}\).
The rest of this paper is organized as follows. In Section II, we portray the system model and provide a detailed description of the key generation scheme relying on MIMO based multi-domain polarization. Section III presents the receiver models of both the legitimate user and of the eavesdropper. Subsequently, in Section IV, we analyze the system's secrecy rate. Section V provides our simulation results and theoretical analysis. Finally, Section VI concludes the paper.
As for our notations, random variables and their actual values are represented by uppercase Roman letters and lowercase letters, respectively. Furthermore, \(\Re(x)\) and \(\Im(x)\) represent the real and imaginary parts of \(x\), respectively. The modulus of \(x\) is written as \(\|x\|=\sqrt{\Re(x)^{2}+\Im(x)^{2}}\). The calligraphic characters \(\mathcal{X}\) and \(\mathcal{Y}\) are used to denote sets, and \(|\mathcal{X}|\) denotes the number of elements in \(\mathcal{X}\). The notation \(p(X)\) represents the probability density function (PDF) of a random variable \(X\), and the conditional PDF of \(X\) given \(A\) is expressed as \(p(X|A)\). In addition, \(\Gamma(n)\) represents the gamma distribution having \(n\) degrees of freedom. Matrices and vectors are represented by bold uppercase and lowercase letters, respectively. In particular, \(\mathbf{0}_{N\times 1}\) denotes the \((N\times 1)\) zero vector and \(\mathbf{I}_{N}\) denotes the \((N\times N)\) identity matrix. The transpose and conjugate transpose operators are denoted by \((\cdot)^{\prime}\) and \((\cdot)^{\dagger}\), respectively. Moreover, the element in the \(i\)-th row and the \(j\)-th column of matrix \(\mathbf{H}\) is written as \(h_{i,j}\), while \(\mathbf{x}_{1}^{N}\) represents the vector \((x_{1},x_{2},...,x_{N})^{\prime}\). Finally, we employ the notation \(E(\cdot)\) to represent the mean operator, and \(\|\cdot\|_{F}\) denotes the two-norm operation.
## II PLS design for Multi-domain polarisation MIMO system
This section elaborates on our PLS framework, which relies on MIMO based multi-domain polarization.
### _Channel Model_
Consider the MIMO wiretap channel model depicted in Fig. 1. Given a total of \(S\) time slots (TS), the transmitter (Alice) sends \(K\) information bits to the legitimate user (Bob) after polar coding, interleaving, and modulation using a coding rate of \(R=K/N\), where \(N\) is the code length. An eavesdropper attempts to intercept the confidential information transmitted via the legitimate link. Alice is equipped with \(T_{A}\) transmit antennas (TAs), while Bob and Eve have \(N_{B}\) and \(N_{E}\) receive antennas (RAs), respectively. The uncorrelated Rayleigh fading channels encountered by the legitimate link and the eavesdropping link are denoted by \(\mathbf{H}=\left[\mathbf{h}_{1},\mathbf{h}_{2},\cdots,\mathbf{h}_{T_{A}}\right]\) and \(\mathbf{G}=\left[\mathbf{g}_{1},\mathbf{g}_{2},\cdots,\mathbf{g}_{T_{A}}\right]\), which have sizes of \((N_{B}\times T_{A})\) and \((N_{E}\times T_{A})\), respectively. Each column vector in the matrices \(\mathbf{H}\) and \(\mathbf{G}\) is expressed as \(\mathbf{h}_{t}=\left[h_{1,t},h_{2,t},\cdots,h_{N_{B},t}\right]^{\prime}\) and \(\mathbf{g}_{t}=\left[g_{1,t},g_{2,t},\cdots,g_{N_{E},t}\right]^{\prime}\), where \(t=1,2,...,T_{A}\). The vectors \(\mathbf{h}_{t}\) and \(\mathbf{g}_{t}\) include the channel coefficients of the link spanning from Alice's \(t\)-th TA to all RAs of Bob and Eve, respectively. Additionally, for any TS, all channel coefficients \(h_{b,t}\) and \(g_{e,t}\) obey \(\mathcal{CN}(0,1)\), where \(b\) and \(e\) represent the \(b\)-th row and \(e\)-th row of \(\mathbf{H}\) and \(\mathbf{G}\), respectively, while \(t\) represents the \(t\)-th column of \(\mathbf{H}\) and \(\mathbf{G}\), respectively, with \(b=1,2,...,N_{B},e=1,2,...,N_{E}\).
In a Time Division Duplex (TDD) system, the channel's reciprocity may be exploited without additional resources or overhead, ensuring that Alice and Bob have similar channel coefficients at both ends of the link. Therefore, in any TS \(s\), the received signal expressions for Bob and Eve are given by:
\[\mathbf{y}_{1}^{N_{B}}(s) =\mathbf{H}(s)\cdot\mathbf{x}_{1}^{T_{A}}(s)+\mathbf{z}_{1}^{N_{B }}(s), \tag{1}\] \[\mathbf{y}_{1}^{N_{E}}(s) =\mathbf{G}(s)\cdot\mathbf{x}_{1}^{T_{A}}(s)+\mathbf{z}_{1}^{N_{E }}(s). \tag{2}\]
In the \(s\)-th TS, \(s=1,2,...,S\), the vector \(\mathbf{y}_{1}^{N_{B}}(s)\) of size \((N_{B}\times 1)\) represents Bob's received signal, and the vector \(\mathbf{y}_{1}^{N_{E}}(s)\) of size \((N_{E}\times 1)\) contains Eve's received signal. The \((T_{A}\times 1)\) vector \(\mathbf{x}_{1}^{T_{A}}(s)\) represents the symbol transmitted by Alice. Furthermore, the \((N_{B}\times 1)\) vector \(\mathbf{z}_{1}^{N_{B}}(s)\) and the \((N_{E}\times 1)\) vector \(\mathbf{z}_{1}^{N_{E}}(s)\) obey the complex Gaussian distributions \(\mathcal{CN}\left(\mathbf{0}_{N_{B}\times 1},\sigma^{2}\mathbf{I}_{N_{B}}\right)\) and \(\mathcal{CN}\left(\mathbf{0}_{N_{E}\times 1},\sigma^{2}\mathbf{I}_{N_{E}}\right)\), containing Bob's and Eve's additive white Gaussian noise (AWGN) components, respectively.
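For concreteness, the following minimal NumPy sketch draws one time slot of the wiretap channel in (1) and (2). The antenna counts, the QPSK mapping and the noise variance used here are illustrative assumptions rather than values prescribed by the scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

T_A, N_B, N_E = 4, 4, 4        # illustrative antenna counts
sigma2 = 0.1                    # common noise variance for Bob and Eve

def rayleigh(rows, cols):
    """Uncorrelated Rayleigh fading: i.i.d. CN(0, 1) entries."""
    return (rng.standard_normal((rows, cols)) +
            1j * rng.standard_normal((rows, cols))) / np.sqrt(2)

H = rayleigh(N_B, T_A)          # legitimate (Alice -> Bob) channel
G = rayleigh(N_E, T_A)          # eavesdropping (Alice -> Eve) channel

# One unit-energy QPSK symbol per transmit antenna
bits = rng.integers(0, 2, size=(T_A, 2))
x = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

def awgn(n):
    return np.sqrt(sigma2 / 2) * (rng.standard_normal(n) +
                                  1j * rng.standard_normal(n))

y_B = H @ x + awgn(N_B)         # Bob's observation, Eq. (1)
y_E = G @ x + awgn(N_E)         # Eve's observation, Eq. (2)
```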
### _Key generation based on multi-domain polarization_
Building on the concept of generalized polarization, we aim for enhancing the MIMO transmission efficacy and hence the overall system performance by jointly optimizing the coding and MIMO transmission [35]. Again, we propose a MIMO based multi-domain polarization architecture that improves the error correction capability of the legitimate link, while degrading the eavesdropping link's performance. As depicted in Fig. 2, the scheme comprises three primary stages [35]. In the first stage, MIMO polarization is carried out, which is defined as partitioning the original MIMO channel into multiple parallel sub-channels. In the second stage, modulation polarization is carried out following the multi-level coding concept [36, 37] to generate additional bit-based subchannels. Finally, the time slot index is introduced to maximize the system's polarization effect and to select the most reliable bit subchannel for information transmission. Moreover, for avoiding the practical challenges of obtaining the complete legitimate link's CSI, we utilize only the channel's instantaneous gain to design the secure system based on this multi-level polarization approach.
We define the original MIMO channel as \(\mathbf{W}:\mathcal{X}^{T_{A}}\mapsto\mathcal{Y}\), where \(\mathcal{X}^{T_{A}}\) represents the set of transmitted symbols for each antenna and \(|\mathcal{X}^{T_{A}}|=M\), with \(M\) being the modulation order, while \(\mathcal{Y}\) represents the set of received signals. In TDD systems, the legitimate link's instantaneous channel gain is estimated by the legitimate party. Under such circumstances, the transition probability \(\mathbf{W}\left(\mathbf{y}_{1}^{N_{B}}(s)\mid\mathbf{x}_{1}^{T_{A}}(s),\mathbf{ H}(s)\right)\) of the legitimate link can be derived according to equation (1), which can
be expressed in the \(s\)-th TS as [35]:
\[\mathbf{W}\left(\mathbf{y}_{1}^{N_{B}}(s)\mid\mathbf{x}_{1}^{T_{A}}(s), \mathbf{H}(s)\right)=\left(\pi\sigma^{2}\right)^{-N_{B}}\cdot\exp\left(-\sum_{i =1}^{N_{B}}\frac{\|y_{i}-\tilde{x}_{i}\|^{2}}{\sigma^{2}}\right), \tag{3}\]
where \(\tilde{x}_{i}\) is the \(i\)-th element of the \((N_{B}\times 1)\) vector \(\tilde{\mathbf{x}}_{1}^{N_{B}}(s)=\mathbf{H}(s)\cdot\mathbf{x}_{1}^{T_{A}}(s),i=1,2,\ldots,N_{B}\), \(s=1,2,...,S\), while \(y_{i}\) is the \(i\)-th element of the \((N_{B}\times 1)\) vector \(\mathbf{y}_{1}^{N_{B}}(s)\), and \(\sigma^{2}\) denotes the noise variance.
At this stage, we perform MIMO polarization. The MIMO detection scheme applies different detection orders to the individual spatial layers, which results in different signal reliability across the individual antennas. For instance, under the linear minimum mean square error (MMSE) successive interference cancellation (SIC) algorithm, the first detected antenna has relatively low reliability due to the interference imposed by the other antennas. Provided that the corresponding symbol was still detected without error, the detected symbol is remodulated and then subtracted from the composite signal. This way the interference is gradually peeled off, hence typically the last detected antenna has the highest reliability due to the absence of interference, which was cancelled by subtracting the remodulated signals of all previously detected streams. As illustrated in Fig. 3, an incremental detection pattern was used in the detection process. In the figure we can see a comparison of the reliability of the different antennas both before and after polarisation. The results show that the average reliability of the antennas after polarisation is significantly higher, further
Fig. 1: Physical layer security scheme based on MIMO multi-domain polarization.
Fig. 2: Architecture of MIMO based polarisation at the transmitter.
validating the effectiveness of the polarisation technique used. In addition, it should be noted that in the incremental detection mode, the average reliability of the antennas detected in the reverse scan exceeds that of the antennas in the forward scan. This confirms the conclusion of the previous analysis, namely that the interference imposed on the last detected antenna is completely removed. Under this condition, the original MIMO scheme is divided into \(T_{A}\) independent sub-channels \(\mathbf{W}\rightarrow\mathbf{W}_{t}:\mathcal{X}\mapsto\mathcal{Y},t=1,2,\ldots,T_{A}\), each associated with different symbol reliability, where \(\mathcal{X}\) denotes the set of transmitted symbols. The associated transition probabilities can be further expressed as:
\[\mathbf{W}_{t}\left(\mathbf{y}_{1}^{N_{B}}(s)\mid x_{t},\mathbf{H}(s)\right)=\sum_{\mathbf{x}_{1}^{T_{A}}(s)\backslash x_{t}}\frac{1}{2^{m(T_{A}-1)}}\cdot\mathbf{W}\left(\mathbf{y}_{1}^{N_{B}}(s)\mid\mathbf{x}_{1}^{T_{A}}(s),\mathbf{H}(s)\right), \tag{4}\]
where \(m=\log_{2}M\) represents the number of bits per \(M\)-ary quadrature amplitude modulation (QAM) symbol, and \(\mathbf{x}_{1}^{T_{A}}(s)\backslash x_{t}\) denotes the subvector of \(\mathbf{x}_{1}^{T_{A}}(s)\), excluding element \(x_{t}\) at the \(s\)-th TS.
After obtaining \(T_{A}\) independent sub-channels having different symbol reliability levels, we proceed to perform modulation polarization [37], introducing polarization effects into the modulated symbol so that each bit sub-channel, constituted for example by the first or the last bit of the symbol, exhibits a different reliability: \(\mathbf{W}\rightarrow\mathbf{W}_{t}\rightarrow\mathbf{W}_{t,j}:\mathcal{B}\mapsto\mathcal{X}\mapsto\mathcal{Y},t=1,2,\ldots,T_{A},j=1,2,\ldots,m\), where \(\mathcal{B}\) represents the set of transmitted bits \(b_{t,j}\). At this point, the transition probability can be written as:
\[\begin{split}&\mathbf{W}_{t,j}\left(\mathbf{y}_{1}^{N_{B}}(s)\mid b_{(t-1)m+j},\mathbf{H}(s)\right)\\ &=\sum_{\mathbf{b}_{(t-1)m+1}^{tm}\backslash b_{(t-1)m+j}}\left(\frac{1}{2^{m-1}}\cdot\mathbf{W}_{t}\left(\mathbf{y}_{1}^{N_{B}}(s)\mid x_{t},\mathbf{H}(s)\right)\right)\\ &=\sum_{\mathbf{b}_{(t-1)m+1}^{tm}\backslash b_{(t-1)m+j},\mathbf{x}_{1}^{T_{A}}(s)\backslash x_{t}}\left(\frac{1}{2^{mT_{A}-1}}\cdot\mathbf{W}\left(\mathbf{y}_{1}^{N_{B}}(s)\mid\mathbf{x}_{1}^{T_{A}}(s),\mathbf{H}(s)\right)\right),\end{split} \tag{5}\]
where \(\mathbf{b}_{(t-1)m+1}^{tm}\backslash b_{(t-1)m+j}\) represents the bit subvector \(\mathbf{b}_{(t-1)m+1}^{tm}\)
Fig. 3: Examples of \(4\times 4\) MIMO antenna polarisation.
excluding the element \(b_{(t-1)m+j}\). Then the binary vector \(\mathbf{b}_{(t-1)m+1}^{tm}\) is mapped to the \(M\)-ary transmitted symbol \(x_{t}\) according to the modulation order \(M\).
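The marginalisations of (3)-(5) can be spelled out by brute force for a small configuration, which may help to follow the notation. The sketch below assumes QPSK with a natural bit-to-symbol mapping and normalises by the number of free bits, \(2^{mT_{A}-1}\); it is purely illustrative and becomes impractical as \(M^{T_{A}}\) grows.

```python
import itertools
import numpy as np

def qpsk(bits):
    """Map a length-2 bit tuple to a unit-energy QPSK symbol (assumed mapping)."""
    return ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)

def w_full(y, x_vec, H, sigma2):
    """Vector transition probability W(y | x, H) of Eq. (3)."""
    diff = y - H @ x_vec
    return (np.pi * sigma2) ** (-len(y)) * np.exp(-np.sum(np.abs(diff) ** 2) / sigma2)

def w_bit(y, H, sigma2, t, j, b, m=2):
    """Bit sub-channel probability W_{t,j}(y | b, H): sum W(y | x, H) over all
    symbol vectors whose bit b_{(t-1)m+j} equals b, in the spirit of Eq. (5)."""
    T_A = H.shape[1]
    total = 0.0
    for bits in itertools.product((0, 1), repeat=m * T_A):
        if bits[(t - 1) * m + (j - 1)] != b:   # t, j are 1-based as in the text
            continue
        x_vec = np.array([qpsk(bits[a * m:(a + 1) * m]) for a in range(T_A)])
        total += w_full(y, x_vec, H, sigma2)
    return total / 2 ** (m * T_A - 1)
```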
Lastly, we incorporate the time index. Given that the total number of TSs is \(S\), the original information sequence is mapped to the corresponding bit sub-channels using polarization coding, yielding \(N\) independent bit sub-channels \(\mathbf{W}\rightarrow\mathbf{W}_{t}\rightarrow\mathbf{W}_{t,j}\rightarrow\overline{\mathbf{W}}_{t,j}:\mathcal{U}\mapsto\mathcal{B}\mapsto\mathcal{X}\mapsto\mathcal{Y}\), where \(\mathcal{U}\) represents the set of original information bits \(u_{t,j}\) having a cardinality of \(|\mathcal{U}|=K\). The transition probability can then be expressed as:
\[\begin{split}&\overline{\mathbf{W}}_{t,j}\left(\mathbf{Y}_{B},\mathbf{u}_{1}^{n-1}\mid u_{n}\right)\\ &=\sum_{\mathbf{u}_{n+1}^{N},\mathbf{b}_{(t-1)m+1}^{tm}\setminus b_{(t-1)m+j}}\frac{\prod_{s=1}^{S}\mathbf{W}_{t,j}\left(\mathbf{y}_{1}^{N_{B}}(s)\mid b_{(t-1)m+j},\mathbf{H}(s)\right)}{2^{N-1}}.\end{split} \tag{6}\]
Upon employing the above three-level polarization based channel transformation, the original MIMO channel is polarized into \(N\) binary memoryless channels (BMCs). Our MIMO based multi-domain polarization design relies on this cascading principle. The most reliable antenna is selected first through antenna polarization, followed by the selection of the most reliable bit from each RA's modulated symbol. Ultimately, the information bits having the highest reliability are matched across all TSs, resulting in the final polar coding structure. As a benefit of its iterative application [38], the MMSE detection algorithm is used for generating the physical layer key, which is used for mapping the different coding constructs to different antenna detection sequences. In Fig. 4, a toy example is presented to compare the reliability of the antenna that was detected last after polarisation to its unpolarised state, when considering detection executed in ascending order. The figure shows a constellation diagram for QPSK modulation with 8 points forming 4 different QPSK symbols. In the unpolarised case, only a limited number of reliable bits can be obtained in the transmitted symbols, the rest being known as frozen bits. However, after polarisation, more reliable bits can be obtained under the same conditions. The reason for this is that after polarisation the average reliability of the bit sub-channel is increased, especially for the symbols transmitted by the last detected antenna, which suffers the least interference. This leads to a significant alteration in the pattern of the polarisation coding structure.
Based on Equations (1) and (4), the MMSE detector acquires soft estimates of \(T_{A}\) independent data streams in the \(s\)-th TS, after the legitimate party receives the signal associated with the known instantaneous gain of the legitimate link. In this case, the eavesdropper is unable to infer the specific polarization pattern and coding structures since the specific detection method is unattainable. Following the increasing detection order, the soft estimate [39] of the \(t\)-th data stream is formulated as:
\[\gamma_{t}(s)=\sum_{\xi=1}^{N_{B}}w_{1,\xi}^{t}(s)\tilde{y}_{\xi}(s), \tag{7}\]
where \(\tilde{y}_{\xi}(s)\) represents the \(\xi\)-th element of the error vector \(\tilde{\mathbf{y}}_{1}^{N_{B}}(s)\triangleq\mathbf{y}_{1}^{N_{B}}(s)-\sum_{\tilde{t}=1}^{t-1}\mathbf{h}_{\tilde{t}}(s)\tilde{x}_{\tilde{t}}\) of the received signal in the \(s\)-th time slot. Here \(\mathbf{h}_{\tilde{t}}(s)\) represents the \(\tilde{t}\)-th column of the original MIMO matrix \(\mathbf{H}(s)\), while \(\tilde{x}_{\tilde{t}}\) represents the symbol estimate of the \(\tilde{t}\)-th data stream. Moreover, \(w_{1,\xi}^{t}(s)\) represents the \(\xi\)-th element in the first row of \(\mathbf{W}^{t}(s)\), which is the MMSE detection matrix for the \(t\)-th data stream and its expression is as follows [38] :
\[\mathbf{W}^{t}(s)=\left(\left(\mathbf{H}^{t}(s)\right)^{\dagger}\mathbf{H}^{t}(s)+\sigma^{2}\mathbf{I}_{T_{A}-t+1}\right)^{-1}\left(\mathbf{H}^{t}(s)\right)^{\dagger}, \tag{8}\]
where the matrix \(\mathbf{H}^{t}(s)\) represents a fraction of \(\mathbf{H}(s)\) spanning from its \(t\)-th column to the \(T_{A}\)-th column and \(\mathbf{I}_{T_{A}-t+1}\) is an identity matrix of size \(T_{A}-t+1\).
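A compact sketch of the MMSE-SIC soft estimation of (7) and (8) is given below. The detection order passed to the routine plays the role of the shared physical-layer key (e.g. a random permutation of \(\{0,\dots,T_{A}-1\}\) drawn with `numpy.random.default_rng().permutation(T_A)`, one of the \(T_{A}!\) possibilities); the nearest-point hard decision and the residual bookkeeping are assumptions of this sketch rather than a verbatim transcription of the receiver.

```python
import numpy as np

def mmse_sic_detect(y, H, sigma2, order, constellation):
    """MMSE-SIC detection following a prescribed antenna order (the key).

    Per-stream soft estimates mimic Eq. (7); the filter mimics Eq. (8).
    Returns the soft estimates and the hard symbol decisions."""
    T_A = H.shape[1]
    y_res = y.astype(complex).copy()
    remaining = list(order)
    gammas = np.zeros(T_A, dtype=complex)
    x_hat = np.zeros(T_A, dtype=complex)
    while remaining:
        H_sub = H[:, remaining]                # channels of undetected streams
        # MMSE filter for the remaining sub-system, cf. Eq. (8)
        W = np.linalg.inv(H_sub.conj().T @ H_sub
                          + sigma2 * np.eye(len(remaining))) @ H_sub.conj().T
        t = remaining[0]                       # stream detected at this stage
        gamma = W[0, :] @ y_res                # soft estimate, cf. Eq. (7)
        gammas[t] = gamma
        # hard decision: nearest constellation point (assumed slicer)
        x_hat[t] = constellation[np.argmin(np.abs(constellation - gamma))]
        # remodulate and cancel the detected stream from the residual
        y_res = y_res - H[:, t] * x_hat[t]
        remaining = remaining[1:]
    return gammas, x_hat
```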
Considering that the MMSE detection order is random and the transmitter is equipped with \(T_{A}\) antennas, the legitimate link will possess \(T_{A}\)! distinct detection modes, resulting in \(T_{A}\)! unique coding structures for the legitimate link. Under various detection modes, we introduce the equivalent AWGN channel \(\widehat{\mathbf{W}}_{t,j}\) for transmission. The bit subchannel noise variance, which is obtained under a specific channel fading condition, is transformed into the effective noise variance under the AWGN channel, allowing the same error performance to be achieved under both channels. This implies that the average mutual information (AMI) of the equivalent AWGN channel and the polarized bit subchannel are identical, yielding:
\[I\left(\overline{\mathbf{W}}_{t,j}\right)=I\left(\widehat{\mathbf{W}}_{t,j}\right). \tag{9}\]
Given the noise variance \(\sigma^{2}\), the expression can be written as [35]:
\[\begin{split}& I_{\overline{\mathbf{W}}_{t,j}}(\sigma)=I_{\widehat{\mathbf{W}}_{t,j}}(\sigma_{t,j})\\ &=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}p(y_{B})\log_{2}[p(y_{B})]dudv-0.5\log_{2}\left(2\pi e\sigma_{t,j}^{2}\right),\end{split} \tag{10}\]
where \(y_{B}\) denotes the signal received by the legitimate user, and \(u=\Re\left(y_{B}\right),v=\Im\left(y_{B}\right)\).
In the end, the equivalent noise variance \(\sigma_{t,j}^{2}\) of each bit subchannel is utilized to employ a Gaussian approximation (GA)
Fig. 4: Examples of bit polarisation of the last antenna detected in increasing order within a 4\(\times\)4 MIMO scheme where QPSK is used.
algorithm for matching the reliability of each sub-channel, as illustrated in Algorithm 1. Subsequently, confidential information is transmitted with the aid of polarization coding. The distinct detection sequences of the MIMO polarization result in varying antenna reliability levels, leading to different equivalent AWGN variances and coding methods due to the chain reaction of modulation polarization and bit polarization. Again, the random detection order of MIMO polarization determines the secret physical layer key, which is shared by the legitimate link. By contrast, the eavesdropper has only a \(1/T_{A}!\) chance of obtaining the correct key. Even if E tentatively tries all possible detection orders, it still cannot determine the correct decoding result. The reason is that the detection order only ranks the reliability of the antennas and does not by itself reveal the specific coding structure, which substantially increases the error probability of E. This approach significantly enhances the performance of the legitimate link with the aid of our specific MIMO polarization design, while it also considerably degrades the decoding performance of the eavesdropper.
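Algorithm 1 itself is not reproduced above, so the following is only a sketch of a Gaussian-approximation construction: it takes one equivalent noise variance \(\sigma_{t,j}^{2}\) per polarized bit sub-channel (as delivered by the multi-domain polarization), propagates the corresponding LLR means through the polar transform, and selects the most reliable positions. The \(\phi\) approximation constants are the commonly used ones, the pairing/indexing convention is one standard choice (bit-reversal conventions may differ), and the example variances are hypothetical.

```python
import numpy as np

def phi(x):
    """Widely used approximation of the Gaussian-approximation function phi."""
    x = np.maximum(np.asarray(x, dtype=float), 1e-12)
    small = np.exp(-0.4527 * x ** 0.86 + 0.0218)
    large = np.sqrt(np.pi / x) * np.exp(-x / 4) * (1 - 10.0 / (7 * x))
    return np.where(x < 10.0, small, large)

def phi_inv(y, lo=1e-10, hi=100.0):
    """Numerical inverse of phi (phi is decreasing) via bisection."""
    y = np.asarray(y, dtype=float)
    lo, hi = np.full_like(y, lo), np.full_like(y, hi)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        go_right = phi(mid) > y
        lo, hi = np.where(go_right, mid, lo), np.where(go_right, hi, mid)
    return 0.5 * (lo + hi)

def ga_bit_channel_means(chan_means):
    """Propagate per-position LLR means (length a power of two) through the
    polar transform; position i is paired with i + N/2 at the channel side."""
    m = np.asarray(chan_means, dtype=float)
    if m.size == 1:
        return m
    half = m.size // 2
    minus = phi_inv(1.0 - (1.0 - phi(m[:half])) * (1.0 - phi(m[half:])))   # f branch
    plus = m[:half] + m[half:]                                             # g branch
    return np.concatenate([ga_bit_channel_means(minus),
                           ga_bit_channel_means(plus)])

# Hypothetical equivalent variances per polarized bit sub-channel
sigma2_tj = np.array([0.8, 0.5, 0.3, 0.2, 0.8, 0.5, 0.3, 0.2])
means = ga_bit_channel_means(2.0 / sigma2_tj)      # initial LLR means 2/sigma^2
K = 4
info_set = np.sort(np.argsort(means)[::-1][:K])    # most reliable positions
```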
### _Channel gain segmentation design_
The MIMO polarization scheme of the previous subsection exhibited confidentiality limitations when the number of TAs is small. Consequently, we further explore potential methods of enhancing the system's confidentiality. As a benefit of the reciprocity of TDD systems, both parties have similar instantaneous gain values; however, the eavesdropper cannot obtain the legitimate link's instantaneous gain. Building on this concept, we model the gain \(\mu_{\rm t}=\mathbf{h}_{\rm t}^{\dagger}\mathbf{h}_{\rm t}\) of all RAs corresponding to the transmitter's \(t\)-th antenna and partition it into \(P\) contiguous, but non-overlapping sub-intervals. In the Rayleigh fading channel model, the probability distribution function (PDF) of the gain \(\mu\) for each TA can be expressed as:
\[p(\mu)=\frac{1}{2^{T_{A}}\Gamma(T_{A})}\mu^{T_{A}-1}e^{-\mu/2}, \tag{11}\]
where the Gamma function is \(\Gamma(T_{A})=\int_{0}^{+\infty}\tau^{T_{A}-1}e^{-\tau}\mathrm{d}\tau\).
Integrating the above PDF yields the boundaries of the \(P\) contiguous sub-intervals:
\[\int_{\alpha_{p-1}}^{\alpha_{p}}\frac{1}{2^{T_{A}}\Gamma(T_{A})}\mu^{T_{A}-1} e^{-\mu/2}d\mu=1/P. \tag{12}\]
Upon incorporating the channel gain segments into our MIMO polarization design, the different channel gain intervals map to distinct equivalent variances during the MIMO polarization process, subsequently yielding different coding methods, when matching the sub-channel reliability utilizing the classic GA algorithm, as outlined in Algorithm 1. Moreover, the transmitter has \(P\) unique coding methods for an identical detection order pattern. Table II exemplifies the coding patterns for each sub-channel, when we have \(P=16\) and a code length of \(N=32\).
The segmentation of channel gain not only compensates for the constraints of the MIMO polarization design scheme, but it even enhances the system's security. Under different detection sequences, distinct gain modes yield \(T_{A}!\times P\) disparate coding schemes. However, the eavesdropper is unable to ascertain the detection sequence mode during the MIMO polarization process, nor can it obtain the legitimate link's instantaneous gain. Consequently, even if the eavesdropper acquires confidential information, it remains unaware of the correct coding structure, and thus, cannot achieve accurate decoding results.
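The interval boundaries \(\alpha_{p}\) of (12) are simply the equiprobable quantiles of the Gamma-type PDF in (11). A minimal sketch using SciPy follows; it takes (11) literally (shape \(T_{A}\), scale 2), although the boundaries listed in Table II appear to match a unit-scale Gamma distribution, so the scale parameter is an assumption to be checked against the intended model.

```python
import numpy as np
from scipy.stats import gamma

def gain_interval_edges(T_A, P, scale=2.0):
    """Edges alpha_0..alpha_P of P equiprobable intervals of the gain PDF in
    Eq. (11), i.e. a Gamma(shape=T_A, scale) distribution.
    Note: scale=1.0 appears to reproduce the boundaries listed in Table II."""
    probs = np.arange(P + 1) / P
    return gamma.ppf(probs, a=T_A, scale=scale)    # alpha_0 = 0, alpha_P = inf

def gain_to_interval(mu, edges):
    """Map an instantaneous gain mu to its interval index p in 1..P."""
    return int(np.searchsorted(edges, mu, side='right'))

edges = gain_interval_edges(T_A=4, P=16)
p = gain_to_interval(3.1, edges)   # this index selects the code pattern to use
```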
## III Receiver Design
In this section, a detailed description of our receiver design employing MIMO polarization techniques is provided, along with an exposition of the processing steps for both the legitimate and eavesdropping parties.
### _Legitimate receiver_
For the legitimate user, a shared physical layer key exists for communication with the transmitter, enabling the acquisition of accurate MIMO detection sequence patterns and channel gain segmentation patterns. To minimize the processing latency and enhance the receiver performance attained, the legitimate receiver utilizes a minimum mean square error (MMSE) algorithm for concatenated MIMO detection and
decoding. The MIMO detection's soft estimate is forwarded to the demodulator to derive the log-likelihood ratio (LLR), which is subsequently sent to the decoder for a hard decision, as illustrated in Fig. 5. The LLR expression is as follows [30]:
\[\mathrm{LLR_{B}}\left(b_{i,j}\right)=\ln\frac{\sum_{b_{i,j}=0}\exp\left(-\frac{\left\|\mathbf{y}_{\mathrm{B}}(b_{i,j})-\left[\mathbf{h}_{1},\cdots,\mathbf{h}_{T_{A}}\right]\mathbf{x}(b_{i,j})\right\|^{2}}{\sigma_{B}^{2}}\right)}{\sum_{b_{i,j}=1}\exp\left(-\frac{\left\|\mathbf{y}_{\mathrm{B}}(b_{i,j})-\left[\mathbf{h}_{1},\cdots,\mathbf{h}_{T_{A}}\right]\mathbf{x}(b_{i,j})\right\|^{2}}{\sigma_{B}^{2}}\right)}, \tag{13}\]
where \(\mathbf{y}_{\text{B}}(b_{i,j})\) represents the signal received by the legitimate receiver, while \(\mathbf{x}(b_{i,j})\) denotes the modulation symbol comprising the transmitted bits \(b_{i,j}\), and \(\sigma_{B}^{2}\) is the noise variance of the legitimate link.
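As printed, (13) is a joint metric over all transmit antennas; the exhaustive-enumeration sketch below evaluates it literally for a small configuration (in the MMSE-SIC receiver, the corresponding per-stream LLR would use a scalar effective channel instead). The label-to-symbol mapping is an assumption of the sketch.

```python
import itertools
import numpy as np

def llr_exact(y, H, sigma2, constellation, bit_index):
    """Exact LLR of one coded bit in the spirit of Eq. (13): sum the Gaussian
    metric over all transmit vectors whose addressed label bit is 0
    (numerator) or 1 (denominator). `constellation[l]` is the symbol of the
    m-bit label l (MSB-first labelling assumed)."""
    m = int(np.log2(len(constellation)))
    T_A = H.shape[1]
    t, j = divmod(bit_index, m)           # antenna index and bit position
    num, den = 0.0, 0.0
    for labels in itertools.product(range(len(constellation)), repeat=T_A):
        x = np.array([constellation[l] for l in labels])
        metric = np.exp(-np.linalg.norm(y - H @ x) ** 2 / sigma2)
        bit = (labels[t] >> (m - 1 - j)) & 1
        if bit == 0:
            num += metric
        else:
            den += metric
    return np.log(num / den)
```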
The LLRs are derived based on equation (13) and subsequently they are input into the successive cancellation (SC) based stack polar decoder [39] for making hard decisions, as depicted in Fig. 6.
Initially, the SC decoder carries out the operation seen in Fig. 6(a), executing the \(f\) function at the \((j+1)\)-st layer using the \(i\)-th and \((i+2^{j-1})\)-th LLRs on the left to obtain a new LLR, \(\hat{l}_{i}^{(j)}\). This can be expressed as:
\[\begin{split}\hat{l}_{i}^{(j)}&=f\left(\hat{l}_{i} ^{(j+1)},\hat{l}_{i+2^{j-1}}^{(j+1)}\right)\\ &=2\tanh^{-1}\left(\tanh\left(\hat{l}_{i}^{(j+1)}/2\right)\tanh \left(\hat{l}_{i+2^{j-1}}^{(j+1)}/2\right)\right)\\ &\approx\mathrm{sign}\left(\hat{l}_{i}^{(j+1)}\right)\mathrm{ sign}\left(\hat{l}_{i+2^{j-1}}^{(j+1)}\right)\min\left(\left|\hat{l}_{i}^{(j+1)} \right|,\left|\hat{l}_{i+2^{j-1}}^{(j+1)}\right|\right)\end{split} \tag{14}\]
The new LLR, \(\hat{l}_{i}^{(j)}\), is then subjected to hard decisions based on the coding structure of the legitimate link, which can be formulated as:
\[\hat{u}_{i}=\left\{\begin{array}{ll}0&\text{if }\hat{l}_{i}^{(j)}\geq 0 \text{ or frozen bit}\\ 1&\text{otherwise}\end{array}\right. \tag{15}\]
Once the hard-decision based value of the \(i\)-th bit is determined, the LLRs \(\hat{l}_{i}^{(j+1)}\) and \(\hat{l}_{i+2^{j-1}}^{(j+1)}\) of the \((j+1)\)-st layer are combined for executing the \(g\) function, subsequently acquiring the soft information for the next bit. This is expressed as:
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**P** & **Channel Gain Interval** & **Code Patterns** \\ \hline
1 & [0, 1.4746) & 1755 \\ \hline
2 & [1.4746, 1.8982) & 5555 \\ \hline
3 & [1.8982, 2.2346) & 5754 \\ \hline
4 & [2.2346, 2.5353) & 115F \\ \hline
5 & [2.5353, 2.8199) & 017F \\ \hline
6 & [2.8199, 3.0993) & 1577 \\ \hline
7 & [3.0993, 3.3811) & 107F \\ \hline
8 & [3.3811, 3.6721) & 5457 \\ \hline
9 & [3.6721, 3.9795) & 1755 \\ \hline
10 & [3.9795, 4.3132) & 3355 \\ \hline
11 & [4.3132, 4.6823) & 1557 \\ \hline
12 & [4.6823, 5.1096) & 5353 \\ \hline
13 & [5.1096, 5.6293) & 70F1 \\ \hline
14 & [5.6293, 6.3184) & FF00 \\ \hline
15 & [6.3184, 7.4166) & 01F7 \\ \hline
16 & [7.4166, +\infty) & 017F \\ \hline \end{tabular}
\end{table} TABLE II: Coding pattern for \(P=16\) and \(N=32\)
Fig. 5: Architecture based on our MIMO polarisation design at the receiver.
\[\hat{l}_{i+2^{j-1}}^{(j)}=\left\{\begin{array}{ll}\hat{l}_{i+2^{j-1}}^{(j+1)}+\hat{l}_{i}^{(j+1)}&\text{if }\hat{u}_{i}^{(j)}=0\\ \hat{l}_{i+2^{j-1}}^{(j+1)}-\hat{l}_{i}^{(j+1)}&\text{otherwise}.\end{array}\right. \tag{16}\]
Likewise, the hard decision in Equation (15) is executed based on the encoding structure of the legitimate link. Following this, \(\hat{u}_{i}^{(j)}\) and \(\hat{u}_{i+2^{j-1}}^{(j)}\) undergo XOR processing to derive \(\hat{u}_{i}^{(j+1)}\), while \(\hat{u}_{i+2^{j-1}}^{(j)}\) is directly transferred to \(\hat{u}_{i+2^{j-1}}^{(j+1)}\). By iteratively performing the three operations depicted in Fig. 6, hard decisions are obtained for all transmitted bits, resulting in the final decoding outcome.
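A recursive rendering of the SC decoder built from the \(f\) and \(g\) functions of (14)-(16) is sketched below. It pairs the two halves of the LLR vector at each level, which corresponds to the \(i\) / \(i+2^{j-1}\) pairing of Fig. 6 up to the usual bit-reversal reordering; frozen positions are forced to zero as in (15).

```python
import numpy as np

def f_llr(a, b):
    """Min-sum form of the f function, Eq. (14)."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_llr(a, b, partial_sum):
    """g function, Eq. (16): combine the two LLRs given the partial-sum bit."""
    return b + (1 - 2 * partial_sum) * a

def sc_decode(llr, frozen):
    """Recursive SC decoding. Returns (decided bits u, re-encoded bits x)."""
    N = llr.size
    if N == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)    # hard decision, Eq. (15)
        return np.array([u]), np.array([u])
    half = N // 2
    a, b = llr[:half], llr[half:]
    u1, x1 = sc_decode(f_llr(a, b), frozen[:half])
    u2, x2 = sc_decode(g_llr(a, b, x1), frozen[half:])
    return np.concatenate([u1, u2]), np.concatenate([x1 ^ x2, x2])

# Example usage together with the GA sketch given earlier (illustrative):
#   frozen = np.ones(N, dtype=bool); frozen[info_set] = False
#   u_hat, _ = sc_decode(channel_llrs, frozen)
```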
Furthermore, to enhance the decoding capability of the legitimate link, the so-called successive cancellation list (SCL) and cyclic redundancy check (CRC)-SCL decoding algorithms of [40] can be employed, which offer superior performance.
As for the receiver design, the detector and decoder rely on a serially concatenated construction. The computational overhead of the MMSE algorithm mainly depends on the dimension of the channel matrix and on the implementation of the algorithm, with a complexity order of \(O(T_{A}^{2})\) per symbol, where \(T_{A}\) is the number of transmit antennas. Subsequently, the soft information representing the data is fed to the polarisation decoder, and the complexity of the SC decoder depends both on the number of iterations as well as on the dimensionality of the input data, which in our scheme has a complexity of \(O(\log(T_{A}))\) per symbol. Specifically, the complexity per symbol in the proposed scheme may reach \(O(T_{A}^{2}\log(T_{A}))\).
The main reason for adopting the cascaded structure based on MMSE detection and SC decoding is that this receiver has both a low computational complexity and a low delay, which is favourable for employment in practical systems. In large-scale MIMO systems, this low-complexity and low-latency implementation is of pivotal significance.
### _Eavesdropper_
As for the eavesdropper, an identical MMSE detection algorithm is employed for performing soft estimation of the intercepted signal. This is then entered into the demodulator to derive the soft LLR, which can be expressed as:
\[\mathrm{LLR_{E}}\left(b_{i,j}\right)=\ln\frac{\sum_{b_{i,j}=0}\exp\left(-\frac{\left\|\mathbf{y}_{\mathrm{E}}(b_{i,j})-\left[\mathbf{g}_{1},\cdots,\mathbf{g}_{T_{A}}\right]\mathbf{x}(b_{i,j})\right\|^{2}}{\sigma_{E}^{2}}\right)}{\sum_{b_{i,j}=1}\exp\left(-\frac{\left\|\mathbf{y}_{\mathrm{E}}(b_{i,j})-\left[\mathbf{g}_{1},\cdots,\mathbf{g}_{T_{A}}\right]\mathbf{x}(b_{i,j})\right\|^{2}}{\sigma_{E}^{2}}\right)}, \tag{17}\]
where \(\mathbf{y}_{\text{E}}(b_{i,j})\) represents the signal received by the eavesdropper, Eve, while \(\mathbf{x}(b_{i,j})\) represents the modulation symbol comprising the transmitted bits \(b_{i,j}\) and \(\sigma_{E}^{2}\) is the noise variance of Eve's link.
Subsequently, these LLRs are fed into the decoder for error correction. On one hand, Eve is incapable of obtaining the antenna detection sequence pattern during the MIMO polarization of the legitimate link. She only has a \(1/T_{A}!\) probability of acquiring the correct detection pattern, which prevents her from inferring the variance of the equivalent fading channel or the coding structure of the legitimate link. On the other hand, even when the transmitter has a limited number of antennas, the eavesdropper is unable to determine the channel gain range of the legitimate link, which also prevents her from acquiring the coding structure of the legitimate link. The PLS framework, based on our MIMO polarization design combined with the channel gain segmentation based design, enhances the performance of the legitimate link, while significantly degrading the eavesdropper's success probability.
## IV Secrecy rate analysis
In this section, the secrecy rate for the proposed scheme is analyzed under both Gaussian-distributed input and finite-alphabet input scenarios. The secrecy rate is defined as the positive difference between the maximum achievable data rates of the legitimate and eavesdropping links.
### _Gaussian-distributed input_
Under the Gaussian-distributed input condition, it is assumed that the signal transmitted by the legitimate link obeys the complex Gaussian distribution \(\mathcal{CN}\left(0,\sigma_{B}^{2}\right)\). Based on the above secrecy rate definition, the secrecy rate under the Gaussian-distributed input condition is formulated as:
\[I_{PLS}=\max\left\{0,I\left(\mathbf{W}_{B}\right)-I\left(\mathbf{W}_{E}\right) \right\}, \tag{18}\]
where \(I(\mathbf{W}_{B})\) and \(I(\mathbf{W}_{E})\) denote the channel capacities of the legitimate and eavesdropping links, respectively.
Since the instantaneous gain of the channel is discretised, the channel capacities of the legitimate and eavesdropping links under Gaussian-distributed input conditions can be further expressed as:
\[I\left(\mathbf{W}_{B}\right)=\frac{1}{P}\cdot\sum_{p=1}^{P}I\left(\mathbf{W}_{B}\right)^{(p)}, \tag{19}\]
\[I\left(\mathbf{W}_{E}\right)=\frac{1}{P}\cdot\sum_{p=1}^{P}I\left(\mathbf{W}_{E}\right)^{(p)}, \tag{20}\]
where \(P\) represents the number of gain segments. Furthermore, \(I(\mathbf{W}_{B})^{(p)}\) and \(I(\mathbf{W}_{E})^{(p)}\) correspond to the channel capacities of
Fig. 6: The SC decoding process for the mod-2 sum of the \(i\)-th and the (\(i+2^{j-1}\))-th bits at the \(j\)-th level: (a) the \(f\) function, (b) the \(g\) function and (c) partial sum calculation.
the legitimate and eavesdropping links, when the channel gain falls within the \(p\)-th interval.
Furthermore, for a specific channel gain interval, following the transmitter's MIMO, modulation and bit polarization, the symmetric capacity expression becomes:
\[I\left(\mathbf{W}_{B}\right)^{\left(p\right)}=S\cdot\sum_{t=1}^{T_{A}}I\left(\mathbf{W}_{t}\right)^{\left(p\right)}=S\cdot\sum_{t=1}^{T_{A}}\sum_{j=1}^{m}I\left(\mathbf{W}_{t,j}\right)^{\left(p\right)}, \tag{21}\]
where \(S\) represents the total number of transmission time slots and \(m\) denotes the number of bits contained in each modulation symbol. Furthermore, \(I(\mathbf{W}_{t,j})^{\left(p\right)}\) is the capacity of the MIMO-polarised bit sub-channel, which is given by:
\[I\left(\mathbf{W}_{t,j}\right)^{\left(p\right)}=\sum_{b_{t,j}}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\frac{1}{2^{j}}p_{t}\left(y_{B}\mid b_{t,j}\right)\cdot\log\frac{p_{t}\left(y_{B}\mid b_{t,j}\right)}{p_{t}\left(y_{B}\mid 1\right)p_{t}\left(y_{B}\mid 0\right)}dudv, \tag{22}\]
where \(y_{B}\) denotes the received signal, and \(u=\Re\left(y_{B}\right),v=\Im\left(y_{B}\right)\). Furthermore, under the Gaussian-distributed input condition, the expression for \(p_{t}\left(y_{B}\mid b_{t,j}\right)\) is:
\[p_{t}\left(y_{B}\mid b_{t,j}\right)=\frac{1}{2^{m-j}}\sum_{x_{t}}\frac{1}{\pi\sigma_{B}^{2}}\cdot\exp\left(\frac{-\left\|y_{B}-x_{t}\right\|^{2}}{\sigma_{B}^{2}}\right), \tag{23}\]
where \(x_{t}\) denotes the \(t\)-th antenna's transmitted signal in the legitimate link.
Simultaneously, the eavesdropper is unaware of the transmitter's specific MIMO-polarization design process, implying that it will encounter \(T_{A}!\) signal detection patterns. Thus, the eavesdropper has a maximum probability of inferring the correct pattern given by \(1/T_{A}!\). Hence the channel capacity of the eavesdropping link becomes:
\[I\left(\mathbf{W}_{E}\right)^{\left(p\right)}=\frac{S}{T_{A}!}\cdot\sum_{t=1}^{T_{A}}I\left(\mathbf{W}_{t}\right)^{\left(p\right)}. \tag{24}\]
Consequently, under the Gaussian-distributed input condition, the system's secrecy rate can be reformulated as:
\[I_{PLS}=\max\left\{0,I\left(\mathbf{W}_{B}\right)-\frac{1}{T_{A}!}\cdot I \left(\mathbf{W}_{B}\right)\right\}, \tag{25}\]
where \(I\left(\mathbf{W}_{B}\right)\) is provided by Equation (19).
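Numerically, (25) only requires the legitimate link's capacity and the \(1/T_{A}!\) discount of (24). The sketch below uses the unconstrained Gaussian-input MIMO ergodic capacity \(E\{\log_{2}\det(\mathbf{I}+\frac{\rho}{T_{A}}\mathbf{H}\mathbf{H}^{\dagger})\}\) as a stand-in for the polarized-sum expression of (19)-(21); the SNR and antenna numbers are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def ergodic_capacity(T_A, N_B, snr_db, trials=2000):
    """Monte-Carlo ergodic Gaussian-input MIMO capacity in bits per channel use."""
    rho = 10 ** (snr_db / 10)
    cap = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((N_B, T_A)) +
             1j * rng.standard_normal((N_B, T_A))) / np.sqrt(2)
        M = np.eye(N_B) + (rho / T_A) * H @ H.conj().T
        cap += np.log2(np.linalg.det(M).real)
    return cap / trials

T_A = 4
I_B = ergodic_capacity(T_A, N_B=4, snr_db=10)
I_E = I_B / math.factorial(T_A)      # eavesdropper discount, cf. Eq. (24)
I_PLS = max(0.0, I_B - I_E)          # secrecy rate, Eq. (25)
```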
### _Finite-Alphabet Input_
Taking into account a more practical scenario, the secrecy rate is formulated under finite symbol input conditions, representing the maximum positive difference between the achievable rates of the legitimate and eavesdropping links. To consolidate the expressions, we assume that the transmitter's transmit power is \(\hat{\sigma}_{B}^{2}\), resulting in the secrecy rate expression:
\[R_{\text{PLS}}=\max\left(0,R_{B}-R_{E}\right), \tag{26}\]
where \(R_{B}\) denotes the legitimate link's maximum achievable rate, while \(R_{E}\) represents the eavesdropper's maximum achievable rate.
As the transmit power increases, an upper bound on the legitimate link's achievable rate can be formulated as:
\[\lim_{\hat{\sigma}_{B}^{2}\rightarrow+\infty}R_{\text{B}}=T_{A}\cdot\log_{2}M. \tag{27}\]
Based on equation (27), for simplicity, we disregard the time index and express the legitimate link's achievable rate [41] under a given channel as:
\[R_{B}=T_{A}\cdot\log_{2}M-\frac{1}{T_{A}\cdot M}\sum_{t=1}^{T_{A}}\sum_{k=1}^{M}E\left\{\log_{2}\left(1+\sum_{\begin{subarray}{c}t^{\prime}=1\\ t^{\prime}\neq t\end{subarray}}^{T_{A}}\exp\left(-\rho\left[\left(\mathbf{v}_{t,t^{\prime}}+\mathbf{z}_{B}\right)^{\dagger}\left(\mathbf{v}_{t,t^{\prime}}+\mathbf{z}_{B}\right)-\mathbf{z}_{B}^{\dagger}\mathbf{z}_{B}\right]\right)\right)\right\}, \tag{28}\]
where \(\mathbf{v}_{t,t^{\prime}}=\mathbf{H}_{t}x_{k}-\mathbf{H}_{t^{\prime}}x_{k}\), \(\mathbf{H}_{t}\) represents the first column through the \(t\)-th column of the original MIMO matrix \(\mathbf{H}\) and \(\rho=\hat{\sigma}_{B}^{2}/\sigma_{B}^{2}\) denotes the SNR.
Similarly, for the eavesdropper, there is only a \(1/T_{A}!\) probability of inferring the correct MIMO detection sequence pattern. Hence, the eavesdropping link's achievable rate under this condition is expressed as:
\[R_{E}=\frac{1}{T_{A}!}\cdot R_{B}. \tag{29}\]
Thus, under the finite-alphabet input condition, the system's secrecy rate can be reformulated as:
\[R_{PLS}=\max\left\{0,R_{B}-\frac{1}{T_{A}!}\cdot R_{B}\right\} \tag{30}\]
As demonstrated by the aforementioned equation, as the number of transmit antennas and the power increase, the system's secrecy rate approaches the legitimate link's achievable rate. The eavesdropper's achievable rate is substantially reduced, resulting in a relatively high secrecy rate for the system.
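For the finite-alphabet case, \(R_{B}\) can be estimated by Monte Carlo. The sketch below uses the standard constellation-constrained mutual-information estimator for \(\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{z}\) as a stand-in for the per-antenna decomposition printed in (28), and then applies (29)-(30). BPSK and a single channel draw are illustrative choices; an outer average over \(\mathbf{H}\) would give the ergodic rate.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(2)

def finite_alphabet_rate(H, sigma2, constellation, trials=500):
    """Monte-Carlo constellation-constrained mutual information (bits/use)
    of y = Hx + z for a fixed channel H."""
    T_A = H.shape[1]
    M = len(constellation)
    symbols = np.array(list(itertools.product(constellation, repeat=T_A)))
    acc = 0.0
    for _ in range(trials):
        x = symbols[rng.integers(len(symbols))]
        z = np.sqrt(sigma2 / 2) * (rng.standard_normal(H.shape[0]) +
                                   1j * rng.standard_normal(H.shape[0]))
        d = x - symbols                       # x - x' for every hypothesis x'
        e = (H @ d.T).T + z                   # H(x - x') + z
        metric = np.exp((-np.sum(np.abs(e) ** 2, axis=1)
                         + np.sum(np.abs(z) ** 2)) / sigma2)
        acc += np.log2(np.sum(metric))
    return T_A * np.log2(M) - acc / trials

T_A, N_B, sigma2 = 2, 2, 0.5
H = (rng.standard_normal((N_B, T_A)) +
     1j * rng.standard_normal((N_B, T_A))) / np.sqrt(2)
R_B = finite_alphabet_rate(H, sigma2, constellation=np.array([1.0, -1.0]))
R_E = R_B / math.factorial(T_A)     # Eq. (29)
R_PLS = max(0.0, R_B - R_E)         # Eq. (30)
```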
## V Simulation result
In this section, we initially confirm that the proposed scheme exhibits a substantial performance enhancement compared to the conventional MIMO system. Then, we compare the performance of authorized users and eavesdroppers both in terms of their BER and BLER, thereby establishing the scheme's security enhancement. Subsequently, we present numerical results for the secrecy rate of the proposed method, considering both Gaussian distributed and discrete symbol input, which substantiates the efficiency of this approach. The simulation parameters are shown in Table III.
### _BER and BLER performance_
As depicted in Fig. 7, the scheme relying on MIMO polarization, modulation polarization and bit-level polarization yields substantial performance improvements compared to conventional MIMO transmission. Explicitly, when we set the
number of instantaneous channel gain intervals to \(P=32\), our scheme provides an improvement of about 2dB over a conventional MIMO scheme at BLER \(\approx 2\cdot 10^{-5}\). This enhancement is attributed to the increased polarization effect attained by our multi-domain polarization system, leading to improved bit sub-channel reliability and more secure confidential information transmission for a given code length.
Fig. 8(a) characterizes the BER of both the legitimate party and of the eavesdropper, given a code length of \(N=512\). The number of instantaneous channel gain intervals was set to \(P=8\), and 4 transmit and receive antennas were used. Fig. 8(a) employs QPSK modulation, illustrating that as the SNR increases, Bob's BER is reduced rapidly, while Eve's BER remains approximately 0.5. When the high-performance decoding algorithms of [40] are employed, the legitimate party's BER improves further, while the eavesdropper fails to glean any useful information. Comparable results are observed also for 16QAM, as shown in Fig. 8(b), which validates the benefits of the proposed scheme. Furthermore, it can be observed in Fig. 8 that the performance of the legitimate link is improved compared to the conventional MIMO scheme.
Let us now explore the impact of increasing the number of antennas and the code length, noting that increasing the code length is known to improve the error correction performance of polar codes. Fig. 9 demonstrates the decoding performance when the code length is \(N=1024\), the number of instantaneous channel gain intervals is \(P=8\), and the number of transmit and receive antennas is 8. The trend observed aligns with that of Fig. 8. Regardless of whether high-order or low-order modulation is employed, the eavesdropper's bit error rate remains approximately 0.5, showing no improvement upon increasing the SNR. This is a testimony to the reliability of our PLS scheme based on MIMO-polarization.
Fig. 10 examines the influence of the number of channel gain intervals on the BLER of both the legitimate and eavesdropping links. As the number of intervals increases, the legitimate link's BLER performance improves, while the eavesdropper's performance degrades. Exploiting the segmented channel gain enhances the key randomness, making it more challenging for the eavesdropper to infer any useful information.
In order to characterize the achievable security performance of this scheme, we added simulation results, where the
\begin{table}
\begin{tabular}{l|r} \hline
**Parameters** & **Values** \\ \hline Number of transmitter antennas \(T_{A}\) & 2,4,8 \\ \hline Number of receiver antennas for legitimate \(N_{B}\) & 1,2,4,8 \\ \hline Number of receiver antennas for eavesdropper \(N_{E}\) & 1,2,4,8 \\ \hline Length of polar code \(N\) & 512,1024 \\ \hline Length of information bits \(K\) & 256,512 \\ \hline Number of channel segments \(P\) & 1,4,8,16,32 \\ \hline MQAM modulation order \(M\) & 2,4,16 \\ \hline Number of elements in the lists \(L\) & 16 \\ \hline Number of CRC bits & 24 \\ \hline Channel model & Rayleigh \\ \hline \end{tabular}
\end{table} TABLE III: Simulation parameters
Fig. 7: BLER performance based on MIMO-polarization system versus conventional MIMO system.
eavesdropper uses different detection algorithms. As shown in Fig. 11, the eavesdropper still fails to decode a complete frame when using the zero forcing (ZF) detection algorithm and the successive interference cancellation aided ZF (SIC-ZF) detection algorithm.
By observing Figs. 7, 8, 9, 10, 11, it becomes evident that the PLS scheme based on MIMO-polarization attains significant performance improvement, compared to conventional MIMO transmission.
### _Secrecy-rate results_
In this subsection, we characterize the secrecy rate of the proposed scheme, with \(I_{B}\) denoting the channel capacity of the legitimate link, and \(I_{P}\) representing the system's secrecy rate.
#### V-B1 Gaussian distributed input
Under the Gaussian distribution input condition, as depicted in Fig. 12, the secrecy rate of the proposed scheme approaches the channel capacity of the legitimate link, as the number of transmit antennas increases. Notably, when \(T_{A}=8\), the two values essentially coincide, demonstrating that the eavesdropper's decoding performance is significantly degraded under these conditions, ensuring the system's confidentiality. Additionally, the influence of the number of receive antennas and of the modulation scheme is also investigated. As illustrated in Fig. 13, the system's secrecy rate using BPSK is lower than that of QPSK, which is consistent with our theoretical expectations. Under both modulation schemes, the system's secrecy rate is very close to the legitimate link's channel capacity, confirming the system's practicality. Upon scrutinising Fig. 12 and Fig. 13, it becomes apparent that increasing the number of receive antennas, given the same number of transmit antennas and modulation scheme, has a certain impact on the system's rate due to the prior influence of data flow and interference from other antennas, which aligns with the theory.
Overall, under the Gaussian distributed input condition,
Fig. 11: BER performance at Bob and Eve, where Eve used different detection algorithms and \(N=1024,P=8\) and \(T_{A}=N_{B}=N_{E}=8\).
Fig. 10: BLER performance of Bob and Eve for different \(P\) values
Fig. 9: BER performance at Bob and Eve, where \(N=1024,P=8\) and \(T_{A}=N_{B}=N_{E}=8\). (a) QPSK, (b) 16QAM.
the system's secrecy rate approaches the channel capacity of the legitimate link, as the number of transmit antennas increases, regardless of the choice of modulation scheme or the number of receive antennas. This observation is in line with the previously discussed BER performance and further validates the reliability of the proposed scheme.
#### V-B2 Finite-Alphabet Input
In a more practical scenario, under the finite-alphabet input condition, this section presents the maximum achievable rate for both the legitimate link and the system. As depicted in Fig. 14, \(R_{B}\) represents the legitimate link's achievable rate, and \(R_{p}\) denotes the system's confidential achievable rate. Upon increasing the number of transmit antennas, the system's achievable rate gradually approaches that of the legitimate link, exhibiting a similar trend to that observed under the Gaussian distributed input condition, which substantiates the scheme's reliability. Furthermore, for the same number of transmit antennas, reducing the number of receive antennas has some impact on the system rate, but as the SNR increases, both upper limits become identical. This consistency with the theory does not affect the difference between the secrecy rate and the legitimate link's achievable rate.
Additionally, to verify that our multi-domain polarization-based design can enhance the system's overall polarization effect, the BER performance and secrecy rate are jointly analyzed. Under the same conditions, the legitimate link and eavesdropper's BER values are substituted into the binary symmetric channel (BSC) to obtain the secrecy rate as the theoretical value in the current situation. This is because polar codes have been shown to achieve the theoretical channel capacity of BSC. Upon comparing this theoretical value to the system's secrecy rate, we can see in Fig. 15 that the
Fig. 12: The ergodic secrecy rate for Gaussian-distributed input, where \(N=1024,P=8\), \(N_{B}=N_{E}=T_{A}\) and QPSK is used.
Fig. 14: The ergodic secrecy rate for Finite-Alphabet Input, where \(N=1024,P=8\) and BPSK is used.
Fig. 13: The ergodic secrecy rate for Gaussian-distributed input, where \(N=1024,P=8\) and \(N_{B}\)=\(N_{E}\).
Fig. 15: The ergodic secrecy rate for Finite-Alphabet Input, also showing theoretical values related to BER, where \(N=1024,P=8\) and BPSK is used.
difference between the two secrecy rates is minimal, and they converge as the SNR increases. This result demonstrates that the proposed PLS scheme based on our multi-domain polarization design approaches the theoretical value under Rayleigh channel conditions, further corroborating the advantages of this approach.
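The BSC-based reference value mentioned above follows directly from the measured BERs: each link is modelled as a BSC whose capacity is \(1-H_{2}(p)\), and the positive capacity difference is taken. A short sketch (with hypothetical BER values) is:

```python
import numpy as np

def h2(p):
    """Binary entropy function in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bsc_secrecy_rate(ber_bob, ber_eve):
    """Each link is a BSC with crossover equal to its measured BER."""
    return max(0.0, (1 - h2(ber_bob)) - (1 - h2(ber_eve)))

print(bsc_secrecy_rate(ber_bob=1e-4, ber_eve=0.5))   # ~0.9985 bits per bit
```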
To further validate the potential of the proposed scheme, we added a comparison with the two schemes of [8, 15]. As shown in Fig. 16, we observe that the ergodic secrecy rate of our proposed scheme is higher than that of both benchmark schemes. Compared to the AN-based and CSI-based schemes, our scheme improves the secrecy rate of the system despite its reduced overhead, which verifies the effectiveness of the proposed scheme.
## VI Conclusions
A novel physical layer security framework was conceived by leveraging MIMO, modulation, and bit polarization. The proposed framework improves the legitimate link's performance, while significantly degrading the eavesdropper's reception to the point where correctly decoding a complete data frame becomes nearly impossible. Furthermore, the channel's instantaneous gain is partitioned into segments to increase the key's randomness, hence again improving the legitimate link's performance and degrading the eavesdropper's reception capability. The scheme's reliability is validated through simulations. Moreover, the system's secrecy rate is examined, and the numerical results demonstrate the scheme's confidentiality. It is worth mentioning that the receiver uses a simple cascaded design, and we will consider proposing more complex receiver architectures with better performance in our future work.
|
2305.19637 | Eta Form and Spectral Sequence for the Composition of Fibrations | In this paper, inspired by the spectral sequences constructed by signature
operators with respect to the composition of fibrations, we define the
"spectral sequences" for fiberwise Dirac operators and prove the equivariant
family version of the adiabatic limit formula of eta invariants using the heat
kernel method and the analytic localization techniques established by
Bismut-Lebeau. In our formula, the remainder terms are constructed by "spectral
sequences" and completely extend those of Dai and Bunke-Ma. | Bo Liu, Mengqing Zhan | 2023-05-31T08:07:41Z | http://arxiv.org/abs/2305.19637v1 | # eta form and spectral sequence for the composition of fibrations
###### Abstract.
In this paper, inspired by the spectral sequences constructed by signature operators with respect to the composition of fibrations, we define the "spectral sequences" for fiberwise Dirac operators and prove the equivariant family version of the adiabatic limit formula of eta invariants using the heat kernel method and the analytic localization techniques established by Bismut-Lebeau. In our formula, the remainder terms are constructed by "spectral sequences" and completely extend those of Dai and Bunke-Ma.
**Keywords:** Equivariant eta form; index theory and fixed point theory; Chern-Simons form; adiabatic limit; spectral sequence.
**2020 Mathematics Subject Classification:** 58J20, 19K56, 58J28, 58J35.
## 1. Introduction
The Bismut-Cheeger eta form serves as the family extension of the eta invariant in index theory, which originally comes from the adiabatic limit of eta invariants. This limit is initiated by E. Witten [23] for physical consideration and well studied by Bismut-Cheeger [6] and Dai [10]. In the general case of the adiabatic limit for Dirac operators in [10], a global spectral term arises from the (asymptotically) very small eigenvalues. If we consider the signature operators, this spectral term can be constructed by Leray spectral sequences.
In [9], in order to discuss the secondary index theory for flat bundles with duality, Bunke and Ma generalize the signature operators to the flat case and the adiabatic limit formula to the family case. In this case, the spectral terms are generalized to the finite dimensional eta forms constructed by spectral sequences.
In [13, 14, 15], for Dirac operators, the first author generalizes the adiabatic limit formula to the equivariant family case for a fiberwise Lie group action. In [15], the spectral terms are explained as equivariant Dai-Zhang higher spectral flows [11]. But those higher spectral flow terms cannot degenerate to the terms in [9] and [10] directly when restricted on the cases there.
In this paper, we make use of the descriptions in [3] to define a series of vector bundles over the base manifold which can be taken as the analogy of spectral sequences. Then the generalization of the remainder terms in [9] and [10] are finite dimensional eta forms associated with these vector bundles. Moreover, these terms can also be considered as the refinement of the remainder terms in [15].
Now we explain our result in some details.
Let \(\pi_{X}:W\to V\) be a submersion of two closed manifolds with oriented closed fiber \(X\). Let \(TX:=\ker(\pi_{X,*}:TW\to TV)\) be the relative tangent bundle over \(W\). Let \(T^{H}W\) be a horizontal subbundle of \(TW\) such that \(TW=T^{H}W\oplus TX\). Let \(g^{TX}\) be a metric on \(TX\). Let \(\mathcal{E}_{X}=(\mathcal{E}_{X},h^{\mathcal{E}_{X}},\nabla^{\mathcal{E}_{X}})\) be a \(\mathbb{Z}_{2}\)-graded self-adjoint \(\operatorname{Cl}(TX)\)-Clifford module with Clifford connection (see (2.7) and (2.9)). Let \(D^{\mathcal{E}_{X}}_{X}\) be the fiberwise Dirac operators associated with \((g^{TX},\nabla^{\mathcal{E}_{X}})\) (see (2.11)).
Assume that \(\ker D^{\mathcal{E}_{X}}_{X}\) forms a vector bundle over \(V\). Under this assumption, the Bismut-Cheeger eta form \(\tilde{\eta}(\underline{\pi}_{X},\underline{\mathcal{E}_{X}})\in\Omega^{*}(V)\) (non-equivariant version of Definition 2.2) is well-defined.
Let \(g^{TV}\) be a Riemannian metric on \(TV\). Let \(\nabla^{TV}\) be the Levi-Civita connection. Let \(\mathcal{E}_{V}=(\mathcal{E}_{V},h^{\mathcal{E}_{V}},\nabla^{\mathcal{E}_{V}})\) be a \(\mathbb{Z}_{2}\)-graded self-adjoint Clifford module over \(V\) with Clifford connection. For \(T>0\), let \(g^{TW}_{T}:=\pi^{*}_{X}g^{TV}\oplus T^{-2}g^{TX}\), which is a Riemannian metric on \(TW\). Let \(\mathcal{E}=\pi^{*}_{X}\mathcal{E}_{V}\widehat{\otimes}\mathcal{E}_{X}\). Let \(\nabla^{\mathcal{E},T}\) be the connection on \(\mathcal{E}\) defined in (3.36). Then \(\underline{\mathcal{E}}=(\mathcal{E},\pi^{*}_{X}h^{\mathcal{E}_{V}}\otimes h^ {\mathcal{E}_{X}},\nabla^{\mathcal{E},T})\) is a \(\mathbb{Z}_{2}\)-graded self-adjoint Clifford module over \(W\) with Clifford connection associated with \(g^{TW}_{T}\). Let
\(D^{\mathcal{E}}_{W,T}\) and \(D^{\mathcal{E}_{V}\otimes\ker D^{\mathcal{E}_{X}}_{X}}_{V}\) be the Dirac operator associated with \((g^{TW}_{T},\nabla^{\mathcal{E},T})\) and \((g^{TV},\nabla^{\mathcal{E}_{V}}\otimes 1+1\otimes\nabla^{\ker D^{\mathcal{E}_{X}}_{X}})\) (see (2.16) for the definition of \(\nabla^{\ker D^{\mathcal{E}_{X}}_{X}}\)). Let \(\eta(D^{\mathcal{E}}_{W,T})\) and \(\eta(D^{\mathcal{E}_{V}\otimes\ker D^{\mathcal{E}_{X}}_{X}}_{V})\) be the corresponding Atiyah-Patodi-Singer eta invariants in [1]. The famous adiabatic limit formula is stated as follows.
**Theorem 1.1**.: _[_6, 10_]_ _If \(\dim W\) is odd,_
\[\lim_{T\to+\infty}\eta\left(D^{\mathcal{E}}_{W,T}\right)=2\int_{V}\widehat{ \mathrm{A}}(TV,\nabla^{TV})\tilde{\eta}(\underline{\pi}_{X},\underline{ \mathcal{E}}_{X})+\eta\big{(}D^{\mathcal{E}_{V}\otimes\ker D^{\mathcal{E}_{X} }_{X}}_{V}\big{)}+R, \tag{1.1}\]
_where \(R\) is an integer-valued remainder term and \(\widehat{\mathrm{A}}(\cdot)\) is the corresponding \(\widehat{\mathrm{A}}\)-form (see [2, §1.5] for the definition). Moreover,_
1. _[_6_, (0.5)]_ _if_ \(W\) _and_ \(V\) _are spin, and if_ \(D^{\mathcal{E}_{X}}_{X}\) _is invertible, then_ \(R=0\)_;_
2. _[_10_, Theorem 0.1]_ _if_ \(W\) _and_ \(V\) _are spin,_ \(\ker D^{\mathcal{E}_{X}}_{X}\) _forms a vector bundle over_ \(V\)_, and if_ \(\dim\ker D^{\mathcal{E}}_{W,T}\) _is independent of_ \(T\)_, then_ (1.2) \[R=\sum_{\lambda\in A_{r}/A_{r+1},r\geq 2}\mathrm{sgn}(\lambda),\quad A_{r}:= \left\{\lambda\in\mathrm{Sp}(D^{\mathcal{E}}_{W,T}):\lambda=\mathrm{O}\left( \frac{1}{T^{r-1}}\right)\right\},\] _where_ \(\mathrm{Sp}(D^{\mathcal{E}}_{W,T})\) _is the set of the spectrum of_ \(D^{\mathcal{E}}_{W,T}\)_;_
3. _[_10_, Theorem 0.3]_ _if all Dirac operators are signature operators, then_ \(R\) _is the sum of the signatures of the spectral sequences associated with the fiber bundle_ \(\pi_{X}\)_._
Note that the case (1) in Theorem 1.1 is a special case of case (2). And if \(\dim V\) is even, then the term \(\eta(D^{\mathcal{E}_{V}\otimes\ker D^{\mathcal{E}_{X}}_{X}}_{V})\) in (1.1) vanishes.
_Remark 1.2_.:
1. In [6, 10], (1.1) was only proved for these three cases listed in Theorem 1.1. But the formula (1.1) for Clifford modules is the natural extension of these results.
2. In [6, 10], the authors use the rescaling \(g^{TW}_{t}=g^{TX}+t^{-2}\pi^{*}_{X}g^{TV}\), \(t\to 0\). Here we consider \(T=t^{-1}\), so that \(g^{TW}_{T}=t^{2}g^{TW}_{t}\). Since multiplying the metric by a positive constant does not change the eta invariant (see the sketch after this remark), (1.1) is the same as the results in [6, 10].
3. Let \(\nabla^{TW}_{T}\) be the Levi-Civita connection associated with \(\pi^{*}_{X}g^{TV}\oplus T^{-2}g^{TX}\). Then by [13, Proposition 4.5] (cf. also [18, (4.32)]), \(\lim_{T\to+\infty}\widetilde{\widehat{\mathrm{A}}}(TW,\nabla^{TW}_{T},\,{}^{0}\nabla^{TW})=0\), where \(\,{}^{0}\nabla^{TW}=\pi^{*}_{X}\nabla^{TV}\oplus\nabla^{TX}\) and \(\widetilde{\widehat{\mathrm{A}}}(\cdot)\) is the Chern-Simons form for the \(\widehat{\mathrm{A}}\)-form (cf. [21, Definition B.5.3]). The formula (1.1) can also be formulated as an identity valid for each finite \(T\): (1.3) \[\eta\left(D^{\mathcal{E}}_{W,T}\right)=2\int_{V}\widehat{\mathrm{A}}(TV, \nabla^{TV})\tilde{\eta}(\underline{\pi}_{X},\underline{\mathcal{E}}_{X})-2 \int_{W}\widetilde{\widehat{\mathrm{A}}}(TW,\nabla^{TW}_{T},\,{}^{0}\nabla^ {TW})+\eta\big{(}D^{\mathcal{E}_{V}\otimes\ker D^{\mathcal{E}_{X}}_{X}}_{V} \big{)}+R.\] We usually take \(T=1\).
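For completeness, here is a minimal sketch of the scale invariance used in (2) above; the only input is that the operators in question are self-adjoint of Dirac type with discrete real spectrum. Under \(g\mapsto c^{2}g\) with a constant \(c>0\), the Dirac operator rescales as \(D\mapsto c^{-1}D\), and the eta function satisfies
\[
\eta(c^{-1}D,s)=\sum_{0\neq\lambda\in\mathrm{Sp}(D)}\frac{\mathrm{sgn}(c^{-1}\lambda)}{|c^{-1}\lambda|^{s}}=c^{s}\,\eta(D,s),
\]
so the meromorphic continuations agree at \(s=0\) and \(\eta(c^{-1}D)=\eta(D)\). Applied to \(g^{TW}_{T}=t^{2}g^{TW}_{t}\), this shows that the eta invariants computed with \(g^{TW}_{T}\) and with \(g^{TW}_{t}\) coincide.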
Note that if \(V\) is a point and \(\dim X\) is odd,
\[\tilde{\eta}(\underline{\pi}_{X},\underline{\mathcal{E}}_{X})=\frac{1}{2}\eta( D^{\mathcal{E}_{X}}_{X}). \tag{1.4}\]
Thus the Bismut-Cheeger eta form can be considered as the higher degree version of the eta invariant. If \(V\) is a fibration over a closed manifold \(S\), then \(W\) is also a fibration over \(S\). Then we could generalize the eta invariants in (1.3) to the Bismut-Cheeger eta forms. In fact, we could generalize them directly to the equivariant eta forms for compact Lie group action.
Let \(G\) be a compact Lie group. Let \(W\), \(V\), \(S\) be closed \(G\)-manifolds. Let \(\pi_{X}:W\to V\), \(\pi_{Y}:V\to S\) be equivariant submersions with closed oriented fibers \(X\), \(Y\). Then \(\pi_{Z}=\pi_{Y}\circ\pi_{X}:W\to S\) is an equivariant submersion with closed oriented fiber \(Z\). Assume that \(G\) acts on \(S\) trivially. We have the diagram of fibrations:
(1.5) [diagram of the composition of fibrations: \(X\to W\xrightarrow{\pi_{X}}V\), \(Y\to V\xrightarrow{\pi_{Y}}S\), \(Z\to W\xrightarrow{\pi_{Z}}S\), with \(\pi_{Z}=\pi_{Y}\circ\pi_{X}\)]
Let \(\underline{\pi_{X}}=(\pi_{X},T^{H}_{X}W,g^{TX})\), \(\underline{\pi_{Y}}=(\pi_{Y},T^{H}_{Y}V,g^{TY})\) and \(\underline{\pi_{Z}}=(\pi_{Z},T^{H}_{Z}W,g^{TZ})\) be equivariant geometric data with respect to \(\pi_{X}\), \(\pi_{Y}\) and \(\pi_{Z}\) as in (2.10). Assume that \(T^{H}_{Z}W\subset T^{H}_{X}W\) and \(g^{TZ}=\pi^{*}_{X}g^{TY}\oplus g^{TX}\). Let \(\nabla^{TX}\), \(\nabla^{TY}\) and \(\nabla^{TZ}\) be the corresponding connections on \(TX\), \(TY\) and \(TZ\) as in (2.3). Set \(\,{}^{0}\nabla^{TZ}:=\pi^{*}_{X}\nabla^{TY}\oplus\nabla^{TX}\).
Let \(\mathcal{E}_{X}=(\mathcal{E}_{X},h^{\mathcal{E}_{X}},\nabla^{\mathcal{E}_{X}})\) (resp. \(\mathcal{E}_{Y}=(\mathcal{E}_{Y},h^{\mathcal{E}_{Y}},\nabla^{\mathcal{E}_{Y}})\)) be a \(\mathbb{Z}_{2}\)-graded \(G\)-equivariant self-adjoint \(\mathrm{Cl}(TX)\)-module over \(W\) (resp. \(\mathrm{Cl}(TY)\)-module over \(V\)) with a \(G\)-invariant Clifford connection as in (2.10). Let \(\mathcal{E}=\pi^{*}_{X}\mathcal{E}_{Y}\widehat{\otimes}\mathcal{E}_{X}\). Then \(\underline{\mathcal{E}}=(\mathcal{E},\pi^{*}_{X}h^{\mathcal{E}_{Y}}\otimes h^{\mathcal{E}_{X}},\nabla^{\mathcal{E}})\) is a \(\mathbb{Z}_{2}\)-graded \(G\)-equivariant self-adjoint \(\mathrm{Cl}(TZ)\)-module over \(W\) with a \(G\)-invariant Clifford connection. Here \(\nabla^{\mathcal{E}}\) is defined in (3.4).
Let \(D^{\mathcal{E}_{X}}_{X}\) and \(D^{\mathcal{E}}_{Z}\) be fiberwise Dirac operators associated with \((g^{TX},\nabla^{\mathcal{E}_{X}})\) and \((g^{TZ},\nabla^{\mathcal{E}})\) respectively. Assume that \(\ker D^{\mathcal{E}_{X}}_{X}\) (resp. \(\ker D^{\mathcal{E}}_{Z}\)) forms a vector bundle over \(V\) (resp. \(S\)). Let \(\nabla^{\ker D^{\mathcal{E}_{X}}_{X}}\) be the induced \(G\)-invariant connection on the vector bundle \(\ker D^{\mathcal{E}_{X}}_{X}\) as in (2.16). Let \(D^{\mathcal{E}_{Y}\otimes\ker D^{\mathcal{E}_{X}}_{X}}_{Y}\) be the fiberwise Dirac operator twisted with the vector bundle \(\ker D^{\mathcal{E}_{X}}_{X}\) over \(V\) associated with \((g^{TY},\nabla^{\ker D^{\mathcal{E}_{X}}_{X}})\). Assume that \(\ker D^{\mathcal{E}_{Y}\otimes\ker D^{\mathcal{E}_{X}}_{X}}_{Y}\) forms a vector bundle over \(S\).
**Theorem 1.3**.: _[_15_, Theorem 1.6]_ _For \(g\in G\), modulo exact forms on \(S\), we have_
\[\begin{split}&\widetilde{\eta}_{g}(\underline{\pi_{Z}},\underline{\mathcal{E}})=\widetilde{\eta}_{g}(\underline{\pi_{Y}},\underline{\mathcal{E}_{Y}}\otimes\ker D^{\mathcal{E}_{X}}_{X})+\int_{Y^{g}}\widehat{\mathrm{A}}_{g}(TY,\nabla^{TY})\mathrm{ch}_{g}(\mathcal{E}_{Y}/\mathcal{S},\nabla^{\mathcal{E}_{Y}})\widetilde{\eta}_{g}(\underline{\pi^{g}_{X}},\underline{\mathcal{E}_{X}})\\ &\quad-\int_{Z^{g}}\widetilde{\widehat{\mathrm{A}}}_{g}(TZ,\nabla^{TZ},{}^{0}\nabla^{TZ})\mathrm{ch}_{g}(\mathcal{E}/\mathcal{S},\nabla^{\mathcal{E}})+\widetilde{R},\end{split} \tag{1.6}\]
_where \(\underline{\pi^{g}_{X}}\) is defined in (3.13) and \(\widetilde{R}\in\mathrm{ch}_{g}(K^{0}_{G}(S))\), the image of the equivariant Chern character \(\mathrm{ch}_{g}\) on the equivariant topological \(K\)-group of \(S\). Here, \(Y^{g}\) and \(Z^{g}\) are the fixed point sets of \(g\in G\) on \(Y\) and \(Z\) respectively, which are assumed to be oriented, \(\widehat{\mathrm{A}}_{g}(\cdot)\) and \(\mathrm{ch}_{g}(\mathcal{E}_{Y}/\mathcal{S},\nabla^{\mathcal{E}_{Y}})\) are the equivariant \(\widehat{\mathrm{A}}\)-form and the equivariant relative Chern character form (see, e.g., [17, (1.32), (1.33)] for the definitions), and \(\widetilde{\widehat{\mathrm{A}}}_{g}(\cdot)\) is the equivariant Chern-Simons form associated with the equivariant \(\widehat{\mathrm{A}}\)-form, the natural equivariant extension of [21, Definition B.5.3]._
_Remark 1.4_.:
1. In [13], the first author proves that if there is no higher spectral flow for any deformation there, then \(\widetilde{R}=0\).
2. In [9, Theorems 5.9 and 5.10], if all Clifford modules are exterior algebra bundles twisted with flat bundles and the Dirac operators are generalized signature operators, without the group action, Bunke and Ma show that the remainder term \(\widetilde{R}\) is the sum of finite dimensional eta forms constructed by spectral sequences. If \(S\) is a point and all flat bundles are trivial line bundles, then \(\widetilde{R}\) in this case degenerates to \(R/2\) in Theorem 1.1 (3).
3. The proof of such formula is highly related to the analytical localization technique developed in [8].
The purpose of this paper is to establish the following result, which we state in Theorem 3.8.
**Theorem 1.5**.: _Under the setting of Theorem 1.3 and Assumption 3.7, for \(g\in G\), modulo exact forms on \(S\), (1.6) holds and_
\[\widetilde{R}=\sum_{r=2}^{\infty}\widetilde{\eta}_{g}(\mathscr{E}_{r},\mathscr{ E}_{r+1},\nabla^{r},\nabla^{r+1})+\widetilde{\mathrm{ch}}_{g}(\ker D^{ \mathcal{E}}_{Z},\nabla^{\infty},\nabla^{\ker D^{\mathcal{E}}_{Z}}). \tag{1.7}\]
_The definitions of the notations above follow from (3.32) and (3.45)._
Note that if \(S\) is a point, the settings for the cases (1)-(3) in Theorem 1.1 fulfill Assumption 3.7. If \(S\) is not a point and there is no group action, the settings in Remark 1.4 also fulfill Assumption 3.7. In these cases, our theorem reduces to the previous results (see Proposition 5.4).
**Notation**. All manifolds in this paper are smooth and without boundary. All fibrations in this paper are submersions with closed oriented fibers. We denote by \(d\) the exterior differential operator, and by \(d^{S}\) when we wish to emphasize the base manifold \(S\).
We use the Einstein summation convention in this paper: when an index variable appears twice in a single term and is not otherwise defined, it implies summation of that term over all the values of the index.
We use the superconnection formalism of Quillen [22]. If \(A\) is a \(\mathbb{Z}_{2}\)-graded algebra and \(a,b\in A\), we write \([a,b]:=ab-(-1)^{\deg(a)\deg(b)}ba\) for the supercommutator of \(a\) and \(b\). If \(E,E^{\prime}\) are two \(\mathbb{Z}_{2}\)-graded spaces, we write \(E\widehat{\otimes}E^{\prime}\) for the \(\mathbb{Z}_{2}\)-graded tensor product as in [2, §1.3]. If one of \(E,E^{\prime}\) is ungraded, we regard it as \(\mathbb{Z}_{2}\)-graded by taking its odd part to be zero.
For the fiber bundle \(\pi:W\to S\), we use the sign convention for the integration of the differential forms along the oriented fibers \(Z\) as follows: for \(\alpha\in\Omega^{\bullet}(S)\) and \(\beta\in\Omega^{\bullet}(W)\),
\[\int_{Z}(\pi^{*}\alpha)\wedge\beta=\alpha\wedge\int_{Z}\beta. \tag{1.8}\]
## 2. Equivariant eta form
In this section, we review the basic object of this paper -- eta forms. In Section 2.1, we describe the geometry of a fibration and introduce the Bismut superconnection to define the equivariant Bismut-Cheeger eta forms (cf. [2]). Then in Section 2.2, we introduce the finite version of eta forms for a vector bundle, which we call the equivariant finite dimensional eta forms.
### Equivariant Bismut-Cheeger eta form
In this subsection, we recall the definition of the equivariant Bismut-Cheeger eta form.
Given a submersion of closed manifolds \(\pi:W\to S\) with closed oriented fiber \(Z\), let \(G\) be a compact Lie group which acts on \(W\) with \(\pi\circ g=\pi\), \(\forall g\in G\). In this case, the \(G\)-action on \(S\) is trivial. We denote by \(TZ:=\ker(\pi_{*}:TW\to TS)\) the relative tangent bundle and \(T^{H}W\) a horizontal subbundle of \(TW\) such that
\[TW=T^{H}W\oplus TZ. \tag{2.1}\]
Then \(T^{H}W\) and \(TZ\) are both vector bundles over \(W\). We assume that the \(G\)-action preserves the orientation of \(TZ\). We assume that \(T^{H}W\) is also \(G\)-equivariant. Then the \(G\)-action preserves the splitting (2.1). For \(U\in TS\), let \(U^{H}\in T^{H}W\) be its horizontal lift in \(T^{H}W\) such that \(\pi_{*}U^{H}=U\). Let \(P^{TZ}:TW\to TZ\) be the projection with respect to (2.1).
Let \(g^{TZ}\) and \(g^{TS}\) be \(G\)-invariant metrics on \(TZ\) and \(TS\) respectively. Then
\[g^{TW}:=\pi^{*}g^{TS}\oplus g^{TZ} \tag{2.2}\]
is a \(G\)-invariant metric on \(TW\).
Let \(\nabla^{TW}\) be the Levi-Civita connection associated with \((TW,g^{TW})\) and
\[\nabla^{TZ}:=P^{TZ}\nabla^{TW}P^{TZ}, \tag{2.3}\]
which is a \(G\)-invariant Euclidean connection on \(TZ\) depending only on \((T^{H}W,g^{TZ})\) (cf. [4, Theorem 1.9]). Let \(\nabla^{TS}\) be the Levi-Civita connection on \((TS,g^{TS})\). Let
\[{}^{0}\nabla^{TW}:=\pi^{*}\nabla^{TS}\oplus\nabla^{TZ} \tag{2.4}\]
be a connection on \(TW\), which is also \(G\)-invariant. We define
\[\mathcal{S}:=\nabla^{TW}-{}^{0}\nabla^{TW}. \tag{2.5}\]
Then \(\mathcal{S}\) is a 1-form on \(W\) with values in antisymmetric elements of \(\mathrm{End}(TW)\). Let \(\mathcal{T}\) be the torsion of \({}^{0}\nabla^{TW}\). Then by [4, (1.30)], for \(U,V\in TS\),
\[\mathcal{T}(U^{H},V^{H})=-P^{TZ}[U^{H},V^{H}]\in TZ. \tag{2.6}\]
Let \(\mathrm{Cl}(TZ)\) be the Clifford algebra bundle of \((TZ,g^{TZ})\), whose fiber at \(x\in W\) is the Clifford algebra \(\mathrm{Cl}(T_{x}Z)\) of the Euclidean vector space \((T_{x}Z,g^{T_{x}Z})\). A \(\mathbb{Z}_{2}\)-graded self-adjoint \(\mathrm{Cl}(TZ)\)-module,
\[\mathcal{E}=\mathcal{E}_{+}\oplus\mathcal{E}_{-}, \tag{2.7}\]
is a \(\mathbb{Z}_{2}\)-graded complex vector bundle equipped with a Hermitian metric \(h^{\mathcal{E}}\) preserving the splitting (2.7) and a fiberwise Clifford multiplication \(c\) of \(\mathrm{Cl}(TZ)\) such that the action \(c\) restricted to \(TZ\) is skew-adjoint on \((\mathcal{E},h^{\mathcal{E}})\) and anticommutes (resp. commutes) with the \(\mathbb{Z}_{2}\)-grading if the dimension of the fibres is even (resp. odd). Locally, the Clifford module \(\mathcal{E}\) could be written as
\[\mathcal{E}=S(TZ)\widehat{\otimes}E, \tag{2.8}\]
where \(S(TZ)\) is the spinor and \(E=E_{+}\oplus E_{-}\) is a \(\mathbb{Z}_{2}\)-graded complex vector bundle. In this case, if \(\dim Z\) is even, \(S(TZ)=S_{+}(TZ)\oplus S_{-}(TZ)\) and
\[\mathcal{E}_{+}=(S_{+}(TZ)\otimes E_{+})\oplus(S_{-}(TZ)\otimes E_{-})\,, \quad\mathcal{E}_{-}=(S_{+}(TZ)\otimes E_{-})\oplus(S_{-}(TZ)\otimes E_{+})\,;\]
if \(\dim Z\) is odd,
\[\mathcal{E}_{+}=S(TZ)\otimes E_{+},\quad\mathcal{E}_{-}=S(TZ)\otimes E_{-}.\]
Let \(\nabla^{\mathcal{E}}\) be a Clifford connection on \(\mathcal{E}\) associated with \(\nabla^{TZ}\), that is, \(\nabla^{\mathcal{E}}\) preserves \(h^{\mathcal{E}}\) and the splitting (2.7) and for any \(U\in TW\), \(V\in\mathcal{C}^{\infty}(W,TZ)\),
\[\left[\nabla^{\mathcal{E}}_{U},c(V)\right]=c\left(\nabla^{TZ}_{U}V\right). \tag{2.9}\]
We assume that the action of \(G\) could be lifted on \(\mathcal{E}\) such that it is compatible with the Clifford action and preserves the splitting (2.7). We assume that \(h^{\mathcal{E}}\) and \(\nabla^{\mathcal{E}}\) are \(G\)-invariant.
_Notation 2.1_.: We denote by
\[\underline{\pi}:=(\pi,T^{H}W,g^{TZ}),\quad\underline{\mathcal{E}}:=(\mathcal{E },h^{\mathcal{E}},\nabla^{\mathcal{E}}) \tag{2.10}\]
the corresponding geometric data of the fibration \(\pi\) and the Clifford module \(\mathcal{E}\) introduced above.
On Clifford module \(\mathcal{E}\), we define a family of Dirac operators over \(S\):
\[D^{\mathcal{E}}_{Z}:=\sum_{i=1}^{\dim Z}c(e_{i})\nabla^{\mathcal{E}}_{e_{i}}, \tag{2.11}\]
for \(\{e_{i}\}_{i=1}^{\dim Z}\) a local orthonormal frame of \((TZ,g^{TZ})\). This definition is independent of the choice of \(\{e_{i}\}_{i=1}^{\dim Z}\).
Let \(\mathscr{E}_{b}\) be the space of smooth sections of \(\mathcal{E}\) over \(Z_{b}\), \(b\in S\), equipped with the \(L^{2}\)-inner product
\[\langle\cdot,\cdot\rangle_{\mathscr{E}_{b}}:=\int_{Z_{b}}h^{\mathcal{E}}( \cdot,\cdot)dv_{Z}. \tag{2.12}\]
As in [4], we take \((\mathscr{E},\langle\cdot,\cdot\rangle_{\mathscr{E}})\) as an infinite dimensional vector bundle over \(S\). Let \(\nabla^{\mathscr{E}}\) be the \(\langle\cdot,\cdot\rangle_{\mathscr{E}}\)-preserving connection on \(\mathscr{E}\) induced by \(\nabla^{\mathcal{E}}\) as defined in [7, (1.7)].
Let \(\{f_{p}\}\) be a local frame of \(TS\) and \(\{f^{p}\}\) be its dual. By (2.6), we denote by
\[c(\mathcal{T})=\frac{1}{2}c(\mathcal{T}(f^{H}_{p},f^{H}_{q}))f^{p}\wedge f^{q}\wedge=-\frac{1}{2}c(P^{TZ}[f^{H}_{p},f^{H}_{q}])f^{p}\wedge f^{q}\wedge. \tag{2.13}\]
Let \(B\) be the Bismut superconnection defined by (cf. [2, P.336])
\[B:=D^{\mathcal{E}}_{Z}+\nabla^{\mathscr{E}}-\frac{c(\mathcal{T})}{4}, \tag{2.14}\]
which only depends on the geometric data \((T^{H}W,g^{TZ},\nabla^{\mathcal{E}})\). For \(u>0\), we denote by \(\delta_{u}\) the operator on \(\Lambda^{i}(T^{*}S)\widehat{\otimes}\mathscr{E}\) which multiplies differential forms of degree \(i\) by \(u^{-i/2}\). Then for \(u>0\), we define the rescaled Bismut superconnection
\[B_{u}:=\sqrt{u}\,\delta_{u}\circ B\circ\delta_{u}^{-1}=\sqrt{u}D_{Z}^{\mathcal{E}}+\nabla^{\mathscr{E}}-\frac{c(\mathcal{T})}{4\sqrt{u}}. \tag{2.15}\]
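A sketch of the bookkeeping behind (2.15), under the convention for \(\delta_{u}\) just stated: an operator which raises the form degree in the \(S\)-direction by \(k\) picks up a factor \(u^{-k/2}\) under conjugation by \(\delta_{u}\), so that
\[
\sqrt{u}\,\delta_{u}\circ\Big(D^{\mathcal{E}}_{Z}+\nabla^{\mathscr{E}}-\frac{c(\mathcal{T})}{4}\Big)\circ\delta_{u}^{-1}
=\sqrt{u}\,\Big(D^{\mathcal{E}}_{Z}+u^{-1/2}\,\nabla^{\mathscr{E}}-u^{-1}\,\frac{c(\mathcal{T})}{4}\Big)
=\sqrt{u}\,D^{\mathcal{E}}_{Z}+\nabla^{\mathscr{E}}-\frac{c(\mathcal{T})}{4\sqrt{u}}.
\]
The same bookkeeping gives (2.25) and (3.37)-(3.38) below.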
Under the conditions above, we see that \(D_{Z}^{\mathcal{E}}\) commutes with the \(G\)-action. Then for any \(b\in S\), \(\ker D_{Z_{b}}^{\mathcal{E}}\) is a finite dimensional \(G\)-representation. We assume that \(\{\ker D_{Z_{b}}^{\mathcal{E}}\}_{b\in S}\) forms a vector bundle over \(S\). Then \(\langle\cdot,\cdot\rangle_{\mathscr{E}}\) induces a \(G\)-invariant metric on \(\ker D_{Z}^{\mathcal{E}}\). Let \(P^{\ker D_{Z}^{\mathcal{E}}}:\mathscr{E}\to\ker D_{Z}^{\mathcal{E}}\) be the orthogonal projection with respect to \(\langle\cdot,\cdot\rangle_{\mathscr{E}}\). We define
\[\nabla^{\ker D_{Z}^{\mathcal{E}}}:=P^{\ker D_{Z}^{\mathcal{E}}}\circ\nabla^{ \mathscr{E}}\circ P^{\ker D_{Z}^{\mathcal{E}}}, \tag{2.16}\]
which is a \(G\)-invariant Hermitian connection on \(\ker D_{Z}^{\mathcal{E}}\).
For a trace class element \(P\in\Lambda(T^{*}S)\widehat{\otimes}\mathrm{End}(\mathscr{E})\), we denote by \(\mathrm{Tr}^{\mathrm{odd/even}}[P]\) the part of \(\mathrm{Tr}_{s}[P]\) which takes values in odd or even forms. Set
\[\widetilde{\mathrm{Tr}}[P]:=\begin{cases}\mathrm{Tr}_{s}[P],&\text{if }\dim Z \text{ is even;}\\ \mathrm{Tr}^{\mathrm{odd}}[P],&\text{if }\dim Z\text{ is odd.}\end{cases} \tag{2.17}\]
Here \(\mathrm{Tr}_{s}[P]\) denotes the supertrace of \(P\) as in [2, SS1.3].
For \(\alpha\in\Omega^{i}(S)\), set
\[\psi_{S}(\alpha)=\begin{cases}\left(\frac{1}{2\pi\sqrt{-1}}\right)^{\frac{i}{2 }}\cdot\alpha,&\text{$i$ is even;}\\ \frac{1}{\sqrt{\pi}}\left(\frac{1}{2\pi\sqrt{-1}}\right)^{\frac{i-1}{2}}\cdot \alpha,&\text{$i$ is odd,}\end{cases} \tag{2.18}\]
and
\[\widetilde{\psi}_{S}(\alpha)=\begin{cases}\frac{1}{\sqrt{\pi}}\psi_{S}\alpha,& \text{$i$ is even;}\\ \frac{1}{2\sqrt{-1}\sqrt{\pi}}\psi_{S}\alpha,&\text{$i$ is odd.}\end{cases} \tag{2.19}\]
For \(\beta\in\Omega^{\bullet}(S\times[0,1]_{u})\), if we write \(\beta=\beta_{0}+du\wedge\beta_{1}\), with \(\beta_{0},\beta_{1}\in\Omega^{\bullet}(S)\), we set
\[[\beta]^{du}:=\beta_{1}. \tag{2.20}\]
For \(g\in G\), let \(W^{g}\) be the fixed point set of \(g\)-action on \(W\). Then \(\pi|_{W^{g}}:W^{g}\to S\) is a fiber bundle with fiber \(Z^{g}\). We assume that \(TZ^{g}\) is oriented.
**Definition 2.2** ([13, Definition 2.3]).: For \(g\in G\), the _equivariant Bismut-Cheeger eta form_\(\widetilde{\eta}_{g}(\underline{\pi},\underline{\mathcal{E}})\in\Omega^{ \bullet}(S)\) is defined by
\[\begin{split}\widetilde{\eta}_{g}(\underline{\pi},\underline{ \mathcal{E}})&:=-\int_{0}^{+\infty}\left\{\psi_{S}\widetilde{ \mathrm{Tr}}\left[g\exp\left(-\left(B_{u}+du\wedge\frac{\partial}{\partial u} \right)^{2}\right)\right]\right\}^{du}du\\ &=\begin{cases}\int_{0}^{+\infty}\widetilde{\psi}_{S}\mathrm{Tr}^{ \mathrm{even}}\left[g\frac{\partial B_{u}}{\partial u}\exp(-B_{u}^{2})\right] du\in\Omega^{\mathrm{even}}(B;\mathbb{C}),&\text{if}\dim Z\text{ is odd;}\\ \int_{0}^{+\infty}\widetilde{\psi}_{S}\mathrm{Tr}_{s}\left[g\frac{\partial B _{u}}{\partial u}\exp(-B_{u}^{2})\right]du\in\Omega^{\mathrm{odd}}(B;\mathbb{C }),&\text{if}\dim Z\text{ is even.}\end{cases}\end{split} \tag{2.21}\]
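As a consistency check (a sketch of the computation behind (1.4)): when \(S\) is a point and \(\dim Z\) is odd, \(B_{u}=\sqrt{u}\,D^{\mathcal{E}}_{Z}\), \(\frac{\partial B_{u}}{\partial u}=\frac{1}{2\sqrt{u}}D^{\mathcal{E}}_{Z}\), only the \(0\)-form part survives, and (2.21) reduces to
\[
\widetilde{\eta}_{g}(\underline{\pi},\underline{\mathcal{E}})
=\frac{1}{2}\cdot\frac{1}{\sqrt{\pi}}\int_{0}^{+\infty}u^{-1/2}\,
\mathrm{Tr}^{\mathrm{even}}\big[g\,D^{\mathcal{E}}_{Z}\,e^{-u(D^{\mathcal{E}}_{Z})^{2}}\big]\,du,
\]
which is one half of the usual heat kernel expression of the equivariant eta invariant; for \(g=1\) this recovers (1.4).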
### Finite dimensional eta form
In this subsection, we introduce the definition of equivariant finite dimensional eta form.
Let \(E\to S\) be a \(G\)-equivariant \(\mathbb{Z}_{2}\)-graded vector bundle with Hermitian metric \(h^{E}\), which preserves the \(\mathbb{Z}_{2}\)-grading. We assume that the \(G\)-action on \(S\) is trivial and \(h^{E}\) is \(G\)-invariant. Let \(\nabla^{E}\) be a \(G\)-invariant connection preserving \(h^{E}\). Take \(V\in\mathrm{End}(E)\) commuting with the \(G\)-action. We assume that \(V\) either commutes or anticommutes with the \(\mathbb{Z}_{2}\)-grading. Set
\[E^{\prime}:=\ker V. \tag{2.22}\]
We assume that \(\dim\ker V\) is locally constant. Then \(\ker V\to S\) is a \(G\)-equivariant \(\mathbb{Z}_{2}\)-graded vector bundle. We define the equivariant geometric data on \(E^{\prime}\) by the orthogonal projection \(P^{\ker V}:E\to E^{\prime}\) as
\[h^{E^{\prime}}:=P^{\ker V}\circ h^{E}\circ P^{\ker V},\quad\nabla^{E^{\prime}}:=P ^{\ker V}\circ\nabla^{E}\circ P^{\ker V}. \tag{2.23}\]
Then \(\nabla^{E^{\prime}}\) is a connection preserving \(h^{E^{\prime}}\).
Based on Quillen's work [22], we define the superconnection as follows.
**Definition 2.3**.: Let \(\sigma\) be a quantity which supercommutes with \(\Omega^{\bullet}(S)\). We define a _superconnection_ \(L:\Omega^{\bullet}(S,E)\to\Omega^{\bullet}(S,E)\) by:
\[L:=\begin{cases}\nabla^{E}+V,&\text{if $V$ anticommutes with the $\mathbb{Z}_{2}$-grading};\\ \nabla^{E}+\sigma V,&\text{if $V$ commutes with the $\mathbb{Z}_{2}$-grading}.\end{cases} \tag{2.24}\]
_Remark 2.4_.: In the sequel, in order to simplify the notation, when \(V\) commutes with the \(\mathbb{Z}_{2}\)-grading, we usually omit \(\sigma\) and regard \(V\) as a quantity which commutes with differential forms of even degree and anticommutes with differential forms of odd degree.
For \(u>0\), set
\[L_{u}:=\sqrt{u}\delta_{u}\circ L\circ\delta_{u}^{-1}=\sqrt{u}V+\nabla^{E}. \tag{2.25}\]
**Definition 2.5**.: For \(g\in G\), we define the _equivariant finite dimensional eta form_ by
\[\begin{split}\widetilde{\eta}_{g}(E,E^{\prime},\nabla^{E}, \nabla^{E^{\prime}})&:=-\int_{0}^{+\infty}\left\{\psi_{S}{\rm Tr }_{s}\left[g\exp\left(-\left(L_{u}+du\wedge\frac{\partial}{\partial u}\right) ^{2}\right)\right]\right\}^{du}du\\ &=\int_{0}^{+\infty}\widetilde{\psi}_{S}{\rm Tr}_{s}\left[g \frac{\partial L_{u}}{\partial u}\exp(-L_{u}^{2})\right]du.\end{split} \tag{2.26}\]
That this integral is well-defined follows from the equivariant version of [2, Theorem 9.7]. Moreover, by the equivariant version of [2, (9.2)],
\[d\widetilde{\eta}_{g}(E,E^{\prime},\nabla^{E},\nabla^{E^{\prime}})={\rm ch}_ {g}(E^{\prime},\nabla^{E^{\prime}})-{\rm ch}_{g}(E,\nabla^{E}). \tag{2.27}\]
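For the reader's convenience, here is a minimal sketch of (2.27) in the case where \(V\) anticommutes with the \(\mathbb{Z}_{2}\)-grading (the other case is treated similarly after inserting \(\sigma\)), assuming, as is implicit in the convergence of Definition 2.5, that \(V\) is self-adjoint so that the heat operator localizes onto \(E^{\prime}=\ker V\) as \(u\to+\infty\):
\[
\frac{\partial}{\partial u}\,\mathrm{Tr}_{s}\big[g\,e^{-L_{u}^{2}}\big]
=-\,d\,\mathrm{Tr}_{s}\Big[g\,\frac{\partial L_{u}}{\partial u}\,e^{-L_{u}^{2}}\Big],
\qquad
\mathrm{Tr}_{s}\big[g\,e^{-L_{u}^{2}}\big]\xrightarrow[u\to 0]{}\mathrm{Tr}_{s}\big[g\,e^{-(\nabla^{E})^{2}}\big],
\qquad
\mathrm{Tr}_{s}\big[g\,e^{-L_{u}^{2}}\big]\xrightarrow[u\to+\infty]{}\mathrm{Tr}_{s}\big[g\,e^{-(\nabla^{E^{\prime}})^{2}}\big].
\]
Integrating the first identity in \(u\) over \((0,+\infty)\) and inserting the normalizations (2.18)-(2.19) yields (2.27).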
## 3. Functoriality of eta forms
We present our main result in this section. In Section 3.1, we investigate the geometry of a composition of fibrations and define the Dirac operators. In Section 3.2, we construct a series of bundles for the composition of fibrations by means of the Dirac operators of Section 3.1, in analogy with [3, (6.9)]. We also define the associated equivariant eta forms. In Section 3.3, we rescale the Bismut superconnection and related operators in order to apply the adiabatic limit method. In Section 3.4, we state our main result, which relates, under certain assumptions, the equivariant Bismut-Cheeger eta forms and the finite dimensional eta forms associated with the composition of fibrations.
### Composition of fibrations
We revisit the geometric setting in Section 1 to make the definition clear. Let \(W\), \(V\), \(S\) be closed \(G\)-manifolds. Let \(\pi_{X}:W\to V\), \(\pi_{Y}:V\to S\) be equivariant submersions with closed oriented fibers \(X\), \(Y\). Then \(\pi_{Z}=\pi_{Y}\circ\pi_{X}:W\to S\) is an equivariant submersion with closed oriented fiber \(Z\). Assume that \(G\) acts on \(S\) trivially.
We denote by \(TX\), \(TY\), \(TZ\) the corresponding relative tangent bundles for \(\pi_{X}\), \(\pi_{Y}\), \(\pi_{Z}\), and by \(T^{H}_{X}W\), \(T^{H}_{Y}V\), \(T^{H}_{Z}W\) the horizontal \(G\)-equivariant subbundles, respectively. For \(U\in TS\), \(U^{\prime}\in TV\), we shall denote by \(U^{\prime H}_{X}\in T^{H}_{X}W\), \(U^{H}_{Y}\in T^{H}_{Y}V\), \(U^{H}_{Z}\in T^{H}_{Z}W\) the horizontal lifts of \(U^{\prime}\), \(U\), \(U\) such that \(\pi_{X,*}(U^{\prime H}_{X})=U^{\prime}\), \(\pi_{Y,*}(U^{H}_{Y})=U\), \(\pi_{Z,*}(U^{H}_{Z})=U\). We assume that \(T^{H}_{Z}W\subset T^{H}_{X}W\). Let \(T^{H}Z:=T^{H}_{X}W\cap TZ\). We have a splitting \(TZ=T^{H}Z\oplus TX\) such that \(T^{H}Z\simeq\pi_{X}^{*}TY\).
Let \(g^{TX}\), \(g^{TY}\) be two \(G\)-invariant Euclidean metrics on relative tangent bundles \(TX\), \(TY\) respectively. We define \(g^{TZ}:=\pi_{X}^{*}g^{TY}\oplus g^{TX}\) on \(TZ\), which is also \(G\)-invariant. Let \(\nabla^{TX}\), \(\nabla^{TY}\), \(\nabla^{TZ}\) be \(G\)-invariant connections defined in (2.3) on \(TX\), \(TY\), \(TZ\) respectively. Let \({}^{0}\nabla^{TZ}\) be the connection
\[{}^{0}\nabla^{TZ}:=\pi_{X}^{*}\nabla^{TY}\oplus\nabla^{TX}. \tag{3.1}\]
In this paper, we write \(\{g_{\alpha}\}\), \(\{e_{i}\}\), \(\{f_{p}\}\) for local orthonormal frames of \((TS,g^{TS})\), \((TX,g^{TX})\), \((TY,g^{TY})\), respectively, and \(\{g^{\alpha}\}\), \(\{e^{i}\}\), \(\{f^{p}\}\) for the corresponding dual frames.
Let \(\mathcal{E}_{X}=(\mathcal{E}_{X},h^{\mathcal{E}_{X}},\nabla^{\mathcal{E}_{X}})\) (resp. \(\mathcal{E}_{Y}=(\mathcal{E}_{Y},h^{\mathcal{E}_{Y}},\nabla^{\mathcal{E}_{Y}})\)) be a \(\mathbb{Z}_{2}\)-graded \(G\)-equivariant self-adjoint \(\operatorname{\mathrm{Cl}}(TX)\)-module over \(W\) (resp. \(\operatorname{\mathrm{Cl}}(TY)\)-module over \(V\)) with a \(G\)-invariant Clifford connection. Then \(\mathcal{E}_{X}\) is a \(G\)-equivariant vector bundle over \(W\) and \(\mathcal{E}_{Y}\) is a \(G\)-equivariant vector bundle over \(V\). Set
\[\mathcal{E}:=\pi_{X}^{*}\mathcal{E}_{Y}\widehat{\otimes}\mathcal{E}_{X}. \tag{3.2}\]
Then \(\mathcal{E}\) is a \(\mathbb{Z}_{2}\)-graded \(G\)-equivariant self-adjoint Clifford module of \(\operatorname{\mathrm{Cl}}(TZ)\simeq\pi_{X}^{*}\operatorname{\mathrm{Cl}}(TY)\widehat{\otimes}\operatorname{\mathrm{Cl}}(TX)\) with Hermitian metric \(h^{\mathcal{E}}:=\pi_{X}^{*}h^{\mathcal{E}_{Y}}\otimes h^{\mathcal{E}_{X}}\). For \(U\in TY\), the Clifford action \(c(U)\) on \(\mathcal{E}_{Y}\) is lifted to \(\pi_{X}^{*}\mathcal{E}_{Y}\) as \(c(U_{X}^{H})\). Set
\[{}^{0}\nabla^{\mathcal{E}}:=\pi_{X}^{*}\nabla^{\mathcal{E}_{Y}}\otimes 1+1 \otimes\nabla^{\mathcal{E}_{X}}. \tag{3.3}\]
From [13, (4.3)],
\[\nabla^{\mathcal{E}}:={}^{0}\nabla^{\mathcal{E}}+\frac{1}{2}\langle\mathcal{S }_{X}(\cdot)e_{i},f_{p,X}^{H}\rangle c(e_{i})c(f_{p})+\frac{1}{4}\langle \mathcal{S}_{X}(\cdot)f_{p,X}^{H},f_{q,X}^{H}\rangle c(f_{p})c(f_{q}) \tag{3.4}\]
is a \(G\)-invariant Clifford connection on \((\mathcal{E},h^{\mathcal{E}})\) associated with \(\nabla^{TZ}\). Here \(\mathcal{S}_{X}\) is the tensor in (2.5) associated with \(\pi_{X}\).
Let \(D_{X}^{\mathcal{E}_{X}}\) and \(D_{Z}^{\mathcal{E}}\) be the family Dirac operators with respect to \((g^{TX},\nabla^{\mathcal{E}_{X}})\) and \((g^{TZ},\nabla^{\mathcal{E}})\) respectively. Denote by \(\mathscr{E}_{X}\) the infinite dimensional vector bundle over \(V\) associated with \(\mathcal{E}_{X}\). We shall denote by \(\langle\cdot,\cdot\rangle_{\mathscr{E}_{X}}\) the \(L^{2}\)-inner product on \(\mathscr{E}_{X}\) and by \(\nabla^{\mathscr{E}_{X}}\) the \(G\)-invariant \(\langle\cdot,\cdot\rangle_{\mathscr{E}_{X}}\)-preserving connection as before. We define the inner product \(\langle\cdot,\cdot\rangle_{\mathcal{E}_{Y}\otimes\mathscr{E}_{X}}\) on \(\mathcal{E}_{Y}\otimes\mathscr{E}_{X}\) from \(h^{\mathcal{E}_{Y}}\) and \(\langle\cdot,\cdot\rangle_{\mathscr{E}_{X}}\). Set
\[\nabla^{\mathcal{E}_{Y}\otimes\mathscr{E}_{X}}:=\nabla^{\mathcal{E}_{Y}} \otimes 1+1\otimes\nabla^{\mathscr{E}_{X}} \tag{3.5}\]
on the bundle \(\mathcal{E}_{Y}\otimes\mathscr{E}_{X}\to V\).
We assume that \(\ker D_{X}^{\mathcal{E}_{X}}\) forms a vector bundle over \(V\). Let
\[P^{\ker D_{X}^{\mathcal{E}_{X}}}:\mathcal{E}_{Y}\otimes\mathscr{E}_{X}\to \mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}} \tag{3.6}\]
be the orthogonal projection with respect to \(\langle\cdot,\cdot\rangle_{\mathcal{E}_{Y}\otimes\mathscr{E}_{X}}\). It is clear that \(P^{\ker D_{X}^{\mathcal{E}_{X}}}\) induces a metric, denoted by \(\langle\cdot,\cdot\rangle_{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}}\), and a connection, denoted by \(\nabla^{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}}\), on \(\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}\). Note that all these data constructed on \(\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}\) are \(G\)-invariant.
Set
\[D_{H}:=\sum_{p=1}^{\dim Y}c(f_{p})\nabla^{\mathcal{E}_{Y}\otimes\mathscr{E}_{X }}_{f_{p}^{H}}. \tag{3.7}\]
Let \(D_{Y}^{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}}\) be the fiberwise Dirac operator associated with \((g^{TY},\nabla^{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}})\). Then it is clear that
\[D_{Y}^{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}}=P^{\ker D_{X}^{ \mathcal{E}_{X}}}\circ D_{H}\circ P^{\ker D_{X}^{\mathcal{E}_{X}}}. \tag{3.8}\]
By [2, Theorem 10.19], we have
\[D_{Z}^{\mathcal{E}}=D_{X}^{\mathcal{E}_{X}}+D_{H}+C, \tag{3.9}\]
where
\[C=-\frac{1}{8}\langle\mathcal{T}_{X}(f_{p,X}^{H},f_{q,X}^{H}),e_{i}\rangle c(e _{i})c(f_{p})c(f_{q}). \tag{3.10}\]
Here \(\mathcal{T}_{X}\) is the torsion tensor associated with the fibration \(\pi_{X}\).
### The bundles of spectral sequence
With all these geometric data prepared, in this subsection, we will define a series of bundles over \(S\), denoted by \(\{\mathscr{E}_{r}\}_{r=0,1,\cdots,\infty}\).
Compared with [3, (6.9)] and [20, (2.13)], we make the following definition.
**Definition 3.1**.: For \(b\in S\), \(r\in\mathbb{Z}_{+}\), we define
\[\begin{split}\mathscr{E}_{r,b}:=\Big\{s_{0}\in\mathscr{E}_{b}:&\text{ there exist }s_{1},\cdots,s_{r-1}\in\mathscr{E}_{b}\text{ such that}\\ &D_{X}^{\mathcal{E}_{X}}s_{0}=0,\ D_{H}s_{0}+D_{X}^{\mathcal{E}_{X}}s_{1}=0,\ Cs_{0}+D_{H}s_{1}+D_{X}^{\mathcal{E}_{X}}s_{2}=0,\\ &\cdots,\ Cs_{r-3}+D_{H}s_{r-2}+D_{X}^{\mathcal{E}_{X}}s_{r-1}=0\Big\}.\end{split} \tag{3.11}\]
In what follows, for \(s_{0}\in\mathscr{E}_{r}\) we shall write \(\varphi_{r}(s_{0})=(s_{1},\cdots,s_{r-1})\) for a choice of elements \(s_{1},\cdots,s_{r-1}\) as in (3.11).
Note that \(\mathscr{E}_{1,b}=C^{\infty}(Y_{b},\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}})\), hence \(\mathscr{E}_{2,b}=\ker D_{Y_{b}}^{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}}\) is a finite dimensional vector space. For \(r>2\), \(\mathscr{E}_{r,b}\subset\mathscr{E}_{2,b}\). So for \(r\geq 2\), \(\dim\mathscr{E}_{r,b}<+\infty\). We assume that \(\{\mathscr{E}_{r,b}\}_{b\in S}\) constitute complex vector bundles for \(r\geqslant 2\); they are then naturally \(G\)-equivariant and \(\mathbb{Z}_{2}\)-graded.
_Remark 3.2_.: When the Dirac operator is the Dolbeault operator, \(\mathscr{E}_{r}\) reduces to [3, (6.9)] and for signature operator, \(\mathscr{E}_{r}\) becomes the term \(\mathcal{E}_{r}\) in [20, (2.13)] and [9, Theorems 5.9, 5.10]. For Dolbeault and signature operator, \(\mathscr{E}_{r}\) can be interpreted as terms of Leray spectral sequences (see [3, Theorem 6.1] and [20, Proposition 2.1]). However, in our general case, there is no topological meaning for \(\mathscr{E}_{r},r\geqslant 2\). Hence we need the assumption above.
Now we will construct the geometric data and the equivariant eta form for \(\mathscr{E}_{r}\).
For \(r=0\), set \(\mathscr{E}_{0}=\mathscr{E}\), the infinite dimensional bundle over \(S\) whose fiber is the space of smooth sections of \(\mathcal{E}\) over \(Z\). Abusing notation, we write \(h_{0}\) for \(\langle\cdot,\cdot\rangle_{\mathscr{E}}\), the metric on \(\mathscr{E}\) defined in (2.12). We write \(\nabla^{0}=\nabla^{\mathscr{E}}\) and \(D_{0}=D_{X}^{\mathcal{E}_{X}}\). Let \(B_{0}\) be the Bismut superconnection associated with \((T_{X}^{H}W,g^{TX},\nabla^{\mathcal{E}_{X}})\).
For any \(g\in G\), the equivariant Bismut-Cheeger eta form
\[\widetilde{\eta}_{g}(\underline{\pi_{X}^{g}},\underline{\mathcal{E}_{X}})\in\Omega^{\bullet}(V^{g}), \tag{3.12}\]
is well-defined as in Definition 2.2. Here \(\underline{\pi_{X}^{g}}\) stands for
\[\underline{\pi_{X}^{g}}:=\left(\pi_{X}|_{W^{g}},T_{X}^{H}(W|_{V^{g}}):=T_{X}^ {H}W|_{V^{g}}\cap T(W|_{V^{g}}),g^{TX}\right). \tag{3.13}\]
For \(r=1\), \(\mathscr{E}_{1}=\ker D_{0}=\ker D_{X}^{\mathcal{E}_{X}}\). Set \(p_{1}=P^{\ker D_{X}^{\mathcal{E}_{X}}}:\mathscr{E}_{0}\to\ker D_{0}\). Let \(h_{1}\) be the metric on \(\mathscr{E}_{1}\) induced from \(h_{0}\). Let
\[\nabla^{1}:=p_{1}\circ\nabla^{0}\circ p_{1} \tag{3.14}\]
be the connection on \(\mathscr{E}_{1}\) preserving \(h_{1}\). We denote by \(p_{1}^{\perp}:=1-p_{1}\).
Let \(D_{1}:=D_{Y}^{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}}\) be the Dirac operator defined in (3.8). Let \(B_{1}\) be the Bismut superconnection associated with \((T_{Y}^{H}V,g^{TY},\nabla^{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}})\). For \(g\in G\), we have
\[\widetilde{\eta}_{g}(\underline{\pi_{Y}},\underline{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}})\in\Omega^{\bullet}(S). \tag{3.15}\]
For \(r\geqslant 2\), let
\[p_{r}:\mathscr{E}_{0}\to\mathscr{E}_{r} \tag{3.16}\]
be the orthogonal projection with respect to \(h_{0}\) and \(p_{r}^{\perp}:=1-p_{r}\). Let \(h_{r}\) be the metric on \(\mathscr{E}_{r}\) induced from \(h_{0}\) and
\[\nabla^{r}:=p_{r}\circ\nabla^{0}\circ p_{r}. \tag{3.17}\]
Then \(\nabla^{r}\) preserves \(h_{r}\).
Comparing with [3, (6.10)] and [20, (2.14)], we define \(D_{r}\) on \(\mathscr{E}_{r}\) by
\[D_{r}s_{0}:=p_{r}(D_{H}s_{r-1}+Cs_{r-2}), \tag{3.18}\]
where \(s_{r-1},s_{r-2}\in\mathscr{E}\) are elements as in (3.11). Then \(D_{r}\) commutes (resp. anticommutes) with the \(\mathbb{Z}_{2}\)-grading on \(\mathscr{E}_{r}\) when \(\dim Z\) is odd (resp. even).
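To illustrate (3.18) in the lowest nontrivial case (a sketch; by Lemma 3.3 below the choice of the auxiliary elements does not matter): for \(s_{0}\in\mathscr{E}_{2}\) one may take \(s_{1}=-(D_{X}^{\mathcal{E}_{X}})^{-1}p_{1}^{\perp}D_{H}s_{0}\), as in (3.24) below, which gives
\[
D_{2}s_{0}=p_{2}\Big(Cs_{0}-D_{H}\,(D_{X}^{\mathcal{E}_{X}})^{-1}p_{1}^{\perp}D_{H}\,s_{0}\Big),
\]
the analogue of the second differential of a spectral sequence (cf. Remark 3.2).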
**Lemma 3.3**.: _The operator \(D_{r}\) in (3.18) is well-defined. That is, it is independent of the choice of \(s_{1},\cdots,s_{r-1}\)._
Proof.: When \(r=1\), \(D_{1}=D_{Y}^{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}}\) is directly defined by (3.8), hence well-defined. We assume that \(D_{r^{\prime}}\) is well-defined for any \(r^{\prime}\leqslant r-1\). We shall prove that \(D_{r}\) is well-defined. Suppose that \(\varphi_{r}(s_{0})=(s_{1},\cdots,s_{r-1})\) and \(\varphi_{r}^{\prime}(s_{0})=(s_{1}^{\prime},\cdots,s_{r-1}^{\prime})\) are two choices as in (3.11). We will show that \(D_{r}s_{0}=p_{r}(D_{H}s_{r-1}+Cs_{r-2})\) and \(D_{r}^{\prime}s_{0}=p_{r}(D_{H}s_{r-1}^{\prime}+Cs_{r-2}^{\prime})\) define the same operator.
We claim that the following isomorphism holds for \(r\):
\[\ker D_{r-1}\simeq\mathscr{E}_{r}. \tag{3.19}\]
Then we define \(t_{k}:=s_{k+1}-s_{k+1}^{\prime},k=0,\cdots,r-2\). So we have that \(D_{X}^{\mathcal{E}_{X}}t_{0}=0\), \(D_{H}t_{0}+D_{X}^{\mathcal{E}_{X}}t_{1}=0\), \(\cdots\), \(Ct_{r-4}+D_{H}t_{r-3}+D_{X}^{\mathcal{E}_{X}}t_{r-2}=0\), which means that \(t_{1},\cdots,t_{r-2}\) make \(t_{0}\) in \(\mathscr{E}_{r-1}\). We have
\[(D_{r}-D_{r}^{\prime})s_{0}=p_{r}(D_{H}t_{r-2}+Ct_{r-3})=p_{r}p_{r-1}(D_{H}t_ {r-2}+Ct_{r-3})=p_{r}D_{r-1}t_{0}. \tag{3.20}\]
By (3.19), \(D_{r-1}t_{0}\in\operatorname{im}D_{r-1}\subset(\mathscr{E}_{r})^{\perp}\). Thus \((D_{r}-D_{r}^{\prime})s_{0}=p_{r}D_{r-1}t_{0}=0\). So \(D_{r}=D_{r}^{\prime}\), which proves Lemma 3.3.
Now we prove (3.19). For \(r=1\), \(\mathscr{E}_{1}\simeq\ker D_{0}\) follows from the definition. We assume that \(\ker D_{r^{\prime}-1}\cong\mathscr{E}_{r^{\prime}}\) for \(r^{\prime}\leqslant k,k<r\). We only need to prove that it holds for \(k+1\).
On one hand, for \(s_{0}\in\mathscr{E}_{k+1}\), Let \(\varphi_{k+1}(s_{0})=(s_{1},\cdots,s_{k})\). By (3.18),
\[D_{k}s_{0}=p_{k}(D_{H}s_{k-1}+Cs_{k-2})=-p_{k}D_{X}^{\mathcal{E}_{X}}s_{k}. \tag{3.21}\]
Since \(\mathscr{E}_{1}=\ker D_{X}^{\mathcal{E}_{X}}\), we know that \(D_{X}^{\mathcal{E}_{X}}s_{k}\in(\ker D_{X}^{\mathcal{E}_{X}})^{\perp}=( \mathscr{E}_{1})^{\perp}\subset(\mathscr{E}_{k})^{\perp}\). So \(D_{k}s_{0}=0\), which implies that \(\mathscr{E}_{k+1}\subset\ker D_{k}\).
On the other hand, we need to show that
\[\ker D_{k}\subset\mathscr{E}_{k+1}. \tag{3.22}\]
Suppose that \(s_{0}\in\mathscr{E}_{k}\) satisfies
\[D_{k}s_{0}=p_{k}(D_{H}s_{k-1}+Cs_{k-2})=0. \tag{3.23}\]
When \(k=1\), by setting
\[s_{1}=-(D_{X}^{\mathcal{E}_{X}})^{-1}p_{1}^{\perp}D_{H}s_{0}, \tag{3.24}\]
we fulfill the equation below
\[D_{X}^{\mathcal{E}_{X}}s_{1}+D_{H}s_{0}=0. \tag{3.25}\]
Now by assuming that (3.19) holds for any \(k^{\prime}\leqslant k\) and the assumption (3.23), we may write
\[\begin{split} D_{H}s_{k-1}+Cs_{k-2}&=D_{k-1}s_{0}^{( 1)}+p_{k-1}^{\perp}(D_{H}s_{k-1}+Cs_{k-2})\\ &=D_{k-1}s_{0}^{(1)}+D_{k-2}s_{0}^{(2)}+p_{k-2}^{\perp}(D_{H}s_{k -1}+Cs_{k-2})\\ &=\cdots=D_{k-1}s_{0}^{(1)}+D_{k-2}s_{0}^{(2)}+\cdots+D_{X}^{ \mathcal{E}_{X}}s_{0}^{(k)},\end{split} \tag{3.26}\]
where \(s_{0}^{(i)}\in\mathscr{E}_{k-i}\) with \(\varphi_{k-i}(s_{0}^{(i)})=(s_{1}^{(i)},\cdots,s_{k-i-1}^{(i)})\).
Note that under the assumption (3.19),
\[\begin{split} D_{k-1}s_{0}^{(1)}&=p_{k-1}(D_{H}s_{ k-2}^{(1)}+Cs_{k-3}^{(1)})=D_{H}s_{k-2}^{(1)}+Cs_{k-3}^{(1)}-p_{k-1}^{\perp}(D_{H}s_{ k-2}^{(1)}+Cs_{k-3}^{(1)})\\ &=D_{H}s_{k-2}^{(1)}+Cs_{k-3}^{(1)}+D_{k-2}s_{0}^{(1,1)}+p_{k-3}^{ \perp}(D_{H}s_{k-2}^{(1)}+Cs_{k-3}^{(1)})\\ &=\cdots=D_{H}s_{k-2}^{(1)}+Cs_{k-3}^{(1)}+D_{k-2}s_{0}^{(2)^{ \prime}}+\cdots+D_{X}^{\mathcal{E}_{X}}s_{0}^{(k-1)^{\prime}},\end{split} \tag{3.27}\]
which means that by taking \(\varphi_{k}(s_{0})=(s_{1},s_{1}-s_{1}^{(1)},\cdots,s_{k-1}-s_{k-1}^{(1)})=:(s_{1}^ {\prime},\cdots,s_{k-1}^{\prime})\), we may write
\[D_{H}s_{k-1}^{\prime}+Cs_{k-2}^{\prime}=D_{k-2}s_{0}^{(1,1)}+\cdots+D_{X}^{ \mathcal{E}_{X}}s_{0}^{(1,k-1)}. \tag{3.28}\]
Iterating this procedure, we may find \(\varphi_{k}(s_{0})=(\widetilde{s}_{1},\cdots,\widetilde{s}_{k-1})\) such that
\[D_{H}\widetilde{s}_{k-1}+C\widetilde{s}_{k-2}=-D_{X}^{\mathcal{E}_{X}}\widetilde {s}_{k}, \tag{3.29}\]
for some \(\widetilde{s}_{k}\in\mathscr{E}\). Therefore \(\varphi_{k+1}(s_{0})=(\widetilde{s}_{1},\cdots,\widetilde{s}_{k})\) is well-defined and \(s_{0}\in\mathscr{E}_{k+1}\).
**Corollary 3.4**.: _For \(r\geqslant 0\), we have_
\[\ker D_{r}\simeq\mathscr{E}_{r+1}. \tag{3.30}\]
By assumption, for \(r\geq 2\), \(\mathscr{E}_{r}\) is a \(G\)-equivariant \(\mathbb{Z}_{2}\)-graded vector bundle over \(S\). By Definition 2.3, we define the superconnection
\[B_{r}:=\nabla^{r}+D_{r},\quad\text{ for }r\geqslant 2. \tag{3.31}\]
By Lemma 3.3 and Definition 2.5, for any \(g\in G\), we could define
\[\widetilde{\eta}_{g}(\mathscr{E}_{r},\mathscr{E}_{r+1},\nabla^{r},\nabla^{r+1 })\in\Omega^{\bullet}(S),\quad r\geqslant 2. \tag{3.32}\]
As \(\mathscr{E}_{r}\supset\mathscr{E}_{r+1}\) and \(\dim\mathscr{E}_{r}<+\infty\) for \(r\geq 2\), there exists \(r_{0}\) such that \(\mathscr{E}_{r}=\mathscr{E}_{r_{0}}\) for \(r\geqslant r_{0}\). We denote this stable bundle by \(\mathscr{E}_{\infty}\).
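Schematically, the construction of this subsection can be summarized as follows (this is only a restatement of Definition 3.1, Corollary 3.4 and the stabilization just observed):
\[
\mathscr{E}_{0}\supset\mathscr{E}_{1}\supset\mathscr{E}_{2}\supset\cdots\supset\mathscr{E}_{\infty},
\qquad
\mathscr{E}_{r+1}\simeq\ker\big(D_{r}\colon\mathscr{E}_{r}\to\mathscr{E}_{r}\big),
\]
so that \((\mathscr{E}_{r},D_{r})\) plays the role of the \(r\)-th page of a spectral sequence, the next page being realized by harmonic elements rather than by cohomology classes (cf. Remark 3.2).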
We assume that \(\ker D^{\mathcal{E}}_{Z}\) forms a vector bundle over \(S\). Let
\[p_{\infty}:\mathscr{E}_{0}\to\mathscr{E}_{\infty},\quad p:\mathscr{E}_{0}\to \ker D^{\mathcal{E}}_{Z} \tag{3.33}\]
be the orthogonal projections associated with \(h_{0}\). We have the natural connection on \(\ker D^{\mathcal{E}}_{Z}\):
\[\nabla^{\ker D^{\mathcal{E}}_{Z}}:=p\circ\nabla^{0}\circ p. \tag{3.34}\]
### The family of adiabatic limit
In this subsection, we will study a family of adiabatic limits over the base manifold \(S\).
We define the \(G\)-invariant metrics over \(TZ\) and \(TW\) for \(T\geq 1\):
\[g^{TZ}_{T}:=\pi^{\star}_{X}g^{TY}\oplus\frac{1}{T^{2}}g^{TX},\quad g^{TW}_{T}: =\pi^{\star}_{Z}g^{TS}\oplus g^{TZ}_{T}. \tag{3.35}\]
Let \(\operatorname{Cl}_{T}(TZ)\) be the Clifford algebra bundle associated with \(g^{TZ}_{T}\), the Clifford multiplication of which is denoted by \(c_{T}\). It is easy to see that the map \((\operatorname{Cl}_{T}(TZ),g^{TZ}_{T})\to(\operatorname{Cl}(TZ),g^{TZ})\), defined by \(c_{T}(f_{p})\mapsto c(f_{p}),c_{T}(Te_{i})\mapsto c(e_{i})\), is an isomorphism of Clifford algebras. Then we could regard \(\mathcal{E}\) as a Clifford module of \(\operatorname{Cl}_{T}(TZ)\) through this isomorphism. Let \(\nabla^{TZ,T}\) be the connection defined by (2.3) associated with \((T^{H}_{Z}W,g^{TZ}_{T})\). Then from [13, (4.3)], as in (3.4),
\[\nabla^{\mathcal{E},T}:={}^{0}\nabla^{\mathcal{E}}+\frac{1}{2T}\langle \mathcal{S}_{X}(\cdot)e_{i},f^{H}_{p,X}\rangle c(e_{i})c(f_{p})+\frac{1}{4T^{2 }}\langle\mathcal{S}_{X}(\cdot)f^{H}_{p,X},f^{H}_{q,X}\rangle c(f_{p})c(f_{q}) \tag{3.36}\]
is a \(G\)-invariant Clifford connection associated with \(\nabla^{TZ,T}\).
Let \(B_{T}\) be the Bismut superconnection associated with \((T^{H}_{Z}W,g^{TZ}_{T},\nabla^{\mathcal{E},T})\). Let
\[B_{u^{2},T}:=u\delta_{u^{2}}\circ B_{T}\circ\delta_{u^{2}}^{-1}. \tag{3.37}\]
Let \({}^{0}\nabla^{\mathcal{E}}\) be the connection on \(\mathscr{E}\) with respect to \({}^{0}\nabla^{\mathcal{E}}\) defined in [7, (1.7)], which preserves \(\langle\cdot,\cdot\rangle_{\mathscr{E}}\) in (2.12).
**Theorem 3.5**.: _[_13_, Proposition 5.5]_ _For \(T>0,u>0\),_
\[B_{u^{2},T}=uTD^{\mathcal{E}_{X}}_{X}+{}^{0}\nabla^{\mathcal{E} }+uD_{H}-\frac{1}{4u}\langle\mathcal{S}_{X}(f^{H}_{p,X})g^{H}_{\alpha,Z},g^{H }_{\beta,Z}\rangle c(f_{p})g^{\alpha}\wedge g^{\beta}\wedge\\ -\frac{u}{4T}\langle\mathcal{S}_{X}(e_{i})f^{H}_{p,X},f^{H}_{q,X} \rangle c(e_{i})c(f_{p})c(f_{q})-\frac{1}{4uT}\langle\mathcal{S}_{Z}(e_{i})g^{ H}_{\alpha,Z},g^{H}_{\beta,Z}\rangle c(e_{i})g^{\alpha}\wedge g^{\beta}\wedge\\ -\frac{1}{2T}\langle\mathcal{S}_{X}(e_{i})f^{H}_{p,X},g^{H}_{ \alpha,Z}\rangle c(e_{i})c(f_{p})g^{\alpha}\wedge. \tag{3.38}\]
_Here \(\mathcal{S}_{Z}\) is the tensor in (2.5) associated with \(\pi_{Z}\)._
Let \(D^{\mathcal{E}}_{Z,T}\) be the fiberwise Dirac operator associated with \((g^{TZ}_{T},\nabla^{\mathcal{E},T})\). By taking the \(0\)-form part on \(S\) in (3.38), we have
\[D^{\mathcal{E}}_{Z,T}=TD^{\mathcal{E}_{X}}_{X}+D_{H}+\frac{1}{T}C. \tag{3.39}\]
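For later use in Section 5, we record the immediate consequence of (3.39) (nothing beyond a rescaling): for \(r\geqslant 2\),
\[
T^{\,r-1}D^{\mathcal{E}}_{Z,T}=T^{\,r}D^{\mathcal{E}_{X}}_{X}+T^{\,r-1}D_{H}+T^{\,r-2}C,
\]
so that, applying \(\lambda-T^{r-1}D^{\mathcal{E}}_{Z,T}\) to a sum \(s_{0}+s_{1}/T+\cdots+s_{r}/T^{r}\) and requiring the coefficients of the positive powers of \(T\) to vanish, one recovers exactly the system (3.11) defining \(\mathscr{E}_{r}\); this is the computation carried out in (5.4) and (5.11) below.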
Let \(\nabla^{\mathcal{E},T}\) be the connection on \(\mathscr{E}\) with respect to \(\nabla^{\mathcal{E},T}\) defined in [7, (1.7)], which preserves \(\langle\cdot,\cdot\rangle_{\mathscr{E}}\) in (2.12). By taking the \(1\)-form part on \(S\) in (3.38), we see that
\[\nabla^{\mathcal{E},T}={}^{0}\nabla^{\mathcal{E}}-\frac{1}{2T}\langle \mathcal{S}_{X}(e_{i})f^{H}_{p,X},g^{H}_{\alpha,Z}\rangle c(e_{i})c(f_{p})g^{ \alpha}\wedge. \tag{3.40}\]
The following lemma will be proved in Corollary 5.3.
**Lemma 3.6**.: _We assume that \(\ker D^{\mathcal{E}}_{Z,T}\) is a \(G\)-equivariant vector bundle over \(S\times[1,+\infty)_{T}\). Let \(p^{T}:\mathscr{E}_{0}\to\ker D^{\mathcal{E}}_{Z,T}\) be the orthogonal projection associated with \(h_{0}\). Then there exists \(C>0\), \(T_{0}\geq 1\), such that for any \(T\geq T_{0}\), \(s\in\mathscr{E}_{0}\),_
\[\|p^{T}s-p_{\infty}s\|\leq\frac{C}{T}\|s\|. \tag{3.41}\]
_Therefore, we have_
\[\mathscr{E}_{\infty}\simeq\ker D^{\mathcal{E}}_{Z}. \tag{3.42}\]
Compared with (3.34), we write
\[\nabla^{\ker D^{\mathcal{E}}_{Z,T}}:=p^{T}\circ\nabla^{\mathscr{E},T}\circ p^ {T}, \tag{3.43}\]
From (3.40), \(\lim_{T\to+\infty}\nabla^{\ker D^{\mathcal{E}}_{Z,T}}\) exists. We denote the limit by
\[\nabla^{\infty}={p_{\infty}}^{0}\nabla^{\mathscr{E}}p_{\infty}. \tag{3.44}\]
We rearrange the parameter by \(s=T^{-1}\). Then \(\ker D^{\mathcal{E}}_{Z,1/s}\) is a \(G\)-equivariant vector bundle over \(S\times[0,1]_{s}\). Then we can define the equivariant Chern-Simons forms \(\widetilde{\mathrm{ch}}_{g}(\ker D^{\mathcal{E}}_{Z},\nabla^{\infty},\nabla^{ \ker D^{\mathcal{E}}_{Z}})\) as in [16, (1.29)]. Moreover (see e.g., [16, (1.30)]),
\[d\,\widetilde{\mathrm{ch}}_{g}(\ker D^{\mathcal{E}}_{Z},\nabla^{\infty},\nabla^{\ker D^{\mathcal{E}}_{Z}})=\mathrm{ch}_{g}(\ker D^{\mathcal{E}}_{Z},\nabla^{\ker D^{\mathcal{E}}_{Z}})-\mathrm{ch}_{g}(\mathscr{E}_{\infty},\nabla^{\infty}). \tag{3.45}\]
### The main result
**Assumption 3.7**.: For the main result of this paper, we make the following assumptions:
* \(T^{H}_{Z}W\subset T^{H}_{X}W,\quad g^{TZ}=g^{TX}\oplus\pi_{X}^{*}g^{TY}\);
* \(\ker D^{\mathcal{E}}_{X}\) is a vector bundle over \(V\);
* \(\mathscr{E}_{r},r\geqslant 2\) are vector bundles over \(S\);
* \(\ker D^{\mathcal{E}}_{Z,T}\) is a vector bundle over \(S\times[1,+\infty)_{T}\).
* For \(g\in G\), \(TX^{g}\) and \(TY^{g}\) are oriented. So \(TZ^{g}\) is also oriented.
**Theorem 3.8**.: _For \(g\in G\), under the assumptions above, modulo exact forms on \(S\), we have_
\[\begin{split}\widetilde{\eta}_{g}(\underline{\pi_{Z}},\underline{\mathcal{E}})&=\widetilde{\eta}_{g}(\underline{\pi_{Y}},\underline{\mathcal{E}}_{Y}\otimes\ker D^{\mathcal{E}_{X}}_{X})+\int_{Y^{g}}\widehat{\mathrm{A}}_{g}(TY,\nabla^{TY})\mathrm{ch}_{g}(\mathcal{E}_{Y}/\mathcal{S},\nabla^{\mathcal{E}_{Y}})\widetilde{\eta}_{g}(\underline{\pi_{X}^{g}},\underline{\mathcal{E}}_{X})\\&\quad-\int_{Z^{g}}\widetilde{\widehat{\mathrm{A}}}_{g}(TZ,\nabla^{TZ},\,{}^{0}\nabla^{TZ})\mathrm{ch}_{g}(\mathcal{E}/\mathcal{S},\nabla^{\mathcal{E}})\\&\quad+\sum_{r=2}^{\infty}\widetilde{\eta}_{g}(\mathscr{E}_{r},\mathscr{E}_{r+1},\nabla^{r},\nabla^{r+1})+\widetilde{\mathrm{ch}}_{g}(\ker D^{\mathcal{E}}_{Z},\nabla^{\infty},\nabla^{\ker D^{\mathcal{E}}_{Z}}).\end{split} \tag{3.46}\]
## 4. The proof of Theorem 3.8
In this section, we prove our main result, Theorem 3.8. In Section 4.1, we introduce a second layer of adiabatic limit and obtain a 1-form on \(\mathbb{R}_{+,T}\times\mathbb{R}_{+,u}\) with values in \(\Omega^{\bullet}(S)\). In Section 4.2, we follow the same strategy as in [13, §4], combined with Ma's work in [19], to prove the main theorem. There is one intermediate theorem we need to prove, which is left to the next section.
### The fundamental form
Let \(\widehat{S}:=\mathbb{R}_{+,T}\times\mathbb{R}_{+,u}\times S\) and \(\widehat{W}:=\mathbb{R}_{+,T}\times\mathbb{R}_{+,u}\times W\), and let \(\widehat{\pi}_{Z}:\widehat{W}\to\widehat{S}\) be the induced fibration with fiber \(Z\). We define the metric \(\widehat{g}^{TZ}\) of \(\widehat{\pi}_{Z}\) such that
\[\widehat{g}^{TZ}|_{(T,u)}=u^{-2}g_{T}^{TZ}. \tag{4.1}\]
Let \(\widehat{P}_{W}:\widehat{W}\to W\) be the natural projection and \(\widehat{\mathcal{E}}:=\widehat{P}_{W}^{*}\mathcal{E}\). Let \(\nabla^{\widehat{\mathcal{E}}}\) be the connection on \(\widehat{\mathcal{E}}\) such that \(\nabla^{\widehat{\mathcal{E}}}|_{(T,u)}=\nabla^{\mathcal{E},T}\). Then the Bismut superconnection \(\widehat{B}\) with respect to \((\widehat{\pi}_{Z},\widehat{g}^{TZ},\nabla^{\widehat{\mathcal{E}}})\) can be written as (cf. [13, (4.4)])
\[\widehat{B}|_{(T,u)}=B_{u^{2},T}+dT\wedge\frac{\partial}{\partial T}+du\wedge\frac{\partial}{\partial u}-\frac{n}{2u}du-\frac{n-m}{2T}dT, \tag{4.2}\]
where \(n=\dim Z\) and \(m=\dim Y\), so that \(n-m=\dim X\).
**Definition 4.1**.: Let \(\gamma:=du\wedge\gamma^{u}+dT\wedge\gamma^{T}\) be the part of \(\psi_{\widetilde{S}}\widetilde{\mathrm{Tr}}[g\exp(-\widehat{B}^{2})]\) of degree one with respect to the coordinate \((T,u)\) with functions \(\gamma^{u},\gamma^{T}:\mathbb{R}_{+,T}\times\mathbb{R}_{+,u}\to\Omega^{ \bullet}(S)\).
It follows from [13, Proposition 4.2] that there exists a smooth family \(\alpha:\mathbb{R}_{+,T}\times\mathbb{R}_{+,u}\to\Omega^{\bullet}(S)\) such that
\[\left(du\wedge\frac{\partial}{\partial u}+dT\wedge\frac{\partial}{\partial T} \right)\gamma=dT\wedge du\wedge d^{S}\alpha. \tag{4.3}\]
We take \(\varepsilon,A,T_{0}\in\mathbb{R}\) such that \(0<\varepsilon\leqslant A<+\infty\), \(1\leqslant T_{0}<+\infty\). Set \(\Gamma=\Gamma_{\varepsilon,A,T_{0}}\), the contour in \(\mathbb{R}_{+,T}\times\mathbb{R}_{+,u}\) with four parts \(\Gamma_{1}\), \(\Gamma_{2}\), \(\Gamma_{3}\), \(\Gamma_{4}\), and let \(\mathcal{U}\) be the domain enclosed by \(\Gamma\), as in Figure 1.
We set \(I^{0}_{i}:=\int_{\Gamma_{i}}\gamma\). Then by (4.3) and the Stokes formula,
\[\sum_{i=1}^{4}I^{0}_{i}=\int_{\mathcal{U}}\left(du\wedge\frac{\partial}{\partial u}+dT\wedge\frac{\partial}{\partial T}\right)\gamma=d^{S}\left(\int_{\mathcal{U}}\alpha\,dT\wedge du\right). \tag{4.4}\]
We take the limits \(A\to+\infty\), \(T_{0}\to+\infty\) and then \(\varepsilon\to 0\), in the indicated order. Let \(I^{k}_{i}\), \(1\leqslant i\leqslant 4\), \(1\leqslant k\leqslant 3\), denote the value of \(I^{0}_{i}\) after taking the \(k\)-th limit. Then by [12, §22, Theorem 17],
\[\sum_{i=1}^{4}I^{3}_{i}\equiv 0\mod d^{S}\Omega^{\bullet}(S). \tag{4.5}\]
### Intermediate results
With all these superconnections \(B_{r}\), \(r\geqslant 0\), we can build the bridge between the fundamental form \(\gamma\) and equivariant eta forms \(\widetilde{\eta}_{g}\).
For \(r=0\), we consider the fibration \(\widetilde{W}|_{V^{g}}:=\mathbb{R}_{+,t}\times W|_{V^{g}}\to\widetilde{V}^{g}: =\mathbb{R}_{+,t}\times V^{g}\) with fiber \(X\). Let \(\widetilde{P}_{W}:\widetilde{W}|_{V^{g}}\to W|_{V^{g}}\) be the natural projection. Set \(T^{H}_{X}(\widetilde{W}|_{V^{g}})=T(\mathbb{R}_{+})\oplus\widetilde{P}^{*}_{W }(T^{H}_{X}W|_{V^{g}})\), \(\widetilde{g}^{TX}|_{\{t\}}=t^{-2}g^{TX}\). Set \(\widetilde{\mathcal{E}}_{X}:=\widetilde{P}^{*}_{W}\mathcal{E}_{X}\). Let \(\nabla^{\widetilde{\mathcal{E}}_{X}}\) be the connection on \(\widetilde{\mathcal{E}}_{X}\) such that \(\nabla^{\widetilde{\mathcal{E}}_{X}}|_{\{t\}}=\nabla^{\mathcal{E}_{X}}\). Let \(\widetilde{B}_{0}\) be the Bismut superconnection associated with \((T^{H}_{X}(\widetilde{W}|_{V^{g}}),\widetilde{g}^{TX},\nabla^{\widetilde{ \mathcal{E}}_{X}})\). We decompose
\[\psi_{\widetilde{V}^{g}}\widetilde{\mathrm{Tr}}[g\exp(-\widetilde{B}_{0}^{2}) ]=dt\wedge\gamma_{0}(t)+r_{0}(t), \tag{4.6}\]
where \(\gamma_{0}(t),r_{0}(t)\in\Omega^{\bullet}(V^{g})\). By [13, (4.18)], we have
\[\int_{0}^{+\infty}\gamma_{0}(t)dt=-\widetilde{\eta}_{g}(\underline{\pi}^{g}_{X },\underline{\mathcal{E}}_{X}). \tag{4.7}\]
For \(r=1\), we consider the fibration \(\widetilde{V}:=\mathbb{R}_{+,t}\times V\to\widetilde{S}:=\mathbb{R}_{+,t}\times S\) with fiber \(Y\). Let \(\widetilde{P}_{V}:\widetilde{V}\to V\) be the natural projection. Set \(T^{H}_{Y}\widetilde{V}=T(\mathbb{R}_{+})\oplus\widetilde{P}^{*}_{V}(T^{H}_{Y}V)\), \(\widetilde{g}^{TY}|_{\{t\}}=t^{-2}g^{TY}\). Set \(\widetilde{\mathcal{E}}_{Y}:=\widetilde{P}^{*}_{V}\mathcal{E}_{Y}\). Let \(\nabla^{\widetilde{\mathcal{E}}_{Y}}\) be the connection on \(\widetilde{\mathcal{E}}_{Y}\) such that \(\nabla^{\widetilde{\mathcal{E}}_{Y}}|_{\{t\}}=\nabla^{\mathcal{E}_{Y}}\). Under Assumption 3.7, \(\ker D^{\widetilde{\mathcal{E}}_{X}}_{X}\) is a vector bundle over \(\widetilde{V}\). Let \(h^{\ker D^{\widetilde{\mathcal{E}}_{X}}_{X}}\) and \(\nabla^{\ker D^{\widetilde{\mathcal{E}}_{X}}_{X}}\) be the corresponding induced metric and connection. Let \(\widetilde{B}_{1}\) be the Bismut superconnection with respect to \((T^{H}_{Y}\widetilde{V},\widetilde{g}^{TY},\nabla^{\widetilde{\mathcal{E}}_{Y}\otimes\ker D^{\widetilde{\mathcal{E}}_{X}}_{X}})\). We decompose
\[\psi_{\widetilde{S}}\widetilde{\mathrm{Tr}}[g\exp(-\widetilde{B}_{1}^{2})]=dt \wedge\gamma_{1}(t)+r_{1}(t), \tag{4.8}\]
with \(\gamma_{1}(t),r_{1}(t)\in\Omega^{\bullet}(S)\). Then by [13, (4.12)],
\[\int_{0}^{+\infty}\gamma_{1}(t)dt=-\widetilde{\eta}_{g}(\underline{\pi_{Y}}, \underline{\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E}_{X}}}). \tag{4.9}\]
For \(r\geqslant 2\), we denote by \(\widetilde{\mathscr{E}}_{r}\to\widetilde{S}\) the lift of \(\mathscr{E}_{r}\) to \(\widetilde{S}\). Let \(\widetilde{D}_{r}\) be the operator on \(\widetilde{\mathscr{E}}_{r}\) such that \(\widetilde{D}_{r}|_{\{t\}\times S}=tD_{r}\). As in (2.24), we define the superconnection \(\widetilde{B}_{r}\). We decompose
\[\psi_{\widetilde{S}}\mathrm{Tr}_{s}[g\exp(-\widetilde{B}_{r}^{2})]=dt\wedge \gamma_{r}(t)+r_{r}(t), \tag{4.10}\]
with \(\gamma_{r}(t),r_{r}(t)\in\Omega^{\bullet}(S)\). By Definition 2.5, we have
\[\int_{0}^{+\infty}\gamma_{r}(t)dt=-\widetilde{\eta}_{g}(\mathscr{E}_{r}, \mathscr{E}_{r+1},\nabla^{r},\nabla^{r+1}). \tag{4.11}\]
Since \(\widehat{\mathrm{A}}_{g}(TZ,\nabla^{TZ})\) only depends on \(g\in G\) and the curvature \(R^{TZ}\), we can denote it by \(\widehat{\mathrm{A}}_{g}(R^{TZ})\). Let \(R^{TZ}_{T}\) be the curvature of \(\nabla^{TZ,T}\). We define (cf. [13, (4.19)]),
\[\gamma_{A}(T):=-\left.\frac{\partial}{\partial s}\right|_{s=0}\widehat{\mathrm{A}}_{g}\left(R^{TZ}_{T}+s\frac{\partial\nabla^{TZ,T}}{\partial T}\right). \tag{4.12}\]
By [13, Proposition 4.5], when \(T\to+\infty\), \(\gamma_{A}(T)=\mathrm{O}(T^{-2})\), and modulo exact forms on \(W^{g}\),
\[\widetilde{\widehat{\mathrm{A}}}_{g}(TZ,\nabla^{TZ},{}^{0}\nabla^{TZ})=-\int_{1}^{+\infty}\gamma_{A}(T)dT. \tag{4.13}\]
The following two theorems are proved in [13].
**Theorem 4.2**.: _[_13_, Theorem 4.3]_
1. _For any_ \(u>0\)_,_ (4.14) \[\lim_{T\to+\infty}\gamma^{u}(T,u)=\gamma_{1}(u).\]
2. _For fixed_ \(0<u_{1}<u_{2}<+\infty\)_, there exists_ \(C>0\) _such that, for_ \(u\in[u_{1},u_{2}],T\geqslant 1\)_, we have_ (4.15) \[|\gamma^{u}(T,u)|\leqslant C.\]
**Theorem 4.3**.: _[_13_, Theorem 4.6]_
1. _For fixed_ \(0<u_{1}<u_{2}<+\infty\)_, there exist_ \(\delta\in(0,1],C>0\) _and_ \(T_{0}\geqslant 1\)_, such that for any_ \(u\in[u_{1},u_{2}],T\geqslant T_{0}\)_, we have_ (4.16) \[|\gamma^{T}(T,u)|\leqslant\frac{C}{T^{1+\delta}}.\]
2. _For any_ \(T\geqslant 0\)_, we have_ (4.17) \[\lim_{\varepsilon\to 0}\varepsilon^{-1}\gamma^{T}(T\varepsilon^{-1},\varepsilon)=\int_{Y^{g}}\widehat{\mathrm{A}}_{g}(TY,\nabla^{TY})\wedge\gamma_{0}(T).\]
3. _There exists_ \(C>0\)_, such that for_ \(\varepsilon\in(0,1]\)_,_ \(\varepsilon\leqslant T\leqslant 1\)_,_ (4.18) \[\varepsilon^{-1}\left|\gamma^{T}(T\varepsilon^{-1},\varepsilon)+\int_{Z^{g}} \gamma_{A}(T\varepsilon^{-1})\right|\leqslant C.\]
Compared with [13, Theorem 4.4], by (3.40), (3.43) and the equivariant version of [2, Theorem 9.19], we have
**Theorem 4.4**.: _For \(T\geqslant 1\),_
\[\lim_{u\to+\infty}\gamma^{T}(T,u)=\widetilde{\psi}_{S}\mathrm{Tr}_{s}\left[g\frac{\partial\nabla^{\ker D^{\mathcal{E}}_{Z,T}}}{\partial T}\exp\Big(-\big(\nabla^{\ker D^{\mathcal{E}}_{Z,T}}\big)^{2}\Big)\right]. \tag{4.19}\]
The last section is devoted to proving the following theorem, which should be compared with [13, Theorem 4.3 (iii)].
**Theorem 4.5**.: _We have the following identity:_
\[\lim_{T\to+\infty}\int_{1}^{+\infty}\gamma^{u}(T,u)du=\int_{1}^{+\infty}\gamma_{1 }(u)du-\sum_{r=2}^{\infty}\widetilde{\eta}_{g}(\mathscr{E}_{r},\mathscr{E}_{r+1 },\nabla^{r},\nabla^{r+1}). \tag{4.20}\]
By Theorems 4.2 - 4.5, we can calculate the \(I_{i}^{3}\) in (4.5) and prove the main result, Theorem 3.8. Following the same approach as in [13, §4.3], we have
\[I_{1}^{3}=-\widetilde{\eta}_{g}(\underline{\pi}_{Y},\underline{\mathcal{E}}_ {Y}\otimes\ker D_{X}^{\mathcal{E}_{X}})-\sum_{r=2}^{\infty}\widetilde{\eta}_{ g}(\mathscr{E}_{r},\mathscr{E}_{r+1},\nabla^{r},\nabla^{r+1}), \tag{4.21}\]
\[I_{3}^{3}=\widetilde{\eta}_{g}(\underline{\pi}_{Z},\underline{\mathcal{E}}), \tag{4.22}\]
and
\[I_{4}^{3}=-\int_{Y^{g}}\widehat{\mathrm{A}}_{g}(TY,\nabla^{TY})\mathrm{ch}_{g}(\mathcal{E}_{Y}/\mathcal{S},\nabla^{\mathcal{E}_{Y}})\widetilde{\eta}_{g}(\underline{\pi_{X}^{g}},\underline{\mathcal{E}}_{X})+\int_{Z^{g}}\widetilde{\widehat{\mathrm{A}}}_{g}(TZ,\nabla^{TZ},{}^{0}\nabla^{TZ})\mathrm{ch}_{g}(\mathcal{E}/\mathcal{S},\nabla^{\mathcal{E}}). \tag{4.23}\]
By Theorem 4.4 and (3.44),
\[\begin{split}I_{2}^{3}&=-\lim_{A\to+\infty}\lim_{u\to+\infty}\int_{1}^{A}\gamma^{T}(T,u)dT\\ &=-\lim_{A\to+\infty}\int_{1}^{A}\psi_{S}\widetilde{\mathrm{Tr}}\left[g\frac{\partial\nabla^{\ker D^{\mathcal{E}}_{Z,T}}}{\partial T}\exp\Big(-\big(\nabla^{\ker D^{\mathcal{E}}_{Z,T}}\big)^{2}\Big)\right]dT\\ &=-\lim_{A\to+\infty}\widetilde{\mathrm{ch}}_{g}(\ker D_{Z}^{\mathcal{E}},\nabla^{\ker D^{\mathcal{E}}_{Z,A}},\nabla^{\ker D_{Z}^{\mathcal{E}}})\\ &=-\widetilde{\mathrm{ch}}_{g}(\ker D_{Z}^{\mathcal{E}},\nabla^{\infty},\nabla^{\ker D_{Z}^{\mathcal{E}}}).\end{split} \tag{4.24}\]
## 5. The proof of Theorem 4.5
The purpose of this section is to prove Theorem 4.5. In Section 5.1, we analyze the resolvents of the Dirac operators \(D_{Z,T}^{\mathcal{E}}\) and \(D_{r}\) and establish the relation between them. In Section 5.2, we follow Ma's arguments in [19] on the functoriality of holomorphic analytic torsions to give the proof.
### Limits of resolvent
For \(v\in V,b\in S\), we set
\[\begin{split}\mathbb{E}_{v}&:=C^{\infty}(X_{v},\pi_{ Z}^{*}\Lambda(T^{*}S)\widehat{\otimes}\mathcal{E}_{X}),\qquad\qquad\qquad \mathbb{E}_{0,b}:=C^{\infty}(Z_{b},\pi_{Z}^{*}\Lambda(T^{*}S)\widehat{\otimes} \mathcal{E}_{X}),\\ \mathbb{E}_{1,b}&:=C^{\infty}(Y_{b},\pi_{Y}^{*} \Lambda(T^{*}S)\widehat{\otimes}\mathcal{E}_{Y}\otimes\ker D_{X}^{\mathcal{E }_{X}}),\qquad\mathbb{E}_{r}:=C^{\infty}(S,\Lambda(T^{*}S)\widehat{\otimes} \mathscr{E}_{r}),\quad r\geqslant 2.\end{split} \tag{5.1}\]
For \(\mu\in\mathbb{R}\), we could make use of geometric structures to define Sobolev spaces \(\mathbb{E}_{v}^{\mu}\), \(\mathbb{E}_{0,b}^{\mu}\), \(\mathbb{E}_{1,b}^{\mu}\), \(\mathbb{E}_{r}^{\mu}\), \(r\geqslant 2\) of order \(\mu\) respectively. We shall denote by \(\|\cdot\|_{X,\mu}\), \(\|\cdot\|_{\mu}\), \(\|\cdot\|_{Y,\mu}\) and \(\|\cdot\|_{r,\mu}\) the corresponding Sobolev norms. We still use the notations \(p^{T}:\mathbb{E}_{0}\to\Omega^{\bullet}(S)\widehat{\otimes}\ker D_{Z,T}^{ \mathcal{E}}\) and \(p_{r}:\mathbb{E}_{0}\to\mathbb{E}_{r}\), \(r\geqslant 1\) as the orthogonal projections. We have the projections \(p^{T,\perp}:=1-p^{T}\) and \(p_{r}^{\perp}:=1-p_{r}\), \(r\geqslant 1\).
Taking \(c>0\), for \(r\geqslant 2\), let
\[U_{r}:=\left\{\lambda\in\mathbb{C}:\inf_{\mu\in\mathrm{Sp}(D_{r})}|\lambda-\mu |\geqslant c\right\}. \tag{5.2}\]
The proof of the following theorem is almost the same as [3, Theorem 6.2].
**Theorem 5.1**.: _For \(r\geqslant 2\), \(\lambda\in U_{r}\), there exist linear maps_
\[\varphi_{r,\lambda}:\mathbb{E}_{0}\to\mathbb{E}_{0}^{r+1}, \tag{5.3}\]
_such that for \(s\in\mathbb{E}_{0}\), we write \(\varphi_{r,\lambda}(s)=(s_{0},\cdots,s_{r})\), which satisfies_
\[D_{X}^{\mathcal{E}_{X}}s_{0} =0,\] \[D_{H}s_{0}+D_{X}^{\mathcal{E}_{X}}s_{1} =0,\] \[\vdots\] \[D_{X}^{\mathcal{E}_{X}}s_{r-1}+D_{H}s_{r-2}+Cs_{r-3} =0,\] \[-D_{X}^{\mathcal{E}_{X}}s_{r}-D_{H}s_{r-1}-Cs_{r-2}+\lambda s_{0} =s. \tag{5.4}\]
_And for any \(\mu\in\mathbb{R}\), \(\varphi_{r,\lambda}\) can be extended to a bounded linear map from \(\mathbb{E}_{0}^{\mu}\) to \((\mathbb{E}_{0}^{\mu})^{r+1}\). Moreover, we have \(s_{0}\in\mathscr{E}_{r}\), which can be given by_
\[s_{0}=(\lambda-D_{r})^{-1}p_{r}s. \tag{5.5}\]
Let \(\alpha_{T}\), \(T\in[1,+\infty]\), be a family of tensors or differential operators. We denote by \(\alpha^{(i)}\) the \(i\)-th order derivative of \(\alpha_{T}-\alpha_{\infty}\). If for any \(p\in\mathbb{N}\) there exists \(C>0\) such that for \(T\geqslant 1\), \(\sup\|\alpha^{(i)}\|\leqslant C/T^{k}\) for \(i=0,1,\cdots,p\), then we write \(\alpha_{T}=\alpha_{\infty}+\operatorname{O}\left(\frac{1}{T^{k}}\right).\)
We take \(c_{1},c_{2}\) such that
\[\bigcup_{r\geqslant 2}\operatorname{Sp}(D_{r}^{2,>0})\subset(c_{1},c_{2}), \quad(0,2c_{1})\bigcap\bigcup_{b\in S}\operatorname{Sp}(D_{Y_{b}}^{\mathcal{E }_{X}\oplus\ker D_{X}^{\mathcal{E}_{X}}})=\emptyset. \tag{5.6}\]
Set
\[U_{0}:=\left\{\lambda\in\mathbb{C}:\frac{\sqrt{c_{1}}}{2}\leqslant|\lambda| \leqslant\sqrt{c_{1}},\text{ or }\sqrt{c_{1}}\leqslant|\lambda|\leqslant 2\sqrt{c_{2}} \right\}. \tag{5.7}\]
Comparing with [3, Theorem 6.5], by Assumption 3.7, we have the following relation between the resolvents of \(D_{Z,T}^{\mathcal{E}}\) and \(D_{r}\). The proof of this theorem is the same as that of [3, Theorem 6.5]; we include it here for completeness.
**Theorem 5.2**.: _Given \(r\geqslant 2\), for \(\lambda\in U_{0}\), \(s\in\mathbb{E}_{0}^{0}\), there exists \(C\in\mathbb{R}\), such that when \(T\to+\infty\),_
\[\|(\lambda-T^{r-1}D_{Z,T}^{\mathcal{E}})^{-1}s-p_{r}(\lambda-D_{r})^{-1}p_{r}s \|\leqslant\frac{C}{T}\|s\|. \tag{5.8}\]
Proof.: Set
\[M_{r,T}:\mathbb{E}_{0}^{r+1} \to\mathbb{E}_{0},\] \[(s_{0},\cdots,s_{r}) \mapsto s_{0}+\frac{s_{1}}{T}+\cdots+\frac{s_{r}}{T^{r}}. \tag{5.9}\]
Define
\[N_{r,T}=M_{r,T}\circ\varphi_{r,\lambda}, \tag{5.10}\]
where \(\varphi_{r,\lambda}\) is defined in (5.3). By (5.4), we get
\[(\lambda-T^{r-1}D_{Z,T}^{\mathcal{E}})N_{r,T}s=(\lambda-T^{r}D_{ X}^{\mathcal{E}_{X}}-T^{r-1}D_{H}-T^{r-2}C)\left(s_{0}+\frac{s_{1}}{T}+ \cdots+\frac{s_{r}}{T^{r}}\right)\\ =s+\lambda\left(\frac{s_{1}}{T}+\cdots+\frac{s_{r}}{T^{r}}\right) -\frac{1}{T}D_{H}s_{r}-\frac{1}{T^{2}}Cs_{r}. \tag{5.11}\]
Hence
\[(\lambda-T^{r-1}D_{Z,T}^{\mathcal{E}})^{-1}s=N_{r,T}s+(\lambda-T^{r-1}D_{Z,T}^ {\mathcal{E}})^{-1}\left(-\lambda\left(\frac{s_{1}}{T}+\cdots+\frac{s_{r}}{T^ {r}}\right)+\frac{1}{T}D_{H}s_{r}+\frac{1}{T^{2}}Cs_{r}\right). \tag{5.12}\]
Since \(\|(\lambda-T^{r-1}D_{Z,T}^{\mathcal{E}})^{-1}\|\) is uniformly bounded, by (5.5), (5.9), we obtain (5.8).
**Corollary 5.3**.: _There exist \(C>0\), \(T_{0}\geq 1\), such that for any \(T\geq T_{0}\), \(s\in\mathscr{E}_{0}\),_
\[\|p^{T}s-p_{\infty}s\|\leq\frac{C}{T}\|s\|. \tag{5.13}\]
_Moreover, for \(T\geq 1\),_
\[\ker D_{Z,T}^{\mathcal{E}}\simeq\mathscr{E}_{\infty}. \tag{5.14}\]
Proof.: Set
\[\begin{split} P_{r,T}&:=\frac{1}{2\pi\sqrt{-1}}\int_{ \{\lambda\in\mathbb{C};|\lambda|<\sqrt{\epsilon_{1}}\}}(\lambda-T^{r-1}D^{ \mathcal{E}}_{Z,T})^{-1}d\lambda\\ &=\frac{1}{2\pi\sqrt{-1}}\int_{\{\lambda\in\mathbb{C};|\lambda|< \frac{\sqrt{\epsilon_{1}}}{T^{r-1}}\}}(\lambda-D^{\mathcal{E}}_{Z,T})^{-1}d \lambda.\end{split} \tag{5.15}\]
By Lemma 3.3 and (5.6),
\[P_{r}:=\frac{1}{2\pi\sqrt{-1}}\int_{\{\lambda\in\mathbb{C};|\lambda|<\sqrt{ \epsilon_{1}}\}}p_{r}(\lambda-D_{r})^{-1}p_{r}d\lambda=p_{r+1}. \tag{5.16}\]
Recall that \(r_{0}\) is the index from which \(\mathscr{E}_{r}\) converges. Hence \(P_{r_{0}-1}=P_{r_{0}}=\cdots=P_{\infty}\).
By Theorem 5.2, when \(T\gg 1\),
\[\|P_{r_{0}-1,T}s-P_{r_{0}-1}s\|\leqslant\frac{C}{T}\|s\|. \tag{5.17}\]
Note that when \(n>r_{0}-1\), \(P_{n,T}=P_{r_{0}-1,T}\). As \(n\to+\infty\), by (5.15), \(P_{r_{0}-1,T}=p^{T}\). So we get (5.13).
By Assumption 3.7, \(\dim\ker D^{\mathcal{E}}_{Z,T}\) is independent of \(T\). According to (5.17), \(\dim\operatorname{im}P_{r_{0}}=\dim\ker D^{\mathcal{E}}_{Z,T}\). So \(\dim\ker D^{\mathcal{E}}_{Z,\infty}=\dim\mathscr{E}_{\infty}\). Under Assumption 3.7, \(\ker D^{\mathcal{E}}_{Z,T}\cong\ker D^{\mathcal{E}}_{Z}\) are isomorphic vector bundles. Hence \(\ker D^{\mathcal{E}}_{Z}\cong\mathscr{E}_{\infty}\).
The proof of Corollary 5.3 is completed.
Theorem 5.2 is essential for our analysis. It tells us that all eigenvalues of \(D^{\mathcal{E}}_{Z,T}\) which are \(\operatorname{O}\left(1/T^{r-1}\right)\) lie in \(\frac{1}{T^{r-1}}\left(\operatorname{Sp}(D_{r})+\operatorname{O}(1/T)\right)\). We depict the contours \(\delta_{0},\Delta_{0}\) in Figure 2.
By (5.6), we know that when \(T\gg 1\),
\[\operatorname{Sp}(D^{\mathcal{E},2}_{Z,T})\bigcap[0,2c_{1}]\subset\frac{ \mathcal{U}_{1}}{T^{2(r_{0}-1)}}\bigcup\bigcup_{r=2}^{r_{0}}\frac{\mathcal{U} _{2}}{T^{2(r-1)}}. \tag{5.18}\]
Comparing with [9, Proposition 6.12], we show that Theorem 3.8 really extends Dai's adiabatic limit formula to the family case.
**Proposition 5.4**.: If \(S\) is a point, set
\[A_{r}:=\left\{\lambda\in\operatorname{Sp}(D^{\mathcal{E}}_{Z,T}):\lambda= \operatorname{O}\left(\frac{1}{T^{r-1}}\right)\right\}. \tag{5.19}\]
Then when \(T\to+\infty\),
\[\widetilde{\eta}_{e}(\mathscr{E}_{r},\mathscr{E}_{r+1},\nabla^{r},\nabla^{r+1 })=\sum_{\lambda\in A_{r}/A_{r+1}}\operatorname{sgn}(\lambda). \tag{5.20}\]
Proof.: By Definition 2.5, when \(S\) is a point,
\[\widetilde{\eta}_{e}(\mathscr{E}_{r},\mathscr{E}_{r+1},\nabla^{r},\nabla^{r+1 })=\int_{0}^{+\infty}\widetilde{\psi}_{\{\text{pt}\}}\text{Tr}_{s}\left[\frac{ D_{r}}{2\sqrt{t}}\exp(-tD^{2}_{r})\right]dt. \tag{5.21}\]
By the Gaussian integral, since
\[\frac{1}{\sqrt{\pi}}\int_{0}^{+\infty}\lambda\mathrm{e}^{-t\lambda^{2}}\frac{dt}{ 2\sqrt{t}}=\mathrm{sgn}(\lambda),\]
we have
\[\widetilde{\eta}_{e}(\mathscr{E}_{r},\mathscr{E}_{r+1},\nabla^{r},\nabla^{r+1})= \sum_{\lambda\in\mathrm{Sp}(D_{r})}\mathrm{sgn}(\lambda).\]
According to Theorem 5.2, we see that \(\widetilde{\eta}_{e}(\mathscr{E}_{r},\mathscr{E}_{r+1},\nabla^{r},\nabla^{r+1} )=\sum_{\lambda\in A_{r}/A_{r+1}}\mathrm{sgn}(\lambda)\).
### Proof of Theorem 4.5
We start from the definitions of \(\gamma^{u}(T,u),\gamma^{T}(T,u)\) and \(\gamma_{r}(u)\). Set
\[\begin{split}\mathcal{B}_{T}&:=B_{T}^{2}+du\wedge \delta_{u^{2}}^{-1}\frac{\partial B_{u^{2},T}}{\partial u}\delta_{u^{2}},\\ \mathcal{B}_{u,T}&:=B_{u^{2},T}^{2}+du\wedge\frac{ \partial B_{u^{2},T}}{\partial u},\\ \mathcal{B}_{r}&:=B_{r}^{2}+du\wedge\delta_{u^{2}}^{ -1}\frac{\partial B_{r,u^{2}}}{\partial u}\delta_{u^{2}}\quad r\geqslant 1, \end{split} \tag{5.22}\]
where \(B_{r,u^{2}}=u\,\delta_{u^{2}}\circ B_{r}\circ\delta_{u^{2}}^{-1}\). Then
\[\begin{split}\gamma^{u}(T,u)&=\left\{\psi_{S} \widetilde{\mathrm{Tr}}[g\exp(-\mathcal{B}_{u^{2},T})]\right\}^{du}=\left\{u^{ -2}\psi_{S}\delta_{u^{2}}\widetilde{\mathrm{Tr}}[g\exp(-u^{2}\mathcal{B}_{T}) ]\right\}^{du},\\ \gamma_{r}(u)&=\left\{u^{-2}\psi_{S}\delta_{u^{2}} \widetilde{\mathrm{Tr}}[g\exp(-u^{2}\mathcal{B}_{r})]\right\}^{du},\quad r \geqslant 1.\end{split} \tag{5.23}\]
Set
\[\mathcal{B}_{r,u,T}:=\mathcal{B}_{T^{r-1}u,T}. \tag{5.24}\]
The proof of the following theorem is the same as [5, Theorem 9.2] (see also [13, Lemma 5.8]).
**Theorem 5.5**.: _For \(u>0\), \(T\geq 1\), we have_
\[\begin{split}\mathrm{Sp}(\mathcal{B}_{u,T})&= \mathrm{Sp}(u^{2}\mathcal{B}_{1,T})=\mathrm{Sp}(u^{2}D_{Z,T}^{\mathcal{E},2}), \quad\mathrm{Sp}(\mathcal{B}_{r,u,T})=\mathrm{Sp}(T^{2(r-1)}u^{2}D_{Z,T}^{ \mathcal{E},2}),\\ \mathrm{Sp}(\mathcal{B}_{1})&=\mathrm{Sp}(D_{Y}^{ \mathcal{E}_{Y}\times D_{X}^{\mathcal{E}_{X}},2}),\quad\mathrm{Sp}(\mathcal{B }_{r})=\mathrm{Sp}(D_{r}^{2}),\quad r\geqslant 2.\end{split} \tag{5.25}\]
Set
\[\begin{split} F_{r,u,T}&:=u^{-2}\psi_{S}\delta_{u^{2 }}\widetilde{\mathrm{Tr}}\left[\int_{\Delta_{0}}\mathrm{e}^{-u^{2}\lambda}( \lambda-\mathcal{B}_{r,1,T})^{-1}d\lambda\right].\quad r\geq 2;\\ F_{r,u,\infty}&:=u^{-2}\psi_{S}\delta_{u^{2}} \widetilde{\mathrm{Tr}}\left[\int_{\Delta_{0}}\mathrm{e}^{-u^{2}\lambda}( \lambda-\mathcal{B}_{r})^{-1}d\lambda\right],\quad r\geq 2;\\ G_{r,u,T}&:=u^{-2}\psi_{S}\delta_{u^{2}}\widetilde{ \mathrm{Tr}}\left[\int_{\delta_{0}}\mathrm{e}^{-u^{2}\lambda}(\lambda- \mathcal{B}_{r,1,T})^{-1}d\lambda\right],\quad r\geq 1;\\ G_{r,u,\infty}&:=u^{-2}\psi_{S}\delta_{u^{2}} \widetilde{\mathrm{Tr}}\left[\int_{\delta_{0}}\mathrm{e}^{-u^{2}\lambda}( \lambda-\mathcal{B}_{r})^{-1}d\lambda\right],\quad r\geq 1.\end{split} \tag{5.26}\]
Note that \(\mathcal{B}_{1,1,T}=\mathcal{B}_{1,T}=\mathcal{B}_{T}\). Set
\[F_{1,u,T}=\psi_{S}\widetilde{\mathrm{Tr}}[g\exp(-\mathcal{B}_{u,T})]-G_{1,u,T},\quad F_{1,u,\infty}=\psi_{S}\widetilde{\mathrm{Tr}}[g\exp(-\mathcal{B}_{1,u ^{2}})]-G_{1,u,\infty}. \tag{5.27}\]
By (5.23), for \(T\gg 1\),
\[\begin{split}\gamma^{u}(T,u)&=\left\{F_{1,u,T} \right\}^{du}+\left\{G_{1,u,T}\right\}^{du}\\ \gamma_{r}(u)&=\left\{F_{r,u,\infty}\right\}^{du}+ \left\{G_{r,u,\infty}\right\}^{du},\quad r\geqslant 2.\end{split} \tag{5.28}\]
When \(T\gg 1\), we have
\[\begin{split} G_{1,u,T}=&\sum_{r=2}^{r_{0}}\psi_{S}\delta_{u^{2}}\widetilde{\operatorname{Tr}}\left[\int_{\frac{\Delta_{0}}{T^{2(r-1)}}}\operatorname{e}^{-u^{2}\lambda}(\lambda-\mathcal{B}_{1,T})^{-1}d\lambda\right]\\ &+\psi_{S}\delta_{u^{2}}\widetilde{\operatorname{Tr}}\left[\int_{\frac{\delta_{0}}{T^{2(r_{0}-1)}}}\operatorname{e}^{-u^{2}\lambda}(\lambda-\mathcal{B}_{1,T})^{-1}d\lambda\right]\\ =&\sum_{r=2}^{r_{0}}F_{r,T^{1-r}u,T}+G_{r_{0},T^{1-r_{0}}u,T}.\end{split} \tag{5.29}\]
The proof of the following lemma is the same as that in [20, (2.98) and (2.105)] and [19, §2.e]. Note that in our situation, the Dirac operator cannot be decomposed into the sum of two nilpotent operators. But the term \(p_{r,T}\) in [19, §2.e] is the same as \(p_{r}\) in our case, which is independent of \(T\). This is also the case in [20]. Hence we can prove this lemma for general Dirac operators.
**Lemma 5.6**.: _(1) There exist \(\delta,c,C,T_{0}>0\) such that for any \(u\geqslant 1,T\geqslant T_{0},r\geqslant 1\),_
\[|F_{r,u,T}-F_{r,u,\infty}|\leqslant\frac{C}{T^{\delta}}\operatorname{e}^{-cu},\quad|G_{r,u,T}-G_{r,u,\infty}|\leqslant\frac{C}{T^{\delta}}. \tag{5.30}\]
_(2) There exist \(C,\delta>0\), such that for any \(u\in\mathbb{C},|u|\leqslant 1,T\geqslant T_{0}\),_
\[|F_{r,u,T}-F_{r,u,\infty}|\leqslant\frac{C}{T^{\delta}},\quad|G_{r,u,T}-G_{r,u,\infty}|\leqslant\frac{C}{T^{\delta}}. \tag{5.31}\]
Set
\[\begin{split} f_{r,u,T}&=\{F_{r,su,T}\}^{ds}|_{s=1}=u\{F_{r,u,T}\}^{du},\\ f_{r,u,\infty}&=\{F_{r,su,\infty}\}^{ds}|_{s=1}=u\{F_{r,u,\infty}\}^{du},\\ g_{r,u,T}&=\{G_{r,su,T}\}^{ds}|_{s=1}=u\{G_{r,u,T}\}^{du},\\ g_{r,u,\infty}&=\{G_{r,su,\infty}\}^{ds}|_{s=1}=u\{G_{r,u,\infty}\}^{du}.\end{split} \tag{5.32}\]
By (5.28) and (5.32),
\[\begin{split} f_{1,u,T}+g_{1,u,T}&=u\gamma^{u}(T,u),\\ f_{r,u,\infty}+g_{r,u,\infty}&=u\gamma_{r}(u),\quad r \geq 2.\end{split} \tag{5.33}\]
By (5.30), when \(r\geqslant 1\),
\[|f_{r,u,T}-f_{r,u,\infty}|\leqslant\frac{Cu}{T^{\delta}}\operatorname{e}^{- cu},\quad|g_{r,u,T}-g_{r,u,\infty}|\leqslant\frac{Cu}{T^{\delta}}. \tag{5.34}\]
Note that when \(r=1\), the corresponding result is given by Theorem 4.2(1).
By (5.29), (5.32)-(5.34) and the dominated convergence theorem,
\[\begin{split}\lim_{T\to+\infty}\int_{1}^{+\infty}\gamma^{u}(T,u)du&=\lim_{T\to+\infty}\int_{1}^{+\infty}\left\{\gamma^{u}(T,u)-\frac{g_{1,u,T}}{u}\right\}du+\lim_{T\to+\infty}\int_{1}^{+\infty}g_{1,u,T}\frac{du}{u}\\ &=\int_{1}^{+\infty}\left\{\gamma_{1}(u)-\frac{g_{1,u,\infty}}{u}\right\}du+\lim_{T\to+\infty}\sum_{r=2}^{r_{0}}\int_{1}^{+\infty}f_{r,T^{1-r}u,T}\frac{du}{u}+\lim_{T\to+\infty}\int_{1}^{+\infty}g_{r_{0},T^{1-r_{0}}u,T}\frac{du}{u}\\ &=\int_{1}^{+\infty}\gamma_{1}(u)du+\lim_{T\to+\infty}\sum_{r=2}^{r_{0}}\int_{T^{1-r}}^{+\infty}f_{r,u,T}\frac{du}{u}+\lim_{T\to+\infty}\int_{T^{1-r_{0}}}^{+\infty}g_{r_{0},u,T}\frac{du}{u}-\int_{1}^{+\infty}g_{1,u,\infty}\frac{du}{u}\\ &=\int_{1}^{+\infty}\gamma_{1}(u)du+\lim_{T\to+\infty}Q_{1,T}+\lim_{T\to+\infty}Q_{2,T}-\int_{1}^{+\infty}g_{1,u,\infty}\frac{du}{u},\end{split} \tag{5.35}\]
where
\[\begin{split} Q_{1,T}&:=\sum_{r=2}^{r_{0}}\int_{1}^{+ \infty}f_{r,u,T}\frac{du}{u}+\int_{1}^{+\infty}g_{r_{0},u,T}\frac{du}{u},\\ Q_{2,T}&:=\sum_{r=2}^{r_{0}}\int_{T^{1-r}}^{1}f_{r, u,T}\frac{du}{u}+\int_{T^{1-r_{0}}}^{1}g_{r_{0},u,T}\frac{du}{u}.\end{split} \tag{5.36}\]
By (5.34) and the dominated convergence theorem, when \(T\to+\infty\),
\[Q_{1,T}\to Q_{1,\infty}=\sum_{r=2}^{r_{0}}\int_{1}^{+\infty}f_{r,u,\infty}\frac {du}{u}+\int_{1}^{+\infty}g_{r_{0},u,\infty}\frac{du}{u}. \tag{5.37}\]
Then Theorem 4.5 follows directly from (4.11), (5.33), (5.35)-(5.37) and the following lemma.
**Lemma 5.7**.: _When \(T\to+\infty\),_
\[\lim_{T\to+\infty}Q_{2,T}=\sum_{r=2}^{r_{0}}\int_{0}^{1}\gamma_{r}(u)du+\sum_ {r=1}^{r_{0}-1}\int_{1}^{+\infty}g_{r,u,\infty}\frac{du}{u}. \tag{5.38}\]
Proof.: When \(u\to 0\), by (5.26), there exists \(N\in\mathbb{N}\), such that for \(T\in[1,+\infty]\), \(r\geq 1\), \(F_{r,u,T},G_{r,u,T}\) have asymptotic expansions
\[F_{r,u,T}=\sum_{i=-N-1}^{-1}A_{r,i,T}u^{i}+\mathrm{O}(1),\quad G_{r,u,T}=\sum _{i=-N-1}^{-1}B_{r,i,T}u^{i}+\mathrm{O}(1). \tag{5.39}\]
For \(T\in[1,+\infty]\), \(r\geq 1\), set
\[a_{r,i,T}=\{A_{r,i-1,T}\}^{du},\quad b_{r,i,T}=\{B_{r,i-1,T}\}^{du}. \tag{5.40}\]
Then by (5.32),
\[f_{r,u,T}=\sum_{i=-N}^{0}a_{r,i,T}u^{i}+\mathrm{O}(u),\quad g_{r,u,T}=\sum_{i =-N}^{0}b_{r,i,T}u^{i}+\mathrm{O}(u). \tag{5.41}\]
From (5.31), when \(T\to+\infty\), the functions \(\{f_{r,u,T},g_{r,u,T}\}\) are uniformly bounded holomorphic functions on \(\{u\in\mathbb{C}:|u|\leqslant 1\}\). Hence they have uniform expansions in the domain of \(u\). By (5.31) and Cauchy formula, the coefficients of expansions of \(f\) and \(g\) are convergent in the sense of \(\mathrm{O}(\frac{1}{T^{\delta}})\) when \(T\to+\infty\). So \(a_{r,i,T},b_{r,i,T}\in\Omega^{\bullet}(S)\) and depend smoothly on \(T\in[1,+\infty]\). Moreover there exists \(\delta>0\), for \(T\to+\infty\),
\[a_{r,i,T}=a_{r,i,\infty}+\mathrm{O}\left(\frac{1}{T^{\delta}}\right),\quad b_ {r,i,T}=b_{r,i,\infty}+\mathrm{O}\left(\frac{1}{T^{\delta}}\right). \tag{5.42}\]
Note that by (5.26) and (5.32),
\[f_{r,u,\infty}+g_{r,u,\infty}=\left\{\psi_{S}\widetilde{\mathrm{Tr}}\left[g \exp(-\mathcal{B}_{r,u^{2}})\right]\right\}^{du}. \tag{5.43}\]
So for \(i\leq 0\),
\[a_{r,i,\infty}+b_{r,i,\infty}=0. \tag{5.44}\]
From the equivariant version of [2, Theorem 9.7],
\[\lim_{u\to+\infty}(f_{r,u,T}+g_{r,u,T})=0. \tag{5.45}\]
By the definition of \(f_{r,u,T}\), \(\lim_{u\to+\infty}f_{r,u,T}=0\). Combined with (5.45), we get \(\lim_{u\to+\infty}g_{r,u,T}=0\). Hence we have
\[g_{r,u,\infty}=\sum_{i=-N}^{-1}b_{r,i,\infty}u^{i}. \tag{5.46}\]
By the definition of \(c_{1},c_{2}\) (5.6) and relation (5.18), in the region \(\mathcal{U}_{1}\), \(0\) is the only eigenvalue of \(\mathcal{B}_{r_{0},u,T}\). By (5.26),
\[G_{r_{0},u,T}:=u^{-2}\psi_{S}\delta_{u^{2}}\widetilde{\mathrm{Tr}}\left[\int_{ \delta_{0}}\mathrm{e}^{-\lambda}(\lambda-u^{2}\mathcal{B}_{r_{0},1,T})^{-1}d \lambda\right]. \tag{5.47}\]
By the same argument in the proof of [5, Theorem 9.29], we obtain that \(b_{r_{0},i,T}=0\) for \(i\geq 0\). That is,
\[g_{r_{0},u,T}=\sum_{i=-N}^{-1}b_{r_{0},i,T}u^{i}. \tag{5.48}\]
By (5.29),
\[b_{r_{0},i,T}T^{(1-r_{0})i}+\sum_{r=2}^{r_{0}}a_{r,i,T}T^{(1-r)i}=-a_{1,i,T} \quad i<0. \tag{5.49}\]
So we may write \(Q_{2,T}\) as:
\[\begin{split} Q_{2,T}=&\sum_{r=2}^{r_{0}}\int_{T^{1-r}}^{1}\left\{f_{r,u,T}-\sum_{i=-N}^{0}a_{r,i,T}u^{i}\right\}\frac{du}{u}+\sum_{i=-N}^{-1}\frac{1}{i}\sum_{r=2}^{r_{0}}\left(a_{r,i,T}-a_{r,i,T}T^{(1-r)i}\right)\\ &+\sum_{r=2}^{r_{0}}\int_{T^{1-r}}^{1}a_{r,0,T}\frac{du}{u}+\sum_{i=-N}^{-1}\frac{1}{i}\left(b_{r_{0},i,T}-b_{r_{0},i,T}T^{(1-r_{0})i}\right)\\ =&\sum_{r=2}^{r_{0}}\int_{T^{1-r}}^{1}\left\{f_{r,u,T}-\sum_{i=-N}^{0}a_{r,i,T}u^{i}\right\}\frac{du}{u}+\sum_{i=-N}^{-1}\frac{1}{i}\left(b_{r_{0},i,T}+\sum_{r=1}^{r_{0}}a_{r,i,T}\right)\\ &+\sum_{r=2}^{r_{0}}(r-1)a_{r,0,T}\log T.\end{split} \tag{5.50}\]
So when \(T\to+\infty\),
\[\begin{split} Q_{2,T}\to Q_{2,\infty}&=\sum_{r=2}^{r _{0}}\int_{0}^{1}\left\{f_{r,u,\infty}+\sum_{i=-N}^{0}b_{r,i,\infty}u^{i} \right\}\frac{du}{u}+\sum_{r=1}^{r_{0}-1}\sum_{i=-N}^{-1}\frac{1}{i}a_{r,i, \infty}\\ &=\sum_{r=2}^{r_{0}}\int_{0}^{1}\left\{f_{r,u,\infty}+g_{r,u, \infty}\right\}\frac{du}{u}-\sum_{r=1}^{r_{0}-1}\int_{1}^{+\infty}\left\{\sum_ {i=-N}^{-1}a_{r,i,\infty}u^{i}\right\}\frac{du}{u}\\ &=\sum_{r=2}^{r_{0}}\int_{0}^{1}u\gamma_{r}(u)\frac{du}{u}+\sum_ {r=1}^{r_{0}-1}\int_{1}^{+\infty}g_{r,u,\infty}\frac{du}{u}.\end{split} \tag{5.51}\]
The proof of Lemma 5.7 is completed.
## Acknowledgements
This paper is a condensed form of the second author's Ph.D. thesis. We would like to thank Professors Xianzhe Dai and Hang Wang for helpful discussions. B. L. is partially supported by the Science and Technology Commission of Shanghai Municipality (STCSM), grant No. 22DZ2229014, and NSFC No. 11931007, No. 12225105.
|
2305.19623 | Point-GCC: Universal Self-supervised 3D Scene Pre-training via
Geometry-Color Contrast | Geometry and color information provided by the point clouds are both crucial
for 3D scene understanding. Two pieces of information characterize the
different aspects of point clouds, but existing methods lack an elaborate
design for the discrimination and relevance. Hence we explore a 3D
self-supervised paradigm that can better utilize the relations of point cloud
information. Specifically, we propose a universal 3D scene pre-training
framework via Geometry-Color Contrast (Point-GCC), which aligns geometry and
color information using a Siamese network. To take care of actual application
tasks, we design (i) hierarchical supervision with point-level contrast and
reconstruct and object-level contrast based on the novel deep clustering module
to close the gap between pre-training and downstream tasks; (ii)
architecture-agnostic backbone to adapt for various downstream models.
Benefiting from the object-level representation associated with downstream
tasks, Point-GCC can directly evaluate model performance and the result
demonstrates the effectiveness of our methods. Transfer learning results on a
wide range of tasks also show consistent improvements across all datasets.
e.g., new state-of-the-art object detection results on SUN RGB-D and S3DIS
datasets. Codes will be released at https://github.com/Asterisci/Point-GCC. | Guofan Fan, Zekun Qi, Wenkai Shi, Kaisheng Ma | 2023-05-31T07:44:03Z | http://arxiv.org/abs/2305.19623v2 | # Point-GCC : Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast
###### Abstract
Geometry and color information provided by the point clouds are both crucial for 3D scene understanding. Two pieces of information characterize the different aspects of point clouds, but existing methods lack an elaborate design for the discrimination and relevance. Hence we explore a 3D self-supervised paradigm that can better utilize the relations of point cloud information. Specifically, we propose a universal 3D scene pre-training framework via **Ge**ometry-**C**olor **C**ontrast (Point-GCC), which aligns geometry and color information using a Siamese network. To take care of actual application tasks, we design (i) hierarchical supervision with point-level _contrast and reconstruct_ and object-level _contrast_ based on the novel deep clustering module to close the gap between pre-training and downstream tasks; (ii) architecture-agnostic backbone to adapt for various downstream models. Benefiting from the object-level representation associated with downstream tasks, Point-GCC can directly evaluate model performance and the result demonstrates the effectiveness of our methods. Transfer learning results on a wide range of tasks also show consistent improvements across all datasets. _e.g._, new state-of-the-art object detection results on SUN RGB-D and S3DIS datasets. Codes will be released at [https://github.com/Asterisci/Point-GCC](https://github.com/Asterisci/Point-GCC).
## 1 Introduction
3D Self-supervised learning (SSL) has received abundant attention recently because of remarkable improvement on various downstream tasks. 3D scene datasets are tiny compared to the 2D field because 3D point cloud labeling is time-consuming and labor-intensive, which dramatically impedes the improvements of supervised methods. Hence many works [18; 31; 43; 49] explore pre-training models out of 3D labeled data to transfer knowledge for downstream tasks. The goal of self-supervised learning can be summarized as learning rich representations from unlabeled data and helping to improve performance on downstream tasks with labeled data. Most existing works follow the paradigm in the previous 2D field, such as contrastive learning [20; 21; 43; 51] and masked autoencoder (MAE) [28; 31; 47; 50]. After standing on the shoulders of giants in the 2D field, we could further see the particularity of 3D representation learning as follows:
* **Unique information.** 3D scene point cloud contains various information such as geometry and color, which makes 3D point cloud data different from 2D image data. Most existing methods [20; 43; 51] treat all information of each point as an entirety in model architecture design. We argue that directly concatenating all information can not adapt the model to discriminately learn different aspects of point clouds. Although some works [40; 46] propose the two-stream architecture that encodes point cloud by 3D network and images by 2D network, it needs extra 2D data, and 3D network can not clearly learn the discrimination between different information. Considering these additional differences may be beneficial for effective representation learning.
* **Mismatch between pre-training and downstream tasks.** Previous pre-training works [20; 28; 43; 50] design their self-supervised point-level tasks, such as contrast and reconstructing between specific points. However, 3D scene downstream tasks mostly focus the object representations such as object detection and instance segmentation. The gap in supervision level between pre-training and downstream tasks may hinder the improvements of 3D self-supervised learning.
* **Architecture diversity.** The 3D point cloud field has grown rapidly in recent years [13; 25; 30; 32; 36], and the popular architecture appears changeable and specific for downstream tasks. Hence a universal pre-training framework is important that can implement various existing methods for all kinds of tasks and is easy to adapt for future architecture.
To mitigate the aforementioned problems, we explore a 3D self-supervised paradigm that can better utilize the relations of point cloud information. Most 3D scene datasets [1; 16; 38] provide geometry and color information, representing different aspects of the point cloud. Geometry information describes the outline of objects and can easily distinguish between them, while color information refines the internal characteristics of objects and gives a more accurate view of each object. What's more, different information has inherent relevance. For instance, we can roughly infer the geometric structure of the object from a color photo and vice versa. Motivated by the difference and relevance inherent in the information, we propose a self-supervised 3D scene pre-training framework via **Ge**ometry-**C**olor **C**ontrast (Point-GCC), which uses a Siamese network to extract representations and implements elaborate hierarchical supervision. To bridge the gap between pre-training and downstream tasks, the hierarchical supervision contains point-level supervision that aims to align point-wise features and object-level supervision based on a novel deep clustering module to provide better object-level representations strongly associated with downstream tasks. Additionally, the universal Siamese network is designed as an architecture-agnostic backbone so that various downstream models can easily be adapted in a plug-and-play way.
In extensive experiments, we directly perform a fully unsupervised semantic segmentation task without fine-tuning to evaluate the quality of the pre-training model. The result outperforms the previous method with +7.8% mIoU on ScanNetV2, which proves that Point-GCC has learned rich object representations through our paradigm. Furthermore, we choose a broad downstream task to demonstrate our generality: object detection, semantic segmentation and instance segmentation on ScanNetV2 [16], SUN RGB-D [38] and S3DIS [1] datasets. Remarkably, our results indicate general improvements across all tasks and datasets. For example, we achieves new state-of-the-art results with 69.7% AP\({}_{25}\), 54.0% AP\({}_{50}\) on SUN RGB-D and 75.1% AP\({}_{25}\), 56.7% AP\({}_{50}\) on S3DIS datasets. Compared with previous pre-training methods, our method achieves higher AP\({}_{50}\) by +3.1% on ScanNetV2 and +1.1% on SUN RGB-D. Our contributions can be summarized as follows:
* We propose a new universal self-supervised 3D scene pre-training framework, called Point-GCC, which aligns geometry and color information via a Siamese network with hierarchical supervision. To the best of our knowledge, this is the first study to explore the alignment between geometry and color information of point cloud via the pre-training approach.
* We design a novel deep clustering module to generate object pseudo-labels based on the inherent feature consistency of the two pieces of information. The result demonstrates that Point-GCC has learned rich object representations by clustering.
* Extensive experiments show that Point-GCC is a general pre-training framework with an architecture-agnostic backbone, significantly improving performance on a wide range of downstream tasks and achieving new state-of-the-art on multiple datasets.
## 2 Related Work
### 3D Scene Understanding
Most 3D scene understanding works are still specially designed for downstream tasks, such as object detection [25; 27; 29; 36; 37], semantic segmentation [9; 30; 34; 41], and instance segmentation [11; 22; 23; 39]. The model architecture can be summarized as a backbone module extracting the features of point clouds and a downstream head adapted to the specific task. According to the processing method, these works can be roughly divided into two categories: point-based methods and voxel-based methods. Point-based methods [9; 25; 29] are widely used in point clouds thanks to the effectiveness of PointNet++ [30], which alternately uses the farthest point sampling algorithm and multi-layer perceptrons to sample points and extract their features. Voxel-based methods [11; 22; 23; 36; 37; 39] have recently become popular because of their better performance and efficiency on many downstream tasks than point-based methods; they operate 3D sparse convolutions on regular voxels transformed from irregular point clouds. We pre-train on both the point-based PointNet++ and the voxel-based 3D sparse convolution backbone and fine-tune on multiple downstream methods to give a comprehensive view of our work.
### 3D Self-supervised Learning
Compared to 2D vision or natural language, 3D vision has a more serious problem of data scarcity [18], which limits the downstream performance of 3D tasks. To address this problem, 3D self-supervised learning (SSL) has received more attention in recent years. The mainstream SSL methods can be roughly divided into two categories: contrastive learning and reconstructive learning. Contrastive learning aims to learn invariant representations from different paired carriers such as view augmentations [12; 43] or different data formats [33; 45]. Reconstructive learning is designed to reconstruct disturbed data to learn geometric knowledge between patches [4; 17]. Motivated by the success of the masked autoencoder [19] in 2D, MAE-style self-supervised methods became popular in point clouds [28; 50]. Recently, some works have found _pattern differences_ between the two methods in attention area [44] and scaling performance [31]. Based on previous work, we consider the color and geometry of scene point clouds as two views for contrastive learning, and use a _swapped reconstruct_ strategy for reconstructive learning. Therefore, Point-GCC achieves the integration of the two methods and derives benefits from both of them.
### Deep Clustering for Self-supervised Learning
Deep Clustering [3; 5; 7; 10; 26; 42; 48] aims to learn better features and discover data labels using deep neural networks, which has been broadly applied in self-supervised and semi-supervised learning. DeepCluster [6] uses the off-the-shelf K-means algorithm pseudo-labels as supervision which learns comparative representations for self-supervised learning. SeLa [2] proposes a simultaneous clustering and representation learning method using the Shinkhorn-Knop algorithm to generate pseudo-labels with equal partitions quickly. SwAV [8] combines contrastive learning and deep clustering, which enforces consistency between cluster assignments from different views of the same image. In this work, we attempt to apply deep clustering in 3D self-supervised learning field, which generates pseudo-labels based on the inherent feature consistency of the geometry and color information of the point cloud.
## 3 Point-GCC: Pre-training via Geometry-Color Contrast
Existing methods mainly focus on geometric information, but our goal is to enhance the 3D representation capability by better utilizing all the information discriminately in scene point clouds. Therefore, a novel _Geometry-Color Contrast_ method is proposed to address this motivation. Figure 1 illustrates the overall framework of Point-GCC. We first perform a Siamese backbone to extract the
Figure 1: **Overview of our Point-GCC framework**. Point-GCC utilizes the Siamese network to extract the features of geometry and color with positional embedding respectively. Then we implement the hierarchical supervision on extracted features which contains point-level _contrast and reconstruct_ and object-level _contrast_ based on the deep clustering module.
features of the geometry and color information respectively in Section 3.1. To carefully align the features belonging to different information, we propose the point-level supervision via combining the contrastive and reconstructive learning in Section 3.2, then we design an unsupervised deep clustering module to generate object pseudo-labels and perform object-level contrastive learning between high-confidence object samples in Section 3.3. The final hierarchical supervision is described in Section 3.4. In Section 3.5, we propose a new method directly evaluating the pre-training model on unsupervised semantic segmentation to demonstrate the effectiveness of our method.
### Siamese Architecture
**Information split and embedding.** In 3D scene datasets, a point \(\mathbf{p}\) is usually associated with geometry information represented by the coordinates \(\mathbf{p}_{geo}\) and color information represented by the RGB value \(\mathbf{p}_{color}\). Different from previous pre-training methods, which regard a single point as an atomic unit, we split the point cloud into two parts, the geometry and the color. Then we project them to a universal embedding space \(\mathbf{e}\) by Equation 1. Additionally, to distinguish similar colors at different coordinates, we add an extra weakly positional embedding \(\mathbf{e}_{pos}\), computed from the Euclidean norm of the coordinates, to the color embedding. Note that we remove all embedding modules in the fine-tuning stage to keep our framework plug-and-play, so that more existing methods can benefit from ours.
**Siamese architecture-agnostic backbone.** We use a symmetric Siamese network \(\mathcal{F}(\cdot)\) to separately encode the geometry features \(\mathbf{f}_{geo}\) and the color features \(\mathbf{f}_{color}\). Since we attempt to help more existing architectures learn better representations from the combination of geometry and color information, we do not modify any backbone architecture, so that we can directly reuse the core module for standard segmentation with any backbone architecture. In other words, the backbone encodes an input \(\mathbf{x}\in\mathbb{R}^{N\times C_{1}}\) and extracts a feature \(\mathbf{y}\in\mathbb{R}^{N\times C_{2}}\). To align the two types of information, the Siamese backbone \(\mathcal{F}(\cdot)\) encodes the geometry embedding \(\mathbf{e}_{geo}\) and the color embedding \(\mathbf{e}_{color}\) with the weakly positional embedding \(\mathbf{e}_{pos}\) into the geometry features \(\mathbf{f}_{geo}\) and the color features \(\mathbf{f}_{color}\) respectively:
\[\mathbf{e}_{geo}=\mathcal{E}_{geo}(\mathbf{p}_{geo}),\quad\mathbf{e}_{color}=\mathcal{E}_{color}(\mathbf{p}_{color}),\quad\mathbf{e}_{pos}=\mathcal{E}_{pos}(\|\mathbf{p}_{geo}\|_{2}^{2}), \tag{1}\] \[\mathbf{f}_{geo}=\mathcal{F}(\mathbf{e}_{geo}),\qquad\mathbf{f}_{color}=\mathcal{F}(\mathbf{e}_{color}+\mathbf{e}_{pos}), \tag{2}\]
where \(\mathcal{E}\) is the corresponding linear layer of each embedding and \(\mathcal{F}(\cdot)\) is the Siamese network.
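To make Equations 1-2 concrete, the following is a minimal PyTorch-style sketch of the split-and-embed step, added here only for illustration; the layer widths, module names, and the backbone interface are our assumptions rather than the released implementation.

```python
import torch.nn as nn

class GeoColorEmbed(nn.Module):
    """Minimal sketch of the information split and embedding (Eqs. 1-2).

    The embedding width and the backbone module are placeholders, not the
    authors' exact configuration.
    """
    def __init__(self, backbone: nn.Module, embed_dim: int = 32):
        super().__init__()
        self.e_geo = nn.Linear(3, embed_dim)    # xyz coordinates
        self.e_color = nn.Linear(3, embed_dim)  # RGB values
        self.e_pos = nn.Linear(1, embed_dim)    # weakly positional embedding
        self.backbone = backbone                # shared (Siamese) weights

    def forward(self, p_geo, p_color):
        # p_geo, p_color: (N, 3) tensors of coordinates and colors.
        e_geo = self.e_geo(p_geo)
        e_pos = self.e_pos(p_geo.norm(dim=-1, keepdim=True) ** 2)
        e_color = self.e_color(p_color) + e_pos
        f_geo = self.backbone(e_geo)            # same network applied twice
        f_color = self.backbone(e_color)
        return f_geo, f_color
```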
### Point-level Supervision
Inspired by the success of associating contrastive learning and reconstructive learning in recent work [31], We propose our point-level supervision elaborately designed for our Siamese architecture, which first contrast and then _swapped reconstruct_ the features to benefit from different paradigms.
**Contrastive learning.** The geometry features \(\mathbf{f}_{geo}\) and color feature \(\mathbf{f}_{color}\) are point-wise aligned because they are split from the same point cloud \(\mathbf{p}\) and extracted by the Siamese segmentation-style backbone network. We apply the InfoNCE loss aiming to pull positive pairs close, and push negative pairs away across the geometry features and color features:
\[\mathcal{L}_{pc}=-\sum_{i}^{N}\log\frac{\exp\left(\mathbf{z}_{geo}^{iT}\cdot\mathbf{z }_{color}^{i}/\tau\right)}{\sum_{j}^{N}\exp\left(\mathbf{z}_{geo}^{iT}\cdot\mathbf{z }_{color}^{j}/\tau\right)}, \tag{3}\]
where \(\tau\) is the temperature hyper-parameter, we follow the previous works [43] to set it as 0.4. \(\mathbf{z}_{geo}^{i}\) and \(\mathbf{z}_{color}^{i}\) correspond to matched \(\ell_{2}\)-normalized feature \(\mathbf{f}_{geo}^{i}\) and \(\mathbf{f}_{color}^{i}\) from same point \(\mathbf{p}^{i}\), which represent a pair of positive sample. And \(\mathbf{z}_{geo}^{i}\) with other \(\mathbf{z}_{color}^{j}\) except \(\mathbf{z}_{color}^{i}\) represent negative pairs.
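As a reading aid, here is a hedged sketch of the point-level InfoNCE term in Eq. (3); the function name and the assumption that all \(N\) points of a scene serve as negatives (a real implementation would likely subsample points for memory) are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def point_infonce(f_geo: torch.Tensor, f_color: torch.Tensor, tau: float = 0.4) -> torch.Tensor:
    """Point-level contrast (Eq. 3): the matched geometry/color features of the
    same point form the positive pair; all other pairings act as negatives."""
    z_geo = F.normalize(f_geo, dim=-1)          # (N, C)
    z_color = F.normalize(f_color, dim=-1)      # (N, C)
    logits = z_geo @ z_color.t() / tau          # (N, N) cosine similarities
    targets = torch.arange(z_geo.size(0), device=z_geo.device)
    return F.cross_entropy(logits, targets)     # geometry acts as the anchor
```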
**Reconstructive learning.** Based on our Siamese architecture, we apply reconstructive learning via a _swapped reconstruct_ strategy instead of a mask strategy, which avoids the problem of distribution mismatch between training and testing data raised for masked autoencoding on point clouds [24]. Specifically, we simply project the geometry features \(\mathbf{f}_{geo}\) and the color features \(\mathbf{f}_{color}\) to reconstruct the color \(\hat{\mathbf{p}}_{color}\) and the geometry \(\hat{\mathbf{p}}_{geo}\), respectively. The reconstructive loss is the mean squared error (MSE) between the reconstructed and original information of each point:
\[\mathcal{L}_{pr}=\frac{1}{N}\sum\|\mathbf{p}_{geo}^{i\prime}-\hat{\mathbf{p}}_{geo}^{ i}\|_{2}^{2}+\frac{1}{N}\sum\|\mathbf{p}_{color}^{i\prime}-\hat{\mathbf{p}}_{color}^{i} \|_{2}^{2}, \tag{4}\]
where \(N\) is the number of points, \(\hat{\mathbf{p}}_{geo}^{i}\) and \(\hat{\mathbf{p}}_{color}^{i}\) represent the reconstruct prediction, \(\mathbf{p}_{geo}^{i\prime}\) and \(\mathbf{p}_{color}^{i\prime}\) represent the reconstruct targets which both scale to between 0 and 1 for stability training loss.
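The swapped reconstruction of Eq. (4) can be sketched as below; `head_geo` and `head_color` are hypothetical small projection heads (e.g. MLPs) introduced only for illustration, and the targets are assumed to be rescaled to [0, 1] beforehand.

```python
import torch.nn.functional as F

def swapped_reconstruct_loss(f_geo, f_color, p_geo, p_color, head_geo, head_color):
    """Swapped reconstruction (Eq. 4): the geometry branch predicts color and
    the color branch predicts geometry; both targets are scaled to [0, 1]."""
    pred_color = head_color(f_geo)   # color reconstructed from geometry features
    pred_geo = head_geo(f_color)     # geometry reconstructed from color features
    return F.mse_loss(pred_geo, p_geo) + F.mse_loss(pred_color, p_color)
```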
### Object-level Supervision
Point-level supervision is widely applied in 3D self-supervised learning, which provides rich representations for downstream tasks. However, the object representation strongly associated with downstream tasks hasn't been noticed before. We propose our object-level supervision driven by the novel unsupervised deep clustering module. The clustering module generates pseudo-label predictions \(\mathcal{P}_{geo}\) and \(\mathcal{P}_{color}\) for the geometry features \(\mathbf{f}_{geo}\) and color features \(\mathbf{f}_{color}\) respectively, and enforces consistent prediction between geometry prediction \(\mathcal{P}_{geo}\) and color prediction \(\mathcal{P}_{color}\) of same point \(\mathbf{p}\). We argue that the pseudo-labels represent more various object features, which are not restricted by human annotations with fixed object classes. To achieve robust supervision among these object-level pseudo labels, we sample the high-confidence object features based on the prediction confidence score and apply object-level contrastive learning according to pseudo labels.
**Deep clustering via swapped prediction.** We apply the swapped prediction [8] from 2D contrastive learning to our model, which predicts the pseudo-label of an image from the clustering result of another view. In our framework, we swap the cluster targets of the two information features, and predict the pseudo-label from the other information feature based on the inherent consistency of the two types of information, as shown in Figure 2(a). For \(K\) pseudo-label classes, we use a learnable matrix \(\mathcal{C}=[\mathbf{c}_{1},\cdots,\mathbf{c}_{K}]\) to represent the cluster centroids, and calculate the similarity \(\mathcal{S}\) between the \(\ell_{2}\)-normalized features \(\mathbf{f}\) and the cluster centroids \(\mathbf{c}\). To avoid the degeneration problem that all features collapse into the same prediction, the Sinkhorn-Knopp algorithm [15] is used to generate the equal-partition cluster distribution \(\mathcal{Q}\) from the similarity \(\mathcal{S}\) by converting pseudo-label generation into an optimal transport problem. The learnable prediction \(\mathcal{P}\) is computed by \(softmax(\mathcal{S}/\tau)\), where \(\tau\) is the temperature hyper-parameter. We set all hyper-parameters in the swapped prediction the same as in the previous work [8] in 2D. Finally, the swapped prediction loss is the cross-entropy loss between the learnable prediction \(\mathcal{P}\) and the swapped equal-partition distribution \(\mathcal{Q}\):
\[\mathcal{L}_{clu}=\ell(\mathcal{Q}_{geo},\mathcal{P}_{color})+\ell(\mathcal{ Q}_{color},\mathcal{P}_{geo}), \tag{5}\]
where \(\ell\) is the cross-entropy loss between the prediction and target.
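The deep clustering module of Eq. (5) can be sketched as follows; the Sinkhorn iteration count, epsilon, and prediction temperature are SwAV-style defaults assumed here, and `prototypes` stands for the learnable \(K\times C\) centroid matrix \(\mathcal{C}\).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, n_iters=3, eps=0.05):
    """Sinkhorn-Knopp normalisation producing an (approximately) equal-partition
    soft assignment Q from similarity scores of shape (N points, K prototypes)."""
    q = torch.exp(scores / eps).t()               # (K, N)
    q /= q.sum()
    K, N = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True)           # normalise rows
        q /= K
        q /= q.sum(dim=0, keepdim=True)           # normalise columns
        q /= N
    return (q * N).t()                            # (N, K), rows sum to 1

def swapped_prediction_loss(f_geo, f_color, prototypes, tau=0.1):
    """Deep clustering loss (Eq. 5): each branch predicts the Sinkhorn
    assignment of the other branch."""
    c = F.normalize(prototypes, dim=-1)
    s_geo = F.normalize(f_geo, dim=-1) @ c.t()    # (N, K) similarities
    s_color = F.normalize(f_color, dim=-1) @ c.t()
    q_geo, q_color = sinkhorn(s_geo), sinkhorn(s_color)
    p_geo = F.log_softmax(s_geo / tau, dim=-1)
    p_color = F.log_softmax(s_color / tau, dim=-1)
    return (-(q_geo * p_color).sum(dim=-1).mean()
            - (q_color * p_geo).sum(dim=-1).mean())
```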
**Object-level contrastive learning.** For the features \(\mathbf{f}\) with corresponding pseudo prediction \(\mathcal{P}\) and confidence score from deep clustering, we pick features with confidence scores higher than the picking threshold to alleviate the noise from unsupervised clustering. Then we compute the mean features of the high-confidence samples from the geometry and color branches, respectively. We take the two types of mean features with the same pseudo-label as positive pairs and, conversely, those with different pseudo-labels as negative pairs, and apply the InfoNCE loss at the object level:
\[\mathcal{L}_{oc}=-\sum_{i}^{N}\log\frac{\exp\left(\mathbf{z}_{geo}^{iT}\cdot\mathbf{ z}_{color}^{i}/\tau\right)}{\sum_{j}^{N}\exp\left(\mathbf{z}_{geo}^{iT}\cdot\mathbf{ z}_{color}^{j}/\tau\right)}, \tag{6}\]
Figure 2: (a) The deep clustering module obtains pseudo prediction for different features and enforces consistent with the swapped partition distribution from the Sinkhorn-Knop algorithm. (b) Point-GCC generates the pseudo-labels by utilizing cluster prediction from both branches and projects to ground-truth labels for unsupervised semantic segmentation using Hungarian matching alignment.
where \(\tau\) is the temperature hyper-parameter, we set it to 0.4 following the above-mentioned setting. \(\mathbf{z}^{i}\) is the \(\ell_{2}\)-normalized mean feature with pseudo-label \(i\). \(\mathbf{z}^{i}_{geo}\) and \(\mathbf{z}^{i}_{color}\) represent a pair of positive sample with same pseudo-label \(i\). And \(\mathbf{z}^{i}_{geo}\) with \(\mathbf{z}^{j}_{color}\) corresponding different pseudo-label \(j\) represent negative samples.
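A minimal sketch of the object-level term in Eq. (6) is given below; the per-class mean over high-confidence points and the `thresh` parameter (e.g. 2.0 / K as in the ablation of Table 6) mirror the description above, while the exact batching and the handling of empty classes are our assumptions.

```python
import torch
import torch.nn.functional as F

def object_level_contrast(f_geo, f_color, p_geo, p_color, K, thresh, tau=0.4):
    """Object-level contrast (Eq. 6): average the features of high-confidence
    points per pseudo-label, then run InfoNCE over the mean features of the
    geometry and color branches."""
    z_geo, z_color = [], []
    conf_g, lab_g = p_geo.max(dim=-1)      # (N,) confidences and pseudo-labels
    conf_c, lab_c = p_color.max(dim=-1)
    for k in range(K):
        m_g = (lab_g == k) & (conf_g > thresh)
        m_c = (lab_c == k) & (conf_c > thresh)
        if m_g.any() and m_c.any():
            z_geo.append(F.normalize(f_geo[m_g].mean(dim=0), dim=-1))
            z_color.append(F.normalize(f_color[m_c].mean(dim=0), dim=-1))
    if not z_geo:                          # no reliable object sample found
        return f_geo.new_zeros(())
    z_geo, z_color = torch.stack(z_geo), torch.stack(z_color)
    logits = z_geo @ z_color.t() / tau
    targets = torch.arange(z_geo.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```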
### Overall Hierarchical Loss
Our framework contains hierarchical supervision in point-level and object-level, and the final loss is a combination of the four losses above-mentioned:
\[\mathcal{L}_{over} = \mathcal{L}_{pc}+\alpha\mathcal{L}_{pr}+\beta\mathcal{L}_{clu}+ \gamma\mathcal{L}_{oc}, \tag{7}\]
where \(\alpha\), \(\beta\) and \(\gamma\) are the loss weight hyper-parameters, we set them to 100, 100 and 1 respectively to balance the magnitude of losses.
### Adapt to unsupervised semantic segmentation
Due to the pseudo-labels from object-level supervision, Point-GCC can adapt to unsupervised downstream tasks without fine-tuning. Meanwhile, previous pre-training methods evaluate performance by transfer learning on downstream tasks; the results can be greatly affected by the fine-tuning setting and are not directly comparable between different baselines. As shown in Figure 2(b), we generate the final pseudo-labels by utilizing the cluster predictions from the geometry and color branches. During the evaluation stage, we use Hungarian matching alignment [48] to project the pseudo-labels to ground-truth labels, because we are agnostic to the ground truth during pre-training. Although our method is not specifically designed for unsupervised downstream tasks, we find that this process is more intuitive and fair for evaluating the performance of pre-training methods.
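For this evaluation protocol, the pseudo-label-to-ground-truth alignment can be sketched with SciPy's Hungarian solver; the assumption that the number of pseudo classes equals the number of ground-truth classes (20 on ScanNetV2, cf. Table 2) and the filtering of ignored points are left to the caller.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_pseudo_labels(pseudo, gt, num_classes):
    """Map pseudo-labels to ground-truth classes for the unsupervised semantic
    segmentation evaluation (Section 3.5): build the confusion matrix between
    pseudo and ground-truth labels and solve a maximum-weight matching."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, g in zip(pseudo, gt):
        conf[p, g] += 1
    row, col = linear_sum_assignment(conf, maximize=True)
    mapping = dict(zip(row, col))
    return np.array([mapping[p] for p in pseudo])
```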
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{ScanNetV2} & \multicolumn{2}{c}{SUN RGB-D} & \multicolumn{2}{c}{S3DIS} \\ \cline{2-7} & AP\({}_{25}\) & AP\({}_{50}\) & AP\({}_{25}\) & AP\({}_{50}\) & AP\({}_{25}\) & AP\({}_{50}\) \\ \hline \hline \multicolumn{7}{c}{**Supervised Only**} \\ \hline VoteNet [29] & 58.6 & 33.5 & 57.7 & - & - & - \\ GroupFree-3D [25] & 66.3 & 47.8 & - & - & - & - \\ PCAF3D [36] & 71.5 & 57.3 & 64.2 & 48.9 & 66.7 & 45.9 \\ TR3D [37] & 72.9 & 59.3 & 67.1 & 50.4 & 74.5 & 51.7 \\ \hline \hline \multicolumn{7}{c}{**Self-supervised Pre-training**} \\ \hline VoteNet [29] & 58.6 & 33.5 & 57.7 & - & - & - \\ + PointContrast [43] & 59.2 & 38.0 & 57.5 & 34.8 & - & - \\ + DepthContrast [51] & 62.1 & 39.1 & 60.4 & 35.4 & - & - \\ + CSC [20] & - & 39.3 & - & 36.4 & - & - \\ + Ponder [21] & 63.6 & 41.0 & 61.0 & 36.6 & - & - \\ + Point-GCC\({}^{*}\) & 65.3 (+3.0) & 44.1 (+3.5) & 61.3 (+1.5) & 32.7 (+2.6) & - & - \\ \hline VoteNet+FF [37] & - & - & 64.5 & 39.2 & - & - \\ + Point-GCC & - & - & 64.9 (+0.4) & 41.3 (+2.1) & - & - \\ \hline GroupFree-3D [25] & 66.3 & 47.8 & - & - & - & - \\ + Point-GCC & 68.1 (+1.8) & 49.2 (+1.0) & - & - & - & - \\ \hline TR3D [37] & 72.9 & 59.3 & 67.1 & 50.4 & 74.5 & 51.7 \\ + Point-GCC & **73.1** (+0.2) & **59.6** (+0.3) & 67.7 (+0.6) & 51.0 (+0.6) & 74.9 (+0.6) & 53.2 (+1.5) \\ + Point-GCC\({}^{\dagger}\) & - & - & - & - & **75.1** (+0.6) & **56.7** (+0.6) \\ \hline TR3D+FF [37] & - & - & 69.4 & 53.4 & - & - \\ + Point-GCC & - & - & **69.7** (+0.3) & **54.0** (+0.6) & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: 3D Object detection results on ScanNetV2, SUN RGB-D, S3DIS validation set. The overall best results are **bold**, and the best results with the same baseline model are underlined. + means fine-tuning with pre-training on the corresponding dataset. * means that we evaluate the performance on VoteNet with the stronger MMDetection3D implementation for a fair comparison. \({}^{\dagger}\) means with extra training dataset ScanNetV2.
## 4 Experiments
To analyze the 3D representation learned by Point-GCC, we conduct extensive experiments on multiple datasets and tasks described in Section 4.1. First we evaluate on fully unsupervised semantic segmentation tasks to validate the effectiveness of object representation in Section 4.2. Then we expand experiments by transfer learning on multiple downstream tasks and datasets in Section 4.3.
### Experiment setting
**Dataset.** We use three popular indoor scene datasets: ScanNetV2 [16], SUN RGB-D [38], S3DIS [1] in our experiments. **ScanNetV2** is a 3D reconstruction dataset, which provides 1513 indoor scans with a total of 20 classes. **SUN RGB-D** is a monocular RGB-D image dataset, which provides 10335 RGB-D images from four different sensors with a total of 37 classes. **S3DIS** is another 3D indoor scene dataset, which provides 271 point cloud scenes across 6 areas with 13 classes.
**Implementation details.** We implement Point-GCC built upon the MMDetection3D [14] framework. We use the AdamW optimizer with an initial learning rate of 0.001 and weight decay of 0.0001. Other implementation details are followed the default scheme. To ensure fair comparability of results, we refer to selecting downstream models implemented by MMDetection3D. In downstream task experiments, we decay the learning rate by 0.5, and other settings follow the original implementation. The full detail settings are provided in the Appendix.
### Fully unsupervised semantic segmentation
We evaluate our pre-training model on the fully unsupervised semantic segmentation task using the method in Section 3.5 to validate the effectiveness of the object representation. As shown in Table 2, our method surpasses previous unsupervised methods by a huge margin and is closer to the weakly-supervised methods, even though Point-GCC is not specifically designed for unsupervised downstream tasks. With the same PointNet++ backbone, Point-GCC surpasses the previous work SL3D [9] by +9.8% mIoU, and by +7.8% mIoU compared with the more powerful Point Transformer, on the ScanNetV2 dataset. The result proves that Point-GCC has learned rich object representations in unsupervised pre-training.
**Fine-tuning semantic segmentation.** Additionally, we fine-tune the pre-training model for semantic segmentation to verify the consistent improvement of our method. With supervised fine-tuning, the model gains significant improvements by +5.4% mIoU on ScanNetV2 dataset, which proves that our method has learned intrinsic representations of the point cloud.
### Transfer learning on downstream tasks
**3D Object detection.** For 3D object detection task, we pre-train the PointNet++ [30] backbone for VoteNet [29], VoteNet+FF [37] and GroupFree-3D [25] and the MinkResNet [13] backbone for TR3D [37], TR3D++F [37] respectively. Table 1 shows the results on ScanNetV2, SUN-RGBD, and S3DIS datasets. Our method gains stable and significant improvements for various settings. Compared with previous 3D self-supervised methods with the common baseline model VoteNet, our method achieves higher AP\({}_{50}\) than the previous highest model Ponder [21] by +3.1% on ScanNetV2
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Supervision & Backbone & Pseudo Classes & mIoU \\ \hline \multicolumn{5}{c}{**Unsupervised Method**} \\ \hline SL3D [9] & unsupervised & PointNet++ & 400 & 8.5 \\ SL3D [9] & unsupervised & Point Transformer & 800 & 10.5 \\ Point-GCC & unsupervised & PointNet++ & 20 & 18.3 \\ \hline \multicolumn{5}{c}{**Weakly-supervised Method**} \\ \hline WyPR [34] & scene-level & PointNet++ & 20 & 29.6 \\ MPRM [41] & subcloud-level & KPConv & 20 & 41.0 \\ \hline \multicolumn{5}{c}{**Supervised Method**} \\ \hline PointNet++(SSG) [30] & supervised & PointNet++ & 20 & 54.4 \\ + Point-GCC & supervised & PointNet++ & 20 & **59.8** (+5.4) \\ \hline \hline \end{tabular}
\end{table}
Table 2: 3D semantic segmentation results on ScanNetV2 dataset by different level of supervision. The overall best results are **bold**. + means fine-tuning with pre-training on the corresponding dataset.
and +1.1% on SUN RGB-D. For more recent models, our model also significantly boosts VoteNet++FF, GroupFree-3D, TR3D, TR3D+FF on multiple datasets and achieves new state-of-the-art results with 69.7% AP\({}_{25}\), 54.0% AP\({}_{50}\) on SUN RGB-D and 75.1% AP\({}_{25}\), 56.7% AP\({}_{50}\) on S3DIS datasets.
**3D Instance segmentation.** For 3D instance segmentation task, we pre-train the MinkResNet backbone for TD3D [23] on ScanNet and S3DIS datasets. Table 3 shows the results on ScanNetV2 and S3DIS validation sets. Downstream models gain remarkable performance by +1.1% AP on ScanNetV2, +1.9% on S3DIS and +1.5% on S3DIS with extra train data, demonstrating our method's general improvement across multiple settings.
Interestingly, the improvements for the PointNet++ backbone generally surpass those for the MinkResNet backbone. We conjecture that the sparse convolution architecture implicitly aligns the color information carried by the features with the geometry information encoded by the fine-grained sparse voxel operations. This may partly explain why 3D sparse convolution has better performance and efficiency on various tasks.
### Ablation study And Discussion
To analyze the effectiveness of our approach, we further explore additional experiments to measure the contribution of each component to the final representation quality. For efficiency, all ablation experiments are implemented with VoteNet setting on pre-training and object detection.
**Hierarchical supervision.** To further explore the improvement from our hierarchical supervision, we conduct ablation studies with different components. Table 4 shows the unsupervised semantic segmentation results with pre-training and the object detection results with fine-tuning. The results show that both contrastive learning and reconstructive learning at the point level contribute to the final results. Even with only point-level supervision, our method achieves higher AP\({}_{25}\) and AP\({}_{50}\) than the previous best model Ponder, by +1.2% and +2.0%. Furthermore, the swapped prediction and object-level contrastive learning also provide remarkable improvements for AP\({}_{50}\) and AP\({}_{25}\), especially AP\({}_{50}\). Intuitively, the improvement in AP\({}_{50}\) is more significant than that in AP\({}_{25}\), demonstrating that object-level supervision gives the model a more precise view of objects.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{2}{c}{Point-level} & \multicolumn{2}{c}{Object-level} & \multicolumn{2}{c}{Unsupervised Segmentation} & \multicolumn{2}{c}{Object} \\ \cline{2-7} \multicolumn{1}{c}{} & & & & & & \\ \hline Contra. & Recon. & Cluster. & Contra. & mIoU & AP\({}_{25}\) & AP\({}_{50}\) \\ \hline ✓ & ✓ & ✓ & ✓ & 18.27 & 65.3 & 44.1 \\ ✓ & ✓ & ✓ & ✗ & 16.07 & 65.0 & 43.6 \\ ✓ & ✓ & ✗ & ✗ & - & 64.8 & 43.0 \\ ✓ & ✗ & ✗ & ✗ & - & 64.4 & 42.8 \\ ✗ & ✓ & ✗ & ✗ & - & 63.3 & 42.7 \\ ✗ & ✗ & ✗ & ✗ & - & 62.3 & 40.8 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study of the hierarchical supervision. - means the model can’t perform the unsupervised segmentation task due to the lack of the pseudo-label.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{ScanNetV2} & \multicolumn{3}{c}{S3DIS} \\ \cline{2-9} & AP & AP\({}_{50}\) & AP\({}_{25}\) & AP & AP\({}_{50}\) & Prec\({}_{50}\) & Rec\({}_{50}\) \\ \hline \multicolumn{9}{c}{**Supervised Only**} \\ \hline PointGroup [22] & 34.8 & 56.7 & 71.3 & - & 57.8 & 61.9 & 62.1 \\ HAIS\({}^{\dagger}\)[11] & 43.5 & 64.4 & 75.6 & - & - & 71.1 & 65.0 \\ SoftGroup\({}^{\dagger}\)[39] & 45.8 & 67.6 & 78.9 & 51.6 & 66.1 & 73.6 & 66.6 \\ \hline \multicolumn{9}{c}{**Self-supervised Pre-training**} \\ \hline TD3D [23] & 46.2 & 71.1 & 81.3 & 48.6 & 65.1 & 74.4 & 64.8 \\ + Point-GCC & **47.3** (41.1) & **71.3** (42.2) & **81.6** (48.3) & 50.5 (41.9) & 65.4 (40.3) & 75.5 (41.3) & 65.9 (41.3) \\ \hline TD3D\({}^{\dagger}\)[23] & - & - & - & 52.1 & 67.2 & 75.2 & 68.7 \\ \hline + Point-GCC\({}^{\dagger}\) & - & - & - & **53.6** (41.5) & **68.4** (41.2) & **76.6** (41.6) & **69.5** (40.8) \\ \hline \hline \end{tabular}
\end{table}
Table 3: 3D instance segmentation results on ScanNetV2 and S3DIS dataset. The overall best results are **bold**, and the best results with the same baseline model are underlined. + means fine-tuning with pre-training on the corresponding dataset. \({}^{\dagger}\) means with extra training dataset ScanNetV2.
**Geometry-Color Contrast.** To verify the importance of our Geometry-Color Contrast approach, we compare the results with a single reconstruction branch setting. Table 5 shows the object detection results with different pre-training branches. The results show that the performance with a single branch, whether geometry or color reconstruction, declines obviously, which proves that our Geometry-Color Contrast plays an essential role in the significant performance gains.
**Object sampling strategy.** The result in Table 4 shows that object-level supervision provides the most obvious boost for AP\({}_{50}\). We compare the results with different object sampling strategies to analyze the object samples used in object-level contrastive learning. The results in Table 6 show that the more confident the object samples are, the greater the performance we achieve. However, when only using the maximum-score sample, the performance decays because of over-fitting.
### Visualization
Figure 3 shows the visualization of the geometry and color reconstruction results from our method. The results show that our method can consistently generate a high-quality complement of one type of information from the other. This suggests potential applications such as depth estimation and texture generation.
## 5 Conclusions
In this paper, we propose a new universal self-supervised 3D scene pre-training framework via **Ge**ometry-**C**olor **C**ontrast (Point-GCC), which utilizes an architecture-agnostic Siamese network with hierarchical supervision. Extensive experiments show that Point-GCC significantly improves performance on unsupervised tasks without fine-tuning and a wide range of downstream tasks, especially achieving new state-of-the-art results on multiple datasets.
To the best of our knowledge, Point-GCC is the first study to explore a self-supervised paradigm that can better utilize the relations of different point cloud information; hence we elaborately design our plug-and-play pre-training framework to help improve various existing downstream methods, instead of directly designing a new architecture. We hope our work can attract more attention to the discriminative information in point clouds, which may inspire future point cloud representation learning works.
\begin{table}
\begin{tabular}{c c c} \hline \hline Object Picking & \multicolumn{2}{c}{Object Detection} \\ \cline{2-3} Threshold & AP\({}_{25}\) & AP\({}_{50}\) \\ \hline only max score & 64.5 & 43.0 \\
2.0 / class num. & 65.3 & 44.1 \\
1.8 / class num. & 64.4 & 43.7 \\
1.5 / class num. & 64.2 & 43.2 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study of the Object sampling strategy.
Figure 3: The visualization of reconstruction results from Point-GCC. Note that we decrease the point size in geometry reconstruction to avoid the block from noisy points.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Color & Geometry & \multicolumn{2}{c}{Object Detection} \\ \cline{2-3} Branch & Branch & AP\({}_{25}\) & AP\({}_{50}\) \\ \hline ✓ & ✓ & 64.8 & 43.0 \\ ✗ & ✓ & 62.4 & 40.9 \\ ✓ & ✗ & 62.5 & 39.4 \\ ✗ & ✗ & 62.3 & 40.8 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study of the Geometry-Color Contrast approach. |
2305.00372 | The core compactly generated topology | M. Escard\'o et al. asked whether the core compactly generated topology of a
sober space is again sober and the sobrification of a core compactly generated
space again core compactly generated. In this note, we answer the problem by
displaying a counterexample, which reveals that the core compactly generated
spaces are not closed under sobrifications. Meantime, we obtain that the core
compactly generated spaces are closed under {\omega}-well-filterifications and
D-completions. Furthermore, we find that the core compactly generated topology
of the Smyth power space of a well-filtered space coincides with the Scott
topology. Finally, we provide a characterization for the core-compactness of
core compactly generated spaces. | Qingguo Li, Hualin Miao | 2023-04-30T02:29:09Z | http://arxiv.org/abs/2305.00372v1 | # The core compactly generated topology +
###### Abstract
M. Escardo et al. asked whether the core compactly generated topology of a sober space is again sober and the sobrification of a core compactly generated space again core compactly generated. In this note, we answer the problem by displaying a counterexample, which reveals that the core compactly generated spaces are not closed under sobrifications. Meantime, we obtain that the core compactly generated spaces are closed under \(\omega\)-well-filterifications and \(D\)-completions. Furthermore, we find that the core compactly generated topology of the Smyth power space of a well-filtered space coincides with the Scott topology. Finally, we provide a characterization for the core-compactness of core compactly generated spaces.
keywords: Core compactly generated topology; Sobriety; \(\omega\)-well-filteredness; Monotone convergence; Product topology Msc: 18A05; 18B30; 06A06, 06B35
## 1 Introduction
Sobriety, well-filteredness and monotone convergence have attracted widespread attention and are among the most important and useful properties in non-Hausdorff topology. Moreover, they have been extensively investigated in [5],[7],[8],[11],[14],[15],[16],[17],[20],[21],[22],[27],[28] and so on.
In [2], Day considered an enlarged category of topological spaces which is cartesian closed, and showed that this category can be reflected onto the category of \(\mathcal{C}\)-generated spaces. Based on this fact and the general categorical reflection theorem, he deduced that the category of \(\mathcal{C}\)-generated spaces is cartesian closed.
Afterwards, M. Escardo et al. presented a simple and uniform proof of Cartesian closedness for the category of \(\mathcal{C}\)-generated spaces, including the category of compactly generated spaces and core compactly generated spaces ([4]).
In [1], O. Battenfeld et al. presented the result that the \(D\)-completion of a compactly generated space is again compactly generated. Furthermore, G. Gruenhage and T. Streicher reached the conclusion that the sobrification of a compactly generated space may not be compactly generated [9]. Other scholars also considered the category of compactly generated spaces ([6], [19]).
As for the core compactly generated spaces, M. Escardo et al. posed the following problem [4]:
**Problem 1.1**.: Is the core compactly generated topology of a sober space again sober? Dually, is the sobrification of a core compactly generated space again core compactly generated?
In this note, we give a counterexample to illustrate that the core compactly generated topology of a sober space may not be sober and the sobrification of a core compactly generated space may not be core compactly generated. However, we discover that the core compactly generated topology of an \(\omega\)-well-filtered space (resp. a monotone convergence space) is again \(\omega\)-well-filtered (resp. monotone convergence) and the \(\omega\)-well-filterification (resp. \(D\)-completion) of a core compactly generated space is also core compactly generated. Moreover, we find that the core compactly generated topology of a monotone convergence space is contained in the Scott topology. Naturally, we wonder for which class of topological spaces the core compactly generated topologies are exactly the Scott topologies determined by the specialization orders on those spaces. In Section 5, we show that for the Smyth power space of any well-filtered space, the core compactly generated topology and the Scott topology coincide.
In [4], M. Escardo et al. derived that if a topological space \(X\) is core compact, then the topological product of \(X\) and \(Y\) is also core compactly generated for any core compactly generated space \(Y\). We conclude the paper by showing that a core compactly generated space \(X\) is core compact iff the topological product of \(X\) and \(Y\) is also core compactly generated for any core compactly generated space \(Y\).
## 2 Preliminaries
Let \(P\) be a poset. A subset \(D\) of \(P\) is _directed_ provided it is nonempty and every finite subset of \(D\) has an upper bound in \(D\). A poset \(P\) is a _dcpo_ if every directed subset \(D\) has a supremum. A subset \(U\) of a poset \(P\) is _Scott open_ if (i) it is an upper set (\(U=\uparrow U=\{x\in P:u\leq x\text{ for some }u\in U\}\)) and (ii) for every directed subset \(D\) of \(P\) with \(\sup D\) existing and \(\sup D\in U\), it follows that \(D\cap U\neq\emptyset\). The complement of a Scott open set is called _Scott closed_. Let \(\sigma(P)\) denote the set of all Scott open sets and \(\Gamma(P)\) the set of all Scott closed sets. The space \((P,\sigma(P))\) called the Scott space of \(P\) is written as \(\Sigma P\). Without further references, the posets mentioned here are all endowed with the Scott topology.
Given a \(T_{0}\) space \((X,\tau)\), we denote the closure of \(A\) by \(cl_{\tau}(A)\) for any subset \(A\subseteq X\). We define \(x\leq y\) iff \(x\in cl(y)\). Hence, \(X\) with its specialization order is a poset. We denote the set of all open sets by \(\mathcal{O}(X)\). The set \(A\) is called _\(d\)-closed_ if \(D\subseteq A\) implies that \(\sup D\in A\) for any directed subset \(D\) of \(A\). Let \(cl_{d}(A)\) denote the closure of \(A\) in the \(d\)-topology. The set \(A\) is called a _\(d\)-dense_ subset of \(X\) if the \(d\)-closure of \(A\) is \(X\), i.e., \(cl_{d}(A)=X\). The set of all irreducible closed subsets of \(X\) is denoted by \(\mathbf{IRR}(X)\). A set \(K\) of a topological space is called _saturated_ if it is the intersection of all open sets containing \(K\) (\(K=\uparrow K\) in its specialization order). For a topological space \(X\), the set of all non-empty compact saturated subsets of \(X\) are denoted by \(Q(X)\).
## 3 The core compactly generated topology of sober spaces
From [4], we have that the core compactly generated topology of a \(T_{0}\) space \(X\) enjoys the same specialization order as \(X\) and a dcpo endowed with the Scott topology is core compactly generated.
The goal of this section is to give a negative answer to the problem posed by M. Escardo et al. [4] mentioned in the introduction.
**Remark 3.1**.: Let \(L\) be a dcpo. If an upper set \(U\) of \(L\) has the property that \(\sup C\in U\) implies \(C\cap U\neq\emptyset\) for any well-ordered chain \(C\) of \(L\), then \(U\) is Scott open.
Proof.: Let \(U\) be an upper set, with the property that \(\sup C\in U\) indicates that \(C\cap U\neq\emptyset\) for any well-ordered chain \(C\) of \(L\). We use transfinite induction on the cardinality of the directed set \(D\).
If \(D\) is finite, and \(\sup D\in U\), then \(\sup D\in D\cap U\).
Now assume that \(D\) is infinite and \(E\cap U\neq\emptyset\) for any directed set \(E\) with cardinality smaller than \(|D|\) and \(\sup E\in U\). From the theorem of Iwamura [13], \(\sup D=\sup_{\alpha<|D|}\sup D_{\alpha}\) and \(\{\sup D_{\alpha}\mid\alpha<|D|\}\) is a well-ordered chain with \(D_{\alpha}\subseteq D\), which yields that there is \(\alpha<|D|\) such that \(\sup D_{\alpha}\in U\). By the induction hypothesis, \(D_{\alpha}\cap U\neq\emptyset\), whence \(D\cap U\neq\emptyset\).
Recall that a \(T_{0}\) space is a _monotone convergence space_ if and only if the closure of every directed set (in the specialization order) is the closure of a unique point. A subset \(V\) of a topological space \((X,\tau)\) is _open in the core compactly generated topology_ on \(X\) if, for every core compact space \(C\) and continuous function \(\rho:C\to X\), the preimage \(\rho^{-1}(V)\) is open in \(C\). We write \(\mathcal{C}X\) for the core compactly generated topology of \(X\), and we say that \(X\) is _core compactly generated_ if \(\tau=\mathcal{C}X\). Note that it is immediate from this definition that \(\tau\subseteq\mathcal{C}X\).
**Lemma 3.2**.: _Let \((X,\tau)\) be a monotone convergence space. Then the core compactly generated topology \(\mathcal{C}X\) of \(X\) is contained in the Scott topology of \(X\) under the specialization order of \(X\)._
Proof.: Due to [4, Lemma 4.6], we know that \(U\) is an upper set for any \(U\in\mathcal{C}X\). Now let \(C\) be a well-ordered chain of \(X\) with \(\sup C\in U\). Define \(C^{{}^{\prime}}=\{\sup E\mid E\subseteq C\}\). One sees immediately that \(C^{{}^{\prime}}\) is a subdcpo of \(X\) and \(C^{{}^{\prime}}\) is a complete chain. This reveals that the inclusion map \(i:\Sigma C^{{}^{\prime}}\to(X,\tau)\) is continuous in the light of the monotone convergence of \(X\). Note that \(\Sigma C^{{}^{\prime}}\) is core compact. We conclude that \(i^{-1}(U)\) is Scott open in \(C^{{}^{\prime}}\) since \(U\) is core compactly generated. The fact that \(\sup C\in i^{-1}(U)\) deduces that \(C\cap U\neq\emptyset\) with the help of \(C\subseteq C^{{}^{\prime}}\). By applying Remark 3.1, the set \(U\) is Scott open.
We shall say that a \(T_{0}\) space \(X\) is _well-filtered_ if for each filter basis \(\mathcal{C}\) of compact saturated sets and each open set \(U\) with \(\bigcap\mathcal{C}\subseteq U\), there is a \(K\in\mathcal{C}\) with \(K\subseteq U\). An arbitrary nonempty subset \(A\) of a \(T_{0}\) space \(X\) is _irreducible_ if \(A\subseteq B\cup C\) for closed subsets \(B\) and \(C\) implies \(A\subseteq B\) or \(A\subseteq C\). A topological space \(X\) is _sober_ if it is \(T_{0}\) and every irreducible closed subset of \(X\) is the closure of a (unique) point. A sobrification of a \(T_{0}\) space \(X\) consists of a sober space \(Y\) and a continuous map \(\eta:X\to Y\) which enjoys the following universal property: For every continuous map \(f\) from \(X\) to a sober space \(Z\), there is a unique continuous map \(\overline{f}:Y\to Z\) such that \(f=\overline{f}\circ\eta\).
A standard construction for the sobrification of a \(T_{0}\) space \(X\) is to set
\[X^{s}:=\{A\subseteq X:A\in\mathbf{IRR}(X)\}\]
topologized by open sets \(U^{s}:=\{A\in X^{s}:A\cap U\neq\emptyset\}\) for each open subset \(U\) of \(X\). If we define \(\eta_{X}^{s}:X\to X^{s}\) by \(\eta_{X}^{s}(x)=cl(\{x\})\), then we obtain a sobrification [7, Exercise V-4.9], which we call the standard sobrification.
**Proposition 3.3**.: _Let \(\Sigma P\) be a well-filtered dcpo which is not sober and \(\mathbf{IRR}(P)=\{\downarrow x\mid x\in P\}\cup\{P\}\). Then the core compactly generated topology of the standard sobrification \(P^{s}\) of \(\Sigma P\) is the Scott topology of \((\mathbf{IRR}(P),\subseteq)\)._
Proof.: Through Lemma 3.2, it remains to prove that the Scott topology of \(\mathbf{IRR}(P)\) is contained in the core compactly generated topology. Obviously, we see that \(\sigma(\mathbf{IRR}(P))=\{U^{s}\mid U\in\sigma(P)\}\cup\{\{P\}\}\). So it suffices to show that \(\{P\}\) is core compactly generated open. By [4, Lemma 8.2], we just need to check that \(\{P\}\) is locally compact sober generated open.
To this end, let \(\rho:C\to P^{s}\) be a continuous function from a locally compact sober space \(C\). For any \(x\in\rho^{-1}(\{P\})\), the fact that \(C\) is locally compact implies that \(\uparrow x=\bigcap_{x\in int(K),K\in Q(C)}K\). On account of [4, Lemma 8.1], we derive that \(\bigcap_{x\in int(K),K\in Q(C)}\uparrow\rho(K)=\uparrow\rho(\uparrow x)=\{P\}\). Due to the continuity of \(\rho\), we demonstrate that \(\{\uparrow\rho(K)\mid x\in int(K),K\in Q(C)\}\) is a filtered family of \(Q(P^{s})\).
The assumption that \(\{P\}\) is the greatest element of \(\mathbf{IRR}(P)\) guarantees that \(\{\uparrow\rho(K)\backslash\{P\}\mid x\in int(K),K\in Q(C)\}\) is a filtered family of \(Q(P^{s}\backslash\{P\})\) and \(\bigcap_{x\in int(K),K\in Q(C)}(\uparrow\rho(K)\backslash\{P\})=\emptyset\). What is noteworthy is that \(\Sigma P\) is homeomorphic to the subspace \(P^{s}\backslash\{P\}\) of \(P^{s}\). This means that \(P^{s}\backslash\{P\}\) is well-filtered owing to the assumption that \(\Sigma P\) is well-filtered. Hence, there exists \(K\in Q(C)\) with \(x\in int(K)\) such that \(\uparrow\rho(K)\backslash\{P\}=\emptyset\). In other words, \(x\in int(K)\subseteq K\subseteq\rho^{-1}(\{P\})\). Now we gain our desired result.
Let us consider the well-filtered dcpo \(P=\mathbb{N}\times\mathbb{N}\times(\mathbb{N}\cup\{\infty\})\). The order \((i_{1},j_{1},m_{1})\leqslant(i_{2},j_{2},m_{2})\) is defined as follows:
\(\bullet\)\(i_{1}=i_{2},j_{1}=j_{2},m_{1}\leqslant m_{2}\leqslant\infty\);
\(\bullet\)\(i_{2}=i_{1}+1,m_{1}\leqslant j_{2}\), \(m_{2}=\infty\).
The above example was constructed by Jia [10]; one can check that the dcpo \(P\) is not sober.
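To make the order on \(P\) concrete, the following small Python sketch (illustrative only; the function name `leq` and the use of `math.inf` for \(\infty\) are our own conventions) checks the two defining clauses of \(\leqslant\):

```python
from math import inf  # model the top element of N ∪ {∞} by inf

def leq(p, q):
    """Check (i1, j1, m1) <= (i2, j2, m2) in the dcpo P = N x N x (N ∪ {inf})."""
    (i1, j1, m1), (i2, j2, m2) = p, q
    # First clause: same first two coordinates, compare the last coordinate.
    if i1 == i2 and j1 == j2 and m1 <= m2:
        return True
    # Second clause: jump to the next level i2 = i1 + 1, provided m1 <= j2 and m2 = inf.
    if i2 == i1 + 1 and m1 <= j2 and m2 == inf:
        return True
    return False

assert leq((1, 2, 3), (1, 2, inf))      # first clause
assert leq((1, 2, 3), (2, 5, inf))      # second clause, since 3 <= 5
assert not leq((1, 2, 7), (2, 5, inf))  # fails, since 7 > 5
```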
**Theorem 3.4**.: _Let \(P\) be the dcpo mentioned above. Then \(P^{s}\) is a sober space. But the core compactly generated topology of \(P^{s}\) is not sober. Moreover, \(\Sigma P\) is core compactly generated, but \(P^{s}\) is not core compactly generated._
Proof.: Due to Proposition 3.3, we can have that the core compactly generated topology of the standard sobrification \(P^{s}\) of \(\Sigma P\) is the Scott topology of \((\mathbf{IRR}(P),\subseteq)\). One sees clearly that \(\Sigma P^{s}\) is not sober. Note that \(\Sigma P\) is core compactly generated. Again by applying Proposition 3.3, we know that \(P^{s}\) is not core compactly generated.
## 4 The core compactly generated topology of \(\omega\)-well-filtered spaces
From the above results, we conclude that the category of all core compactly generated sober spaces is not a reflective subcategory of the category of all core compactly generated spaces with continuous functions.
We now move on to the next goal of this note, which concerns the core compactly generated topology of \(\omega\)-well-filtered spaces. In this section, we prove that the core compactly generated topology of an \(\omega\)-well-filtered space is again \(\omega\)-well-filtered and the \(\omega\)-well-filterification of a core compactly generated space is again core compactly generated. This shows that the category of all core compactly generated \(\omega\)-well-filtered spaces is a reflective subcategory of the category of all core compactly generated spaces with continuous functions.
Recall that a \(T_{0}\) space \(X\) is called \(\omega\)_-well-filtered_, if for any countable filtered family \(\{K_{i}:i<\omega\}\subseteq Q(X)\) and \(U\in\mathcal{O}(X)\), the following condition holds,
\[\bigcap_{i<\omega}K_{i}\subseteq U\Rightarrow\exists i_{0}<\omega,K_{i_{0}} \subseteq U\]
Let \(X\) be a \(T_{0}\) space. A nonempty subset \(A\) of \(X\) is said to have the _countable compactly filtered property_ (\(KF_{\omega}\) property), if there exists a filtered family \(\mathcal{K}\) of \(Q(X)\) such that \(cl(A)\) is a minimal closed set that intersects all members of \(\mathcal{K}\), where \(\mathcal{K}\) is a countable subset of \(Q(X)\). We call such a set \(KF_{\omega}\), or a \(KF_{\omega}\)-set. Denote by \(\mathbf{KF}_{\omega}(X)\) the set of all closed \(KF_{\omega}\) subsets of \(X\). The countable compactly filtered property is called the \(\omega\)-Rudin property in [23].
**Lemma 4.1**.: _Let \((X,\tau)\) be a \(\omega\)-well-filtered space. Then the core compactly generated topology \(\mathcal{C}X\) of \(X\) is again \(\omega\)-well-filtered._
Proof.: Assume for the sake of a contradiction that the core compactly generated topology \(\mathcal{C}X\) of \(X\) is not \(\omega\)-well-filtered. Via [23, Corollary 6.11], there is a closed \(KF_{\omega}\)-set \(A\) in the core compactly generated topology with \(A\notin\{\downarrow x\mid x\in X\}\). It follows that there exists a countable descending chain \(K_{0}\supseteq K_{1}\supseteq\cdots\supseteq K_{n}\supseteq\cdots\) of compact saturated subsets of \(\mathcal{C}X\) such that \(A\) is a minimal closed set in \(\mathcal{C}X\) that intersects all members of \(\{K_{n}\mid n\in\mathbb{N}\}\). Pick \(x_{n}\in A\cap K_{n}\) for any \(n\in\mathbb{N}\). We define \(H=\{x_{n}\mid n\in\mathbb{N}\}\). Then we have that \(cl_{\mathcal{C}X}(H)=A\) by the minimality of \(A\).
Consider the function \(i:(X,\mathcal{C}X)\rightarrow(X,\tau)\) defined by \(i(x)=x\) for any \(x\in X\). Then the function \(i\) is continuous, which implies that \(A\) is a \(KF_{\omega}\)-set of \((X,\tau)\). Again applying Corollary 6.11 in [23], we draw the conclusion that \(\sup A\in cl_{\tau}(A)\) since \((X,\tau)\) is \(\omega\)-well-filtered.
Claim 1: \(\sup A\in cl_{\tau}(H)\).
For any \(U\in\tau\) with \(\sup A\in U\), we can deduce that \(U\cap A\neq\emptyset\) because of the fact that \(\sup A\in cl_{\tau}(A)\). Choose \(a\in U\cap A\). Note that \(U\in\mathcal{C}X\) and \(cl_{\mathcal{C}X}(H)=A\). Then \(U\cap H\neq\emptyset\).
Claim 2: If \(U\cap H\neq\emptyset\), then \(H\backslash U\) is finite for any \(U\in\tau\).
Now let \(U\in\tau\) with \(U\cap H\neq\emptyset\). Suppose \(H\backslash U\) is infinite. Then \(A\backslash U\cap K_{n}\neq\emptyset\) for any \(n\in\mathbb{N}\). The fact that \(U\in\tau\subseteq\mathcal{C}X\) suggests that \(A\backslash U\) is closed in \(\mathcal{C}X\). We conclude that \(A=A\backslash U\) from the minimality of \(A\). The assumption that \(H\subseteq A\) indicates that \(H\cap U=\emptyset\), which contradicts \(H\cap U\neq\emptyset\). Denote the one-point compactification of the countable discrete space \(\mathbb{N}\) by \(\mathbb{N}^{\infty}\). Consider the function \(\rho:N^{\infty}\rightarrow(X,\tau)\) defined by
\[\rho(x)=\begin{cases}\sup A,&x=\infty\\ x_{n},&x=n\in\mathbb{N}\end{cases}\]
Claims 1 and 2 yield that \(\rho\) is continuous. Then \(\rho^{-1}(X\backslash A)\) is open in \(\mathbb{N}^{\infty}\) since \(X\backslash A\in\mathcal{C}X\) and \(\mathbb{N}^{\infty}\) is core compact. One sees immediately that \(\infty\in\rho^{-1}(X\backslash A)\). This implies that there is \(n\in\mathbb{N}\) such that \(n\in\rho^{-1}(X\backslash A)\), which is equivalent to saying that \(x_{n}\in X\backslash A\). But \(x_{n}\in A\), a contradiction.
Let us recall some known facts concerning the \(\omega\)-well-filterification of a \(T_{0}\) space. A \(\omega\)-well-filterification of a \(T_{0}\) space \(X\) consists of a \(\omega\)-well-filtered space \(Y\) and a continuous map \(\eta:X\to Y\) which enjoys the following universal property: For every continuous map \(f\) from \(X\) to a \(\omega\)-well-filtered space \(Z\), there is a unique continuous map \(\overline{f}:Y\to Z\) such that \(f=\overline{f}\circ\eta\). A subset \(A\) of a \(T_{0}\) space \(X\) is called a \(\omega\)_-well-filtered determined set_, \(WD_{\omega}\)-set for short, if for any continuous mapping \(f:X\to Y\) to a \(\omega\)-well-filtered space \(Y\), there exists a unique \(y_{A}\in Y\) such that \(cl(f(A))=cl(\{y_{A}\})\). The set of all closed \(\omega\)-well-filtered determined subsets of \(X\) is denoted by \(\mathbf{WD}_{\omega}(X)\).
A standard construction for the \(\omega\)-well-filterification of a \(T_{0}\) space \(X\) is to set
\[X^{\omega-w}:=\{A\subseteq X:A\in\mathbf{WD}_{\omega}(X)\}\]
topologized by the open sets \(U^{\omega-w}:=\{A\in X^{\omega-w}:A\cap U\neq\emptyset\}\) for each open subset \(U\) of \(X\). If we define \(\eta_{X}^{\omega-w}:X\to X^{\omega-w}\) by \(\eta_{X}^{\omega-w}(x)=cl(\{x\})\), then we obtain a \(\omega\)-well-filterification [23, Theorem 6.8], which we call the standard \(\omega\)-well-filterification.
**Proposition 4.2**.: _Let \(X,Y\) be two \(\omega\)-well-filtered spaces. If \(f:X\to Y\) is a continuous map, then \(f(\sup A)=\sup f(A)\) for any \(A\in\mathbf{WD}_{\omega}(X)\)._
Proof.: The topology of \(X\) is denoted by \(\tau\). For any \(A\in WD_{\omega}(X)\), we know that \(cl_{\tau}(A)=\downarrow\sup A\) again by applying Corollary 6.11 in [23] since \(X\) is \(\omega\)-well-filtered. Note that \(f\) is monotone by virtue of being continuous. Then \(f(\sup A)\) is an upper bound of \(f(A)\). Let \(t\) be any other upper bound of \(f(A)\). This means that \(A\subseteq f^{-1}(\downarrow t)\). The fact that \(f\) is continuous implies that \(f^{-1}(\downarrow t)\) is closed. We deduce that \(\sup A\in cl_{\tau}(A)\subseteq f^{-1}(\downarrow t)\). So \(f(\sup A)\leq t\). Therefore, \(f(\sup A)=\sup f(A)\).
**Theorem 4.3**.: _Let \(X\) be a core compactly generated space. Then the \(\omega\)-well-filterification of \(X\) is again a core compactly generated space._
Proof.: Let \((X^{\omega-w},\eta_{X}^{\omega-w})\) be the standard \(\omega\)-well-filterification of \(X\). We want to prove that \(\eta_{X}^{\omega-w}:X\to(X^{\omega-w},\mathcal{C}X^{\omega-w})\) is also continuous. To this end, let \(U\in\mathcal{C}X^{\omega-w}\). Since \(X\) is core compactly generated, it suffices to prove that \((\eta_{X}^{\omega-w})^{-1}(U)\in\mathcal{C}X\).
Assume \(\rho:C\to X\) is a continuous function from a core compact space \(C\). This means that \(\eta_{X}^{\omega-w}\circ\rho\) is also continuous, which manifests that \((\eta_{X}^{\omega-w}\circ\rho)^{-1}(U)\) is open in \(C\). Now we can gain our desired result that \((\eta_{X}^{\omega-w})^{-1}(U)\in\mathcal{C}X\). From Lemma 4.1, we know that \((X^{\omega-w},\mathcal{C}X^{\omega-w})\) is an \(\omega\)-well-filtered space. It follows that there is a unique continuous function \(\overline{f}:X^{\omega-w}\to(X^{\omega-w},\mathcal{C}X^{\omega-w})\) with \(\overline{f}\circ\eta_{X}^{\omega-w}=\eta_{X}^{\omega-w}\).
It is evident that \(A=\sup_{a\in A}\downarrow a\) for any \(A\in\mathbf{WD}_{\omega}(X)\). From the definition of \(\omega\)-well-filtered determined subsets and the continuity of \(\eta_{X}^{\omega-w}\), we have that \(\{\downarrow a\mid a\in A\}=\eta_{X}^{\omega-w}(A)\) is an \(\omega\)-well-filtered determined subset of \(X^{\omega-w}\). By virtue of Proposition 4.2, we demonstrate that \(\overline{f}(A)=\overline{f}(\sup_{a\in A}\downarrow a)=\sup_{a\in A}\overline{f}(\downarrow a)=\sup_{a\in A}\overline{f}\circ\eta_{X}^{\omega-w}(a)=\sup_{a\in A}\eta_{X}^{\omega-w}(a)=A\). It follows that \(\overline{f}\) is the identity map. This suggests that the core compactly generated topology of \(X^{\omega-w}\) is contained in \(\mathcal{O}(X^{\omega-w})\). Hence, \(X^{\omega-w}\) is core compactly generated.
From the definitions of the well-filterification and the \(\omega\)-well-filterification, one wonders whether the core compactly generated topology of a well-filtered space is again well-filtered and the well-filterification of a core compactly generated space is again core compactly generated. We end this section by leaving the above questions open.
## 5 The core compactly generated topology of monotone convergence spaces
In this section, we discuss the core compactly generated topology of monotone convergence spaces. We show that the core compactly generated topology of a monotone convergence space is again a monotone convergence space and the \(D\)-completion of a core compactly generated space is again core compactly generated.
Recall that a monotone convergence space \(Y\) together with a topological embedding \(j:X\to Y\) with \(j(X)\) a \(d\)-dense subset of \(Y\) is called a \(D\)-completion of the \(T_{0}\) space \(X\). A subset \(A\) of a \(T_{0}\) space \(X\) is called tapered if for any continuous function \(f:X\to Y\) mapping into a monotone convergence space \(Y\), \(\sup f(A)\) always exists in \(Y\).
A standard construction for the \(D\)-completion of a \(T_{0}\) space \(X\) is to set
\[X^{d}:=\{A\subseteq X:A\text{ is closed and tapered}\}\]
topologized by open sets \(U^{d}:=\{A\in X^{d}:A\cap U\neq\emptyset\}\) for each open subset \(U\) of \(X\). If we define \(\eta_{X}^{d}:X\to X^{d}\) by \(\eta_{X}^{d}(x)=cl(\{x\})\), then we obtain a \(D\)-completion [27, Theorem 3.10], which we call the standard \(D\)-completion.
The following theorem is an immediate consequence of Lemma 3.2.
**Theorem 5.1**.: _Let \(X\) be a monotone convergence space. Then the core compactly generated topology of \(X\) is also monotone convergence._
With the help of Theorem 5.1, similar to the proof of Theorem 4.3, the next theorem follows directly.
**Theorem 5.2**.: _Let \(X\) be a core compactly generated space. Then the \(D\)-completion of \(X\) is again a core compactly generated space._
The following lemma is a generalization of [4, Lemma 8.1].
**Lemma 5.3**.: _Let \(f:X\to Y\) be continuous, where \(X\) is a well-filtered space. If \(Q\) is the intersection of a filtered family \(\mathcal{K}\) of compact saturated subsets of \(X\), then \(\bigcap_{K\in\mathcal{K}}\uparrow f(K)=\uparrow f(Q)\)._
Proof.: It is easy to see that \(\uparrow f(Q)\subseteq\bigcap_{K\in\mathcal{K}}\uparrow f(K)\). Note that \(\uparrow f(Q)=\bigcap_{U\in\mathcal{U}}U\), where \(\mathcal{U}=\{U\in\mathcal{O}(X)\mid f(Q)\subseteq U\}\). So it suffices to prove that \(\bigcap_{K\in\mathcal{K}}\uparrow f(K)\subseteq U\) for any \(U\in\mathcal{U}\). Let
\(U\in\mathcal{U}\). Then \(Q=\bigcap_{K\in\mathcal{K}}K\subseteq f^{-1}(U)\). From the continuity of \(f\), we can get that \(f^{-1}(U)\) is open. The assumption that \(X\) is well-filtered implies that there exists \(K\in\mathcal{K}\) such that \(K\subseteq f^{-1}(U)\). In other words, \(\uparrow\!f(K)\subseteq U\).
From Lemma 3.2, we obtain the result that the core compactly generated topology of a monotone convergence space is contained in the Scott topology. Naturally, we wonder for which class of topological spaces the core compactly generated topologies happen to be the Scott topologies determined by the specialization orders on those spaces. Now we show that the Smyth power spaces of well-filtered spaces satisfy the above property.
**Lemma 5.4**.: _Let \(X\) be a well-filtered space. Then the topology determined by the continuous functions from locally compact sober spaces agrees with the core compactly generated topology._
Proof.: In the light of [12, Theorem 3.1], we have that a core compact well-filtered space is locally compact sober. With the help of well-filterification, we can deduce the lemma by a similar proof of [4, Lemma 8.2].
**Theorem 5.5**.: _Let \(X\) be a well-filtered space. Then the core compactly generated topology of the Smyth power space \(P_{S}(X)\) of \(X\) equals the Scott topology under its specialization order._
Proof.: By applying [25, Theorem 5], we know that \(P_{S}(X)\) is well-filtered. Thus \(P_{S}(X)\) is a monotone convergence space. It follows that the core compactly generated topology of \(P_{S}(X)\) is contained in the Scott topology. Conversely, to this end, let \(\mathcal{U}\in\sigma(P_{S}(X))\). Due to Lemma 5.4, we only need to prove that \(\mathcal{U}\) is contained in the topology determined by the continuous functions from locally compact sober spaces.
Assume that \(\rho:C\to P_{S}(X)\) is a continuous function, where \(C\) is a locally compact sober space. It remains to prove that \(\rho^{-1}(\mathcal{U})\) is open. Now let \(c\in\rho^{-1}(\mathcal{U})\). Since \(C\) is locally compact, we have that \(\uparrow\!c=\bigcap_{K\in\mathcal{R}}K\), where \(\mathcal{R}=\{K\in Q(C)\mid c\in int(K)\}\). Owing to [4, Lemma 8.1], we can conclude that \(\uparrow\!\rho(c)=\bigcap_{K\in\mathcal{R}}\uparrow\!\rho(K)\subseteq\mathcal{U}\).
Note that \(\rho(K)\) is compact in \(P_{S}(X)\). Then \(\bigcup\rho(K)\) is a compact subset of \(X\).
We claim that \(\rho(c)=\bigcap_{K\in\mathcal{R}}(\bigcup\rho(K))\).
For any \(x\in\rho(c)\), it turns out that \(\uparrow\!x\in\uparrow\!\rho(c)=\bigcap_{K\in\mathcal{R}}\uparrow\!\rho(K)\). This means that there is \(k\in K\) such that \(\uparrow\!x\subseteq\rho(k)\) for any \(K\in\mathcal{R}\). Therefore, \(x\in\rho(k)\subseteq\bigcup\rho(K)\) for any \(K\in\mathcal{R}\), which is equivalent to saying that \(x\in\bigcap_{K\in\mathcal{R}}(\bigcup\rho(K))\).
Conversely, let \(x\in\bigcap_{K\in\mathcal{R}}(\bigcup\rho(K))\). This implies that \(x\in\bigcup\rho(K)\) for any \(K\in\mathcal{R}\), which guarantees the existence of \(k\in K\) such that \(x\in\rho(k)\) for any \(K\in\mathcal{R}\). It follows that \(\uparrow\!x\in\bigcap_{K\in\mathcal{R}}\uparrow\!\rho(K)=\uparrow\!\rho(c)\). So we have \(x\in\rho(c)\).
We have shown that \(\rho(c)=\bigcap_{K\in\mathcal{R}}(\bigcup\rho(K))\in\mathcal{U}\) and that \(\{\bigcup\rho(K)\mid K\in\mathcal{R}\}\) is a directed subset of \(P_{S}(X)\) in its specialization order, which yields that there exists \(K\in\mathcal{R}\) such that \(\bigcup\rho(K)\in\mathcal{U}\). The fact that \(\mathcal{U}\) is an upper set suggests that \(\rho(K)\subseteq\mathcal{U}\). We conclude that \(c\in int(K)\subseteq K\subseteq\rho^{-1}(\mathcal{U})\). Now we can gain our desired result.
By applying the above theorem, we obtain the following results, which were proved in [18] and [26], respectively.
**Theorem 5.6**.: _([18]) If \(X\) is a locally compact sober space, then the upper Vietoris topology and the Scott topology on \(Q(X)\) coincide._
**Theorem 5.7**.: _([26]) If \(X\) is a well-filtered space and its Smyth power space \(P_{S}(X)\) is first-countable, then the upper Vietoris topology agrees with the Scott topology on \(Q(X)\)._
## 6 The product of two core compactly generated spaces
If \(X,Y\) are two core compactly generated spaces, then the topological product \(X\times Y\) need not be core compactly generated in general. One always has \(\mathcal{O}(X\times Y)\subseteq\mathcal{C}(X\times Y)\), but the containment may be proper [4, Example 5.1]. In this section, we obtain a necessary and sufficient condition for \(\mathcal{O}(X\times Y)=\mathcal{C}(X\times Y)\).
**Theorem 6.1**.: _Let \(X\) be a core compactly generated space. Then the following statements are equivalent._
_(1) \(X\) is core compact._
_(2) For every core compactly generated space \(Y\), one has \(\mathcal{O}(X\times Y)=\mathcal{C}(X\times Y)\)._
Proof.: \((1)\Rightarrow(2)\) follows from [4, Theorem 5.4].
\((2)\Rightarrow(1)\): Let \(Y=\Sigma(\mathcal{O}(X))\). Then \(Y\) is a core compactly generated space by [4, Lemma 4.6]. It follows that \(\mathcal{O}(X\times Y)=\mathcal{C}(X\times Y)\) since \((2)\) holds. Due to [8, Exercise 5.2.7], we know that \(X\) is core compact iff \(G=\{(x,U)\in X\times\mathcal{O}(X)\mid x\in U\}\) is open in \(X\times\Sigma\mathcal{O}(X)\). So it suffices to prove that \(G\) is core compactly generated open.
Assume that \(\rho:C\to X\times\Sigma\mathcal{O}(X)\) is continuous, where \(C\) is a core compact space. It remains to prove that \(\rho^{-1}(G)\) is open in \(C\). For any \(c\in\rho^{-1}(G)\), we have that \(\rho(c)\in G\). We write \(\rho(c)=(x_{c},U_{c})\). Then \(x_{c}\in U_{c}\). Note that \(x_{c}=P_{X}\circ\rho(c)\), where \(P_{X}:X\times\Sigma\mathcal{O}(X)\to X\) is the projection map. This manifests that \(c\in(P_{X}\circ\rho)^{-1}(U_{c})\) and \((P_{X}\circ\rho)^{-1}(U_{c})\) is open. In the light of the core compactness of \(C\), we construct inductively a decreasing family of open subsets with
\[c\in V\ll\cdots\ll V_{n+1}\ll V_{n}\ll\cdots\ll V_{1}=(P_{X}\circ\rho)^{-1}(U_{c}).\]
We define \(A_{n}=\{U\in\mathcal{O}(X)\mid P_{X}\circ\rho(V_{n})\subseteq U\}\) for any \(n\in\mathbb{N}\) and \(R=\bigcup_{n\in\mathbb{N}}A_{n}\).
We claim that \(R\) is Scott open.
Clearly, \(R\) is an upper set. Let \((U_{i})_{i\in I}\) be a directed subset of \(\mathcal{O}(X)\) with \(\bigcup_{i\in I}U_{i}\in R\). Then there is \(n\in\mathbb{N}\) such that \(\bigcup_{i\in I}U_{i}\in A_{n}\), in other words, \(P_{X}\circ\rho(V_{n})\subseteq\bigcup_{i\in I}U_{i}\). The fact that \(V_{n+1}\ll V_{n}\) suggests that there is \(i\in I\) such that \(P_{X}\circ\rho(V_{n+1})\subseteq U_{i}\), that is, \(U_{i}\in A_{n+1}\subseteq R\).
Additionally, \(P_{\mathcal{O}(X)}\circ\rho(c)=U_{c}\in R\), where \(P_{\mathcal{O}(X)}:X\times\mathcal{O}(X)\to\mathcal{O}(X)\) is the projection map. This yields that there exists an open set \(W\) of \(C\) such that \(c\in W\subseteq(P_{\mathcal{O}(X)}\circ\rho)^{-1}(R)\).
It suffices to show that \(c\in W\cap V\subseteq\rho^{-1}(G)\).
To this end, let \(a\in W\cap V\). Then \(P_{\mathcal{O}(X)}\circ\rho(a)\in R\). The construction of \(R\) guarantees the existence of \(n\in\mathbb{N}\) with \(P_{X}\circ\rho(V_{n})\subseteq P_{\mathcal{O}(X)}\circ\rho(a)\). Note that \(a\in V\subseteq V_{n}\), which means that \(P_{X}\circ\rho(a)\in P_{X}\circ\rho(V_{n})\subseteq P_{\mathcal{O}(X)}\circ\rho(a)\). Hence, \(\rho(a)\in G\).
|
2309.07910 | TEMPO: Efficient Multi-View Pose Estimation, Tracking, and Forecasting | Existing volumetric methods for predicting 3D human pose estimation are
accurate, but computationally expensive and optimized for single time-step
prediction. We present TEMPO, an efficient multi-view pose estimation model
that learns a robust spatiotemporal representation, improving pose accuracy
while also tracking and forecasting human pose. We significantly reduce
computation compared to the state-of-the-art by recurrently computing
per-person 2D pose features, fusing both spatial and temporal information into
a single representation. In doing so, our model is able to use spatiotemporal
context to predict more accurate human poses without sacrificing efficiency. We
further use this representation to track human poses over time as well as
predict future poses. Finally, we demonstrate that our model is able to
generalize across datasets without scene-specific fine-tuning. TEMPO achieves
10$\%$ better MPJPE with a 33$\times$ improvement in FPS compared to TesseTrack
on the challenging CMU Panoptic Studio dataset. | Rohan Choudhury, Kris Kitani, Laszlo A. Jeni | 2023-09-14T17:56:30Z | http://arxiv.org/abs/2309.07910v1 | # TEMPO: Efficient Multi-View Pose Estimation, Tracking, and Forecasting
###### Abstract
Existing volumetric methods for predicting 3D human pose estimation are accurate, but computationally expensive and optimized for single time-step prediction. We present TEMPO, an efficient multi-view pose estimation model that learns a robust spatiotemporal representation, improving pose accuracy while also tracking and forecasting human pose. We significantly reduce computation compared to the state-of-the-art by recurrently computing per-person 2D pose features, fusing both spatial and temporal information into a single representation. In doing so, our model is able to use spatiotemporal context to predict more accurate human poses without sacrificing efficiency. We further use this representation to track human poses over time as well as predict future poses. Finally, we demonstrate that our model is able to generalize across datasets without scene-specific fine-tuning. TEMPO achieves 10\(\%\) better MPJPE with a 33\(\times\) improvement in FPS compared to TeseTrack on the challenging CMU Panoptic Studio dataset. Our code and demos are available at [https://rccchoudhury.github.io/tempo2023/](https://rccchoudhury.github.io/tempo2023/).
## 1 Introduction
Estimating the pose of several people from multiple overlapping cameras is a crucial vision problem. Volumetric multi-view methods, which lift 2D image features from each camera view to a feature volume then regress 3D pose, are currently the state of the art [49, 44, 55, 23] in this task. These approaches produce significantly more accurate poses than geometric alternatives, but suffer from two key limitations. First, the most accurate methods use either 3D convolutions [49, 44, 57] or cross-view transformers [51] which are slow and prevent real-time inference. Secondly, most methods are designed for estimating pose at a single timestep and are unable to reason over time, limiting their accuracy and preventing their use for tasks like motion prediction.
We propose TEMPO, a multi-view TEMporal POse estimation method that addresses both of these issues. TEMPO uses _temporal context_ from previous timesteps to produce smoother and more accurate pose estimates. Our model tracks people over time, predicts future pose and runs efficiently, achieving near real-time performance on existing benchmarks. The key insight behind TEMPO, inspired by work in 3D object detection [31, 20], is that recurrently aggregating spatiotemporal context results in powerful learned representations while being computationally efficient. To do this, we decompose the problem into three stages, illustrated in Figure 2. Given an input RGB video from multiple static, calibrated cameras, at a given timestep \(t\) we first detect the locations of each person in the scene by unprojecting image features from each view to a common 3D volume. We then regress 3D bounding boxes centered on each person, and perform tracking by matching the box centers with the detections from the previous timestep \(t-1\). For each detected person, we compute a spatiotemporal pose representation by recurrently combining features from current and previous timesteps. We then decode the representation into an estimate of the current pose as well as poses at future timesteps. Unlike existing work [49, 55, 44, 57], our method is able to perform temporal tasks like tracking and forecasting without sacrificing efficiency.
We evaluate our method on several pose estimation benchmarks. TEMPO improves over the state of the art on the challenging CMU Panoptic Studio dataset [25] by 10\(\%\), and is competitive on the Campus, Shelf and Human3.6M datasets. We additionally collect our own multi-view dataset consisting of highly dynamic scenes, on which TEMPO achieves the best result by a large margin. We show that our model achieves competitive results in pose tracking and evaluate the pose forecasting quality on the CMU Panoptic dataset. Additionally, multi-view pose estimation methods are almost always evaluated on the same dataset they are trained on, leading to results that are specific to certain scenes and camera configurations. We measure our method's ability to generalize across different datasets and find that our method can transfer without additional fine-tuning. To summarize, our key contributions are that:
* We develop the most accurate multi-view, multi-person 3D human pose estimation model. Our model uses temporal context to produce smoother and more accurate poses.
* Our model runs efficiently with no performance degradation.
* Our model tracks and forecasts human pose for every person in the scene.
* We evaluate the generalization of our model across multiple datasets and camera configurations.
## 2 Related Work
**3D Pose Estimation, Tracking and Forecasting**Approaches for recovering and tracking 3D human pose are usually limited to monocular video. Such methods for pose estimation [48, 5, 27, 29] and pose tracking [43, 42] are highly efficient, but perform significantly worse than multi-view methods in 3D pose estimation accuracy due to the inherent ambiguity of monocular input.
Furthermore, methods in human pose forecasting [35] usually predict future motion from ground truth pose histories. Our approach follows [8, 58] and predicts pose directly from a sequence of video frames. Snipper [58] is the closest to our method and uses a spatiotemporal transformer to jointly estimate, track and forecast pose from a monocular video. Our method differs in that it is able to produce highly accurate estimates using multi-view information while running efficiently.
**Multi-View Pose Estimation** Early work in multi-view human pose estimation was limited to the single-person case, with [3, 30, 39, 1] using pictorial structure models to improve over basic triangulation. More recent approaches [19, 41, 23, 7, 26] improve this result by using advanced deep architectures like 3D CNNs and transformers, and others [9, 10] introduce priors on human shape for additional performance. Our method is most similar to [23], which uses 3D CNNs to regress pose directly from a feature volume. We also follow [23, 26] in analyzing our model's transfer to multiple datasets, extending their qualitative, single-person analysis to a quantitative measurement of performance on several multi-person datasets.
In the multi-person setting, early approaches like [3, 12, 13, 54, 6] associate 2D pose estimates from each view, then fuse the matched 2D poses into 3D. Other methods aimed towards multi-person motion capture use Re-ID features [12, 54], 4D graph cuts [56], plane sweep stereo [32], cross-view graph matching [52], or optimize SMPL parameters [14] to produce 3D poses from 2D pose estimates in each view. These methods can generalize across the data sources, but typically have much less accurate predictions compared to _volumetric_ methods. These first unproject learned 2D image features into a 3D volume and regress pose directly from the 3D features with neural networks. Both [23] and [49] use computationally expensive 3D CNNs for the pose estimation step. Follow-up work includes Faster VoxelPose [55], which replaces these 3D CNNs with 2D CNNs for a large speedup, and TesseTrack [44], which uses 4D CNNs to reason over multiple timesteps. Our method combines the best of both: we efficiently incorporate spatiotemporal information with only 2D CNNs and a lightweight recurrent network.
**3D Object Detection and Forecasting** Historically, work on 3D object detection and instance segmentation for autonomous driving has led the way in using multi-view images. One key similarity to our work is the aggregation of 2D image features into a single 3D volume. While [46, 50] use the same bilinear unprojection strategy as our method, several works [40, 17, 31] propose alternatives such as predicting per-pixel depth. Other works use temporal information to track objects through occlusion and to detect hard-to-see objects; [20, 31, 38] concretely demonstrate the benefits of incorporating spatiotemporal context. In particular, FIERY [20] uses temporal information for future instance prediction and BEVFormer [31] efficiently aggregates temporal information with a recurrent architecture, both of which inspired our method. Furthermore, [24, 16] use per-timestep supervision to track
pixels through occlusion, an idea which we adapt for reasoning about human pose over multiple frames.
## 3 Method
Our method assumes access to calibrated time-synchronized videos from one or more cameras. At training time, we assume access to \(T\) sets of \(N\) RGB images from different cameras, while at inference time, we have a single set of \(N\) images corresponding to the current timestep. In order to enable TEMPO to transfer to new camera configurations and settings, we compute the dimensions of the space and size of the voxel volume directly from the camera matrices. We set the height of the volume to a constant \(2\,\mathrm{m}\), while setting the length and width of the volume to be the bounding box of the camera extrinsics from a top-down view, and center the space at the mean of the camera locations.
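The following sketch (our own illustrative code, assuming the world-space camera centers have already been extracted from the extrinsics) shows this bookkeeping: the horizontal extent of the volume is the top-down bounding box of the camera positions, the height is fixed to \(2\,\mathrm{m}\), and the space is centered at the mean camera location.

```python
import numpy as np

def compute_volume_bounds(cam_centers, height=2.0):
    """Derive the capture volume from camera positions (illustrative sketch).

    cam_centers: (N, 3) array of camera locations in world coordinates (meters).
    Returns the (length, width, height) of the space and its center: length and
    width come from the top-down bounding box of the extrinsics, the height is
    fixed, and the center is the mean camera location.
    """
    cam_centers = np.asarray(cam_centers, dtype=np.float32)
    mins = cam_centers[:, :2].min(axis=0)
    maxs = cam_centers[:, :2].max(axis=0)
    length, width = (maxs - mins)          # top-down bounding box of the cameras
    center = cam_centers.mean(axis=0)      # center the space at the mean camera location
    space_size = np.array([length, width, height], dtype=np.float32)
    return space_size, center

# e.g. five cameras scattered around a roughly 6 m x 4 m capture area
space_size, center = compute_volume_bounds(np.random.rand(5, 3) * [6.0, 4.0, 2.5])
```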
### Preliminaries
We briefly review the person detection and pose estimation modules used by VoxelPose [49], TesseTrack [44], and Faster VoxelPose [55] that TEMPO builds upon. We refer the reader to the original papers for further detail.
#### 3.1.1 Person Detection
The detection module aims to estimate the location of the _root joint_ as well as a tight 3D bounding box for each person in the scene. Following previous work, we define the root joint as the mid-hip. At a given time \(t\), the detector module takes as input a set of \(N\) images, each corresponding to a different camera view of the same scene at time \(t\). For each image, we extract features with a pretrained backbone, resulting in \(N\) feature maps \(\mathbf{F}_{1}^{t},\mathbf{F}_{2}^{t},\ldots,\mathbf{F}_{N}^{t}\).
Given the camera matrices for each view \(\mathbf{C}_{1}^{t},\mathbf{C}_{2}^{t},\ldots,\mathbf{C}_{N}^{t}\), we use the bilinear sampling procedure from [23, 49, 17]. For a voxel \(v\in V\) with coordinates \(\mathbf{x}\), we have
\[v=\sum_{i=1}^{N}\mathbf{F}_{i}^{t}(\mathbf{C}_{i}\mathbf{x}) \tag{1}\]
where \(\mathbf{F}_{i}^{t}(\mathbf{x})\) is the feature map \(\mathbf{F}_{i}^{t}\)'s value at position \(\mathbf{x}\), obtained by bilinear sampling. We then compute a birds-eye view representation of \(V\) by taking the maximum along the \(z\)-axis:
\[\mathbf{F}_{\mathrm{BEV}}^{t}=\max_{z}V \tag{2}\]
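A minimal PyTorch-style sketch of Eqs. (1) and (2) is given below; it assumes \(3\times 4\) projection matrices and a precomputed grid of voxel-center coordinates, and uses `grid_sample` for the bilinear lookup (variable names are our own, not the released implementation).

```python
import torch
import torch.nn.functional as F

def unproject_to_volume(feats, proj_mats, voxel_xyz, img_size):
    """Sketch of Eqs. (1)-(2): lift per-view 2D features into a shared voxel volume.

    feats:     (N, C, Hf, Wf) backbone feature maps, one per camera view.
    proj_mats: (N, 3, 4) camera projection matrices.
    voxel_xyz: (X, Y, Z, 3) world coordinates of every voxel center.
    img_size:  (H, W) of the original images, used to normalize pixel coords.
    """
    N, C = feats.shape[:2]
    X, Y, Z, _ = voxel_xyz.shape
    ones = torch.ones_like(voxel_xyz[..., :1])
    xyz1 = torch.cat([voxel_xyz, ones], dim=-1).reshape(-1, 4)         # (V, 4) homogeneous
    volume = feats.new_zeros(C, X * Y * Z)
    for i in range(N):
        uvw = proj_mats[i] @ xyz1.T                                     # (3, V) projected coords
        uv = uvw[:2] / uvw[2:].clamp(min=1e-6)                          # perspective divide
        # normalize pixel coordinates to [-1, 1] for grid_sample
        grid = torch.stack([uv[0] / img_size[1], uv[1] / img_size[0]], dim=-1) * 2 - 1
        sampled = F.grid_sample(feats[i:i + 1], grid.view(1, 1, -1, 2),
                                align_corners=False)                    # (1, C, 1, V)
        volume += sampled[0, :, 0]                                      # sum over views, Eq. (1)
    volume = volume.view(C, X, Y, Z)
    bev = volume.max(dim=-1).values                                     # Eq. (2): max along z
    return volume, bev
```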
We use a 2D CNN to produce a 2D heatmap \(\mathbf{H}^{t}\) from \(\mathbf{F}_{\mathrm{BEV}}^{t}\) of the \((x,y)\) locations of every root joint in the scene. We then sample the top \(K\) locations from \(\mathbf{H}^{t}\), yielding proposals \((x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{K},y_{K})\). For each proposal location, we obtain the corresponding feature column \(V|_{x,y}\) and apply a 1D CNN to regress a 1D heatmap of the root joint's height, denoted \(\mathbf{H}_{k}^{t}\). We then sample the maximum \(z\) coordinate for each \(\mathbf{H}_{k}^{t}\), and combine these to produce a set of detections \(D_{t}=\{(x_{1},y_{1},z_{1}),\ldots,(x_{K},y_{K},z_{K})\}\).
Figure 2: The overall model architecture. We begin by (1) extracting features from each image with the backbone network and unprojecting those features to a 3D volume. In step (2), we use the volume to detect each person in the scene, and (3) associate the detections from the current timestep to the previous one. We then (4) fuse the features from each person with our temporal model and produce a final pose estimate.
Finally, we regress width, length and centerness from \(\mathbf{F}_{\mathrm{BEV}}^{t}\) with a multi-headed 2D CNN to produce bounding box predictions for each proposal. The loss function for the detection module has three terms. First, \(L_{2D}\) is the distance between the predicted 2D heatmap and the ground truth, given by
\[L_{2D}=\sum_{t=1}^{T}\sum_{(x,y)}\lVert\mathbf{H}^{t}(x,y)-\mathbf{H}_{GT}^{t}(x,y)\rVert \tag{3}\]
We also compute the loss on the predicted 1D heatmap:
\[L_{1D}=\sum_{t=1}^{T}\sum_{k=1}^{K}\sum_{z}\lVert\mathbf{H}_{k}^{t}(z)-\mathbf{ H}_{k,GT}^{t}(z)\rVert \tag{4}\]
Finally, we include the bounding box regression loss
\[L_{\mathrm{bbox}}=\sum_{t=1}^{T}\sum_{(i,j)\in U}\lVert\mathbf{S}(i,j)- \mathbf{S}_{GT}(i,j)\rVert_{1} \tag{5}\]
The total detection loss is the sum of the above terms, with \(L_{det}=L_{2D}+L_{1D}+L_{\mathrm{bbox}}\).
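Putting the detection stage together, the following sketch (hypothetical helper names and module signatures, not the released implementation) shows how the top-\(K\) proposals and their heights could be read off the predicted heatmaps.

```python
import torch

def extract_proposals(heatmap_2d, volume, height_net, voxel_to_world, K=10):
    """Sketch of the proposal step: top-K root locations from the BEV heatmap,
    followed by a 1D height estimate per proposal.

    heatmap_2d:     (X, Y) predicted birds-eye-view root heatmap H^t.
    volume:         (C, X, Y, Z) fused feature volume V.
    height_net:     hypothetical 1D CNN mapping a (1, C, Z) column to a (1, Z) heatmap.
    voxel_to_world: callable mapping integer voxel indices (x, y, z) to world coords.
    """
    X, Y = heatmap_2d.shape
    scores, flat_idx = heatmap_2d.flatten().topk(K)           # top-K (x, y) proposals
    xs = torch.div(flat_idx, Y, rounding_mode="floor")
    ys = flat_idx % Y
    detections = []
    for score, x, y in zip(scores, xs, ys):
        column = volume[:, x, y, :].unsqueeze(0)              # (1, C, Z) feature column V|_{x,y}
        z = height_net(column)[0].argmax()                    # peak of the 1D height heatmap H_k^t
        detections.append((voxel_to_world((x.item(), y.item(), z.item())), score.item()))
    return detections
```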
#### 3.1.2 Instantaneous Pose Estimation
For each detection \(D\), we construct a volume of fixed size centered on the detection's center \(c_{i}\), and unproject the backbone features from each camera view into the volume, as in the detection step. As in [55], we mask out all features falling outside the detection's associated bounding box \(B_{i}\), resulting in a feature volume \(V_{i}^{t}\) for person \(i\).
As shown in Figure 3, we project the feature volume \(V_{i}^{t}\) to 2D along each of the \(xy\), \(yz\), and \(xz\) planes, resulting in three 2D feature maps, denoted by \(\mathbf{P}_{i,xy}^{t}\), \(\mathbf{P}_{i,xz}^{t}\), and \(\mathbf{P}_{i,yz}^{t}\). The intuition behind this step is that we can predict the 2D position of each joint in each plane, and fuse the predicted 2D positions back together to form a 3D skeleton. Each feature map is passed through a 2D CNN to decode a heatmap of joint likelihood for every body joint, in each of the three planes, and the 2D joint predictions from each plane are fused into 3D with a learned weighting network. We define the loss for a predicted pose as the sum of the mean squared error between the predicted and ground truth 2D heatmaps and the \(L_{1}\) distance between the predicted and ground truth joint locations:
\[L_{joint,t}^{i}=\mathrm{MSE}(\mathbf{P}_{xy}^{t},\mathbf{P}_{xy,GT}^{t})+\mathrm{MSE}(\mathbf{P}_{xz}^{t},\mathbf{P}_{xz,GT}^{t})+\mathrm{MSE}(\mathbf{P}_{yz}^{t},\mathbf{P}_{yz,GT}^{t})+\sum_{j=1}^{J}|j_{i,pred}-j_{i,GT}| \tag{6}\]
with \(\mathrm{MSE}\) representing the mean squared error.
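The projection step itself is inexpensive; the sketch below illustrates it with a max reduction along the dropped axis (the choice of pooling operator here is our assumption, not something specified above).

```python
import torch

def project_to_planes(person_volume):
    """Project a per-person feature volume onto the three coordinate planes.

    person_volume: (C, X, Y, Z) masked feature volume V_i^t for one detected person.
    Returns the three 2D maps P_xy, P_xz, P_yz. A max reduction is used here for
    illustration; any pooling along the dropped axis has the same structure.
    """
    p_xy = person_volume.max(dim=3).values   # collapse z -> (C, X, Y)
    p_xz = person_volume.max(dim=2).values   # collapse y -> (C, X, Z)
    p_yz = person_volume.max(dim=1).values   # collapse x -> (C, Y, Z)
    return p_xy, p_xz, p_yz
```

Because all subsequent processing operates on these 2D maps, only 2D convolutions are needed for pose decoding, which is a large part of the method's efficiency.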
### Person Tracking
We now describe how TEMPO uses temporal information. Unlike previous works, TEMPO takes as input a set of \(N\) images at time \(t\) as well as the person detections \(D_{t-1}\) and corresponding 2D pose embeddings \(P^{t-1}\) from the previous timestep.
Each proposed detection from the previous step consists of a body center \(c_{i}\) and a bounding box \(B_{i}^{t}=(h_{i}^{t},w_{i}^{t})\). Given \(K\) proposals, we compute a \(K\times K\) cost matrix \(\mathbf{A}\) based on the IoU between \(B_{i}^{t}\) and \(B_{k}^{t}\) for all detections at time \(t\), resulting in
\[\mathbf{A}[i][j]=\lVert c_{i}-c_{j}\rVert \tag{7}\]
with \(c_{i},c_{j}\) being the associated predicted locations of person \(i\) and person \(j\)'s root joint.
While VoxelTrack [57] computes cost across every joint in each pose, TEMPO uses the top-down view of the associated bounding box for each person. At inference time, we use the SORT [4] tracker, which is fast and uses a simple Kalman filter with no learned Re-ID mechanism. While [44] uses a learned tracker based on SuperGlue [45], we find that SORT is faster and does not result in degraded performance.
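For reference, a minimal version of the association step in Eq. (7) can be written as below (the distance threshold and the use of the Hungarian algorithm are illustrative choices; SORT additionally maintains a Kalman motion model per track, which is omitted here).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_detections(centers_t, centers_prev, max_dist=0.5):
    """Match detections at time t to tracks from time t-1 (sketch of Eq. (7)).

    centers_t, centers_prev: (K, 3) and (M, 3) arrays of predicted root-joint
    locations in meters. Returns a list of (i, j) index pairs; pairs farther
    apart than max_dist are treated as unmatched (new or ended tracks).
    """
    cost = np.linalg.norm(centers_t[:, None, :] - centers_prev[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)      # minimum-cost bipartite matching
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
```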
### Temporal Pose Estimation and Forecasting
After running the tracker, the input to the pose estimation stage is a set of detection proposals \(D^{t}\), the previous timestep's detection proposals \(D^{t-1}\), and the assignment matrix between the detection sets \(\mathbf{A}^{t}\). We also assume access to pose features \(\mathbf{P}_{i}^{t-1}\) for each person in the previous timestep. Both \(D^{t}\) and \(D^{t-1}\) have \(K\) proposals each.
However, \(\mathbf{P}_{i}^{t-1}\) and \(\mathbf{P}_{i}^{t}\) are centered at \(c_{i}^{t-1}\) and \(c_{i}^{t}\) respectively, and thus the pose features from each volume are not in the same coordinate system due to the motion of person \(i\). To fix this, we follow the standard procedure used in temporal birds-eye view prediction [31, 20] and warp the features \(\mathbf{P}_{i}^{t-1}\) into the coordinate system of \(\mathbf{P}_{i}^{t}\) with a translational warp defined by \(c_{i}^{t}-c_{i}^{t-1}\).
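A simple sketch of this warp is given below; it assumes the features live on a regular grid with a known voxel size and rounds the shift to whole voxels (a real implementation may instead interpolate at sub-voxel accuracy).

```python
import torch

def warp_prev_features(prev_feats, center_t, center_prev, voxel_size):
    """Shift the previous pose features into the current person-centered frame.

    prev_feats: (C, X, Y) plane features P_i^{t-1} centered on c_i^{t-1}.
    center_t, center_prev: (3,) tensors with the body centers c_i^t and c_i^{t-1}.
    voxel_size: edge length of one voxel in meters.
    The warp is purely translational, defined by c_i^t - c_i^{t-1}. Note that
    torch.roll wraps around at the borders; zero-padding would be more faithful.
    """
    shift = torch.round((center_prev - center_t) / voxel_size).long()
    return torch.roll(prev_feats, shifts=(shift[0].item(), shift[1].item()), dims=(1, 2))
```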
After warping the previous pose features, we run a recurrent network with Spatial Gated Recurrent Units [2] (SpatialGRUs) to produce multiple embeddings: \(\mathbf{F}_{i}^{t}\), representing the current pose, and \(\mathbf{F}_{i}^{t+1},\mathbf{F}_{i}^{t+2},\ldots\), representing the pose in future frames. While [20] and [31] do not propagate gradients through time and only predict object locations and instance segments at time \(T\), we _do_ backpropagate through time by predicting the pose for each person at _every_ timestep. At training time, we recurrently compute the temporal representation at each timestep \(t_{0},t_{0}+1,\ldots t_{0}+T\), decode a pose for every timestep, and compute losses over all the predicted poses simultaneously. Thus, the final training objective is
\[L_{\mathrm{pose}}=\sum_{t=1}^{T}\sum_{i=1}^{K}\left(L_{\mathrm{joint},t}^{i}+L_{\mathrm{joint},t+1}^{i}\right) \tag{8}\]
where \(L_{\mathrm{joint},t}^{i}\) is the L1 distance between the predicted and ground truth pose at time \(t\). Providing supervision to the network at every timestep allows the network to learn a representation that encodes the motion between consecutive frames while enabling temporal smoothness between predictions. As we show in Section 4.5, this technique is crucial to our model's performance.
While training, we run the network \(T\) times, once for each input timestep. However, at inference time, we save the previous embeddings and only receive input images at a single timestep \(t\), significantly reducing the computational burden and allowing our model to use temporal information without sacrificing efficiency.
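Schematically, the recurrent head can be summarized as follows (module internals are assumed: a generic convolutional GRU cell with signature `(input, hidden) -> hidden` stands in for the SpatialGRU of [2], and future steps are advanced with an all-zero input as a placeholder for "no new observation").

```python
import torch
import torch.nn as nn

class TemporalPoseHead(nn.Module):
    """Schematic of the recurrent pose head: fuse warped past features with the
    current ones, decode a pose now, and roll the state forward to forecast."""

    def __init__(self, spatial_gru: nn.Module, pose_decoder: nn.Module, horizon: int = 3):
        super().__init__()
        self.gru = spatial_gru          # assumed convolutional GRU cell: (x, h) -> h'
        self.decoder = pose_decoder     # assumed module mapping a feature map to 3D joints
        self.horizon = horizon          # number of future steps to forecast

    def forward(self, curr_feats, warped_prev_state):
        # Current-frame state: combine new evidence with the warped history.
        state = self.gru(curr_feats, warped_prev_state)
        poses = [self.decoder(state)]                   # pose at time t
        future_state = state
        for _ in range(self.horizon):                   # forecast t+1, t+2, ...
            future_state = self.gru(torch.zeros_like(curr_feats), future_state)
            poses.append(self.decoder(future_state))
        # `state` is cached and warped at the next timestep, so inference only
        # ever consumes images from a single timestep.
        return poses, state
```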
## 4 Experiments
### Datasets and Metrics
**Panoptic Studio** The CMU Panoptic Studio dataset [25] is a large multi-view pose estimation dataset with several synchronized camera sequences of multiple interacting subjects. Following prior work [49, 55, 51, 32], we use five HD cameras, specifically cameras 3, 6, 12, 13, and 23. We also use the same training and test split as these works, omitting the sequence 160906_band3 due to data corruption.
**Human 3.6M** The Human3.6M dataset [22, 21] consists of videos of a single subject in an indoor studio with four static cameras. Each video has a professional actor performing a specific action. We follow the training-test split of prior works, using subjects 9 and 11 for validation and the others for training, while omitting corrupted sequences.
**Campus and Shelf** The Campus and Shelf datasets [3] contain approximately 4000 frames of a single scene. While these datasets are commonly used for benchmarking in previous work, they are missing many annotations. We follow previous work [49, 55, 51] and adopt a synthetic heatmap-based scheme for training.
**EgoHumans** We include the newly collected EgoHumans multi-view dataset [28]. This benchmark consists of approximately 1 hour of video of up to 5 subjects performing highly dynamic activities, such as playing tag, fencing, or group assembly. It contains videos from up to eight fisheye cameras and includes both egocentric and exocentric camera and pose data.
**Metrics** For the Panoptic Studio, Human3.6M, and EgoHumans datasets, we report the mean per-joint position error (MPJPE). We additionally report the Average Precision (\(\mathrm{AP}_{K}\)) on the Panoptic and EgoHumans datasets and report MPJPE for Human3.6M in line with previous work [39, 36, 23, 44]. On the Shelf and Campus dataset we report the Percentage of Correct Parts (PCP3D). For pose forecasting, we measure the MPJPE between the predicted pose and the ground truth pose 0.33s into the future, matching previous work [58].
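For completeness, MPJPE as reported here is simply the mean Euclidean distance between matched predicted and ground-truth joints, e.g.:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error between matched poses.

    pred, gt: (J, 3) arrays of predicted and ground-truth joint locations (mm).
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```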
Figure 3: A closer look at the temporal representation used by our model. Following [55], we first project the feature volume to each of the three planes, and concatenate the projections channel-wise. We pass this feature map through an encoder network. We use this feature encoding as input to the SpatialGRU module, using the spatially warped pose feature from the previous timestep as a hidden state. We use the SpatialGRU module to produce features at the current and future timesteps, which we decode into human poses with the pose decoder network.
### Implementation Details
Following [49, 55, 58, 51] we use a ResNet-50 backbone pre-trained on the Panoptic dataset and follow [23] by also using a ResNet-50 backbone pre-trained on Human3.6M for the dataset-specific pose estimation results. We use HRNet [47] pre-trained on COCO as the model backbone for the generalization experiments, as done in [44] rather than pose estimation backbones that are trained on multi-view datasets. All methods are trained on 8 NVIDIA A100 GPUs with batch size of 2 per GPU. We use Adam with a learning rate of 3e-4, with weight decay of 1e-4 and a linear decay schedule for 10 epochs. We measure FPS using a single A100 GPU, and our code is based off the MMPose [11] library. Additional architectural details are in the supplement.
### Pose Estimation Results
We first compare our method to other state-of-the-art methods. On the Panoptic Studio dataset, we report results following [55, 49, 51] and use \(960\times 512\) images and initialize from a ResNet-50 [18, 53] checkpoint pretrained on the Panoptic dataset. We also evaluate our method using an HRNet [47] backbone pretrained on COCO with \(384\times 384\) images for a fair comparison with TesseTrack. In Table 1 we provide a complete comparison across the state-of-the-art methods, with baselines trained on the same image resolutions and backbones for completeness. TEMPO achieves significantly lower MPJPE across both resolutions and backbones, while running at 29.3 FPS, competitive with Faster VoxelPose. We attribute this performance increase to the smoother and more temporally consistent skeletons our model produces due to its incorporation of temporal context and temporal supervision. We also show in Table 2 that our model achieves performance competitive with the state of the art on the Campus and Shelf datasets.
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline Method & Backbone & Resolution & \(\text{AP}_{25}\uparrow\) & \(\text{AP}_{50}\uparrow\) & \(\text{AP}_{100}\uparrow\) & \(\text{AP}_{150}\uparrow\) & MPJPE (mm) \(\downarrow\) & FPS (s)\(\uparrow\) \\ \hline VoxelPose[49] & ResNet-50 & \(960\times 512\) & 83.59 & 98.33 & 99.76 & 99.91 & 17.68 & 3.2 \\ Faster VoxelPose [55] & ResNet-50 & \(960\times 512\) & 85.22 & 98.08 & 99.32 & 99.48 & 18.26 & **31.1** \\ PlaneSweepPose [32] & ResNet-50 & \(960\times 512\) & 92.12 & 98.96 & 99.81 & 99.84 & 16.75 & 4.3 \\ MvP [32] & ResNet-50 & \(960\times 512\) & **92.28** & 96.6 & 97.45 & 97.69 & 15.76 & 3.6 \\ Ours & ResNet-50 & \(960\times 512\) & 89.01 & **99.08** & **99.76** & **99.93** & **14.68** & 29.3 \\ \hline VoxelPose[49] & HRNet & \(384\times 384\) & 82.44 & **98.55** & 99.74 & 99.92 & 17.63 & 2.3 \\ Faster VoxelPose [55] & HRNet & \(384\times 384\) & 81.69 & 98.38 & 99.67 & 99.83 & 18.77 & 22.4 \\ MvP [51] & HRNet & \(384\times 384\) & **90.41** & 96.32 & 97.39 & 97.89 & 16.34 & 2.8 \\ TesseTrack\({}^{\dagger}\)[44] & HRNet & \(384\times 384\) & 86.24 & 98.29 & 99.72 & 99.50 & 16.92 & 0.6 \\ Ours & HRNet & \(384\times 384\) & 89.32 & 98.48 & **99.73** & **99.94** & **15.99** & 20.3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Pose estimation results on the CMU Panoptic dataset. Our method achieves the best MPJPE and AP while running at speed comparable to Faster VoxelPose. We evaluate our methods at \(384\times 384\) resolution on the Panoptic dataset, as well as the higher resolution used in other methods. We mark TesseTrack [44] with a \(\dagger\) as their reported results are on a different data split, and the results in this table are from our best reproduction of the method, which is not public.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{4}{c}{Shelf} & \multicolumn{4}{c}{Campus} \\ \cline{2-9} Method & Actor-1 & Actor-2 & Actor-3 & Average & Actor-1 & Actor2 & Actor3 & Average \\ \hline Belagiannis et al. [3] & 66.1 & 65.0 & 83.2 & 71.4 & 82.0 & 72.4 & 73.7 & 75.8 \\ Ershadi et al. [15] & 93.3 & 75.9 & 94.8 & 88.0 & 94.2 & 92.9 & 84.6 & 90.6 \\ Dong et al.[12] & 98.8 & 94.1 & 97.8 & 96.9 & 97.6 & 93.3 & 98.0 & 96.3 \\ VoxelPose [49] & 99.3 & 94.1 & 97.6 & 97.0 & 97.6 & 93.8 & 98.8 & 96.7 \\ Faster VoxelPose[55] & 99.4 & 96.0 & 97.5 & 97.6 & 96.5 & 94.1 & 97.9 & 96.2 \\ PlaneSweepPose[32] & 99.3 & **96.5** & 98.0 & 97.9 & 98.4 & 93.7 & **99.0** & 97.0 \\ MvP [51] & 99.3 & 95.1 & 97.8 & 97.4 & 98.2 & 94.1 & 97.4 & 96.6 \\
**TEMPO** (Ours) & 99.0 & 96.3 & **98.2** & **98.0** & 97.7 & **95.5** & 97.9 & **97.3** \\ \hline \hline \end{tabular}
\end{table}
Table 2: PCP3D accuracy on the Campus and Shelf datasets. We follow the protocol of previous methods and train our backbone on synthetic heatmaps of ground-truth poses. Our method achieves results comparable to the state-of-the-art.
We compare pose estimation performance on the Human3.6M and EgoHumans datasets in Table 4. Our results are comparable to the state of the art. Notably, [23] uses ground-truth locations and cropped bounding boxes from the input views, while our method is able to match performance despite simultaneously detecting people in the scene. Furthermore, our method significantly outperforms others on the more challenging EgoHumans benchmark, suggesting that temporal context is crucial for handling rapid motion.
### Pose Tracking and Forecasting
We compare the performance of TEMPO's tracking to VoxelTrack [57] in Table 3a. Our tracker is competitive but performs slightly worse, which is expected due to its lack of learned Re-ID features. To our knowledge, pose forecasting has not been attempted for the multi-view case, so we compare against the closest forecasting method, Snipper [58], which estimates future pose from monocular video. Our method takes as input 4 previous timesteps and predicts the pose over the next 3 timesteps, which is 0.33 seconds into the future, matching prior work [58]. Our model achieves state-of-the-art performance, as shown in Table 3a.
We conducted an additional experiment to measure the performance of our model in transfer on different datasets. The standard practice in monocular 3D pose estimation is to train on and evaluate performance on a combination of multiple datasets. However, the predominant paradigm for
Figure 4: Samples of our model’s pose estimation performance on the Panoptic Studio, Human3.6M, and EgoHumans datasets. TEMPO predicts accurate poses and tracks them over time.
evaluating multi-view pose estimation methods has been to train a single model only on a single dataset and evaluate it on the same dataset. This severely limits the potential of these models to generalize across different settings and thus be deployed in real-world scenarios. Similar to [23], we evaluate the performance of our model trained on multiple combinations of annotated datasets, and report the results for each combination in Table 3b. We run our model with no fine-tuning and the same voxel size of 10 cm\({}^{3}\) across each dataset. Our method transfers and successfully tracks and predicts pose, but performs noticeably worse, likely due to being trained on a single camera configuration. In particular, the CMU Panoptic training dataset uses 5 cameras, whereas Human3.6M uses 4 and EgoHumans uses 8 fisheye cameras. The model has the most trouble generalizing to EgoHumans, likely due to the significantly larger indoor space and different camera models. We find that TEMPO performs better in transfer after including training data from each dataset, especially on the EgoHumans dataset, suggesting that future works should include diverse multi-view data from different camera configurations and camera models in order to better generalize.
### Ablations
We train ablated models to study the impact of individual components in our method. All our experiments are conducted on the CMU Panoptic dataset, with results shown in Table 5. We find that using temporal information alone helps slightly, but with per-timestep supervision the model improves greatly. Warping the pose between timesteps further improves the model. We hypothesize that the improvement from warping is small due to the relative lack of motion in the Panoptic dataset - the distance between body centers in consecutive timesteps is usually small. We also measure the effect of the history length on performance and find no significant difference. While intuitively, larger history lengths should provide more context, the GPU memory constraints of our method prevent investigating \(T>5\). We also ablate the number of cameras on the Panoptic dataset (Table 6). We find that the MPJPE increases as the number of cameras decreases, matching the findings of [49, 23] and [44].
## 5 Conclusions
Understanding human behavior from video is a fundamentally temporal problem, requiring accurate and efficient pose estimation algorithms that can reason over time. We presented the first method that satisfies these requirements, achieving state-of-the-art results on existing multi-view pose estimation benchmarks by using temporal consistency as a learning objective. Our model is also highly efficient, relying on recurrence to maintain a temporal state while enabling pose tracking and forecasting. TEMPO represents a step closer to general-purpose human behavior understanding from video.
## Acknowledgements
This research was supported partially by Fujitsu.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Cameras & 1 & 2 & 3 & 4 & 5 \\ MPJPE & 51.32 & 32.13 & 19.22 & 17.34 & 14.68 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation on number of cameras. We observe that performance decreases with decreasing number of cameras.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline Method & \(T\) & Forecasting & Warping & Per-\(t\) loss & MPJPE \(\downarrow\)(mm) \\ \hline (a) & 3 & & & & 17.83 \\ (b) & 3 & & & ✓ & 15.03 \\ (c) & 3 & ✓ & & ✓ & 14.94 \\ (d) & 3 & ✓ & ✓ & ✓ & 14.68 \\ (e) & 4 & ✓ & ✓ & ✓ & 14.90 \\ (f) & 5 & ✓ & ✓ & ✓ & 14.82 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study on various components to our model. The most important component to performance was the per-timestep supervision. Warping the previous feature also improved performance. We observed the forecasting and slightly increasing the length of the input history had no noticeable effect on performance.
|
2310.20214 | The external photoevaporation of structured protoplanetary disks | The dust in planet-forming disks evolves rapidly through growth and radial
drift, and external photoevaporation also contributes to this evolution in
massive star-forming regions. We test whether the presence of substructures can
explain the survival of the dust component and observed millimeter continuum
emission in protoplanetary disks located within massive star-forming regions.
We also characterize the dust content removed by the photoevaporative winds.
For this, we performed hydrodynamical simulations of protoplanetary disks
subject to irradiation fields of $F_{UV} = 10^2$, $10^3$, and $10^4\, G_0$,
with different dust trap locations. We used the FRIED grid to derive the mass
loss rate for each irradiation field and disk properties, and then measure the
evolution of the dust mass over time. For each simulation we estimate continuum
emission at $\lambda = 1.3\, \textrm{mm}$ along with the radii encompassing
$90\%$ of the continuum flux, and characterize the dust size distribution
entrained in the photoevaporative winds, along with the resulting
far-ultraviolet (FUV) cross section. Our simulations show that the presence of
dust traps can extend the lifetime of the dust component of the disk to a few
million years if the FUV irradiation is $F_{UV} \lesssim 10^3 G_0$, but only if
the dust traps are located inside the photoevaporative truncation radius. The
dust component of a disk quickly disperses if the FUV irradiation is strong
($10^4\, G_0$) or if the substructures are located outside the photoevaporation
radius. We do find however, that the dust grains entrained with the
photoevaporative winds may result in an absorption FUV cross section of $\sigma
\approx 10^{-22}\, \textrm{cm}^2$ at early times of evolution ($<$0.1 Myr),
which is enough to trigger a self-shielding effect that reduces the total mass
loss rate, and slow down the disk dispersal in a negative feedback loop
process. | Matías Gárate, Paola Pinilla, Thomas J. Haworth, Stefano Facchini | 2023-10-31T06:25:30Z | http://arxiv.org/abs/2310.20214v1 | # The external photoevaporation of structured protoplanetary disks
###### Abstract
Context: The dust in planet-forming disks is known to evolve rapidly through growth and radial drift. In the high irradiation environments of massive star-forming regions where most stars form, external photoevaporation also contributes to rapid dispersal of disks. This raises the question of why we still observe quite high disk dust masses in massive star-forming regions.
Aims: We test whether the presence of substructures is enough to explain the survival of the dust component and observed millimeter continuum emission in protoplanetary disks located within massive star-forming regions. We also characterize the dust content removed by the photoevaporative winds.
Methods: We performed hydrodynamical simulations (including gas and dust evolution) of protoplanetary disks subject to irradiation fields of \(F_{UV}=10^{2}\), \(10^{3}\), and \(10^{4}\,G_{0}\), with different dust trap configurations. We used the FRIED grid to derive the mass loss rate for each irradiation field and disk properties, and then proceeded to measure the evolution of the dust mass over time. For each simulation we estimated the continuum emission at \(\lambda=1.3\) mm along with the radii encompassing 90% of the continuum flux, and characterized the dust size distribution entrained in the photoevaporative winds, in addition to the resulting far-ultraviolet (FUV) cross section.
Results: Our simulations show that the presence of dust traps can extend the lifetime of the dust component of the disk to a few million years if the FUV irradiation is \(F_{UV}\lesssim 10^{3}\,G_{0}\), but only if the dust traps are located inside the photoevaporative truncation radius. The dust component of a disk will be quickly dispersed if the FUV irradiation is strong (\(10^{4}\,G_{0}\)) or if the substructures are located outside the photoevaporation radius. We do find, however, that the dust grains entrained with the photoevaporative winds may result in an absorption FUV cross section of \(\sigma\approx 10^{-22}\,\mathrm{cm}^{2}\) at early times of evolution (\(<\)0.1 Myr), which is enough to trigger a self-shielding effect that reduces the total mass loss rate, and slow down the disk dispersal in a negative feedback loop process.
Conclusions:
## 1 Introduction
Protoplanetary disks are composed of the gas and dust material that orbits around newly formed stars. The evolution of isolated protoplanetary disks is often modeled by accounting for internal processes such as viscous accretion driven by magneto-rotational instabilities (MRI), magneto-hydrodynamic (MHD) winds, and/or thermal photoevaporative winds due to the irradiation from the central star (Lynden-Bell & Pringle, 1974; Balbus & Hawley, 1991; Blandford & Payne, 1982; Clarke et al., 2001; Pascucci et al., 2023).
However, growing observational evidence suggests that the irradiation from the environment also plays a significant role in the disk evolution in dense stellar regions such as the Orion Nebular Cluster, Cygnus, and Upper Sco (see Guarcello et al., 2016; Otter et al., 2021; van Terwisga & Hacar, 2023, Anania et al., in prep.). In these regions, observations report particularly compact disk sizes, which point towards photoevaporative truncation, and disks that exhibit strong morphological signatures of mass loss (e.g., O'Dell et al., 1993; McCullough et al., 1995; Bally et al., 1998; Mann et al., 2014; Ballering et al., 2023).
The observational evidence is also consistent with theoretical models that predict significant mass loss rates due to the environmental radiation in a process known as external photoevaporation (e.g. Scally & Clarke, 2001; Adams et al., 2004; Anderson et al., 2013; Facchini et al., 2016; Haworth et al., 2018). However, one of the open questions regarding the disks in these highly irradiated regions is why these disks have not yet dispersed if the mass loss rates experienced are so high, which has also been dubbed the "proplyd lifetime problem" (Henney & O'Dell, 1999).
Some of the solutions proposed to this problem include taking into account different star formation events in a given region, which means that some of the disks are actually younger than the cluster itself, and also considering that stars (along with their protoplanetary disks) migrate within the cluster, and therefore they experience a varying ultra-violet (UV) irradiation during their lifetime (Winter et al., 2019, 2019). Additionally, disks may be shielded from external irradiation by an optically thick envelope during the early stages of evolution, delaying the starting time of the photoevaporation process (Qiao et al., 2022; Wilhelm et al., 2023).
However, besides the survival of the gas component which is directly dispersed by external photoevaporation, it is also necessary to explain the survival of the dust component, which is observed in the millimeter continuum (Eisner et al., 2018; Otter et al., 2021). Numerical models by Sellek et al. (2020) have shown that drift is even more efficient in disks truncated by external photoevaporation, with typical depletion timescales of \(t_{depletion}\approx 2\times 10^{5}\,\mathrm{yr}\) (defined as the timescale in which 99% of the initial dust mass is lost).
In order to explain the detected millimeter fluxes and sizes, it is necessary to retain the dust for longer timescales, with dust-trapping substructures for example (Whipple, 1972; Pinilla et al., 2012). Except for very young class 0/I objects (Ohashi et al., 2023, see the recent eDisk sample), substructures such as rings and gaps appear to be a common feature in protoplanetary disks (e.g. the DSHARP sample Andrews et al., 2018), occurring even in compact disks (Zhang et al., 2023; Miley et al., in prep.). Despite this, there are currently no models that study the evolution and observability of a sub-structured disk subject to external photoevaporation. In particular, it is not clear if the solid material that is concentrated at dust traps will be able to survive the photoevaporative dispersal, or if it will be dragged along with the gas component in the thermal winds.
The goal of this paper is to characterize the evolution of the dust size distribution, the flux in the millimeter continuum, and the estimated disk size. We performed numerical simulations of disks that are subject to external photoevaporation, in cases with and without gap-(ring-)like structures.
We are interested in whether the dust traps located at the edge of gap-like structures can survive the photoevaporative dispersal process, for different UV fluxes and for different trap locations. Though we expect only the small grains to be entrained in the photoevaporative wind, the fragmentation of larger dust and the uncertain growth timescale of small dust make the outcome unclear. We also test the survival of such dust traps in the particular case of an increasing UV flux, motivated by the model of Winter et al. (2019) in which protoplanetary disks experience a variable UV irradiation during their lifetime. Finally, from our simulations we also track the distribution of the dust grains entrained with the winds, and use that information to re-estimate the effective opacities at UV wavelengths, which could result in a self-shielding effect that would protect the disk from the environmental irradiation (Facchini et al., 2016; Qiao et al., 2022).
The paper is structured as follows: In Section 2 we describe our gas and dust evolution model, which includes the growth and fragmentation of multiple grain species, the prescription for the gap-like structures, and the external photoevaporation model using the FRIED mass loss rate grid (Haworth et al., 2018; Sellek et al., 2020). In Section 3 we describe the simulations, the parameter space, and the calculation of different observable quantities. Section 4 shows our results, including a review of the difference between photoevaporating and non-photoevaporating disks, the evolution of our simulations for the different parameters, and the description of the dust content in the wind. Section 5 discusses the implications of our results and how they compare to observations of highly irradiated regions.
## 2 Disk model
We used the code DustPy1(Stammler and Birnstiel, 2022) to simulate the gas and dust evolution of protoplanetary disks, and include the effects of external photoevaporation.
Footnote 1: \(\mathrm{github.com/stammler/DustPy}\), version 0.5.6 was used for this work.
### Gas evolution
The gas component evolves through viscous spreading and mass loss by external photoevaporation, through the following equation:
\[\frac{\partial}{\partial t}\Sigma_{\mathrm{g}}+\frac{1}{r}\frac{\partial}{ \partial r}(r\,\Sigma_{\mathrm{g}}\,v_{r})=-\Sigma_{\mathrm{g,w}}, \tag{1}\]
where \(r\) is the radial distance to the star, \(\Sigma_{\mathrm{g}}\) is the gas surface density, and \(v_{r}\) is the radial viscous velocity (Pringle, 1981):
\[v_{r}=-\frac{3}{\Sigma_{\mathrm{g}}\sqrt{r}}\frac{\partial}{\partial r}(v_{ \alpha}\,\Sigma_{\mathrm{g}}\,\sqrt{r}), \tag{2}\]
with \(v_{\alpha}\) the kinematic viscosity:
\[v_{\alpha}=\alpha c_{s}h_{\mathrm{g}}, \tag{3}\]
which depends on the sound speed \(c_{s}\), the pressure scale height \(h_{\mathrm{g}}\), and the turbulence parameter \(\alpha\)(Shakura and Sunyaev, 1973). The adiabatic sound speed is defined as:
\[c_{s}=\sqrt{\gamma\frac{k_{B}T}{\mu m_{p}}}, \tag{4}\]
where \(\gamma\) is the adiabatic index, \(T\) is the gas temperature, \(k_{B}\) is the Boltzmann constant, \(\mu\) is the mean molecular weight, and \(m_{p}\) is the proton mass.
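To make the dependencies in Equations 2-4 concrete, the short Python sketch below evaluates the sound speed, pressure scale height, viscosity, and viscous velocity on a radial grid. The surface density normalization and the grid are illustrative placeholders (they are not taken from our DustPy setup), and the derivative in Equation 2 is approximated with finite differences.

```python
import numpy as np

# Physical constants in cgs units
k_B = 1.380649e-16    # Boltzmann constant [erg/K]
m_p = 1.6726219e-24   # proton mass [g]
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33      # solar mass [g]
AU = 1.496e13         # astronomical unit [cm]

def sound_speed(T, gamma=1.4, mu=2.3):
    """Adiabatic sound speed, Eq. (4)."""
    return np.sqrt(gamma * k_B * T / (mu * m_p))

def viscous_velocity(r, sigma_g, nu):
    """Radial viscous velocity of the gas, Eq. (2), with a finite-difference gradient."""
    x = nu * sigma_g * np.sqrt(r)
    return -3.0 / (sigma_g * np.sqrt(r)) * np.gradient(x, r)

# Illustrative disk around a 1 M_sun star
r = np.logspace(np.log10(2.5), np.log10(500.0), 200) * AU
T = 205.0 * (r / AU) ** -0.5            # temperature profile, Eq. (12)
c_s = sound_speed(T)
h_g = c_s / np.sqrt(G * M_sun / r**3)   # pressure scale height h_g = c_s / Omega_K
nu = 1e-3 * c_s * h_g                   # Eq. (3) with alpha = 1e-3
sigma_g = 1000.0 * (r / AU) ** -1.0     # placeholder surface density [g/cm^2]
v_r = viscous_velocity(r, sigma_g, nu)
```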
The surface density loss rate \(\dot{\Sigma}_{\mathrm{g,w}}\), was derived from the FRIED grid from Haworth et al. (2018), following the implementation of Sellek et al. (2020, see their equations 5 - 7). To describe it briefly, this method first predicts the expected total mass loss rate \(\dot{M}_{\mathrm{w}}\), for a disk with surface density \(\Sigma_{\mathrm{g}}\), around a star with mass \(M_{*}\), subject to an external FUV radiation field \(F_{\mathrm{UV}}\), along with the photoevaporative radius \(r_{\mathrm{phot}}\) which is located at the interface between the optically thick and optically thin regions in the outer disk. Then, the total mass loss rate \(\dot{M}_{\mathrm{w}}\) is distributed across all the grid cells in the photoevaporative region (\(r\geq r_{\mathrm{phot}}\)), according to the material available in each grid cell. Figure 1 shows an example of the mass loss grid, with a surface density profile overlaid on top to illustrate the location of the photoevaporation radius and the regions affected by the mass loss.
This implementation assumes that the mass loss by external photoevaporation occurs only in the outer regions of the disk, and loss from the inner regions is negligible. For more details
Figure 1: Example mass loss rate grid for a disk orbiting a \(1\,\mathrm{M}_{\odot}\) star subject to an irradiation of \(F_{UV}=10^{3}\,G_{0}\). The grid determines the gas mass loss \(\dot{M}_{\mathrm{w}}\) as a function of the gas surface density \(\Sigma_{\mathrm{g}}\) and radii \(r\), using the FRIED grid from Haworth et al. (2018) and the implementation of Sellek et al. (2020). The solid white line shows an example gas surface density profile, the dashed vertical line indicates the photoevaporative truncation radius \(r_{phot}\), and the solid red line indicates the regions which are subject to external photoevaporation. The photoevaporation radius evolves with the simulation, and corresponds to the maximum of the mass loss rate along the current surface density profile.
about the implementation, we refer to their original paper.2 For simplicity, we did not include internal MHD or photoevaporative winds, which would be launched from smaller radii.
Footnote 2: The implementation of external photoevaporation in DustPy is available at:
github.com/matgraate/dustpy_module_externalPhotoevaporation,
and will also be included within the DustPy library package at:
github.com/stammler/dustpylib.
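As a rough sketch of how the total mass loss rate is redistributed, the snippet below spreads a given \(\dot{M}_{\rm w}\) over the cells outside \(r_{\rm phot}\) in proportion to the mass they contain. The values of \(\dot{M}_{\rm w}\) and \(r_{\rm phot}\) are assumed to come from the FRIED grid interpolation, which is not reproduced here; this is only a simplified stand-in for the Sellek et al. (2020) prescription, not the actual module listed above.

```python
import numpy as np

def external_photoevaporation_sigma_dot(r, sigma_g, mdot_wind, r_phot):
    """Distribute a total wind mass-loss rate [g/s] over the annuli outside r_phot,
    weighting each cell by the gas mass it holds (all quantities in cgs)."""
    dr = np.gradient(r)
    cell_mass = 2.0 * np.pi * r * sigma_g * dr   # gas mass per annulus
    outside = r >= r_phot
    sigma_dot = np.zeros_like(sigma_g)
    m_out = cell_mass[outside].sum()
    if m_out > 0.0:
        # convert the cell mass-loss rate back into a surface-density loss rate
        mdot_cell = mdot_wind * cell_mass[outside] / m_out
        sigma_dot[outside] = mdot_cell / (2.0 * np.pi * r[outside] * dr[outside])
    return sigma_dot
```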
### Dust dynamics
DustPy tracks the evolution of multiple grain sizes, that can grow through sticking, fragmentation into smaller species, drift towards the local pressure maximum, and diffuse according to the concentration gradient, following the model from Birnstiel et al. (2010). The corresponding advection-diffusion equation is:
\[\frac{\partial}{\partial t}\Sigma_{\rm d}+\frac{1}{r}\frac{\partial}{\partial r }(r\,\Sigma_{\rm d}\,v_{\rm d})-\frac{\partial}{\partial r}\left(rD_{\rm d} \Sigma_{\rm g}\frac{\partial}{\partial r}\left(\frac{\Sigma_{\rm d}}{\Sigma_{ \rm g}}\right)\right)=-\dot{\Sigma}_{\rm d,w}, \tag{5}\]
where \(\Sigma_{\rm d}\) is the dust surface density, \(v_{\rm d}\) is the corresponding radial velocity, \(D_{d}\) is the dust diffusivity, and \(\dot{\Sigma}_{\rm d,\,w}\) is the dust loss by wind entrainment. We note that we solve this equation for every dust size bin (along with the coagulation equation, see Section 2.4) and therefore all dust quantities are dependent on the grain size.
Overall, the dust dynamics can be described by its particle size \(a\), or more specifically, by the Stokes number St, which measures the coupling between gas and dust with:
\[{\rm St}=\frac{\pi}{2}\frac{a\,\rho_{s}}{\Sigma_{\rm g}}\cdot\begin{cases}1&\lambda_{\rm mfp}/a\geq 4/9\,-\,{\rm Epstein},\\ \frac{4}{9}\frac{a}{\lambda_{\rm mfp}}&\lambda_{\rm mfp}/a<4/9\,-\,{\rm Stokes\,I},\end{cases} \tag{6}\]
with \(\rho_{s}\) the material density of the dust, and \(\lambda_{\rm mfp}\) the mean free path of the gas molecules, where the latter is used to determine the corresponding drag regime (Epstein or Stokes I).
Given the Stokes number, the radial advection velocity of the dust is defined by:
\[v_{\rm d}=\frac{1}{1+{\rm St}^{2}}v_{r}-\frac{2{\rm St}}{1+{\rm St}^{2}}\eta v_{K}. \tag{7}\]
This means that small grains (\({\rm St}\ll 1\)) become coupled to the gas motion \(v_{r}\), large boulders (\({\rm St}\gg 1\)) become decoupled from the gas, and mid-sized pebbles (\({\rm St}\sim 1\)) drift towards the pressure maximum with a velocity of \(v_{\rm d}\approx-\eta v_{K}\), where \(\eta=-(1/2)\,(h_{\rm g}/r)^{2}\,{\rm dln}P/{\rm dln}\,r\) measures the relative difference between the gas orbital speed and the local Keplerian speed \(v_{K}\) due to the pressure support of the gas \(P=\Sigma_{\rm g}c_{\rm s}^{2}/(\sqrt{2\pi}\,h_{\rm g})\) (Weidenschilling, 1977; Nakagawa et al., 1986; Takeuchi & Lin, 2002).
We note that the effect of the pressure gradient on the dust dynamics is measured at the midplane, since large particles settle down to smaller scale heights than the gas, with \(h_{\rm d}=h_{\rm g}\sqrt{\alpha/(\alpha+{\rm St})}\)(Dubrulle et al., 1995). Observational evidence of effective dust settling has recently been confirmed by observations of a handful of edge-on disks at different wavelengths(Villenave et al., 2020). The radial diffusivity is characterized by \(D_{\rm d}=v_{\rm a}/({\rm St}^{2}+1)\), where smaller particles diffuse more easily than larger ones (Youdin & Lithwick, 2007).
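The following Python sketch collects these dust-dynamics relations (Stokes number, radial velocity, settling, and diffusivity). It is only meant to make the dependence on the Stokes number explicit; it is not the DustPy implementation, and all inputs are assumed to be given in cgs units.

```python
import numpy as np

def stokes_number(a, rho_s, sigma_g, lam_mfp):
    """Stokes number in the Epstein and Stokes I drag regimes, Eq. (6)."""
    st_epstein = 0.5 * np.pi * a * rho_s / sigma_g
    in_epstein = lam_mfp / a >= 4.0 / 9.0
    return np.where(in_epstein, st_epstein, st_epstein * 4.0 * a / (9.0 * lam_mfp))

def dust_velocity(st, v_gas, eta, v_k):
    """Radial dust velocity, Eq. (7): gas advection plus drift toward the pressure maximum."""
    return v_gas / (1.0 + st**2) - 2.0 * st / (1.0 + st**2) * eta * v_k

def dust_scale_height(h_g, st, alpha):
    """Settling-limited dust scale height (Dubrulle et al. 1995)."""
    return h_g * np.sqrt(alpha / (alpha + st))

def dust_diffusivity(nu, st):
    """Radial dust diffusivity (Youdin & Lithwick 2007)."""
    return nu / (1.0 + st**2)
```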
To include the effect of dust entrainment in the photoevaporative wind, we followed the prescription from Sellek et al. (2020), where grains smaller than the entrainment size can be lost with the wind:
\[a_{\rm ent}=\sqrt{\frac{8}{\pi}}\frac{c_{s}}{GM_{*}}\frac{\dot{M}_{\rm w}}{\Omega_{s}\rho_{s}}, \tag{8}\]
where \(\Omega_{s}=4\pi h_{\rm g}/\sqrt{h_{\rm g}^{2}+r^{2}}\) is the solid angle covered by the disk outer edge, as seen from the star. Then, the surface density loss rate for dust grains with sizes \(a\leq a_{\rm ent}\) is:
\[\dot{\Sigma}_{\rm d,w}(a)=\epsilon(a)\dot{\Sigma}_{\rm g,w}, \tag{9}\]
with \(\epsilon(a)\) the dust-to-gas ratio of the grains in the bin size \(a\).
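A minimal sketch of the entrainment criterion and the resulting dust loss term (Eqs. 8-9) is shown below; the function names and the assumption that all inputs are in cgs units are ours.

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cgs]

def entrainment_size(c_s, M_star, mdot_wind, h_g, r, rho_s):
    """Maximum grain size that can be entrained in the wind, Eq. (8)."""
    omega_solid = 4.0 * np.pi * h_g / np.sqrt(h_g**2 + r**2)  # solid angle of the disk edge
    return np.sqrt(8.0 / np.pi) * c_s / (G * M_star) * mdot_wind / (omega_solid * rho_s)

def dust_wind_loss(a, eps_a, sigma_dot_gas, a_ent):
    """Dust surface-density loss rate per size bin, Eq. (9):
    only grains below the entrainment size leave with the gas."""
    return np.where(a <= a_ent, eps_a * sigma_dot_gas, 0.0)
```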
In terms of disk evolution, this could lead to an outcome where the dust grains in the outer regions are either entrained at early phases during the disk lifetime, or grow and drift before they can be entrained with the wind. The study of Sellek et al. (2020) suggests that the former scenario occurs in protoplanetary disks, though differences may arise when including the full coagulation of multiple grain sizes in the model, instead of the two-population approximation from Birnstiel et al. (2012b).
### Substructures and dust traps
To include the effects of substructures capable of trapping dust grains (e.g. such as the gaps formed by giant planets), we used the same approach as in Stadler et al. (2022), which implements a perturbation in the viscous \(\alpha\) profile, that in turn creates a gap-like structure in the gas surface density profile. The perturbation in the turbulence has the shape of a Gaussian bump:
\[\alpha(r)=\alpha_{0}\times\left(1+A_{\rm gap}\exp\left(-\frac{\left(r-r_{ \rm gap}\right)^{2}}{2w_{\rm gap}^{2}}\right)\right), \tag{10}\]
where, \(\alpha_{0}\) is the turbulence base value, and \(A_{\rm gap}\), \(r_{\rm gap}\), and \(w_{\rm gap}\) are the gap amplitude, location, and width, respectively. From viscous evolution theory, a power-law gas surface density profile is scaled approximately by a factor of \(\Sigma_{\rm g}(r)\propto\alpha_{0}/\alpha(r)\), assuming that the disk is in steady state accretion.
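A short sketch of the perturbed turbulence profile and the corresponding steady-state surface density scaling is given below; the specific gap parameters are placeholders rather than the values used in our runs.

```python
import numpy as np

def alpha_profile(r, alpha0, A_gap, r_gap, w_gap):
    """Gaussian bump in the turbulence parameter, Eq. (10)."""
    return alpha0 * (1.0 + A_gap * np.exp(-(r - r_gap)**2 / (2.0 * w_gap**2)))

def gap_scaled_sigma(sigma_smooth, alpha0, alpha_r):
    """Steady-state accretion scaling, Sigma_g ~ alpha0 / alpha(r)."""
    return sigma_smooth * alpha0 / alpha_r

# Example: amplitude-4 bump centered at 30 AU with a 5 AU width (placeholder values)
r = np.linspace(2.5, 500.0, 200)               # [AU]
alpha_r = alpha_profile(r, 1e-3, 4.0, 30.0, 5.0)
```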
The presence of a gap-like substructure then leads to the formation of a pressure maximum, where large particles (\({\rm St}\gtrsim\alpha\)) can be easily trapped (Pinilla et al., 2012a, see also Equation 7), though we note that small particles would still be able to pass through the gap by coupling with the gas component.
### Grain growth
Dust growth was computed by solving the coagulation equation (Smoluchowski, 1916), which accounts for the result of the collision between two grain species given their relative velocities and sizes, which can be the sticking between particles, the fragmentation of both species, and the erosion in case of a significant size difference (for more details we refer to Birnstiel et al., 2010; Stammler & Birnstiel, 2022).
There are two characteristic regimes of grain growth: the drift-limited, when particles drift inward faster than they can grow, and the fragmentation-limited, when particles collide at speeds higher than the fragmentation threshold of the material \(v_{\rm frag}\)(Brauer et al., 2008; Birnstiel et al., 2009, 2012b). In particular, in pressure maxima, such as the one in the outer edge of a gap, the contribution of drift to both the radial motion and the relative collision velocities between dust grains is cancelled, allowing the particles to locally accumulate and grow into larger sizes until they reach the size limit given by the fragmentation barrier.
## 3 Simulation setup
In this section we describe the initial conditions of our simulations, the parameter space explored, and the post-processing method used to derive the observable fluxes from dust grain size distribution.
### Initial conditions and grid resolution
The initial surface density profile was set according to a modified version of the Lynden-Bell and Pringle (1974) self-similar solution, which includes gap-like substructures that are consistent with the turbulence radial profile (see Equation 10):
\[\Sigma_{\rm g}(r)=\frac{M_{\rm disk}}{2\pi r_{c}^{2}}\left(\frac{r}{r_{c}} \right)^{-1}\exp(-r/r_{c})\frac{\alpha_{0}}{\alpha(r)}, \tag{11}\]
where \(M_{\rm disk}\) and \(r_{c}\) are the initial disk mass and characteristic radius.
The gas temperature for the simulations follows a power law profile, with:
\[T_{\rm g}(r)=T_{0}\left(\frac{r}{1\,{\rm AU}}\right)^{-1/2}, \tag{12}\]
where \(T_{0}\) is the temperature at 1 AU. We note that this profile assumes that the heating is dominated by the passive stellar irradiation on the disk, and neglects the contribution of accretion heating and the radiation from nearby stars. The dust distribution is initialized assuming a uniform constant dust-to-gas ratio, with \(\Sigma_{\rm d}=\epsilon_{0}\Sigma_{\rm g}\), following the ISM size distribution from Mathis et al. (1977), which goes from 0.5 \(\mu\)m to an initial maximum grain size of \(a_{0}=1\,\mu\)m.
The radial grid was set to be linearly spaced in \(r^{1/2}\), going from 2.5 AU to 500 AU, with \(n_{r}=200\) grid cells. The FRIED grid is tabulated out to 400 AU; however, in our models the photoevaporative truncation radius is always smaller than the outer radial grid boundary of the FRIED grid and no extrapolation in the 400-500 AU range is needed. The grid for the dust size distribution was set with a logarithmic spacing, going from approx. 0.5 \(\mu\)m to 50 cm with \(n_{m}=127\) grid cells, such that there are 7 grid cells for each order of magnitude in mass in order to reliably solve the grain coagulation (Ohtsuki et al., 1990; Drazkowska et al., 2014; Stammler and Birnstiel, 2022). We evolved the simulations from time \(t=0\) up to 5 Myr, though some disks may fully disperse earlier due to the influence of external photoevaporation.
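For reference, the initial profile of Equation 11 and the numerical grids described above can be sketched as follows; the cell-center construction is a simplified assumption and does not reproduce the exact DustPy grid routines.

```python
import numpy as np

AU = 1.496e13      # [cm]
M_sun = 1.989e33   # [g]

def initial_sigma_gas(r, M_disk, r_c, alpha0, alpha_r):
    """Initial gas surface density, Eq. (11): self-similar profile rescaled by alpha0/alpha(r)."""
    return (M_disk / (2.0 * np.pi * r_c**2)) * (r / r_c)**-1 \
           * np.exp(-r / r_c) * (alpha0 / alpha_r)

# Radial grid: linear in sqrt(r), from 2.5 to 500 AU with n_r = 200 cells
r_edges = np.linspace(np.sqrt(2.5), np.sqrt(500.0), 201)**2 * AU
r = 0.5 * (r_edges[1:] + r_edges[:-1])

# Grain-size grid: logarithmic, ~7 cells per order of magnitude in mass
a = np.logspace(np.log10(5e-5), np.log10(50.0), 127)   # grain radius [cm]

sigma_g0 = initial_sigma_gas(r, 0.1 * M_sun, 90.0 * AU, 1e-3, 1e-3)  # no gap here
sigma_d0 = 0.01 * sigma_g0   # initial dust-to-gas ratio epsilon_0 = 0.01
```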
### Parameter space
In this work, our main focus is to explore how the presence of a gap and its location affect the retention of solids in protoplanetary disks that are subject to different FUV radiation environments. For the fiducial model we took a 1 M\({}_{\odot}\) mass star, surrounded by a disk with mass of \(M_{\rm disk}=0.1M_{*}\), and an initial characteristic radius of \(r_{c}=90\) AU. We started our study with an initial comparison against disks without photoevaporation, to have an overview of the key aspects of the disk evolution.
For the main parameter space exploration, gap-like substructures can be located at \(1/3\,r_{c}\) (inner trap) or \(2/3\,r_{c}\) (outer trap), or be completely absent (\(A_{\rm gap}=0\), no traps). With \(r_{c}=90\) AU, this means that the gaps are located either at 30 AU or 60 AU in our simulations. Bae et al. (2022) compiled data of disks with observed substructures, showing that most of the rings have been found between 20-60 AU (their histogram in Fig. 3d), as assumed in this work. Some disks have substructures up to 100-200 AU, which are outside the photoevaporation radius in our models. The disk can be subject to external photoevaporation due to far ultra-violet (FUV) fluxes of \(10^{2}\,G_{0}\)3 (weak), \(10^{3}\,G_{0}\) (medium), or \(10^{4}\,G_{0}\) (strong) (where \(G_{0}\) corresponds to the local interstellar radiation field; Habing, 1968). We note that in low mass star-forming regions such as Taurus/Lupus the external FUV radiation field is of order 1-100 \(\rm G_{0}\). In more massive stellar clusters such as the Orion Nebular Cluster the FUV radiation field strength that disks are exposed to ranges from \(1-10^{7}\,\rm G_{0}\). Previous work has suggested that the most commonly experienced UV environment is around \(10^{3}\,\rm G_{0}\) (Fatuzzo and Adams, 2008; Winter et al., 2020). Additionally, we also study how the disk evolution changes for a star with lower mass (correspondingly with lower disk temperature), a disk that is initially more compact, and a disk that is subject to a FUV radiation field that increases with time (in order to mimic, for example, the effect of migration within a stellar cluster, or the clearing of the original molecular cloud; Winter et al., 2019; Qiao et al., 2022). The complete parameter space and the additional disk physical parameters can be found in Table 1.
Footnote 3: The FUV radiation field strength is usually measured using the Habing unit, defined as \(1\rm G_{0}=1.6\times 10^{-3}\)\(\rm erg\,s^{-1}\,cm^{-2}\) integrated over the wavelength range 912–2400Å. 1 \(\rm G_{0}\) is representative of the mean interstellar FUV radiation field in the Solar neighbourhood.
### Fluxes in the millimeter continuum
To compare our models with observations, we obtain the grain size distribution from the dust evolution simulations with DustPy, which is then used to compute the intensity profile and the total flux in the millimeter continuum, at \(\lambda=1.3\) mm. We assume that the dust grains follow the opacity model from Ricci et al. (2010), and are composed of water ice, carbon and silicates (Zubko et al., 1996; Draine, 2003; Warren and Brandt, 2008).
With the opacity profile \(k_{\nu}(a)\), with \(\nu\) the frequency, we can calculate the optical depth at every radius as:
\[\tau_{\nu}=\sum_{a}\kappa_{\nu}(a)\,\Sigma_{\rm d}(a), \tag{13}\]
the intensity profile with:
\[I_{\nu}=B_{\nu}(T)\left(1-\exp(-\tau_{\nu})\right), \tag{14}\]
\begin{table}
\begin{tabular}{l l c c} \hline \hline Symbol & Description & Value & Unit \\ \hline \(F_{\rm UV}\) & External FUV Flux & \(10^{2}\), \(\bf 10^{3}\), \(10^{4}\) & \(G_{0}\) \\ \(M_{*}\) & Stellar mass & 0.3, \(\bf 1.0\) & \(M_{\odot}\) \\ \(M_{\rm disk}\) & Initial disk mass & 0.1 & \(M_{*}\) \\ \(r_{\rm c}\) & Characteristic radius & 45, \(\bf 90\) & AU \\ \(r_{\rm gap}\) & Gap locations & **1/3**, 2/3 & \(r_{\rm c}\) \\ \(A_{\rm gap}\) & Trap amplitude & **0**, 4 & - \\ \(T_{0}\) & Temperature at 1 AU & 125, \(\bf 205\) & K \\ \(\alpha_{0}\) & Viscosity parameter & \(10^{-3}\) & - \\ \(\gamma\) & Adiabatic index & 1.4 & - \\ \(\mu\) & Mean molecular weight & 2.3 & - \\ \(\epsilon_{0}\) & Initial dust-to-gas ratio & 0.01 & - \\ \(a_{0}\) & ISM maximum grain size & 1 & \(\mu\)m \\ \(v_{\rm frag}\) & Fragmentation velocity & 10 & m s\({}^{-1}\) \\ \(\rho_{s}\) & Grain material density & 1.67 & g cm\({}^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameter space. Fiducial values are highlighted in boldface.
and the total flux as \(F_{\nu}=\int I_{\nu}\,\mathrm{d}\Omega\), where \(\mathrm{d}\Omega\) is the differential of the solid angle covered by the disk in the sky, and \(B_{\nu}(T)\) is the emission of a black body with temperature \(T\). For the purpose of this work, we assumed our disks to be at a distance of \(400\,\mathrm{pc}\), which is the approximate distance to the Orion Nebular Cluster (ONC, Hirota et al., 2007; Kounkel et al., 2017), and to account for observational limitations, we also assume a beam size of \(40\,\mathrm{mas}\) and a sensitivity threshold of \(0.1\,\mathrm{mJy}\,\mathrm{beam}^{-1}\) (this is used to calculate the measured disk size \(r_{90\%}\), and the intensity profile \(I_{\nu}\)).
While the gas and the dust share the same temperature during the evolution (Eq. 12), for the calculation of the millimetre fluxes we assume that there is a background temperature of \(T_{\mathrm{b}}=20\,\mathrm{K}\), to account for the irradiation from massive stars in the interstellar space, thus:
\[T=\sqrt[4]{T_{\mathrm{g}}^{4}+T_{\mathrm{b}}^{4}}. \tag{15}\]
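The post-processing chain of Equations 13-15 can be condensed into the sketch below, which also extracts the radius enclosing 90% of the flux. Beam convolution and the sensitivity cut are omitted for brevity, so this is a simplified version of the procedure described above rather than the exact pipeline.

```python
import numpy as np

# constants in cgs
h_pl = 6.62607e-27; c_light = 2.99792e10; k_B = 1.380649e-16
pc = 3.086e18

def planck(nu, T):
    """Black-body intensity B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * h_pl * nu**3 / c_light**2 / np.expm1(h_pl * nu / (k_B * T))

def continuum_observables(r, sigma_d, kappa_nu, T_gas, nu=230e9, d=400.0 * pc, T_b=20.0):
    """Optical depth, intensity, total flux and r_90 at frequency nu.
    sigma_d has shape (n_sizes, n_radii); kappa_nu has shape (n_sizes,)."""
    tau = np.sum(kappa_nu[:, None] * sigma_d, axis=0)       # Eq. (13)
    T_eff = (T_gas**4 + T_b**4)**0.25                        # Eq. (15)
    I_nu = planck(nu, T_eff) * (1.0 - np.exp(-tau))          # Eq. (14)
    dF = 2.0 * np.pi * r * I_nu * np.gradient(r) / d**2      # annular flux contribution
    F_cum = np.cumsum(dF)
    r90 = r[np.searchsorted(F_cum, 0.9 * F_cum[-1])]
    return tau, I_nu, F_cum[-1], r90
```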
## 4 Results
In this section we show how the mass evolution, dust size distribution, and observable properties such as the intensity profile, total flux and the observable dust disk size change due to the effect of external photoevaporation and substructures within our parameter space.
### Fiducial comparison
We begin our comparison between simulations with and without external photoevaporation, in order to understand the main features of each evolution pathway. The effect of photoevaporation on the gas component is straightforward (Figure 2, top panel). The gas mass decreases faster for disks undergoing external photoevaporation than in the viscous evolution counterparts, and (for our model) the presence of substructures has no significant impact on the overall gas evolution. Meanwhile, the dust component is distinctively affected by both the effects of photoevaporation and dust traps. Figure 3 shows the morphology of the dust size distribution at relevant times, where we see how photoevaporation truncates the outer disk, and how dust traps retain higher concentrations of large grains.
Disks subject to external photoevaporation have lower dust masses than both of their viscous evolution counterparts (Figure 2, middle panel), and the mass loss in (externally) photoevaporating disks can be described in two stages: a wind dominated loss, and a drift dominated loss (Figure 4). During the first stage of disk evolution, the dust grains in the outer regions are still small and can be easily entrained with the photoevaporative winds. This leads to a decrease in the dust mass from the initial \(300\,\mathrm{M}_{\mathrm{\oplus}}\) to \(80\,\mathrm{M}_{\mathrm{\oplus}}\) during the first \(0.1\,\mathrm{Myrs}\). In contrast, the dust mass in viscous evolution simulations only decreases to \(250\,\mathrm{M}_{\mathrm{\oplus}}\) during the same period of time.
From dust evolution theory, the lifetime of the disk (in terms of the dust component) is on the same order of magnitude as the drift and growth timescales of dust particles at the disk outer edge (Birnstiel et al., 2012; Powell et al., 2017). Because photoevaporation removes all the material from the disk outer regions, effectively truncating the disk size, the remaining dust component will drift faster towards the star, in comparison with the viscous counterparts.
Figure 4 shows that in the simulation with photoevaporation and without dust traps, the dust loss rate is first dominated by wind entrainment (\(t\lesssim 0.1\,\mathrm{Myr}\)), and quickly becomes dominated by drift (\(t\gtrsim 0.1\,\mathrm{Myr}\)). For this simulation, the disk has lost \(99\%\) of the initial dust mass by \(t_{\mathrm{depletion}}\approx 0.4\,\mathrm{Myrs}\). Up until this point, our results agree with the work of Sellek et al. (2020, see their Fig. 5), and we refer to their paper for a detailed analysis of the evolution of disks without substructures.
It is during the drift dominated stage that the presence of a dust trap becomes relevant, greatly delaying the depletion of the dust component. In this simulation with external photoevaporation and dust traps, the total mass lost by drift becomes comparable to that lost by winds after \(t=1\,\mathrm{Myr}\), and the disk loses the \(99\%\) of the initial dust mass only by \(t_{\mathrm{depletion}}=2.3\,\mathrm{Myrs}\). This depletion timescale exceeds the values reported in Sellek et al. (2020) by an order of magnitude, indicating that the presence of dust traps may explain why disks in dense star clusters subject
Figure 2: Gas and dust mass evolution (top and middle panels), and the global dust-to-gas ratio (bottom panel). The plot shows the evolution of a disk with and without the influence of FUV external photoevaporation (black vs. gray lines), for disks with and without a gap-like substructure (solid vs. dotted lines). For this comparison, the disk is around a \(1\,M_{\odot}\) star with an initial size of \(r_{c}=90\,\mathrm{AU}\). The gap substructures are located at \(r_{\rm gap}=1/3\,r_{c}\), and the irradiation field is \(F_{\mathrm{UV}}=10^{3}\,G_{0}\).
to high \(F_{\rm UV}\) fluxes are still detectable in observations (Guarcello et al., 2016; Eisner et al., 2018; Otter et al., 2021).
In terms of the global dust-to-gas mass ratio (Figure 2, bottom panel) it is interesting to note that this is always below the initial \(\epsilon_{0}=0.01\), despite the gas mass loss in disks with external photo-evaporation. In particular, the simulation without dust traps under photoevaporation displays a remarkably lower dust-to-gas ratio than its viscous counterpart, which is due to the reduced drift timescale by the photoevaporative truncation of the disk.
The effect of photoevaporation and the dust traps can also be seen in the derived observable quantities, i.e. the intensity profiles, the flux, and the dust disk radius that is usually assumed as the radius enclosing 90% of the total millimeter flux (obtained following Equation 14), which are shown in Figure 5 for the wavelength \(\lambda=1.3\) mm. We also indicate the dust masses, the
\begin{table}
\begin{tabular}{l l|c c c|c c c|c c c} \hline \hline & & \multicolumn{3}{c|}{\(M_{\rm dust}\) (\(M_{\oplus}\))} & \multicolumn{3}{c|}{\(F_{\nu}\) (mJy) \(-\) 1.3 mm} & \multicolumn{3}{c}{\(r_{90\%}\) (AU)} \\ \cline{3-11} \(F_{\rm UV}\) (\(G_{0}\)) & Dust trap & 1 Myr & 3 Myr & 5 Myr & 1 Myr & 3 Myr & 5 Myr & 1 Myr & 3 Myr & 5 Myr \\ \hline
0 & Yes & 104.3 & 13.8 & 4.7 & 53.2 & 15.0 & 5.2 & 105.2 & 42.5 & 38.5 \\
0 & No & 26.8 & 5.6 & 2.9 & 30.5 & 6.3 & 2.8 & 113.9 & \(<5\) & \(<5\) \\ \(10^{3}\) & Yes & 22.8 & 1.3 & 0.1 & 16.5 & 2.8 & 0.3 & 38.5 & 34.7 & 33.5 \\ \(10^{3}\) & No & 0.4 & 0.02 & 0.003 & 1.2 & 0.1 & 0.004 & 16.2 & \(<5\) & \(<5\) \\ \hline \end{tabular}
\end{table}
Table 2: Dust mass evolution - Fiducial Comparison.
Figure 3: Dust size distribution at 0.1, 1, and 3 Myr (from left to right), for the simulations with/without photoevaporation, and with/without dust traps. The solid lines indicate the estimated fragmentation and drift growth limits (magenta and cyan, respectively). For reference, we show the grain sizes that correspond to St = 0.1 with a dashed white line, and the photoevaporation radius with a dashed red line. Note that we plot \(\sigma_{\rm dust}\), which has surface density units, but it is normalized by the logarithmic bin size (see Birnstiel et al., 2010).
corresponding fluxes, and the radii containing 90% of the dust emission at specific times in Table 2.
From the intensity profiles (top panel, shown at 1 Myr) we see how the disks subject to external photoevaporation are truncated at \(r\approx 60\,\mathrm{AU}\,-\,70\,\mathrm{AU}\), in contrast with their viscous evolution counterparts, which display a more extended emission. As expected from the difference in dust masses, the disks with the gap-substructures are brighter than their smooth counterparts, featuring a ring-like emission at \(r\approx 35\,\mathrm{AU}\). We note that, since the dust trap is located well inside the truncation radius, the morphology of the bright ring is not affected by external photoevaporation.
The evolution of the total flux \(F_{\nu}\) (middle panel) follows the trend of the total dust mass. Here, the simulations with photoevaporative loss show fluxes that range between 0.01-15 mJy after \(t=1\,\mathrm{Myr}\), and in particular the simulation with a dust trap manages to display an emission higher than 1 mJy until 4 Myrs. The value of \(r_{90\%}\) (bottom panel) reflects the extension of the intensity profile until the sharp drop below the sensitivity limit (0.1 mJy beam\({}^{-1}\) for this plot). For the disks subject to external photoevaporation the disk size progressively shrinks, first until the truncation radius within the first 0.1 Myr due to the initial dust removal by winds. Then, if no dust traps are present, the disk continues to shrink until it disappears at \(t\approx 1\,\mathrm{Myr}\) (i.e., the emission is fully below the sensitivity limit) or, if substructures are present, the disk only shrinks until the location of the dust ring.
We also note that the viscous evolution simulations first experience an expansion until reaching \(r_{90\%}=150\,\mathrm{AU}\) at \(t=0.5\,\mathrm{Myr}\), and then contract as the dust drifts inwards, as the emission from outer regions of the disk falls below the sensitivity limit. At 3 Myr the dust disk sizes in the viscous simulations either reach the dust trap location or drop sharply (for the case without substructures).
### Parameter Space: UV field and Trap location
In this section, we study how the presence of dust traps affects the dust mass and observable properties of the disk depending on the external FUV flux (Figure 6 and Figure 7), and also list in Table 3 the dust depletion timescale (i.e. the age of the disk when 99% of the initial dust mass has been lost) for the parameter space covered. The table also indicates whether a dust trap
Figure 4: _Top:_ Evolution of the dust loss rate over time for a photoevaporating disk (with \(F_{\mathrm{UV}}=10^{3}\,G_{0}\)), with and without dust traps (solid vs. dotted lines). The figure shows the distinction between the dust loss rate due to drift into the star (red), and due to entrainment with the photoevaporative wind (blue). _Bottom:_ Cumulative fraction of dust lost (relative to the initial dust mass) due to winds and drift.
Figure 5: _Top:_ Intensity profile for the fiducial simulations at \(\lambda=1.3\,\mathrm{mm}\), assuming a distance of \(d=400\,\mathrm{pc}\), and a beam size of 40 mas (16 AU, shown as a horizontal blue line), for a time of \(t=1\,\mathrm{Myr}\). _Middle:_ Evolution of the disk flux at \(\lambda=1.3\,\mathrm{mm}\). _Bottom:_ Evolution of the radius enclosing 90% of the disk continuum emission, assuming a sensitivity threshold of 0.1 mJy beam\({}^{-1}\).
was dispersed by the photoevaporative winds within the 5 Myr of simulation time.
For the strongest FUV flux (\(10^{4}\,G_{0}\)), the mass loss due to photoevaporation is so intense that it renders existing substructures irrelevant in terms of observability. From the evolution of \(r_{90\%}\) and the photoevaporative radius we can see how photoevaporation progressively shrinks the disk, until it completely disappears by 1 Myr, approximately at the same time that the emission in the millimeter continuum drops. We see that if the substructure is located in the inner regions (\(r_{\rm gap}=30\) AU, or \(1/3\,r_{c}\) in our simulation), there is only a minor delay of approximately 0.2 Myr until the flux in the millimeter continuum drops.
For the medium FUV flux (\(10^{3}\,G_{0}\)), we already saw in the previous section that an inner dust trap can extend the disk lifetime by 2 Myr, and that the disk \(r_{90\%}\) size converges to the location of the dust trap, as it also occurs in models with dust traps
Figure 6: _Top:_ Mass evolution for our parameter space in UV fluxes and trap location (or absence). _Bottom:_ Fraction of dust mass lost. The lines distinguish between the total dust lost (black), the fraction lost by drift into the star (blue), and that lost by entrainment with the photoevaporative wind (red).
Figure 7: Evolution of the disk observables, for different FUV fields and trap locations (\(1/3r_{c}\), \(2/3r_{c}\) or none). _Top:_ Evolution of the disk flux in the 1.3 mm continuum at a distance of 400 pc. _Bottom:_ Time evolution of the \(r_{90\%}\) enclosing 90% (black) of the disk emission in the continuum above the sensitivity limit (0.1 mJy beam\({}^{-1}\)), and the photoevaporative radius derived from the FRIED grid mass loss profile (red dotted lines, see Sellek et al., 2020). We note that towards the end of the simulation the calculation of the photoevaporative radius suffers from numerical effects due to the discrete nature of the FRIED grid.
but without external photoevaporation (e.g. Stadler et al. 2022). However, we also find that in the case where the substructure is located further out (\(r_{\rm gap}=60\) AU, or \(2/3\,r_{c}\)), the disk would still be dispersed by photoevaporation, once the photoevaporative radius reaches the location of the dust trap. This can be seen in Figure 7 (mid-bottom panel, dashed line), where the disk size initially matches the location of the outer dust trap, and quickly disperses once the disk \(r_{90\%}\) becomes comparable with the photoevaporative radius. In terms of the dispersal timescale and continuum fluxes, there is almost no difference between having an outer dust trap or no dust trap at all.
In the case with a weak FUV field (\(10^{2}\,G_{0}\)), both the inner and outer dust traps remain inside the photoevaporation radius (as seen from the value of \(r_{90\%}\)), which means that the size and flux of the disk will be dominated by the material trapped at the local pressure maximum. In comparison with the medium and strong photoevaporative regimes, disks in the weak photoevaporative environment have longer lifetimes of \(t_{\rm depletion}=3\) Myr and \(t_{\rm depletion}\gtrsim 5\) Myr, for the cases with inner and outer substructures respectively. For this regime, the lifetime of the disk with the outer substructure is longer than the one with the inner substructure, which we infer to be due to the longer drift timescales from the outer dust trap to the star, and the longer diffusion timescales across the gap in the outer regions.
From these three photoevaporative regimes we conclude that the lifetime of the disk is dominated by the outermost dust trap, provided that it is located well inside the photoevaporative radius, and that a dust trap located in the photoevaporative region (even marginally) has little to no effect on the disk survival timescale. It might seem surprising that disks with fully formed dust traps are dispersed, since photoevaporative winds can only carry away small grains. However, because fragmentation of large grains continuously replenishes the population of micron-sized particles, there is always a non-negligible fraction of solids being carried away. We will revisit the dust size distribution of entrained grains in Section 4.5.
### Low mass stars and compact disks
We perform two additional simulations to study whether a dust trap would survive in a disk around a low mass star (\(M_{*}=0.3\) M\({}_{\odot}\)), and if an initially more compact disk size (\(r_{c}=60\) AU) would affect our observational predictions. For this comparison, we use a radiation field of \(F_{UV}=10^{3}\,G_{0}\), and fix the dust trap location at 30 AU.
In Figure 8 we see that the disk around a \(0.3\) M\({}_{\odot}\) mass star is dispersed faster than the one around a sun-like star, and that even the presence of the dust trap cannot prevent the sharp decrease in the millimeter continuum flux. This faster dispersal occurs because the gravitational potential is weaker around a low mass star, which means that the gas and solids can be removed more easily by the UV irradiation from the environment. Consequently, the photoevaporation radius can reach further into the inner regions of the disk, all the way down to \(r_{\rm phot}\approx 20\) AU, and disperse the dust trap that was located at 35 AU. In order for the dust component to survive in a disk around a low mass star, the trap should be located further inward (\(r_{\rm gap}\lesssim 10\) AU), provided that the UV field is on the order of \(F_{UV}=10^{3}G_{0}\). Disks in regions with lower irradiation could still display dust traps at larger radii.
For the case of a disk that is initially more compact, we do not see any significant differences in the disk evolution in terms of the observed size \(r_{90\%}\) or the flux in the millimeter continuum (see Figure 9). From this plot, we expect disk sizes in photoevaporative regions to be determined by their outermost dust trap, independently of the initial extension.
### Variable UV field
Winter et al. (2019) showed that some of the disks in high irradiation regions could have actually formed and migrated from lower irradiation environments. This would explain why these objects are still observable despite the short dispersal timescales associated with the high irradiation.
We conduct a simple simulation with our fiducial parameters (\(M_{*}=1\,\)M\({}_{\odot}\), \(r_{c}=90\) AU, \(r_{\rm gap}=30\) AU) in which we gradually increase the \(F_{UV}\) irradiation with the following function:
Figure 8: Evolution of the disk continuum flux at \(\lambda=1.3\) mm and the \(r_{90\%}\) (assuming a sensitivity threshold of 0.1 mJy beam\({}^{-1}\)) for two different stellar masses, and including the gap at \(r_{\rm gap}=30\) AU. The photoevaporative radius is shown for comparison. The UV flux is \(10^{3}\,G_{0}\).
\begin{table}
\begin{tabular}{c c c} \hline \hline \(F_{\rm UV}\) (\(G_{0}\)) & Trap Location & \(t_{\rm depletion}\) (Myr) \\ \hline \multirow{3}{*}{\(10^{2}\)} & Inner & 3.3 \\ & Outer & \(>\)5.0 \\ & None & 1.0 \\ \hline \multirow{3}{*}{\(10^{3}\)} & Inner & 2.3 \\ & Outer (dispersed) & 0.4 \\ & None & 0.3 \\ \hline \multirow{3}{*}{\(10^{4}\)} & Inner (dispersed) & 0.4 \\ & Outer (dispersed) & 0.2 \\ \cline{1-1} & None & 0.2 \\ \hline \end{tabular}
\end{table}
Table 3: Dust depletion timescales. “Dispersed” refers to dust traps that have vanished in the simulation due to external photoevaporation.
\[F_{UV}=F_{UV0}+(F_{UV,\rm max}-F_{UV0})\left(\frac{t}{1\,{\rm Myr}}\right)^{3/2}, \tag{16}\]
where \(F_{UV0}=10\,G_{0}\) and \(F_{UV,\rm max}=10^{4}\,G_{0}\) are the lower and upper limits of the FRIED grid (Haworth et al., 2018). The exponent of \(3/2\) is meant to represent a UV flux that increases rapidly as the disk approaches the high irradiation regions, though the exact shape would depend on variables such as the disk trajectory, the distribution of bright massive stars within the cluster (Winter et al., 2019; Qiao et al., 2022; Wilhelm et al., 2023), and the variation in their luminosity with time (Kunitomo et al., 2021). In Figure 10 we show the evolution of the UV flux and the mass loss rate over time up to 1 Myr.
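Equation 16 translates into a one-line prescription; the sketch below also caps the field at the upper limit of the FRIED grid, which is our own assumption for times beyond 1 Myr.

```python
import numpy as np

def fuv_field(t_myr, F_0=10.0, F_max=1.0e4):
    """Time-dependent external FUV field in units of G_0, Eq. (16),
    capped at the upper limit of the FRIED grid."""
    F = F_0 + (F_max - F_0) * (t_myr / 1.0)**1.5
    return np.minimum(F, F_max)

# e.g. fuv_field(np.array([0.0, 0.25, 0.5, 1.0])) -> [10., ~1259., ~3542., 10000.]
```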
Figure 11: Dust size distribution at 0.1, 0.5, and 1 Myr for the simulation with an increasing UV Flux (Equation 16). As in Figure 3, the fragmentation limit is marked in magenta, the drift limit in cyan, and the photoevaporation radius as a vertical dashed-red line.
Figure 10: _Top:_ Time evolution of the UV flux following Equation 16. _Bottom:_ Evolution of the dust mass loss rate by drift (blue) and photoevaporative winds (red) in a disk with variable UV flux. The stellar mass and initial size correspond to the fiducial model (\(M_{*}=1\,{\rm M_{\odot}}\), \(r_{c}=90\,{\rm AU}\)).
Figure 9: Same as Figure 8 for two values of the initial characteristic radius \(r_{c}\). The UV flux is \(10^{3}\,G_{0}\).
From the dust distribution shown in Figure 11 we see how the radial extension of the dust component gets progressively smaller as time passes, and in particular, we see how the dust trap completely vanishes by 1 Myr once the UV flux reaches its peak. We note that the dust trap dispersal occurs despite the fact that dust grains have already grown larger into millimeter-to-centimeter sized pebbles (Figure 11, middle panel). While these particles are not easily entrained by the wind, they are still being indirectly depleted, since they continuously fragment into the smaller size grains that are directly removed by photoevaporation.
This phenomenon can also be seen in the spike in the dust loss rate at 0.7 Myr (Figure 10, bottom panel), which coincides with the moment when the photoevaporative radius catches up with the location of the dust trap, and in the millimeter continuum (Figure 12), when the flux sharply drops. From our results in Table 3 for the high UV fluxes (\(10^{4}\,G_{0}\)) we can expect the remaining dust component to disperse in timescales of 0.2 Myr or less.
This implies that even if dust traps manage to survive during an early phase of low irradiation, for example if the disk was initially shielded from UV irradiation (Qiao et al., 2022; Wilhelm et al., 2023), farther away from the dense regions of the cluster (Winter et al., 2019), or surrounded by intermediate mass stars (\(M_{*}\lesssim 3\,M_{\odot}\)) in early evolutionary stages (when the UV emission is lower; Kunitomo et al., 2021), once the FUV flux increases the dust component in the dust traps will be quickly dispersed along with the gas.
### Dust distribution of the lost grains
Photoevaporative winds can remove the small grains entrained with the gas flow from the disk outer regions (defined through the photoevaporative radius). While tracking the subsequent spatial evolution of these grains goes beyond the scope of this paper, we can still keep a record of the mass and size distribution of the lost dust component, and compare between the different irradiation regimes. Figure 13 shows the time integrated distribution of all the dust grains that have been lost by entrainment with the wind up to 1 Myr, as a function of the grain size and the launching location,
Figure 12: Same as Figure 8, comparing the simulation with constant UV Flux of \(10^{3}\,G_{0}\) and the one with increasing UV flux (Equation 16).
Figure 13: Grain size distribution (integrated in time) of the dust grains lost with the photoevaporative flow for the disks subject to low, medium and high \(F_{UV}\) environments, at \(t=1\) Myr. The disk’s parameters are a star of \(1M_{\odot}\), initial size of \(r_{c}=90\) AU, and a gap substructure at 30 AU. The location of the gap \(r_{\rm gap}\) is indicated with a dotted white line.
that is:
\[\Sigma_{\rm d,\,los}(r,a)=\int_{0}^{1\,{\rm Myr}}\dot{\Sigma}_{\rm d,w}(r,a){\rm d }t, \tag{17}\]
where we see how the maximum grain size entrained in the winds and the launching region depend on the irradiation from the environment. In particular, we see how for the weakest irradiation field (\(10^{2}\,G_{0}\)) only grains smaller than \(10\,\mu\)m can be removed from regions beyond \(100\,{\rm AU}\), while for the strongest irradiation field (\(10^{4}\,G_{0}\)) even grains of sizes up to \(a\approx 100\,\mu\)m can be dragged along by the winds, with a noticeable increase in the dust removal at \(r\approx 30-40\,{\rm AU}\) where the dust trap is located.
To obtain a broader overview of the lost grain distribution, we show in Figure 14 the cumulative size distribution for the parameter space shown in Section 4.2, i.e.:
\[M_{\rm dust,\,los}(a)=\sum_{a^{\prime}\leq a}\int_{r_{in}}^{r_{out}}2\pi r\,\Sigma_{\rm d,\,los}(r,a^{\prime})\,{\rm d}r, \tag{18}\]
which shows that approximately \(70\%-80\%\) of the lost mass of solids comes from the sub-micron size particles. Overall, we find that the location of the dust trap has little to no impact on the size distribution of the lost grains (especially in the sub-micron size range). For the weak irradiation environment (\(10^{2}\,G_{0}\)) the dust traps are located well inside the photoevaporation radius and do not contribute to the dust content in the wind, and for the strong photoevaporative environment (\(10^{4}\,G_{0}\)) there is an increase of \(15\,{\rm M}_{\oplus}\) in the micron size range for the configuration with the inner dust trap with respect to the case without a dust trap.
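As a minimal illustration of how Equations (17) and (18) can be evaluated in practice, the following Python sketch integrates a gridded dust loss rate in time and then accumulates the result over radius and grain size. The grid values, array names, and the simple rectangle-rule integration are illustrative assumptions and not the actual setup of our simulations.

```python
import numpy as np

# Illustrative grids (assumed values, not those of the actual simulations)
r = np.logspace(0.0, 2.7, 300)          # radius [AU]
a = np.logspace(-5.0, 1.0, 150)         # grain size [cm]
t = np.linspace(0.0, 1.0e6, 200)        # time [yr]
dt = t[1] - t[0]

# Sigma_dot_dw[k, i, j]: dust loss rate in the wind at time t[k], radius r[i], size a[j]
Sigma_dot_dw = np.zeros((t.size, r.size, a.size))    # placeholder for simulation output

# Eq. (17): time-integrated surface density of the lost grains (rectangle rule in time)
Sigma_d_lost = Sigma_dot_dw.sum(axis=0) * dt          # shape (r.size, a.size)

# Eq. (18): integrate over radius, then accumulate over grain size
dr = np.gradient(r)
dM_da = (2.0 * np.pi * r[:, None] * dr[:, None] * Sigma_d_lost).sum(axis=0)
M_dust_lost = np.cumsum(dM_da)                        # cumulative lost mass up to each size a
```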
The dust distribution within the photoevaporative winds should also, in principle, determine the effective cross section and opacity of the disk to the environmental FUV irradiation (Facchini et al., 2016). If the dust content and effective opacity are high, then the disk could be self-shielded (Qiao et al., 2022; Wilhelm et al., 2023), leading to a negative feedback loop where photoevaporation regulates itself. As a first step to study this process we calculate the self-consistent cross section for FUV photons per gas molecule with:
\[\sigma_{FUV}=\sum_{a}\epsilon_{w}(a)\,\kappa_{UV}(a)\,\mu m_{p}, \tag{19}\]
where \(\epsilon_{w}(a)\) is the dust-to-gas ratio of the dust species with size \(a\) in the wind, \(\kappa_{UV}(a)\) is the absorption opacity at \(\lambda=0.1\,\mu\)m, \(\mu=2.3\) is the mean molecular weight, and \(m_{p}\) is the proton mass (see also Facchini et al., 2016, Eq. 23). In Figure 15 we show the dust-to-gas ratio in the photoevaporative wind, and the corresponding FUV cross section per gas molecule (using the Ricci opacities, as in the results in Sect. 4).
We find that the total dust-to-gas ratio remains almost constant during the first \(\sim 0.05\)-\(0.1\,{\rm Myr}\) of evolution, close to the initial value of \(10^{-2}\), for all the irradiation regimes and trap configurations. Once the dust has had a chance to grow, the dust-to-gas ratio sharply decreases as the dust loss rate due to photoevaporation declines over time (see Fig. 15).
Following the same trend, the effective cross section for UV wavelengths is on the order of \(\sigma_{\rm FUV}\approx 2-2\times 10^{-22}\,{\rm cm}^{2}\). This is calculated assuming
\[\epsilon_{w}(a)=\frac{\dot{\Sigma}_{\rm d}(a)}{\dot{\Sigma}_{\rm g}}. \tag{20}\]
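For completeness, a short sketch of how Equations (19) and (20) can be evaluated once the per-size dust loss rates, the gas loss rate, and the \(0.1\,\mu\)m opacities are available as arrays; all array names and placeholder values below are assumptions for illustration only.

```python
import numpy as np

m_p = 1.6726e-24      # proton mass [g]
mu = 2.3              # mean molecular weight

# Assumed inputs per grain-size bin (placeholders; shapes and values are illustrative)
Sigma_dot_d = np.zeros(150)     # dust loss rate per size bin in the wind
Sigma_dot_g = 1.0               # total gas loss rate in the wind (same surface units)
kappa_UV = np.zeros(150)        # absorption opacity at 0.1 um per size bin [cm^2 g^-1]

eps_w = Sigma_dot_d / Sigma_dot_g                  # Eq. (20): dust-to-gas ratio per size in the wind
sigma_FUV = np.sum(eps_w * kappa_UV) * mu * m_p    # Eq. (19): FUV cross section per gas molecule [cm^2]
```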
We also perform the calculation of the FUV cross section assuming an ISM grain size distribution (Mathis et al., 1977) for comparison (with \(\sigma_{\rm ISM}\approx 2.5\times 10^{-22}\,{\rm cm}^{2}\)), and find that all our simulations fall below this value, especially after grains have grown, i.e., after \(\sim\)0.02-0.1 Myr. Though grain growth is expected
Figure 14: Cumulative size distribution of the dust grains lost up to a time of \(1\,{\rm Myr}\) for the simulations shown in Section 4.2, for the different UV fluxes, and trap properties. The line styles represent the disks with an inner trap, an outer trap, and no traps at all (solid, dashed, and dotted lines respectively). For the disks with low irradiation environments (\(10^{2}G_{0}\)) the dust traps have no effect on the lost dust distribution and the three lines overlap.
Figure 15: _Top:_ dust-to-gas ratio of the material removed by the photoevaporative winds for different irradiation environments and dust trap configurations. _Bottom:_ FUV cross section per gas molecule of the material removed by stellar winds, considering the dust-to-gas ratios from the top panel.
to reduce the effective absorption cross section at FUV wavelengths (Facchini et al., 2016), since larger grains have lower absorption opacities, the decrease of \(\sigma_{FUV}\) over time seems to be mostly driven by the decrease of the dust-to-gas ratio in the wind.
In Section 5.2 we compare the values found for the FUV cross section with previous studies, and discuss whether self-shielding by the grains entrained in the wind could be important for the global disk evolution and dispersal process.
## 5 Discussion
### Can dust traps explain the observations of highly irradiated regions?
Observations in the millimeter continuum by Eisner et al. (2018) and Otter et al. (2021) of the ONC and OMC1 star-forming regions show protoplanetary disks with fluxes on the order of 0.1 mJy to 10 mJy at 0.85, 1.3, and 3 mm wavelengths, as well as a lack of disk sizes larger than 75 AU (Otter et al., 2021), which could be explained by truncation due to external photoevaporation.
However, the work of Sellek et al. (2020) showed that the lifetime of the dust component is significantly shorter in disks undergoing external photoevaporation than in disks undergoing regular viscous evolution (see Section 4.1), with depletion timescales on the order of 0.1 Myr. This means that the survival of the dust component seen in observations cannot be explained, even in medium radiation environments, without an additional process that prevents the dust loss.
Since the study of Sellek et al. (2020) does not consider the presence of substructures in protoplanetary disks, which are now known to be a common feature (e.g. the DSHARP sample, Andrews et al., 2018), we test whether the presence of dust traps could help to explain the lifetime of the dust component in photoevaporative disks. Though it might seem evident that substructures should contribute to increasing the disk lifetime (e.g. Pinilla et al., 2012, 2012), it is not clear whether the dust grains will be able to resist the drag force from the gas in the photoevaporative winds, especially considering that fragmentation continuously replenishes the population of small grains that are easily entrained with the gas.
We find in this work that a gap-like substructure can significantly increase the dust component lifetime in weak radiation environments (\(10^{2}\,G_{0}\)) to more than 5 Myr, and that in medium radiation environments (\(10^{3}\,G_{0}\)) it can increase the disk lifetime to approximately 2 Myr, but only if the dust trap is located well inside the photoevaporation radius, which in our parameter space was approximately between 50 and 75 AU. In the simulations where the dust traps were located outside the photoevaporation radius, the small grains were replenished by fragmentation at a faster rate than the photoevaporation front evolved, resulting in the dispersal of the dust traps.
For extreme photoevaporation environments (\(10^{4}G_{0}\)) we observe that substructures have no significant effect on the survival timescale of the protoplanetary disk solid component. The photoevaporation front quickly truncates the whole disk from the outside-in, dragging all the solid grains along with the wind. Because this strong irradiation regime is precisely the one that affects the disks in dense stellar regions such as the core of the ONC, we infer that dust traps alone would not be able to explain the millimeter emission in this type of environment.
We also consider the possibility that photoevaporation might be delayed, with the environmental FUV flux increasing over time instead of acting at full strength from the very beginning of the simulation. This scenario would be consistent with migration across the stellar cluster (Winter et al., 2019), or shielding of the disk by the primordial envelope (Qiao et al., 2022); however, in this case we still find that the material in the dust traps is quickly dispersed once the photoevaporation front reaches the location of the pressure maximum (see Section 4.4).
We conclude that the presence of dust traps can increase the disk lifetime in weak and mild photoevaporation environments, which in the case of disks without substructures is limited by the drift timescale at the truncation radius, but cannot help to retain the dust component beyond the lifetime of the gas component, as the grains are efficiently dragged along with the photoevaporative winds. Therefore, while dust traps seem necessary to prevent the dust from draining too quickly by drift, they are not sufficient to explain the survival of the disks in the millimeter continuum, and models that simultaneously extend the lifetime of the gas and dust components, such as those accounting for shielding, migration, or multiple star formation events (Qiao et al., 2022; Wilhelm et al., 2023; Winter et al., 2019, 2019), are better suited to explain the actual disk lifetime. Another mechanism that could contribute to the disk long-term survival is the luminosity evolution of bright B stars, which require approximately 1 Myr from their formation to reach their peak brightness at UV wavelengths (see Kunitomo et al., 2021, Fig. 3), and which dominate the irradiation field in regions such as Upper Sco (Anania et al., in prep.). It is necessary to conduct high resolution observations in the millimeter continuum of disks that may be subject to high environmental irradiation, and identify first whether substructures are present, and second whether their presence (or absence) is consistent with the age of the dust component. One promising target for this study would be the disk ISO-Oph2 (Casassus et al., 2023), which is in the proximity of the B star HD147889, and shows signatures of being heated by its bright neighbour.
### Self-shielding by the grains entrained in the wind
The calculations performed in this paper use the FRIED grid from Haworth et al. (2018), which assumes that photoevaporative winds are depleted in dust, with a low dust-to-gas ratio of \(\epsilon_{w}=3\times 10^{-4}\) and a FUV cross section of \(\sigma_{FUV}=2.7\times 10^{-23}\) cm\({}^{2}\). In Fig. 15 we recalculated the FUV cross section using the dust-to-gas ratio measured directly from the material lost in the simulations, and found it to be higher (\(\sigma_{FUV}\approx 10^{-22}\,\mathrm{cm^{2}}\)) at early times of evolution (\(\sim\)0.05 Myr) than the value used by Haworth et al. (2018). This means that, in order to be self-consistent with the dust content in the wind, the effective mass loss rates should be lower than those used for this paper (Haworth et al., 2023).
This self-shielding effect is a negative feedback loop which would moderate the strength of the photoevaporative dispersal. As such, we could expect mass loss to proceed at a slower rate and prolong the disk lifetime in both the gas and dust components, though a dedicated study in which the mass loss and cross section are computed simultaneously at run time would be necessary to confirm this theory.
Another important effect to consider when accounting for self-shielding would be the spatial evolution of the dust and gas after they are launched from the disk surface (Paine et al., in prep.). If the ejected material disperses quickly (as suggested by our results in Fig. 15), it will not be able to shield the disk from the external irradiation sources, since the cross section sharply drops after the first few \(\sim\)0.02-0.1 Myr of disk evolution. Therefore, in order to determine if self-shielding can extend the disk
lifetime or not, it is also necessary to study whether the dust and gas material lifted by the photoevaporative winds can remain around their parent protoplanetary disk for longer timescales, on the scale of 1 Myr or more.
We note that in this work we assumed the opacity values of Ricci et al. (2010) to calculate the FUV cross sections, while previous calculations (e.g. Facchini et al., 2016) used the optical constants from Li & Greenberg (1997), which leads to the difference in the reported ISM cross section, where our estimate is lower by approximately a factor of 4. We infer that our cross sections would increase by a factor of a few if we had used the same optical constants and grain composition.
Previous studies have shown that shielding can reduce the effect of external photoevaporation on the disk (e.g. Qiao et al., 2022); however, its effect is usually confined to the early stages of disk evolution. In contrast, if the self-shielding by the wind-entrained grains is proven to be effective, it could actually contribute to reducing the mass loss in high irradiation environments for extended periods of time, especially in disks with multiple substructures.
We highlight the importance of developing self-consistent models that couple dust trapping, photoevaporation driven by external irradiation, and the dust content in the ejected winds, since these ingredients do interact with each other and change the course of the disk evolution, as shown, for example, in Owen and Lin (2023) where it was proposed that circumstellar disks in extreme environments such as the galactic centre could survive photoevaporative dispersal.
## 6 Summary
In this work, we studied the evolution of the dust component and emission in the millimeter continuum of protoplanetary disks subject to high environmental FUV irradiation, accounting for the presence of gap-like substructures that act as dust traps. As in Sellek et al. (2020), we also find that dust drift is more efficient in disks subject to external photoevaporation than in standard viscous disks, where the lifetime of the dust component can be as short as a few tenths of a Myr if dust traps are not present.
In weak irradiation environments of \(F_{UV}=10^{2}\,G_{0}\), the presence of dust traps does prevent the drift of dust grains into the star and allows the disk to survive for several Myr. In irradiation environments of \(10^{3}\,G_{0}\), dust traps need to be inside the photoevaporative truncation radius to extend the disk lifetime (from 0.3 to 2.3 Myr for our choice of parameters, see Section 4.2), while dust traps located outside the truncation radius are dispersed with the photoevaporative winds and do not extend the lifetime of the dust component significantly. Finally, in extreme irradiation environments of \(10^{4}\,G_{0}\) all the dust traps are dispersed as the photoevaporation front clears the entire disk from the outside-in.
Though dust traps are a necessary ingredient to explain the survival of the dust and the observed millimeter emission in highly irradiated environments, especially considering that drift is particularly efficient in disks truncated by external photoevaporation (Sellek et al., 2020), they are not enough to explain why the objects found at the core of dense stellar regions such as the ONC or Cyg OB2 (Guarcello et al., 2016; Eisner et al., 2018; Otter et al., 2021) have not yet dispersed.
Overall, it seems more likely that the disks observed in these regions have only recently begun to experience the high irradiation and extreme mass loss rates measured in observations. These objects could have been subject to much lower irradiation earlier in their evolution due to shielding or migration from regions with lower stellar densities, or they could have been born in a more recent star formation event than the rest of the cluster (Winter et al., 2019, 2019; Qiao et al., 2022).
Along this line, we do find that the dust content entrained with the photoevaporative winds might be enough to increase the cross section for FUV photons to \(\sigma_{FUV}\approx 10^{-22}\,\mathrm{cm^{2}}\) at 0.1 \(\mu\)m wavelengths at early times of evolution, partially shielding the protoplanetary disk from external irradiation, and decreasing the total mass loss rate. However, it is yet unclear whether the dusty material lifted with the winds will stay in the proximity of the parent disk, effectively acting as a shield, or if it will quickly disperse leaving the disk surface bare to the environmental irradiation, and further studies are necessary.
In conclusion, dust traps are necessary but not sufficient to explain the survival of the dust component and continuum emission in high photoevaporative regions, and self-shielding by the dust entrained with the wind may contribute to reduce the extreme mass loss rates, but only if the dusty material can remain close enough to the parent disk to absorb the irradiation from the environment.
###### Acknowledgements.
MG and PP acknowledge funding from the Alexander von Humboldt Foundation in the framework of the Sofia Kovalevskaja Award endowed by the Federal Ministry of Education and Research. PP acknowledge funding from UKRI under the UK government's Horizon Europe funding guarantee from ERC. TH is funded by a Royal Society Dorothy Hodgkin Fellowship and UKRI guarantee funding (EP/Y024710/1). SF is funded by the European Union under the European Union's Horizon Europe Research & Innovation Programme 101076613 (INVEVL1). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
|
2309.05861 | A two variable zeta function associated to the space of binary forms of
degree $d$ | In this paper we prove the analytic continuation of a two variable zeta
function defined using the vector space of binary forms of degree $d$ to the
entire two dimensional complex space as a meromorphic function. | Eun Hye Lee, Ramin Takloo-Bighash | 2023-09-11T22:53:00Z | http://arxiv.org/abs/2309.05861v3 | # A two variable zeta function associated to the space of binary forms of degree \(d\)
###### Abstract.
In this paper we prove the analytic continuation of a two variable zeta function defined using the vector space of binary forms of degree \(d\) to the entire two dimensional complex space as a meromorphic function.
## 1. Introduction
Let \(X_{d}\) be the vector space of all binary forms of degree \(d\), i.e., the vector space of all polynomials of the form
\[F(X,Y)=\sum_{r=0}^{d}a_{r}X^{r}Y^{d-r} \tag{1}\]
with real coefficients. Let \(X_{d}^{+}\) be the collection of forms with integral coefficients such that \(a_{d}>0\). Also, let \(\Gamma_{\infty}=\left\langle\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\right\rangle\) be the upper triangular unipotent elements of \(\operatorname{SL}_{2}(\mathbb{Z})\). Then we have a left action of \(\Gamma_{\infty}\) on \(X_{d}^{+}\), given by \(\gamma\circ F(X,Y)=F((\gamma\circ(X,Y)^{T})^{T})\). We note that under the action of \(\Gamma_{\infty}\) every form is equivalent to a form \(F\) in Equation (1) such that \(0\leq a_{d-1}\leq da_{d}-1\).
**Lemma 1.1**.: _Let \(F(X,Y)=\sum_{r=0}^{d}a_{r}X^{r}Y^{d-r}\). Then, \(I_{2}=(d-1)a_{d-1}^{2}-2da_{d}a_{d-2}\) is invariant under \(\Gamma_{\infty}=\left\langle\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\right\rangle\)._
Proof.: Since \(F(X+tY,Y)=\sum_{r=0}^{d}\ a_{r}\ (X+tY)^{r}\ Y^{d-r}\), we have
\[\frac{\partial F}{\partial t}(X+tY,Y)= \sum_{r=0}^{d}\ a_{r}\ r\ (X+tY)^{r-1}\ Y^{d-r+1}\] \[\frac{1}{2}\frac{\partial^{2}F}{\partial t^{2}}(X+tY,Y)= \sum_{r=0}^{d}\ a_{r}\ \frac{r(r-1)}{2}\ (X+tY)^{r-2}\ Y^{d-r+2}.\]
Hence,
\[F(X+tY,Y)|_{t=0}+s\ \frac{\partial F}{\partial t}(X+tY,Y)|_{t=0}+\frac{s^{2}}{2}\ \frac{\partial^{2}F}{\partial t^{2}}(X+tY,Y)|_{t=0}\] \[=\sum_{r=0}^{d}\ a_{r}\ X^{r}\ Y^{d-r}+s\ \sum_{r=0}^{d}\ a_{r}\ r\ X^{r-1}\ Y^{d-r+1}+s^{2}\ \sum_{r=0}^{d}\ a_{r}\ \frac{r(r-1)}{2}\ X^{r-2}\ Y^{d-r+2}\] \[=a_{d}\ X^{d}+(a_{d-1}+s\ a_{d}\ d)X^{d-1}\ Y\] \[\ \ +\sum_{r=2}^{d}\left(a_{d-r}+s\ a_{d-r+1}(d-r+1)+s^{2}\ a_{d-r+2}\ \frac{(d-r+2)(d-r+1)}{2}\right)X^{d-r}\ Y^{r}.\]
Therefore,
\[(a_{d-1}+s\ a_{d}\ d)^{2}-\frac{2d}{d-1}\ a_{d}\left(a_{d-2}+s\ a_{d-1}(d-1)+s^{2}\ a_{d}\ \frac{d(d-1)}{2}\right)=a_{d-1}^{2}-\frac{2d}{d-1}\ a_{d}\ a_{d-2}.\]
In analogy with the zeta function considered in [1] we set, at first formally, for \((s_{1},s_{2})\in\mathbb{C}^{2}\),
\[Z_{d}(s_{1},s_{2})=\sum_{\begin{subarray}{c}a,b,c\in\mathbb{Z}\\ a>0,0\leq b<da\\ 2dac-(d-1)b^{2}>0\\ 2dac-(d-1)b^{2}\ \text{square-free, odd}\end{subarray}}\frac{1}{a^{s_{1}}(2dac-(d-1)b^{2}) ^{s_{2}}}.\]
**Theorem 1.2**.: _The function \(Z_{d}(s_{1},s_{2})\) converges absolutely in \(\operatorname{Re}s_{1},\operatorname{Re}s_{2}\gg 0\), and has an analytic continuation to a meromorphic function on all of \(\mathbb{C}^{2}\)._
In fact we will prove a more general result, see Theorem 2.4 below. The proof we present in the following section is an adaptation of the proof of the main theorem of [1].
The zeta function in Theorem 2.4 lacks the typical symmetries that zeta functions with analytic continuation enjoy. In particular we do not know if it satisfies a functional equation. In general, whenever a zeta function has an analytic continuation it is for a very good reason. At present we do not have a conceptual explanation for why the zeta function considered here continues to all of \(\mathbb{C}^{2}\). It would be desirable to find such an explanation.
The second author is partially supported by a grant from the Simons Foundation. We thank Evan O'Dorney for suggesting that we extend our results from [1] to general \(d\), and Takashi Taniguchi for catching an error in an earlier version of this paper.
The paper is organized as follows. After this introduction, we define a general zeta function in §2 and prove its analytic properties. The main theorem of the paper is included at the end of §2 as Theorem 2.4.
## 2. A two variable zeta function
Let \(A,B\) be a pair of coprime integers. For \((s_{1},s_{2})\in\mathbb{C}^{2}\), set, at first formally,
\[Z_{A,B}(s_{1},s_{2})=\sum_{\begin{subarray}{c}a,b,c\in\mathbb{Z}\\ a>0,\ 0\leq b<Aa\\ Aac-Bb^{2}>0,\text{ odd, square-free}\end{subarray}}\frac{1}{a^{s_{1}}(Aac-Bb^{2})^{s_{2}}}. \tag{2}\]
It is easy to see that the above sum is formally equal to
\[Z_{A,B}(s_{1},s_{2})=\sum_{\begin{subarray}{c}m,n=1\\ n\text{ odd, square-free}\end{subarray}}^{\infty}\frac{C_{A,B}(m,n)}{m^{s_{1}}n ^{s_{2}}}\]
with
\[C_{A,B}(m,n)=\#\{x\mod mA\mid Bx^{2}\equiv-n\mod mA\}.\]
Since \(C_{A,B}(m,n)\) is at most \(mA\), the absolute convergence of \(Z_{A,B}(s_{1},s_{2})\) for \(\operatorname{Re}s_{1},\operatorname{Re}s_{2}\) large is immediate. Until otherwise noted, we will proceed within the domain of absolute convergence.
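For concreteness, \(C_{A,B}(m,n)\) can be computed by brute force and the Dirichlet series truncated numerically; the following Python sketch is only a sanity check of the definitions and plays no role in the proofs. The cutoff and test values are arbitrary.

```python
def C_AB(A, B, m, n):
    """C_{A,B}(m, n): number of residues x mod mA with B*x^2 = -n (mod mA)."""
    M = m * A
    return sum(1 for x in range(M) if (B * x * x + n) % M == 0)

def is_squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def Z_AB_truncated(A, B, s1, s2, cutoff=40):
    """Truncation of the Dirichlet series representation of Z_{A,B}(s1, s2)."""
    total = 0.0
    for m in range(1, cutoff + 1):
        for n in range(1, cutoff + 1):
            if n % 2 == 1 and is_squarefree(n):
                total += C_AB(A, B, m, n) / (m ** s1 * n ** s2)
    return total

# Example: coprime A, B and exponents well inside the region of absolute convergence
print(Z_AB_truncated(2, 3, 4.0, 4.0))
```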
Next, we write
\[Z_{A,B}(s_{1},s_{2})=\sum_{\delta\mid B}\sum_{\begin{subarray}{c}m,n=1\\ \text{gcd}(B,m)=\delta\\ n\text{ odd, square-free}\end{subarray}}^{\infty}\frac{C_{A,B}(m,n)}{m^{s_{1}}n^{s_{2}}}.\]
In the inner sum, unless \(\delta\mid n\), \(C_{A,B}(m,n)=0\). Next, when \(\delta\mid m,\delta\mid n\) we have
\[C_{A,B}(m,n) =\#\{x\mod mA\mid Bx^{2}\equiv-n\mod mA\}\] \[=\delta\#\{x\mod\frac{m}{\delta}A\mid\frac{B}{\delta}x^{2}\equiv- \frac{n}{\delta}\mod\frac{m}{\delta}A\}\] \[=\delta C_{A,\frac{B}{\delta}}(\frac{m}{\delta},\frac{n}{\delta}).\]
After a change of variables we write
\[Z_{A,B}(s_{1},s_{2})=\sum_{\delta\mid B}\frac{1}{\delta^{s_{1}+s_{2}-1}}\sum_ {\begin{subarray}{c}m,n=1\\ \text{gcd}(m,\frac{B}{\delta})=1\\ n\text{ odd, square-free}\end{subarray}}^{\infty}\frac{C_{A,\frac{B}{\delta}}(m,n)}{m^{s_{1}}n^{s_{2}}}.\]
Since we are interested in analytic continuation, it suffices to prove the analytic continuation of each of the inner summands. For that reason replacing \(B/\delta\) with \(B\), for a pair of coprime integers \(A,B\) we set
\[\zeta_{A,B}(s_{1},s_{2})=\sum_{\begin{subarray}{c}m,n=1\\ \text{gcd}(m,B)=1\\ n\text{ odd, square-free}\end{subarray}}^{\infty}\frac{C_{A,B}(m,n)}{m^{s_{1}}n ^{s_{2}}}\]
Since we need to keep track of moduli, given a modulus \(k\), whenever \(\gcd(x,k)=1\), we denote the multiplicative inverse of \(x\) modulo \(k\) by \(f_{k}(x)\). We note that \(f_{k}(x)\) can always be chosen to be represented by an odd number and we will do this. Indeed, if
\(2\mid k\), then \(x\) is odd, as we have assumed \(\gcd(x,k)=1\), and that means that \(f_{k}(x)\) will be odd for any choice of the representative. On the other hand, if \(k\) is odd, and \(f_{k}(x)\) is represented by an even number, then we will replace \(f_{k}(x)\) by \(f_{k}(x)+k\) to obtain an odd number.
With this convention, since \(\gcd(B,mA)=1\), we have
\[C_{A,B}(m,n) =\#\{x\mod mA\mid x^{2}\equiv-f_{mA}(B)n\mod mA\}\] \[=C(mA,-f_{mA}(B)n),\]
where as in SS2.1 of [1] for integers \(m,n\)
\[C(m,n)=\#\{x\mod m\mid x^{2}\equiv n\mod m\}.\]
Recall the following proposition from [1]:
**Proposition 2.1**.: _The following properties hold._
1. _For any fixed n,_ \(C(m,n)\) _is a multiplicative function in_ \(m\)_. In particular,_ \(C(1,n)=1\) _for all_ \(n\)_._
2. _If any prime_ \(p\neq 2\) _and_ \(p\nmid n\)_, then_ \(C(p^{\alpha},n)=1+\left(\dfrac{n}{p}\right)\) _for_ \(\alpha>0\)_._
3. _If_ \(p=2\) _and_ \(n\) _is odd, then for_ \(\alpha>0\)_,_ \[C(2^{\alpha},n)=\begin{cases}1&\alpha=1,\\ 2&\alpha=2,n\equiv 1\mod 4,\\ 4&\alpha\geq 3,n\equiv 1\mod 8,\\ 0&\text{otherwise.}\end{cases}\]
4. _If_ \(n=p^{r}n_{0}\) _with_ \(p\nmid n_{0}\)_, then for_ \(\alpha>0\)_,_ \[C(p^{\alpha},p^{r}n_{0})=\begin{cases}p^{\left\lfloor\frac{\alpha}{2}\right \rfloor}&r\geq\alpha,\\ p^{\frac{r}{2}}C(p^{\alpha-r},n_{0})&r<\alpha,r\text{ even},\\ 0&\text{otherwise.}\end{cases}\]
Proof.: See Proposition 2.2 of [1].
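The formulas of Proposition 2.1 are easy to verify numerically. A small sketch (using Euler's criterion for the Legendre symbol; the specific test values are arbitrary):

```python
def C(m, n):
    """Number of solutions of x^2 = n (mod m)."""
    return sum(1 for x in range(m) if (x * x - n) % m == 0)

def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p not dividing n, via Euler's criterion."""
    r = pow(n % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# Property (1): multiplicativity in m for fixed n
assert C(5 * 9, 4) == C(5, 4) * C(9, 4)

# Property (2): for an odd prime p not dividing n, C(p^alpha, n) = 1 + (n/p)
p, n = 7, 10
for alpha in range(1, 4):
    assert C(p ** alpha, n) == 1 + legendre(n, p)
```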
We apply the proposition to compute the value of \(C(mA,-f_{mA}(B)n)\). We write \(mA=p_{1}^{\alpha_{1}}\cdots p_{r}^{\alpha_{r}}\). By multiplicativity, we have
\[C(mA,-f_{mA}(B)n)=\prod_{i}C(p_{i}^{\alpha_{i}},-f_{mA}(B)n).\]
If \(p_{i}\nmid n\) and \(p_{i}\neq 2\), then
\[C(p_{i}^{\alpha_{i}},-f_{mA}(B)n)=1+\left(\dfrac{-f_{mA}(B)n}{p_{i}}\right)=1+\left(\dfrac{f_{mA}(B)}{p_{i}}\right)\left(\dfrac{-n}{p_{i}}\right).\]
Since \(Bf_{mA}(B)\equiv 1\mod mA\) and \(p_{i}\mid mA\), this means \(Bf_{mA}(B)\equiv 1\mod p_{i}\). Consequently, \((f_{mA}(B)/p_{i})=(B/p_{i})\). We have
\[C(p_{i}^{\alpha_{i}},-f_{mA}(B)n)=1+\left(\dfrac{-Bn}{p_{i}}\right).\]
If \(p_{i}\|n\) and \(p_{i}\neq 2\), then
\[C(p_{i}^{\alpha_{i}},-f_{mA}(B)n)=\begin{cases}1&\alpha_{i}=1\\ 0&\alpha_{i}>1.\end{cases}\]
Since \(n\) is assumed to be square free we do not need to consider the cases where \(p_{i}^{2}\mid n\).
If \(p_{i}=2\), let \(2^{\alpha}\|mA\). Then since \(2^{\alpha}\mid mA\), \(Bf_{mA}(B)\equiv 1\mod 2^{\alpha}\). This implies
\[C(2^{\alpha},-nf_{mA}(B))=\begin{cases}1&\alpha=1,\\ 2&\alpha=2,n\equiv-B\mod 4,\\ 4&\alpha\geq 3,n\equiv-B\mod 8,\\ 0&\text{otherwise.}\end{cases}\]
We note the following important consequence of the above computations:
\[C(mA,-f_{mA}(B)n)=\prod_{i}C(p_{i}^{\alpha_{i}},-f_{p_{i}^{\alpha_{i}}}(B)n).\]
Next, we write
\[\zeta_{A,B}(s_{1},s_{2})=\sum_{\begin{subarray}{c}m,n=1\\ \gcd(m,B)=1\\ n\text{ odd, square-free}\end{subarray}}^{\infty}\frac{C_{A,B}(m,n)}{m^{s_{1}}n ^{s_{2}}}\] \[=\sum_{\begin{subarray}{c}n=1\\ n\text{ odd, square-free}\end{subarray}}^{\infty}\frac{1}{n^{s_{2}}}\sum_{ \begin{subarray}{c}m=1\\ \gcd(m,B)=1\end{subarray}}^{\infty}\frac{C(mA,-nf_{mA}(B))}{m^{s_{1}}}\] \[=\sum_{\begin{subarray}{c}n=1\\ n\text{ odd, square-free}\end{subarray}}^{\infty}\frac{Z_{n,A,B}(s_{1})}{n^{s_{2}}}\]
where for any \(s\in\mathbb{C}\) we have set
\[Z_{n,A,B}(s)=\sum_{\begin{subarray}{c}m=1\\ \gcd(m,B)=1\end{subarray}}^{\infty}\frac{C(mA,-nf_{mA}(B))}{m^{s}}.\]
As in Proposition 2.3 of [1] the zeta function \(Z_{n,A,B}\) has an Euler product expansion of the form
\[Z_{n,A,B}(s)=\prod_{p\nmid B}Z_{n,A,B,p}(s)\]
with
\[Z_{n,A,B,p}(s)=\sum_{k=0}^{\infty}\frac{C(p^{k+\operatorname{ord}_{p}(A)},- nf_{p^{k+\operatorname{ord}_{p}(A)}}(B))}{p^{ks}}.\]
We now compute the local zeta functions for various primes \(p\). We will be adapting the computations of §2.1.1 of [1].
If \(p\nmid 2A\), then
\[Z_{n,A,B,p}(s)=\frac{1-p^{-2s}}{1-p^{-s}}\cdot\frac{1}{1-p^{-s}\left(\frac{-nB}{p }\right)}.\]
If \(p\mid A\), \(p\nmid n\), \(p\) odd, then
\[Z_{n,A,B,p}(s)=\left(1+\left(\frac{-nB}{p}\right)\right)\frac{1}{1-p^{-s}}.\]
If \(p\mid A\), \(p\|n\), \(p\) odd, then if we write \(nf_{p^{k+\mathrm{ord}_{p}(A)}}(B)=pn_{0}\)
\[Z_{n,A,B,p}(s) =\sum_{k=0}^{\infty}\frac{C(p^{k+\mathrm{ord}_{p}(A)},-pn_{0})}{p^{ks}} \tag{3}\] \[=\begin{cases}1&p\|A\\ 0&p^{2}|A.\end{cases} \tag{4}\]
Finally, let \(p=2\). We note that this means \(B\) is odd and for each \(k\geq 1\), \(f_{2^{k+\mathrm{ord}_{2}(A)}}(B)\) is also odd. We recognize a few cases based on \(\mathrm{ord}_{2}(A)\).
If \(\mathrm{ord}_{2}(A)=0\), then we have the following possibilities:
* If \(n\equiv-B\mod 8\), then \[Z_{n,A,B,2}(s)=\frac{1-2^{-2s}}{1-2^{-s}}\cdot\frac{2\cdot 2^{-2s}-2^{-s}+1}{1- 2^{-s}}.\]
* If \(n\equiv-B+4\mod 8\), then \[Z_{n,A,B,2}(s)=1+\frac{1}{2^{s}}+\frac{2}{2^{2s}}.\]
* If \(n\equiv-B+2\) or \(n\equiv-B+6\mod 8\), then \[Z_{n,A,B,2}(s)=1+\frac{1}{2^{s}}.\]
If \(\mathrm{ord}_{2}(A)=1\), then we have the following cases:
* If \(n\equiv-B\mod 8\), then \[Z_{n,A,B,2}(s)=1+\frac{1}{2^{s}}+\frac{4\cdot 2^{-2s}}{1-2^{-s}}.\]
* If \(n\equiv-B+4\mod 8\), then \[Z_{n,A,B,2}(s)=1+\frac{2}{2^{s}}.\]
* If \(n\equiv-B+2\) or \(n\equiv-B+6\mod 8\), then \[Z_{n,A,B,2}(s)=1.\]
If \(\mathrm{ord}_{2}(A)=2\), then we have the following cases:
* If \(n\equiv-B\mod 8\), then \[Z_{n,A,B,2}(s)=2+\frac{4\cdot 2^{-s}}{1-2^{-s}}.\]
* If \(n\equiv-B+4\mod 8\), then \[Z_{n,A,B,2}(s)=2.\]
* If \(n\equiv-B+2\) or \(n\equiv-B+6\mod 8\), then \[Z_{n,A,B,2}(s)=0.\]
If \(\mathrm{ord}_{2}(A)\geq 3\), then we have the following cases:
* If \(n\equiv-B\mod 8\), then \[Z_{n,A,B,2}(s)=\frac{4}{1-2^{-s}}.\]
* If \(n\equiv-B+2\), \(-B+4\), \(-B+6\mod 8\), then \[Z_{n,A,B,2}(s)=0.\]
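As a numerical sanity check of the case analysis above, one can compare a truncated version of \(Z_{n,A,B,p}(s)\) with the closed form stated for \(p\nmid 2A\). The sketch below assumes Python 3.8+ for modular inverses via pow(B, -1, modulus); the chosen test values are arbitrary.

```python
def C(m, n):
    return sum(1 for x in range(m) if (x * x - n) % m == 0)

def legendre(n, p):
    r = pow(n % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def local_factor_truncated(n, A, B, p, s, K=25):
    """Truncation of the series defining Z_{n,A,B,p}(s); assumes gcd(p, B) = 1."""
    e = 0
    Ap = A
    while Ap % p == 0:          # e = ord_p(A)
        Ap //= p
        e += 1
    total = 0.0
    for k in range(K):
        mod = p ** (k + e)
        if mod == 1:
            total += 1.0        # C(1, *) = 1
            continue
        f = pow(B, -1, mod)     # inverse of B modulo p^(k + e)
        total += C(mod, (-n * f) % mod) / p ** (k * s)
    return total

def local_factor_closed(n, A, B, p, s):
    """Closed form stated above, valid for p not dividing 2A (and p not dividing n)."""
    x = p ** (-s)
    return (1 - x * x) / (1 - x) / (1 - x * legendre((-n * B) % p, p))

n, A, B, p, s = 5, 1, 3, 7, 2.0
assert abs(local_factor_truncated(n, A, B, p, s) - local_factor_closed(n, A, B, p, s)) < 1e-8
```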
The upshot of the above computation is the following proposition:
**Proposition 2.2**.: _For any \(n\) and \(p\nmid B\) there is a rational function \(\gamma_{n,A,B,p}(X)\) such that_
\[Z_{n,A,B,p}(s)=\gamma_{n,A,B,p}(p^{-s})\cdot\frac{1-p^{-2s}}{1-p^{-s}}\cdot \frac{1}{1-p^{-s}\left(\frac{-nB}{p}\right)},\]
_and if \(p\nmid 2A\), then \(\gamma_{n,A,B,p}(X)=1\). Furthermore, if \(n_{1}\equiv n_{2}\mod 8A\), then for all \(p\),_
\[\gamma_{n_{1},A,B,p}(X)=\gamma_{n_{2},A,B,p}(X).\]
The above proposition should be compared with Proposition 2.4 of [1]. It should also be noted that if \(B\) is odd, then the period \(8A\) can be replaced by \(A\).
Going back to the two variable zeta function we get
\[\zeta_{A,B}(s_{1},s_{2})=\frac{\zeta(s_{1})}{\zeta(2s_{1})}\sum_{n\text{ odd, square-free}}a_{n,A,B}(s_{1})\frac{L_{2AB}\left(\left(\frac{-nB}{\cdot}\right),s_{1} \right)}{n^{s_{2}}},\]
where
\[a_{n,A,B}(s)=\prod_{p\mid 2A}\gamma_{n,A,B,p}(p^{-s}),\]
and \(L_{2AB}(\left(\frac{-nB}{\cdot}\right),s_{1})\) is the Dirichlet \(L\)-function of the character \(\psi(k)=\left(\frac{-nB}{k}\right)\) with Euler factors corresponding to the primes \(p\) dividing \(2A\) removed.
In order to prove the analytic continuation of the above zeta function we set
\[\tilde{\zeta}_{A,B}(s_{1},s_{2})=\frac{\zeta(s_{1})}{\zeta(2s_{1})}\sum_{ \begin{subarray}{c}n\text{ square-free}\\ \gcd(n,2A)=1\end{subarray}}a_{n,A,B}(s_{1})\frac{L_{2AB}\left(\left(\frac{-nB}{ \cdot}\right),s_{1}\right)}{n^{s_{2}}}.\]
Set
\[\delta_{j}(n)=\begin{cases}1&n\equiv j\mod 8A\\ 0&n\not\equiv j\mod 8A.\end{cases}\]
By the orthogonality of characters, if \(\gcd(j,8A)=1\),
\[\delta_{j}(n)=\frac{1}{\phi(8A)}\sum_{\chi}\chi(j)^{-1}\chi(n),\]
where the sum is over all Dirichlet characters modulo \(8A\). We then have
\[\tilde{\zeta}_{A,B}(s_{1},s_{2}) =\frac{\zeta(s_{1})}{\zeta(2s_{1})}\sum_{\begin{subarray}{c}n\text{ square-free}\\ \gcd(n,2A)=1\end{subarray}}a_{n,A,B}(s_{1})\frac{L_{2AB}\left(\left(\frac{-nB}{\cdot}\right),s_{1}\right)}{n^{s_{2}}}\] \[=\frac{\zeta(s_{1})}{\zeta(2s_{1})}\sum_{n\text{ square-free}}\sum_{j\in(\mathbb{Z}/8A\mathbb{Z})^{\times}}\delta_{j}(n)a_{j,A,B}(s_{1})\frac{L_{2AB}\left(\left(\frac{-nB}{\cdot}\right),s_{1}\right)}{n^{s_{2}}}\] \[=\frac{\zeta(s_{1})}{\zeta(2s_{1})}\sum_{n\text{ square-free}}\sum_{j\in(\mathbb{Z}/8A\mathbb{Z})^{\times}}\frac{1}{\phi(8A)}\sum_{\chi}\chi(j)^{-1}\chi(n)a_{j,A,B}(s_{1})\frac{L_{2AB}\left(\left(\frac{-nB}{\cdot}\right),s_{1}\right)}{n^{s_{2}}}\] \[=\frac{\zeta(s_{1})}{\zeta(2s_{1})}\frac{1}{\phi(8A)}\sum_{\chi}\sum_{j\in(\mathbb{Z}/8A\mathbb{Z})^{\times}}\chi(j)^{-1}a_{j,A,B}(s_{1})\sum_{n\text{ square-free}}\chi(n)\frac{L_{2AB}\left(\left(\frac{-nB}{\cdot}\right),s_{1}\right)}{n^{s_{2}}}.\]
The analytic continuation of the innermost zeta function
\[\sum_{n\text{ square-free}}\chi(n)\frac{L_{2AB}\left(\left(\frac{-nB}{\cdot} \right),s_{1}\right)}{n^{s_{2}}}\]
to a meromorphic function on all of \(\mathbb{C}^{2}\) is the main result of §2.2-2.3 of [1]. Consequently we get the following proposition:
**Proposition 2.3**.: _The zeta function \(\tilde{\zeta}_{A,B}(s_{1},s_{2})\) has an analytic continuation to a meromorphic function on all of \(\mathbb{C}^{2}\)._
Next we treat the zeta function \(\zeta_{A,B}(s_{1},s_{2})\). Write \(A=A_{1}A_{2}\) with \(A_{1}\) square-free, \(A_{2}\) squarefull, i.e., for all \(p\), \(p\mid A_{2}\) implies \(p^{2}\mid A\), and \(\gcd(A_{1},A_{2})=1\). By Equation (4) if \(\gcd(n,A_{2})\neq 1\), \(Z_{n,A,B}=0\). Hence,
\[\zeta_{A,B}(s_{1},s_{2})=\frac{\zeta(s_{1})}{\zeta(2s_{1})}\sum_{ \begin{subarray}{c}n\text{ odd, square-free}\\ \gcd(n,A_{2})=1\end{subarray}}a_{n,A,B}(s_{1})\frac{L_{2AB}\left(\left(\frac{ -nB}{\cdot}\right),s_{1}\right)}{n^{s_{2}}},\]
Since \(A_{1}\) is square-free, we can write \(A_{1}=q_{1}\cdots q_{r}\) with \(q_{i}\)'s distinct primes. For each nonempty subset \(I\) of \(\{q_{1},\ldots,q_{r}\}\), let
\[\zeta^{I}_{A,B}(s_{1},s_{2})=\frac{\zeta(s_{1})}{\zeta(2s_{1})}\sum_{\begin{subarray} {c}n\text{ odd, square-free}\\ \gcd(n,A_{2})=1\\ \text{for all }q\in I,q|n\end{subarray}}a_{n,A,B}(s_{1})\frac{L_{2AB}\left( \left(\frac{-nB}{\cdot}\right),s_{1}\right)}{n^{s_{2}}}.\]
Then by inclusion-exclusion,
\[\zeta_{A,B}(s_{1},s_{2})=\tilde{\zeta}_{A,B}(s_{1},s_{2})+\sum_{I\neq\varnothing }(-1)^{\#I+1}\zeta^{I}_{A,B}(s_{1},s_{2}).\]
So, it suffices to prove the analytic continuation of each \(\zeta^{I}_{A,B}(s_{1},s_{2})\). Let \(t(I)=\prod_{q\in I}q\). Then since \(n\) is square-free, writing \(n=n^{\prime}t(I)\),
\[\zeta^{I}_{A,B}(s_{1},s_{2}) =\frac{\zeta(s_{1})}{\zeta(2s_{1})}\sum_{\begin{subarray}{c}n^{\prime}\text{ odd, square-free}\\ \gcd(n^{\prime},A_{2})=1\\ \gcd(n^{\prime},t(I))=1\end{subarray}}a_{n^{\prime}t(I),A,B}(s_{1})\frac{L_{2AB}\left(\left(\frac{-n^{\prime}t(I)B}{\cdot}\right),s_{1}\right)}{(n^{\prime}t(I))^{s_{2}}}\] \[=\frac{\zeta(s_{1})}{\zeta(2s_{1})}\frac{1}{t(I)^{s_{2}}}\sum_{\begin{subarray}{c}n^{\prime}\text{ odd, square-free}\\ \gcd(n^{\prime},A_{2}t(I))=1\end{subarray}}a_{n^{\prime}t(I),A,B}(s_{1})\frac{L_{2AB}\left(\left(\frac{-n^{\prime}t(I)B}{\cdot}\right),s_{1}\right)}{{n^{\prime}}^{s_{2}}}\]
The expression \(a_{n^{\prime}t(I),A,B}(s_{1})\) is determined by \(n^{\prime}\) modulo \(8A/t(I)\), the square-free part of which has fewer prime factors than the square-free part of \(8A\). By repeating this process we may assume \(A=1\), so that, after a change of notation, we have a summation of the form
\[\frac{\zeta(s_{1})}{\zeta(2s_{1})}\sum_{n\text{ odd, square-free}}A_{n}(s_{1})\frac{L_{2AB}\left(\left(\frac{-nB}{\cdot}\right),s_{1}\right)}{n^{s_{2}}} \tag{5}\]
with meromorphic functions \(A_{n}(s)\) such that if \(n_{1}\equiv n_{2}\mod 8\) then \(A_{n_{1}}(s)=A_{n_{2}}(s)\). Now an argument similar to the proof of the analytic continuation of \(\tilde{\zeta}_{A,B}\), with \(8\) replacing \(8A\), gives the analytic continuation of the zeta function in Equation (5). This gives us the analytic continuation of \(\zeta^{I}_{A,B}(s_{1},s_{2})\) to the entire \(\mathbb{C}^{2}\), which implies the analytic continuation of \(\zeta_{A,B}(s_{1},s_{2})\). Putting everything together we obtain the following theorem:
**Theorem 2.4**.: _The zeta function \(Z_{A,B}(s_{1},s_{2})\) in Equation (2) has an analytic continuation to the entire \(\mathbb{C}^{2}\) as a meromorphic function._
|
2301.13821 | Complete Neural Networks for Complete Euclidean Graphs | Neural networks for point clouds, which respect their natural invariance to
permutation and rigid motion, have enjoyed recent success in modeling geometric
phenomena, from molecular dynamics to recommender systems. Yet, to date, no
model with polynomial complexity is known to be complete, that is, able to
distinguish between any pair of non-isomorphic point clouds. We fill this
theoretical gap by showing that point clouds can be completely determined, up
to permutation and rigid motion, by applying the 3-WL graph isomorphism test to
the point cloud's centralized Gram matrix. Moreover, we formulate an Euclidean
variant of the 2-WL test and show that it is also sufficient to achieve
completeness. We then show how our complete Euclidean WL tests can be simulated
by an Euclidean graph neural network of moderate size and demonstrate their
separation capability on highly symmetrical point clouds. | Snir Hordan, Tal Amir, Steven J. Gortler, Nadav Dym | 2023-01-31T18:07:26Z | http://arxiv.org/abs/2301.13821v4 | # Complete Neural Networks for Euclidean Graphs
###### Abstract
We propose a \(2\)-WL-like geometric graph isomorphism test and prove it is complete when applied to Euclidean Graphs in \(\mathbb{R}^{3}\). We then use recent results on multiset embeddings to devise an efficient geometric GNN model with equivalent separation power. We verify empirically that our GNN model is able to separate particularly challenging synthetic examples, and demonstrate its usefulness for a chemical property prediction problem.
Equivariant machine-learning models are models that respect data symmetries. Notable examples include Convolutional Neural Networks, which respect translation symmetries of images, and Graph Neural Networks (GNNs), which respect the symmetry of graphs to permutations of their vertices.
In this paper we focus on equivariant networks for point clouds (which are often also called Euclidean or geometric graphs). Point clouds are sets of \(n\) points in \(\mathbb{R}^{d}\), whose symmetries include permutation of the \(n\) points, as well as translation, rotation and possibly also reflection. We denote the group of permutations by \(S_{n}\), the group of translations and rotations by \(SE(d)\), and the group obtained by also including reflections by \(E(d)\). Our interest is thus in functions on \(\mathbb{R}^{d\times n}\) that are invariant or equivariant to the action of \(E(d)\times S_{n}\) or \(SE(d)\times S_{n}\). In the past few years many works have focused on symmetry-preserving networks for point clouds in \(\mathbb{R}^{3}\), and on their applications for 3D computer vision and graphics (Deng et al., 2021), chemistry (Gasteiger et al., 2020) and physics simulation (Kondor, 2018). There are also applications for \(d>3\) for graph generation (Victor Garcia Satorras, 2021) and processing of Laplacian eigendecompositions (Lim et al., 2022).
The search for equivariant networks with good empirical performance is complemented by the theoretical study of these networks and their approximation power. These typically focus on two strongly related concepts: (i) _separation_ - the ability of a given invariant architecture to distinguish between two objects that are not related by a group symmetry, and (ii) _universality_ - the ability of the architecture to approximate any continuous equivariant function. These two concepts are intimately related and one typically implies the other, as discussed in Section 4 and in (Chen et al., 2019).
Recent results (Pozdnyakov and Ceriotti, 2022) show that distance-based message passing networks cannot separate point clouds better than a geometric variant of the 1-WL test (1-Geo), and that this test cannot separate all point clouds. On the other extreme, several works describe equivariant architectures that are universal, but these rely on high-dimensional representations of \(SO(d)\)(Dym and Maron, 2020; Finkelshtein et al., 2022; Gasteiger et al., 2021) or \(S_{n}\)(Lim et al., 2022). Thus, there is a large gap between the architectures needed for universality and those used in practice. For example, (Lim et al., 2022) requires hidden tensors of dimension \(n^{n}\) for universality, but uses tensors of dimension \(n^{2}\) in practice.
### Main results
In this paper, we make a substantial step towards closing this gap, by showing that complete separation can be obtained using efficient architectures of practical size.
We begin by expanding upon the notion of _geometric graph isomorphism_ tests, recently presented in (Pozdnyakov and Ceriotti, 2022; Anonymous, 2023). We show that while 1-Geo is not complete, it does separate _almost all_ distinct pairs. We then build on ideas from (Kurlin, 2022) to propose a _geometric \(2\)-WL_ test (2-Geo), which separates _any_ pair of 3D point clouds. Similarly, for general \(d\), we achieve separation using a geometric \(d-1\)-WL test. These results and some variations are discussed in Section 2.
In Section 3 we explain how to construct invariant architectures whose separation power is equivalent to 2-Geo (or 1-Geo), and thus are separating. This problem has been addressed successfully for graphs with discrete labels (see discussion in Section 7), but is more challenging for geometric graphs and other graphs with continuous labels. The main difficulty in this construction is the ability to construct
efficient continuous injective multiset-valued mappings. We will show that the complexity of computing standard injective multiset mappings is very high, and show how this complexity can be considerably improved using recent results from (Dym and Gortler, 2022). As a result we obtain \(\mathcal{SO}[3,n]\) and \(\mathcal{O}[3,n]\) separating invariant architectures with a computational complexity of \(O(n^{4}\log(n))\) and embedding dimension of \(6n+1\), which is approximately \(n^{2}\) times lower than what can be obtained with standard approaches. This advantage is even more pronounced when considering 'deep' 1-Geo tests (see Figure 2).
In Section 4 we use our separation results to prove the universality of our models, which is obtained by appropriate postprocessing steps to our separating architectures.
To empirically validate our findings, we present a dataset of point-cloud pairs that are difficult to separate, based on examples from (Pozdnyakov and Ceriotti, 2022; Pozdnyakov et al., 2020) and new challenging examples we construct. We verify that our architectures can separate all of these examples, and evaluate the performance of some competing architectures as well. We also show that our architectures achieve improved results on a benchmark chemical property regression task, in comparison to similar non-universal architectures. These results are described in Section 5.
## 1 Mathematical notation
A (finite) _multiset_\(\{\!\!\{y_{1},\ldots,y_{N}\}\!\!\}\) is an unordered collection of elements where repetitions are allowed.
Let \(\mathcal{G}\) be a group acting on a set \(\mathcal{X}\) and \(f:\mathcal{X}\to\mathcal{Y}\) a function. We say that \(f\) is _invariant_ if \(f(gx)=f(x)\) for all \(x\in\mathcal{X},g\in\mathcal{G}\), and we say that \(f\) is _equivariant_ if \(\mathcal{Y}\) is also endowed with some action of \(\mathcal{G}\) and \(f(gx)=gf(x)\) for all \(x\in\mathcal{X},g\in\mathcal{G}\).
A separating invariant mapping is an invariant mapping that is injective, up to group equivalence. Formally, we denote \(X\underset{\mathcal{G}}{=}Y\) if \(X\) and \(Y\) are related by a group transformation from \(\mathcal{G}\), and we define
**Definition 1.1** (Separating Invariant).: Let \(\mathcal{G}\) be a group acting on a set \(\mathcal{X}\). We say \(F:\mathcal{X}\to\mathbb{R}^{K}\) is a _\(\mathcal{G}\)-separating invariant_ if for all \(X,Y\in\mathcal{X}\),
1. **(Invariance)**\(X\underset{\mathcal{G}}{=}Y\Rightarrow F(X)=F(Y)\)
2. **(Separation)**\(F(X)=F(Y)\Rightarrow X\underset{\mathcal{G}}{=}Y\).
We call \(K\) the _embedding dimension_ of \(F\).
We focus on the case where \(\mathcal{X}\) is some Euclidean domain and require the separating mapping to be continuous and differentiable almost everywhere, so that it can be incorporated in deep learning models -- which typically require this type of regularity for gradient-descent-based learning.
The natural symmetry group of point clouds \((x_{1},\ldots,x_{n})\in\mathbb{R}^{d\times n}\) is generated by a translation vector \(t\in\mathbb{R}^{d}\), a rotation matrix \(R\in\mathcal{SO}(d)\), and a permutation \(\sigma\in S_{n}\). These act on a point cloud by
\[(R,t,\sigma)_{*}(x_{1},\ldots,x_{n})=(Rx_{\sigma^{-1}(1)}{+}t,\ldots,Rx_{ \sigma^{-1}(n)}{+}t).\]
We denote this group by \(\mathcal{SO}[d,n]\). In some instances, reflections \(R\in\mathcal{O}(d)\) are also permitted, leading to a slightly larger symmetry group, which we denote by \(\mathcal{O}[d,n]\).
For simplicity of notation, throughout this paper we focus on the case \(d=3\). In Appendix D we explain how our constructions and theorems can be generalized to \(d>3\).
## 2 Geometric Graph isomorphism tests
In this section we discuss geometric graph isomorphism tests, namely, tests for checking whether two given point clouds \(X,Y\in\mathbb{R}^{3\times n}\) are related by a permutation, rotation and translation (and possibly also reflection). Given two point clouds \(X,Y\), these tests typically compute some feature \(F(X),F(Y)\) and check whether \(F(X)=F(Y)\). This feature is \(\mathcal{G}\)-invariant, with \(\mathcal{G}\) denoting our symmetry group of choice, so that \(F(X)\neq F(Y)\) automatically implies that \(X\underset{\mathcal{G}}{\neq}Y\). Ideally, we would like to have _complete_ tests, meaning that \(X\underset{\mathcal{G}}{\neq}Y\) implies that \(F(X)\neq F(Y)\). Typically, these require more computational resources than _incomplete tests_.
### Incomplete geometric graph isomorphism test
Perhaps the most well-known graph isomorphism test is 1-WL. Based on this test, (Pozdnyakov and Ceriotti, 2022) formulated the following test, which we refer to as the 1-Geo test: Given two point clouds \(X=(x_{1},\ldots,x_{n})\) and \(Y=(y_{1},\ldots,y_{n})\) in \(\mathbb{R}^{3\times n}\), this test iteratively computes for each point \(x_{i}\) an \(\mathcal{O}(3)\)-invariant feature \(h_{i}^{t}\) via
\[h_{i}^{t}=\mathbf{Embed}^{(t)}\left(h_{i}^{t-1},\{\!\!\{(h_{j}^{t-1},\|x_{i}-x_{j}\|)\mid j\neq i\}\!\!\}\right), \tag{1}\]
using an arbitrary initialization \(h_{i}^{0}\). This process is repeated \(T\) times, and then a final global feature for the point cloud is computed via
\[F^{\text{1-Geo}}(X)=\mathbf{Embed}^{(T+1)}\{\!\!\{h_{i}^{T}\mid i=1,\ldots,n\}\!\!\}.\]
A similar computation is performed on \(Y\) to obtain \(F^{\text{1-Geo}}(Y)\). In this test \(\mathbf{Embed}^{(t)}\) are hash functions, namely, they are discrete mappings of multisets to vectors, defined such that they assign distinct values to the finite number of multisets encountered during the computation of \(F^{\text{1-Geo}}\) for \(X\) and \(Y\). Note that by this construction, these functions are defined differently for different pairs \(X,Y\).
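A minimal sketch of the 1-Geo test with hash-based **Embed** functions: each multiset is encoded by sorting its (rounded) elements and applying Python's built-in hash. The rounding tolerance and the use of the built-in hash (whose rare collisions are ignored here) are illustrative choices, not part of the formal test.

```python
import numpy as np

def one_geo(X, T=2, digits=8):
    """X: (3, n) array. Permutation- and O(3)-invariant feature built from distances only."""
    n = X.shape[1]
    dist = np.round(np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0), digits)
    h = [0] * n                                   # arbitrary initialization of the labels h_i^0
    for _ in range(T):                            # Eq. (1): T refinement rounds
        h = [hash((h[i], tuple(sorted((h[j], dist[i, j]) for j in range(n) if j != i))))
             for i in range(n)]
    return hash(tuple(sorted(h)))                 # Embed^{(T+1)} of the final multiset of labels
```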
The motivation in (Pozdnyakov & Ceriotti, 2022) in considering this test is that many distance-based symmetry-preserving networks for point clouds are in fact a realization of this test, though they use **Embed** functions that are continuous, defined globally on \(\mathbb{R}^{3\times n}\), and in general may assign the same value to different multisets. Consequently, the separation power of these architectures is at most that of \(F^{\text{1-Geo}}\) with discrete hash functions. We note that continuous multiset functions _can_ be used to construct architectures with the separation power of geometric isomorphism tests. This will be discussed in Section 3.
The separation power of \(F^{\text{1-Geo}}\) is closely linked to the notion of _geometric degree_: For a point cloud \(X=(x_{1},\ldots,x_{n})\), we define the geometric degree \(d(i,X)\) to be the multiset
\[d(i,X)=\{\!\!\{\|x_{1}-x_{i}\|,\ldots,\|x_{n}-x_{i}\|\}\!\!\},\quad i\in[n],\]
and the geometric degree histogram \(d_{H}(X)\) to be
\[d_{H}(X)=\{\hskip-1.422638pt\{d(1,X),\ldots,d(n,X)\}\hskip-1.422638pt\}.\]
It is not difficult to see that if \(d_{H}(X)\neq d_{H}(Y)\) then \(X\) and \(Y\) can be separated by \(F^{\text{1-Geo}}\) even with a single iteration \(T=1\). With \(T=2\), as we show in the following theorem, \(F^{\text{1-Geo}}\) can do even more, and separate \(X\) and \(Y\) even if \(d_{H}(X)=d_{H}(Y)\), provided that the values in their histograms are all distinct, namely that \(X\) and \(Y\) belong to
\[\mathbb{R}^{3\times n}_{distinct}=\{X\in\mathbb{R}^{3\times n}|\,d(i,X)\neq d (j,X)\;\;\forall i\neq j\}.\]
Figure 1 depicts the distance matrices of two point clouds \(A,B\) that belong to \(\mathbb{R}^{3\times n}_{distinct}\) while having the same degree histogram.
**Theorem 2.1**.: _Suppose that \(X,Y\in\mathbb{R}^{3\times n}_{distinct}\), and \(\textbf{Embed}^{(t)},t=1,2,3\), are multiset-to-vector functions that assign distinct values to the finite number of multi-sets encountered when computing \(F^{\text{1-Geo}}(X)\) and \(F^{\text{1-Geo}}(Y)\). Then \(F^{\text{1-Geo}}(X)=F^{\text{1-Geo}}(Y)\) if and only if \(X\underset{\mathcal{O}[3,n]}{=}Y\)._
While _almost any_ pair of point clouds \(X,Y\) (in the Lebesgue sense) belongs to \(\mathbb{R}^{3\times n}_{distinct}\), and thus can be separated by \(F^{\text{1-Geo}}\), this test is not complete in general. This was shown in (Pozdnyakov & Ceriotti, 2022), by providing an example of point clouds \(C,D\in\mathbb{R}^{3\times 6}\) (see Figure 1) that cannot be distinguished by \(F^{\text{1-Geo}}\).
### \(\mathcal{SO}[3,n]\)-isomorphism test
We now describe a \(2\)-WL-like geometric graph isomorphism test for \(\mathcal{SO}[d,n]\), which we name 2-Geo. Unlike 1-Geo, this test is _complete_. It is inspired by a similar test described in (Kurlin, 2022). The relationship between this work and ours is discussed in Section 7.
Figure 1: Distance matrices (Left) and degree histograms \(d_{H}\) (Right) of three pairs of point clouds \((A,B)\), \((C,D)\), \((E,F)\). These pairs are hard to separate by distance-based methods, as they have the same degree histogram. Nonetheless, \((A,B)\) can be separated by two iterations of 1-Geo, since \(A,B\in\mathbb{R}^{3\times n}_{distinct}\). Each of \(C\), \(D\) is comprised of three pairs of points, each of which share the same degree. While it was shown in (Pozdnyakov & Ceriotti, 2022) that 1-Geo cannot separate \(C\) from \(D\), our 2-Geo can separate _any_ distinct pair of 3D point clouds. \(E\) and \(F\) are especially challenging 6-dimensional point clouds in which all points have the same geometric degree.
As a first step, we eliminate the translation symmetry by centering the point clouds \(X\) and \(Y\). The centering of \(X=(x_{1},\dots,x_{n})\) is the point cloud \((x_{1}^{c},\dots,x_{n}^{c})\) defined by \(x_{i}^{c}=x_{i}-\frac{1}{n}\sum_{j=1}^{n}x_{j}\). It is known (Dym & Gortler, 2022) that the original \(X\) and \(Y\) are related by a symmetry in \(\mathcal{SO}[3,n]\) if and only if the centralized point clouds are related by a rotation and permutation.
Now, let us make two simplifying assumptions that we shall later dispose of: (a) the first two points of \(X\) and \(Y\) are in correspondence- meaning that if \(X\) and \(Y\) can be aligned by an \(\mathcal{SO}[3,n]\) transformation, then the permutation component assigns \(x_{1}\) to \(y_{1}\) and \(x_{2}\) to \(y_{2}\), and (b) the first two points in each point cloud are linearly independent.
Under these assumptions, we can define bases for \(\mathbb{R}^{3}\) by \(x_{1},x_{2},x_{1}\times x_{2}\) and \(y_{1},y_{2},y_{1}\times y_{2}\). These two bases are related by a rotation if and only if the Gram matrices of these two bases are identical. If indeed they are, this still only implies that the first two points are related by a rotation. To check whether the remaining points are related by a rotation and permutation, it suffices to check that the unordered collection of inner products of the remaining points with the basis we defined are identical.
Formally, we define for \((i,j)=1,2\) or any other pair of indices \((i,j)\)
\[X_{[i,j]} =\big{[}x_{i}^{c},x_{j}^{c},x_{i}^{c}\times x_{j}^{c}\big{]}\in \mathbb{R}^{3\times 3} \tag{2}\] \[P_{[i,j,k]} =X_{[i,j]}^{T}x_{k}^{c}\] (3) \[G_{[i,j]}(X) =X_{[i,j]}^{T}X_{[i,j]}\] (4) \[h_{[i,j]}(X) =\textbf{Embed}^{(1)}\big{\{}\mathbb{R}P_{[i,j,k]}\;\mid\;k=3, \dots,n\big{\}}\] (5) \[m_{[i,j]}(X) =\big{(}G_{[i,j]}(X),h_{[i,j]}(X)\big{)} \tag{6}\]
and we define \(m_{[i,j]}(Y)\) a similar manner, where **Embed** is some multiset-valued hash functions. The above construction guarantees that if \(X,Y\) satisfy the simplifying assumptions (a)-(b), then \(X\) and \(Y\) are related by a symmetry in \(\mathcal{SO}[3,n]\) if and only if \(m_{[1,2]}(X)=m_{[1,2]}(Y)\).
Let us now remove the simplifying assumptions (a)-(b). Since we no longer know the correspondence, instead of just considering \(m_{[1,2]}\), we consider the multiset of all possible \(m_{[i,j]}\) and define
\[F^{2\text{-Geo}}(X)=\textbf{Embed}^{(2)}\big{\{}m_{[i,j]}(X)\;\mid\;1\leq i \neq j\leq n\big{\}}. \tag{7}\]
We define \(F^{2\text{-Geo}}(Y)\) similarly. In the appendix we prove
**Theorem 2.2**.: _Let \(X,Y\in\mathbb{R}^{3\times n}\), and let \(\textbf{Embed}^{(1)},\textbf{Embed}^{(2)}\) be multiset-to-vector functions that assign distinct values to the finite number of multisets encountered when computing \(F^{2\text{-Geo}}(X)\) and \(F^{2\text{-Geo}}(Y)\). Then \(F^{2\text{-Geo}}(X)=F^{2\text{-Geo}}(Y)\) if and only if \(X\underset{\mathcal{SO}[3,n]}{=}Y\)._
Proof idea.: If the centralized point cloud \(X^{c}\) has rank \(\geq 2\), there are some \(i,j\) such that \(x_{i}^{c},x_{j}^{c}\) are linearly independent. If \(F^{2\text{-Geo}}(X)=F^{2\text{-Geo}}(Y)\) then there are some \(s,t\) such that \(m_{[i,j]}(X)=m_{[s,t]}(Y)\), and the argument above then shows that \(X\underset{\mathcal{SO}[3,n]}{=}Y\). The full proof (which does not make any rank assumptions on \(X^{c}\)) is given in the appendix.
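To make the construction in Equations (2)-(7) concrete, the following sketch computes \(F^{2\text{-Geo}}\) with the same hash-based multiset encoding used for the 1-Geo sketch; the rounding tolerance and Python's built-in hash are again illustrative choices rather than part of the theorem.

```python
import numpy as np

def two_geo(X, digits=8):
    """X: (3, n) array. SO[3, n]-invariant feature following Eqs. (2)-(7)."""
    n = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)         # centering removes translations
    feats = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Eq. (2): frame built from the pair (i, j)
            frame = np.stack([Xc[:, i], Xc[:, j], np.cross(Xc[:, i], Xc[:, j])], axis=1)
            G = np.round(frame.T @ frame, digits)   # Eq. (4): Gram matrix of the frame
            rest = [k for k in range(n) if k not in (i, j)]
            P = np.round(frame.T @ Xc[:, rest], digits)   # Eq. (3): projections of remaining points
            h = hash(tuple(sorted(map(tuple, P.T))))      # Eq. (5): multiset over k
            feats.append((tuple(G.ravel()), h))           # Eq. (6): m_{[i,j]}(X)
    return hash(tuple(sorted(feats)))                     # Eq. (7): multiset over ordered pairs
```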
### \(\mathcal{O}[3,n]\)-isomorphism test
The 2-Geo test described above can be modified to the scenario where reflections are also considered symmetries of the point cloud, so we would like the test to distinguish point clouds up to \(\mathcal{O}[3,n]\) symmetries. A simple way to achieve this is to consider for each pair \(x_{i},x_{j}\) both orientations of the vector product
\[X_{[i,j]}^{pos} =X_{i,j}=[x_{i},x_{j},x_{i}\times x_{j}]\] \[X_{[i,j]}^{neg} =[x_{i},x_{j},-x_{i}\times x_{j}].\]
The details of this construction are given in Appendix B. Ultimately this leads to a complete \(\mathcal{O}[3,n]\) test with twice the time and space complexity of the 2-Geo test for \(\mathcal{SO}[3,n]\).
An interesting but less efficient alternative to the above test is to use the standard \(3\)-WL graph isomorphism test, with the initial label for each triplet of indices taken to be the Gram matrix corresponding to those indices. The details of this construction are described in Appendix B as well.
## 3 Separating Architectures
In the previous section we described incomplete and complete geometric graph isomorphism tests for \(\mathcal{SO}[3,n]\) and \(\mathcal{O}[3,n]\) symmetries. Our goal now is to build separating architectures based on these tests. Let us first focus on the 2-Geo test for \(\mathcal{SO}[3,n]\).
To construct a separating \(\mathcal{SO}[3,n]\)-invariant architecture based on our 2-Geo test, we first need to choose a realization for the multiset-to-vector functions \(\textbf{Embed}^{(1)}\) and \(\textbf{Embed}^{(2)}\). To this end, we use parametric functions \(\textbf{Embed}_{\alpha}:\mathbb{R}^{3\times(n-2)}\rightarrow\mathbb{R}^{K_{1}}\) and \(\textbf{Embed}_{\beta}:\mathbb{R}^{K_{1}\times(n^{2}-n)}\rightarrow\mathbb{R}^{K_{2}}\) which are invariant to permutations of the columns of their input. Note that this also renders \(F^{2\text{-Geo}}\) parametric, and we denote it by \(F^{2\text{-Geo}}_{\alpha,\beta}\). We will also want them to be continuous and piecewise differentiable with respect to \(X,\alpha\) and \(\beta\).
The main challenge is of course to guarantee that for some parameters \(\alpha,\beta\), the function \(F^{2\text{-Geo}}_{\alpha,\beta}\) is \(\mathcal{SO}[3,n]\) separating. A standard way (see (Anonymous, 2023; Maron et al., 2019)) to achieve this is to require that for some \(\alpha,\beta\), the functions \(\textbf{Embed}_{\alpha}\) and \(\textbf{Embed}_{\beta}\) are injective as functions of multisets, or equivalently that they are permutation-invariant and separating. Note that by Theorem 2.2, this requirement will certainly suffice to guarantee that \(F^{2\text{-Geo}}\) is an \(\mathcal{SO}[3,n]\)-separating invariant. Our next step is therefore
to choose permutation invariant and separating \(\mathbf{Embed}_{\alpha}\), \(\mathbf{Embed}_{\beta}\). Naturally, we will want to choose these so that the dimensions \(K_{1},K_{2}\) and the complexity of computing these mappings are as small as possible.
### \(S_{n}\) separating invariants
We now consider the problem of finding separating invariants for the action of \(S_{N}\) on \(\mathbb{R}^{D\times N}\). Let us begin with the scalar case \(D=1\). Two well-known separating invariant mappings in this setting are the power-sum polynomials \(\Psi_{pow}\) and the sort mapping \(\Psi_{sort}\), defined by
\[\Psi_{sort}(s_{1},\dots,s_{N}) =\mathrm{sort}(s_{1},\dots,s_{N})\] \[\Psi_{pow}(s_{1},\dots,s_{N}) =\left(\sum_{j=1}^{N}s_{j},\sum_{j=1}^{N}s_{j}^{2},\dots,\sum_{j= 1}^{N}s_{j}^{N}\right).\]
It is clear that \(\mathrm{sort}\) is permutation-invariant and separating. It is also continuous and piecewise linear, and thus meets the regularity conditions we set out for separating invariant mappings. The power-sum polynomials are clearly smooth. Their separation can be obtained from the separation of the elementary symmetric polynomials, as discussed e.g., in (Zaheer et al., 2017).
We now turn to the case \(D>1\), which is our case of interest. One natural idea is to use lexicographical sorting. However, for \(D>1\), this sorting is not continuous. The power-sum polynomials can be generalized to multi-dimensional input, and these were used in the invariant learning literature (Maron et al., 2019). However, a key disadvantage is that to achieve separation, they require an extremely high embedding dimension \(K=\binom{N+D}{D}\).
A more efficient approach was recently proposed in (Dym and Gortler, 2022). This method initially applies linear projections to obtain \(N\) scalars and then applies a continuous \(1\times N\)-separating mapping \(\Psi=\Psi_{pow}\) or \(\Psi=\Psi_{sort}\), namely, one-dimensional power-sum polynomials or sorting. In more detail, for some natural \(K\), the function \(\mathbf{Embed}_{\theta}:\mathbb{R}^{D\times N}\rightarrow\mathbb{R}^{K}\) is determined by a vector \(\theta=(a_{1},\dots,a_{K},b_{1},\dots,b_{K})\in\mathbb{R}^{K(D+N)}\) where each \(a_{i}\) and \(b_{i}\) are \(D\)- and \(N\)-dimensional respectively, and
\[\mathbf{Embed}_{\theta}(X)=\langle b_{j},\ \Psi\left(a_{j}^{T}X\right) \rangle,\ j=1,\dots,K. \tag{8}\]
The following theorem shows that this mapping is permutation invariant and separating.
**Theorem 3.1** ((Dym and Gortler, 2022)).: _Let \(\mathcal{X}\) be an \(S_{N}\)-invariant semi-algebraic subset of \(\mathbb{R}^{D\times N}\) of dimension \(D_{\mathcal{X}}\). Denote \(K=2D_{\mathcal{X}}+1\). Then for Lebesgue almost every \(\theta\in\mathbb{R}^{K(D+N)}\) the mapping \(\mathbf{Embed}_{\theta}:\mathcal{X}\rightarrow\mathbb{R}^{K}\) is \(S_{N}\) invariant and separating._
When choosing \(\mathcal{X}=\mathbb{R}^{D\times N}\) we get that \(D_{\mathcal{X}}=N\cdot D\). The embedding dimension of \(\mathbf{Embed}_{\theta}\) would then be \(2N\cdot D+1\). This already is a significant improvement over the cardinality of the power-sum polynomials. Another important point that we will soon use is that if \(\mathcal{X}\) is a strict subset of \(\mathbb{R}^{D\times N}\) the number of separators will depend linearly on the intrinsic dimension \(D_{\mathcal{X}}\), and not on the ambient dimension \(ND\).
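As an illustration of (8) with \(\Psi=\Psi_{sort}\), the following is a minimal NumPy sketch of the projection-then-sort embedding. The array shapes, the random parameter initialization, and the function name are illustrative assumptions rather than the implementation used in our experiments.

```python
import numpy as np

def embed_theta(X, A, B):
    """Sort-based permutation-invariant embedding of Eq. (8).

    X : (D, N) array whose columns form the multiset to be embedded.
    A : (K, D) array of projection vectors a_j (rows).
    B : (K, N) array of mixing vectors b_j (rows).
    """
    sorted_proj = np.sort(A @ X, axis=1)        # Psi = sort applied to each a_j^T X
    return np.sum(B * sorted_proj, axis=1)      # j-th output is <b_j, sort(a_j^T X)>

# Toy check: the output does not change under a column permutation of X.
rng = np.random.default_rng(0)
D, N = 3, 7
K = 2 * D * N + 1                               # K = 2 * D_X + 1 with D_X = D * N
X = rng.normal(size=(D, N))
A, B = rng.normal(size=(K, D)), rng.normal(size=(K, N))
perm = rng.permutation(N)
assert np.allclose(embed_theta(X, A, B), embed_theta(X[:, perm], A, B))
```

The permutation invariance is exact, since permuting the columns of \(X\) only permutes the entries of each projected row before sorting.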
To conclude this subsection, we note that sort-based permutation invariants such as those we obtain when choosing \(\Psi=\Psi_{sort}\) are common in the invariant learning literature (Zhang et al., 2018, 2019). In contrast, polynomial-based choices such as \(\Psi=\Psi_{pow}\) are not so popular. However, this choice does provide us with the following corollary.
**Corollary 3.2**.: _Under the assumptions of the previous theorem, there exists a smooth parametric function \(q_{\theta}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2D_{\mathcal{X}}+1}\) such that the separating permutation-invariant mapping \(\mathbf{Embed}_{\theta}:\mathbb{R}^{3\times N}\rightarrow\mathbb{R}^{2D_{ \mathcal{X}}+1}\) defined using \(\Psi=\Psi_{pow}\) is given by_
\[\mathbf{Embed}_{\theta}(x_{1},\dots,x_{N})=\sum_{i=1}^{N}q_{\theta}(x_{i})\in \mathbb{R}^{2D_{\mathcal{X}}+1}.\]
Accordingly, in all approximation results we obtain based on the embedding \(\Psi_{pow}\), we can approximate \(\mathbf{Embed}_{\theta}\) with a function of the form \(\sum_{i=1}^{n}\mathcal{N}(x_{i})\), where \(\mathcal{N}\) is a neural network whose input and output dimensions are the same as those of \(\mathbf{Embed}_{\theta}\).
### Dimensionality of separation
From the discussion above we see that we can choose \(\mathbf{Embed}_{\alpha}\) to be a separating invariant mapping on \(\mathbb{R}^{3\times(n-2)}\), with an embedding dimension of \(K_{1}=6n-11\). It would then seem natural to choose the embedding dimension of \(\mathbf{Embed}_{\beta}\) so that separation on \(\mathbb{R}^{K_{1}\times(n^{2}-n)}\) is guaranteed. This would require a rather high embedding dimension of \(\sim n^{3}\). We note that any permutation invariant and separating mapping on \(\mathbb{R}^{D\times N}\) with reasonable regularity will have embedding dimension of at least \(N\cdot D\) (see (Anonymous, 2023)), and as a result we will always obtain an embedding dimension of \(n^{3}\) when requiring \(\mathbf{Embed}_{\alpha},\mathbf{Embed}_{\beta}\) to be separating on all of their (ambient) domain.
Significant savings can be obtained by the following observation: while the ambient dimension of the domain of \(\mathbf{Embed}_{\beta}\) is large, we only need injectivity for multisets in this domain which were obtained from some point cloud in \(\mathbb{R}^{3\times n}\). Accordingly, (once \(\alpha\) is fixed) \(\mathbf{Embed}_{\beta}\) needs only to be injective on a subset \(\mathcal{X}\) of the domain whose intrinsic dimension is at most \(3n\). Using Theorem 3.1 (see details in the proof of Theorem 3.4 stated below) we can take the embedding dimension of \(\mathbf{Embed}_{\beta}\) to be \(K_{2}=6n+1\). This idea is visualized in Figure 2(a).
We note that the advantage of the intrinsic separation technique presented here is even more pronounced when considering the implementation of \(F^{1\text{-Geo}}\) with \(T\) large. If we
require the mappings \(\textbf{Embed}^{(t)}\) to be separating and invariant on their (ambient) domain, the embedding dimension \(K_{t}\) of the \(t\)-th mapping is at least \(n+1\) times larger than the previous embedding dimension \(K_{t-1}\), so that the final embedding dimension is roughly \(\sim n^{T+1}\). In contrast, since the intrinsic dimension at each step is \(3n\), we get a constant embedding dimension of \(\sim 6n\) for all \(t\), by using a variation of Theorem 3.1 for vector-multiset pairs. See Appendix E for a full explanation, and Figure 2(b) for an illustration.
### Separation by feed-forward Neural Network Architectures
To summarize our discussion, the \(\mathcal{SO}[3,n]\) geometric graph isomorphism discussed in Theorem 2.2 can be realized as a separating invariant architecture by replacing \(\textbf{Embed}^{(1)}\) and \(\textbf{Embed}^{(2)}\) with the parametric functions \(\textbf{Embed}_{\alpha}\) and \(\textbf{Embed}_{\beta}\) respectively as in (8), with embedding dimension \(K_{1}=6n-11\) and \(K_{2}=6n+1\) as discussed above, and \(\Psi=\Psi_{sort}\) or \(\Psi=\Psi_{pow}\). This leads to the architecture described in Algorithm 1.
```
Algorithm 1 (separating SO[3,n]-invariant architecture)
Input: X = (x_1, ..., x_n) ∈ R^{3×n}
  X_[i,j]     ← [x_i^c, x_j^c, x_i^c × x_j^c]                for all i ≠ j
  h_[i,j](X)  ← Embed_α {{ X_[i,j]^T x_k^c : k ≠ i, j }}     for all i ≠ j
  h_global(X) ← Embed_β {{ h_[i,j](X) : i ≠ j }}
Output: h_global(X)
```
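A plain NumPy sketch of the resulting forward pass might look as follows. The exact multiset fed to \(\textbf{Embed}_{\alpha}\) follows the reconstruction of Algorithm 1 above and should be checked against the original formulation; the sort-based embeddings stand in for any separating choice of \(\textbf{Embed}_{\alpha},\textbf{Embed}_{\beta}\), and the double loop over pairs makes the sketch cubic in \(n\).

```python
import numpy as np

def sort_embed(V, A, B):
    # Permutation-invariant embedding of the columns of V, as in Eq. (8) with Psi = sort.
    return np.sum(B * np.sort(A @ V, axis=1), axis=1)

def two_geo(X, params_alpha, params_beta):
    """Sketch of the separating invariant of Algorithm 1 for X in R^{3 x n}."""
    n = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)                        # centralize
    pair_feats = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            frame = np.stack([Xc[:, i], Xc[:, j],
                              np.cross(Xc[:, i], Xc[:, j])], axis=1)   # X_[i,j]
            rest = Xc[:, [k for k in range(n) if k not in (i, j)]]     # (3, n-2)
            pair_feats.append(sort_embed(frame.T @ rest, *params_alpha))
    H = np.stack(pair_feats, axis=1)                              # (K1, n^2 - n)
    return sort_embed(H, *params_beta)                            # h_global(X)

# Toy check of SO(3) and permutation invariance.
rng = np.random.default_rng(1)
n = 5
K1, K2 = 6 * n - 11, 6 * n + 1
params_alpha = (rng.normal(size=(K1, 3)), rng.normal(size=(K1, n - 2)))
params_beta = (rng.normal(size=(K2, K1)), rng.normal(size=(K2, n * n - n)))
X = rng.normal(size=(3, n))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Q *= np.sign(np.linalg.det(Q))                                    # force Q into SO(3)
perm = rng.permutation(n)
assert np.allclose(two_geo(X, params_alpha, params_beta),
                   two_geo((Q @ X)[:, perm], params_alpha, params_beta))
```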
### Universality for Rotation-Equivariant models
Separating invariants are useful not only for proving universality of _invariant_ models, but also for _equivariant_ models. In our context, we use a result from (Villar et al., 2021) which showed that permutation-invariant, rotation-equivariant functions can be written as combinations of general invariant functions and simple equivariant functions. Since this result requires \(\mathcal{O}(3)\) invariance, we use a modification of Algorithm 2 to \(\mathcal{O}(3)\) invariance, described formally in Algorithm 3 in the appendix. We then obtain the following equivariant universality result:
**Theorem 4.2** (Equivariant Universality).: _Let \(f:\mathbb{R}^{3\times n}\rightarrow\mathbb{R}^{3}\) be continuous, \(\mathcal{O}(3)\)-equivariant and translation and permutation invariant. Then for any compact \(M\subset\mathbb{R}^{3\times n}\) and any \(\varepsilon>0\), \(f\) can be approximated to \(\epsilon\)-accuracy uniformly on \(M\) by functions of the form_
\[\tilde{f}(X)=\sum_{k=1}^{n}\mathcal{N}(h_{k},h_{global})x_{k}^{c},\]
_where \(h_{k}(X),h_{global}(X)\) are the output of Algorithm 3 and \(\mathcal{N}\) is a fully connected neural network._
## 5 Experiments
### Separation experiment
To evaluate the separation power of different architectures, we constructed a dataset consisting of pairs of point clouds that are particularly difficult to separate. This dataset will be made available for public use.
Each pair of point clouds \(X_{1},X_{2}\) is used as a prototype, from which we generate data samples for a binary classification task. Samples are generated by randomly choosing one of \(X_{1},X_{2}\), applying a random rotation and permutation to it and adding noise. The task is to determine whether each sample originates from \(X_{1}\) or \(X_{2}\).
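A minimal sketch of this sampling procedure, assuming NumPy, a QR-based random rotation, and the \(0.1\) noise level mentioned later in the text, is given below; the function name and argument layout are illustrative assumptions.

```python
import numpy as np

def make_sample(prototypes, noise=0.1, rng=None):
    """Draw one (point cloud, label) sample from a pair of prototype clouds.

    prototypes : tuple (X1, X2) of (3, n) arrays.
    A random rotation and a random column permutation are applied to the chosen
    prototype, and i.i.d. Gaussian noise of scale `noise` is added.
    """
    rng = rng or np.random.default_rng()
    label = int(rng.integers(2))
    X = prototypes[label]
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    Q *= np.sign(np.linalg.det(Q))              # make it a proper rotation
    perm = rng.permutation(X.shape[1])
    return Q @ X[:, perm] + noise * rng.normal(size=X.shape), label
```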
We used the following challenging pairs: (i) **Hard1-Hard3**: Three pairs of 3D point clouds from (Pozdnyakov et al., 2020). In each pair, both clouds have the same degree histogram, but are members of the set \(\mathbb{R}^{3\times n}_{distinct}\) -- which can be separated by 1-Geo according to Theorem 2.1. The distance matrices for one such pair are visualized in \(A,B\) of Figure 1. (ii) **Harder**: A pair of 3D point clouds from (Pozdnyakov and Ceriotti, 2022) that are not in \(\mathbb{R}^{3\times n}_{distinct}\), and provably cannot be separated by 1-Geo. These are \(C,D\) in Figure 1. (iii) **Cholesky dim=d**: Pairs \(X_{1},X_{2}\) of \(d\) points in \(\mathbb{R}^{d}\), with \(d=6,8,12\). All points in \(X_{1}\), \(X_{2}\) have the
| Point Clouds | GramNet | GeoEGNN | EGNN | LinearEGNN | MACE | TFN | DimeNet | GVPGNN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hard1 [2] | 1.0 | 0.998 | 0.5 | 1.0 | 1.0 | 0.5 | 1.0 | 1.0 |
| Hard2 [2] | 1.0 | 0.97 | 0.5 | 1.0 | 1.0 | 0.5 | 1.0 | 1.0 |
| Hard3 [2] | 1.0 | 0.85 | 0.5 | 1.0 | 1.0 | 0.55 | 1.0 | 1.0 |
| Harder [1] | 1.0 | 0.899 | 0.5 | 0.5 | 1.0 | 0.5 | 1.0 | 1.0 |
| Cholesky dim=6 | 1.0 | Irrelevant | 0.5 | 0.5 | 1.0 | Irrelevant | Irrelevant | Irrelevant |
| Cholesky dim=8 | 1.0 | Irrelevant | 0.5 | 0.5 | 1.0 | Irrelevant | Irrelevant | Irrelevant |
| Cholesky dim=12 | N/A | Irrelevant | 0.5 | 0.5 | 0.5 | Irrelevant | Irrelevant | Irrelevant |

Table 1: Results of our models on challenging point clouds. [1] (Pozdnyakov and Ceriotti, 2022); [2] (Pozdnyakov et al., 2020), Fig. S4.
Figure 2: The standard method for constructing architectures that are equivalent to isomorphism tests uses injective multiset mappings on the ambient domain. The embedding dimension of such mapping increases exponentially with depth. In contrast, Theorem 3.1 allows for injective multiset functions whose dimensionality is twice the intrinsic dimension of the features which is always \(3n\). The figure shows the implementation of these two approaches for the computation of \(F^{\text{2-Geo}}\) and \(F^{\text{1-Geo}}\).
same degree histogram. The point clouds \(E,F\) for \(d=6\) appear in Figure 1. Further details appear in Appendix A.
The results appear in Table 1. First, as our theory predicts, \(\mathbf{GramNet}\) achieves perfect separation on all examples, while \(\mathbf{GeoEGNN}\) achieves good but not perfect separation at the \(0.1\) noise level we use.
Next, note that EGNN (Victor Garcia Satorras, 2021) fails for all examples. Surprisingly, replacing the neural networks in EGNN with simple linear functions (LinearEGNN) does yield successful separation of examples in \(\mathbb{R}^{3\times n}_{distinct}\), as predicted in Theorem 2.1.
Finally, note that Tensor Field Networks (Thomas et al., 2018) does not separate our examples well, while MACE (Batatia et al.,), DimeNet (Gasteiger et al., 2020) and GVPGNN (Jing et al.) do. Of these methods, only MACE is applicable to problems with \(d>3\), and it has failed to separate for the \(d=12\) case. **GramNet** was unable to run for \(d=12\) due to memory constraints.
### Invariant regression on QM9
We evaluated our architectures on the QM9 Dataset for molecular property prediction (Ramakrishnan et al., 2014). To implement \(\mathbf{GeoEGNN}\) we used the original implementation of **EGNN** (Victor Garcia Satorras, 2021), augmented by the addition of our \(h_{[i,j]}\) of Equation (5) as edge features. As shown in Table 2, this minor modification of EGNN typically leads to improved results. In contrast, despite its excellent separation properties, \(\mathbf{GramNet}\) is not competitive on this task.
## 6 Conclusion
We presented fully separating architectures whose embedding dimension depends linearly on the point cloud's dimension, in stark contrast to contemporary approaches with an exponential dependence. Our implementation of these architectures achieves good separation results in practice and yields improved results in several tasks on the QM9 benchmark. We believe these results will open the door to further improvements, both in the theoretical understanding of what is necessary for separation and in the development of separating architectures with good practical performance.
## 7 Related Work
WL equivalence. The relationship between the \(k\)-WL test and GNNs is very well-studied. Proofs of equivalence of GNNs and \(k\)-WL often assume a countable domain (Xu et al., 2018) or require separation only for a single graph (Morris et al., 2018). To the best of our knowledge, our separation result is the first one in which, for a fixed parameter vector (and in fact for almost all such vectors), separation is guaranteed for _all_ graphs with features in a _continuous_ domain. This type of separation could be obtained using the power-sum methodology in (Maron et al., 2019), but the complexity of this construction is exponentially worse than ours (see Subsections 3.1 and 3.2).
Complete Invariants and universality. As mentioned earlier, several works describe \(\mathcal{SO}[3,n]\) and \(\mathcal{O}[3,n]\) equivariant point-cloud architectures that are universal. However, these rely on high-dimensional representations of \(SO(d)\) (Dym and Maron, 2020; Finkelshtein et al., 2022; Gasteiger et al., 2021) or \(S_{n}\) (Lim et al., 2022).
In the planar case \(d=2\), universality using low-dimensional features was achieved in (Bokman et al., 2022). For general \(d\), a complete test similar to our 2-Geo was proposed in (Kurlin, 2022). However, it uses Gram-Schmidt orthogonalization, which leads to discontinuities at point clouds with linearly-dependent points. Moreover, the complete invariant features defined there are not vectors, but rather sets of sets. As a result, measuring invariant distances for \(d=3\) requires \(O(n^{7.5}+n^{3.5}log^{3}(n))\) arithmetic operations, whereas using GramNet invariant features only requires \(O(n^{4}log(n))\) operations. Finally, we note that more efficient tests for equivalence of geometric graphs were suggested in (Brass and Knauer, 2000), but there does not seem to be a straightforward way to modify these constructions to efficiently compute a complete, continuous invariant feature.
Weaker notions of universality. We showed that 1-Geo is complete on the subset \(\mathbb{R}^{3\times n}_{distinct}\). Similar results for a simpler algorithm, and with additional restrictions, were obtained in (Widdowson and Kurlin, 2022). Efficient separation/universality can also be obtained for point clouds with distinct principal axes (Puny et al., 2021; Kurlin, 2022), or when only considering permutation (Qi et al., 2017) or rigid (Wang et al., 2022) symmetries, rather than considering both symmetries simultaneously.
| Property | \(\alpha\) | \(\varepsilon_{HOMO}\) | \(H\) | \(\varepsilon_{LUMO}\) | \(\Delta\varepsilon\) | \(\mu\) | \(C_{\nu}\) | \(G\) | \(R^{2}\) | \(U\) | \(U_{0}\) | \(ZPVE\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [units] | \(bohr^{3}\) | meV | meV | meV | meV | D | cal/mol K | meV | \(bohr^{3}\) | meV | meV | meV |
| **EGNN** | 0.073 | 29 | 12 | **25** | 48 | **0.029** | **0.031** | 12 | 0.106 | 12 | 11 | **1.55** |
| **GeoEGNN** | **0.068** | **27.9** | **11.6** | 38.3 | **45.8** | 0.032 | **0.031** | **10.75** | **0.1004** | **11.5** | **10.5** | 1.61 |

Table 2: Results on QM9 Dataset
Acknowledgements. N.D. acknowledges the support of the Horev Fellowship.
|
2309.15482 | Comparisons among the Performances of Randomized-framed Benchmarking
Protocols under T1, T2 and Coherent Error Models | While fundamental scientific researchers are eagerly anticipating the
breakthroughs of quantum computing both in theory and technology, the current
quantum computer, i.e. noisy intermediate-scale quantum (NISQ) computer
encounters a bottleneck in how to deal with the noisy situation of the quantum
machine. It is still urgently required to construct more efficient and reliable
benchmarking protocols through which one can assess the noise level of a
quantum circuit that is designed for a quantum computing task. The existing
methods that are mainly constructed based on a sequence of random circuits,
such as randomized benchmarking (RB), have been commonly adopted as the
conventional approach owing to its reasonable resource consumption and
relatively acceptable reliability, compared with the average gate fidelity. To
more deeply understand the performances of the above different
randomized-framed benchmarking protocols, we design special random circuit
sequences to test the performances of the three selected standard
randomized-frame protocols under T1, T2, and coherent errors, which are
regarded to be more practical for a superconductor quantum computer. The
simulations indicate that MRB, DRB, and CRB sequentially overestimate the
average error rate in the presence of T1 and T2 noise, compared with the
conventional circuit's average error. Moreover, these methods exhibit almost
the same level of sensitivity to the coherent error. Furthermore, the DRB loses
its reliability when the strengths of T1 grow. More practically, the simulated
conclusion is verified by running the designed tasks for three protocols on the
Quafu quantum computation cloud platform. We find that MRB produces a more
precise assessment of a quantum circuit conditioned on limited resources.
However, the DRB provides a more stable estimation at a specific precision
while a more resource-consuming. | Xudan Chai, Yanwu Gu, Weifeng Zhuang, Peng Qian, Xiao Xiao, Dong E Liu | 2023-09-27T08:25:00Z | http://arxiv.org/abs/2309.15482v1 | Comparisons among the Performances of Randomized-framed Benchmarking Protocols under T1, T2 and Coherent Error Models
###### Abstract
While fundamental scientific researchers are eagerly anticipating the breakthroughs of quantum computing both in theory and technology, the current quantum computer, i.e., the noisy intermediate-scale quantum (NISQ) computer, encounters a bottleneck in how to deal with the noisy situation of the quantum machine. Since fully characterizing the quantum device has become technically impossible, and although error mitigation technology has been adopted, it is still urgently required to construct more efficient and reliable benchmarking protocols through which one can assess the noise level of a quantum circuit that is designed for a quantum computing task. The existing methods that are mainly constructed based on a sequence of random circuits, such as randomized benchmarking (RB), have been commonly adopted as the conventional approach owing to their reasonable resource consumption and relatively acceptable reliability, compared with the average gate fidelity. To more deeply understand the performances of the above different randomized-framed benchmarking protocols, we design special random circuit sequences to test the performances of the three selected standard randomized-framed protocols under T1, T2, and coherent errors, which are regarded as more practical for a superconductor quantum computer. The simulations indicate that MRB, DRB, and CRB sequentially overestimate the average error rate in the presence of T1 and T2 noise, compared with the conventional circuit's average error. Moreover, these methods exhibit almost the same level of sensitivity to coherent errors. Furthermore, DRB loses its reliability when the strength of T1 noise grows. More practically, the simulated conclusion is verified by running the designed tasks for the three protocols on the Quafu quantum computation cloud platform. We find that MRB produces a more precise assessment of a quantum circuit conditioned on limited resources. However, DRB provides a more stable estimation at a specific precision, while being more resource-consuming.
## I Introduction
The emergence of increasingly powerful quantum technologies has brought millions of researchers in both experimental [1; 2; 3]and theoretical[4] quantum physics into a new historical time node of the so-called third evolution of quantum through the developmental history of science and technology. This technological advancement has definitely triggered various unprecedented possibilities for exploring and exploiting the properties of complex quantum systems. As we know, either the exploration or the exploitation calls for more precise controllability[5] and manipulability[6]. Simultaneously, this highly demanded technological development stimulates both the applications[7] and the fundamental research [8] within the quantum field, especially for the quantum computing we are discussing.
We know that researchers in this attractive field are now challenged by the complex, even unknowable noise within a quantum computer [9], such as the superconductor quantum computer [10], a very popular platform for realizing real quantum computations. Due to its intrinsically complex operation mechanism, we are only capable of extracting very limited information about the quantum circuit designed to perform a quantum computing task. Even worse, it is practically impossible to fully and succinctly characterize a general large-scale quantum computer, i.e., the so-called noisy intermediate-scale quantum (NISQ) computer [11], using classical data. Although error mitigation [12; 13], as another option for improving the practical usability of a quantum computer, has gained a lot of attention in this field, there is still an urgent requirement, for both experimentalists and theorists, to create a more efficient and precise protocol [14] that can efficiently predict and precisely assess a quantum computer's performance with reasonable and practical reliability, especially for a large-scale quantum computer.
As far as we know, randomized benchmarking (RB) [15; 16], together with various modifications of this method, such as interleaved RB [17], direct RB [18], cycle RB [19], character RB [20], mirror RB [21], and even cross-entropy benchmarking (XEB) [22] (which can also be framed within the structure of general RB), has recently been preferred as the most practically reliable approach. Although numerical simulations and real experiments have verified the reliability and efficiency of these protocols, a much clearer interpretation of their performances under more practical noises, such as T1, T2 [23] and coherent errors [24] of 1- and 2-qubit gates, is still missing [25].
Although we have very limited information about current superconductor quantum computers, we can still confirm that T1, T2, and coherent errors are the most significant noise sources that may unpredictably cause a catastrophic collapse [26] of a quantum circuit system.
Here, we select DRB, CRB, and MRB as three standard methods among the randomized-framed protocols [27] and test their performances under the two noise sources of greatest practical concern, namely T1/T2 decoherence and coherent errors of 1- and 2-qubit gates, within a whole quantum circuit system. Strictly speaking, these three methods are not directly comparable, because they were constructed for distinct practical purposes. To contrast their performances, we therefore carefully design a specific task that is operable for all of them and is able to reveal the consistency or the differences among them; concretely, we run a sequence of carefully designed random circuits [28] to test the three methods. The simulations show that in the presence of T1 and T2 noise, MRB, DRB, and CRB sequentially overestimate the average error rate of the circuit, compared with the circuit's average error rate calculated using standard process tomography. Moreover, the sensitivity [29] of these methods to coherent errors is nearly the same as that of the average gate fidelity calculated using standard process tomography. Furthermore, DRB fails especially under strong T1 and T2 noise.
Finally, in order to verify our simulated conclusion, we run the specifically designed task with these three methods on the Quafu quantum computation cloud platform [30; 31; 32], which was released very recently. Besides the expected result, we also find that MRB produces a more precise assessment of a quantum circuit when only limited resources are available. Conversely, when aiming for a specific estimation precision, DRB provides a more stable estimation, though it consumes more resources. In summary, the simulation and experiment demonstrated here contribute to research in quantum benchmarking, and we hope our work can be helpful for deeply understanding the characteristics of these protocols.
## II Theory and method
Quantum computing has gained huge attention and even large economic investments in the past decade. There is a large and growing number of experimental efforts and theoretical explorations that aim to demonstrate the reliability of scalable quantum computing. However, current quantum computers suffer from unbearable errors, which are caused by complex situations such as imprecise control, the natural decoherence of the qubits [33], and even cross-talk [34]. Obviously, a criterion [25] is urgently needed that guarantees noisy scalable quantum computing is theoretically possible whenever the criterion is satisfied. One such criterion is that the noise within physical gates is sufficiently low. Due to the complexity of the noise, any demonstration of scalable quantum computing assumes a particular type of noise that corresponds to the real situation as closely as possible. This type of noise is often called an error model, so we must specify which error model we assume when we discuss this possibility.
The threshold theorem [25] currently provides the best-known rigorous guarantee that fault-tolerant quantum computing is possible if the threshold operating conditions are satisfied. The operating conditions consist of two requirements. One is that the noise must be of a promised form, i.e., local noise, and the other is that measurable errors must occur at a sufficiently low rate within a quantum circuit. After roughly outlining the relationship between the error rate and the threshold theorem, we explain these two requirements in detail in the following two sub-parts.
### The Error rate of a quantum gate
The error rate of a quantum gate is a measure that indicates how close the operation of a noisy quantum gate is to the ideal quantum operation. Usually, this description is clear enough to be understood, but thinking more deeply reveals some vague points within the concept. There are many metrics that quantify the difference between operations; unfortunately, some have no operational meaning, i.e., they are not intuitively understandable, and some even have the wrong operational meaning. Here we choose distinguishability as a good operational foundation in our context. A sufficiently low error level is required for fault-tolerant quantum computing, so we are specifically interested in very small errors. However, according to quantum mechanics, it is very hard or even impossible to detect a single mistake with confidence. If all gates are perfect, the circuit will output the correct result; if we get a wrong result, the real gate is thereby distinguished from the ideal one. Accordingly, we can define the error rate as the distinguishability of gates from the ideal. After all, this is just one way to define the error rate of an operation.
The statistics for an ideal process are governed by a probability distribution \(p_{id}\); however, an error-prone implementation produces a different distribution \(p_{ac}\) that governs the actual statistics. The total variation distance [35]

\[d_{TV}(p_{id},p_{ac})=\frac{1}{2}\sum_{x\in X}|p_{id}(x)-p_{ac}(x)| \tag{1}\]
is a natural measure of the distance between two probability distributions over a set of outcomes \(X\). Unfortunately, this measure is only theoretically meaningful and is practically unmeasurable, especially for a large-scale qubit system. It is well known that process infidelity drastically underestimates the distinguishability between unitary operations, whereas the diamond norm distance [36]
\[d_{\diamond}(\mathbf{G}_{ac},\mathbf{G}_{id}):=\frac{1}{2}\left\|\mathbf{G}_{ac}-\mathbf{G}_{id}\right\|_{\diamond} \tag{2}\]
between \(\mathbf{G}_{ac}\) and \(\mathbf{G}_{id}\) just provides an upper bound on distinguishability.
One may hope that an estimate of the infidelity of the logical noise under a probabilistic Pauli channel generalizes directly to general logical noise. Unfortunately, even quantifying the error becomes more complicated for more general noise. The 'error rate' of a noisy process \(\varepsilon\) acting on a system is often experimentally quantified via the average gate infidelity to the identity, which can be efficiently estimated via randomized benchmarking [37]. However, theoreticians often report rigorous bounds on the performance of a quantum circuit in terms of the diamond distance to the identity. The infidelity and diamond distance are related via known bounds [38; 39]. Pauli noise saturates the lower bound, and the effect of coherent noise is often assumed to be negligible, so that experimental infidelities are often compared to diamond-distance targets to determine whether fault tolerance is possible [40]. However, even if coherent errors make a negligible contribution to the infidelity, they can dominate the diamond norm [41]. Because of this uncertainty about how to quantify errors effectively, it is unclear what figure of merit recovery operations should optimize and how to quantify the logical error rate. [42] has proved that encoding a system in a stabilizer code and measuring error syndromes decoheres errors toward probabilistic Pauli errors. Moreover, the error rate in a logical circuit is well quantified by the average gate fidelity at the logical level.
### Metrics for the performance of a quantum processor
#### ii.2.1 Average gate fidelity
A central task in quantum computation is to characterize the quality of quantum channels and quantum gates. The average gate fidelity of a quantum channel described by a trace-preserving quantum operation \(\varepsilon\) is defined by [43]:
\[\overline{F}(\varepsilon)\equiv\int d\psi\left\langle\psi|\varepsilon(\psi) |\psi\right\rangle \tag{3}\]
where the integral is over the uniform (Haar measure[44]) \(d\psi\) on state space, normalized so \(\int d\psi=1\). \(\overline{F}(\varepsilon)\) can be further extended to a measure of how well \(\varepsilon\) approximates a quantum gate, \(U\),
\[\overline{F}(\varepsilon,U)\equiv\int d\psi\left\langle\psi|U^{\dagger} \varepsilon(\psi)U|\psi\right\rangle \tag{4}\]
where \(\overline{F}(\varepsilon,U)=1\) if and only if \(\varepsilon\) implements \(U\) perfectly, while a lower value indicates that \(\varepsilon\) is a noisy implementation of \(U\). Moreover, \(\overline{F}(\varepsilon,U)=\overline{F}(U^{\dagger}\circ\varepsilon)\), where \(U^{\dagger}(\rho)\equiv U^{\dagger}\rho U\) and \(\circ\) denotes composition of quantum gates.
Naturally, the average gate fidelity is related to the gate error rate that we have defined. However, these two quantities are not so directly connected, physically or operationally. This means that the average gate infidelity \(1-\overline{F}(\varepsilon,U)\) cannot in general be interpreted as an average error rate: the measurement basis is not fixed in the integral, so the infidelity is not an average error for a fixed measurement, yet neither is it averaged independently of the state. Instead, lower and upper bounds on the error rate can be derived from the fidelity; therefore, both the mismatch between these two quantities and the precise conditions under which they can be directly connected are of research interest.
As the authors in [45] have clarified regarding the possible relationship between these two quantities, information beyond fidelity is required to assess the relative importance of the various noise processes that influence quantum devices. The Pauli-distance defined in [45] and the unitarity defined in [46; 47] feature as two possible approaches to characterizing the influence of different noise sources. For example, if the Pauli-distance between an error channel and a Pauli channel [48] approaches zero, i.e., the error channel is close enough to a Pauli channel, then \(1-\overline{F}(\varepsilon,U)\) is directly connected with the average gate error rate. Although no direct connection between them exists in general, the average gate fidelity is still clearly of some worth: if the fidelity of a quantum gate is precisely one, it is certain that the gate will always perform exactly as expected.
#### ii.2.2 Entanglement fidelity
Entanglement fidelity [43], as a quite simple and experimentally useful quantity, is directly related to the average gate fidelity. To define this concept, we can assume \(\varepsilon\) acts on one half of a maximally entangled state [49]. Intuitively, suppose \(\varepsilon\) acts on a qubit \(Q\) while another qubit \(R\) is left untouched, with \(RQ\) initially in the maximally entangled state \(\psi\). The entanglement fidelity can then be defined as the overlap between \(\psi\) before and after the application of \(\varepsilon\), \(F_{e}(\varepsilon)\equiv\langle\psi|(\mathbf{I}\otimes\varepsilon)(|\psi\rangle\!\langle\psi|)|\psi\rangle\), where \(\mathbf{I}\) denotes the identity operation on system \(R\). Thus the entanglement fidelity measures how well entanglement with other systems is preserved by the action of \(\varepsilon\). The authors in [43] put forward an elegant formula that connects \(F_{e}(\varepsilon)\) to \(\overline{F}(\varepsilon,U)\):
\[\overline{F}(\varepsilon,U)=\frac{dF_{e}(\varepsilon)+1}{d+1} \tag{5}\]
In summary, either average gate fidelity or entanglement fidelity is useful for experimentally characterizing quantum gates and channels.
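As a small worked example of these definitions, the following NumPy sketch estimates the average gate fidelity of a coherent single-qubit over-rotation by Monte-Carlo sampling of Haar-random states, and checks the result against the entanglement-fidelity conversion of Eq. (5). The rotation angle and sample count are arbitrary assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.2                                          # over-rotation angle (assumed)
X = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X   # exp(-i theta X / 2)

def haar_state(rng):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

# Monte-Carlo estimate of Eq. (4), taking the ideal gate to be the identity and the
# noisy implementation to be the coherent over-rotation rho -> U rho U^dagger.
vals = [abs(np.vdot(v, U @ v)) ** 2 for v in (haar_state(rng) for _ in range(50000))]
F_avg_mc = np.mean(vals)

# Closed form via the entanglement fidelity F_e = |Tr(U)/d|^2 and Eq. (5).
F_e = abs(np.trace(U) / 2) ** 2
print(F_avg_mc, (2 * F_e + 1) / 3)                   # both are ~(2 cos^2(theta/2) + 1)/3
```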
### The noises within a quantum circuit
There are two main effects that cause the noises within a quantum circuit: one is the qubit interacting with the environment, and the other is the interaction between qubits.
An experimentalist in quantum computing always expects all the prepared qubits to be isolated; unfortunately, this expectation usually departs from the real case. Measurements or any non-unitary operations cause the qubits to interact with the environment. The time for which a qubit remains isolated is its coherence time, which yields two very important parameters in experiments. \(T1\), one of these parameters, characterizes the time over which a qubit relaxes from an excited state to the ground state, which is mostly caused by the interaction between the qubit system and the environment.
\(T2\), the other important parameter, determines the time over which a qubit decays from a superposition state to either the excited state or the ground state.
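For concreteness, a T1/T2 (thermal relaxation) noise model of the kind used in such simulations can be set up in Qiskit Aer roughly as follows; the T1/T2 values, gate durations, and the gate names the errors are attached to are placeholder assumptions, not the calibration of any particular device.

```python
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, thermal_relaxation_error

t1, t2 = 30e3, 20e3        # hypothetical T1/T2 in ns; any values with t2 <= 2*t1 work
t_1q, t_2q = 50, 300       # hypothetical single- and two-qubit gate durations in ns

err_1q = thermal_relaxation_error(t1, t2, t_1q)
err_2q = thermal_relaxation_error(t1, t2, t_2q).tensor(
    thermal_relaxation_error(t1, t2, t_2q))

noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(err_1q, ["x", "sx", "h", "s", "t"])
noise_model.add_all_qubit_quantum_error(err_2q, ["cx"])

backend = AerSimulator(noise_model=noise_model)     # noisy simulator backend
```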
Besides the above noises that result from the qubits interacting with the environment, quantum circuits always suffer from the disastrous crosstalk caused by the unwanted interaction with other qubits. A noise that is isolated is called a unitary error. Meanwhile, if not isolated, it might be the degradation of a quantum state, which is caused by imperfect operations or qubit leakage.
#### ii.3.1 Pauli twirling and randomized compiling
Current superconductor quantum computers suffer from a noisy environment that is too complex to be precisely characterized. Technically, the noise within a quantum circuit can be converted into a specific quantum channel via a procedure called twirling. Usually, we introduce a Pauli gate set to perform this particular twirling, which is then called Pauli twirling. If a kind of noise can be represented as a matrix, and the effect caused by its off-diagonal elements is negligible, we can use Pauli twirling to convert this noise into a Pauli noise channel.
Twirling [50, 51] as a method is often used to approximate a general noise, or even a complicated combination of noises, with an asymmetric depolarization channel. We can use this procedure to study the average effect of arbitrarily general noise models by mapping them into more symmetric ones. Twirling over the Pauli group removes the off-diagonal terms, and when a further special condition is satisfied, the asymmetric depolarization channel reduces to the symmetric depolarization channel. The above approximate reduction of any quantum channel to the asymmetric depolarization channel is usually referred to as the Pauli twirling approximation. As the authors in [51] proved, the standard forms of a completely positive map, the Pauli channel, and the depolarizing channel can be obtained by randomly applying quantum operations before and after the actual completely positive map, where these operations are chosen uniformly at random from a finite set of unitaries. Significantly, a depolarization twirling protocol does not introduce additional noise to the system, i.e., the Jamiolkowski fidelity remains the same; this fidelity can be regarded as a kind of distance measure that represents the noise level of the respective completely positive map and of its standard form.
Coherent errors severely affect the performance of quantum computers in an unpredictable way. Hence, achieving reliable quantum computation necessarily requires mitigating their severe impact. Worse still, the average error rates measured by randomized benchmarking and similar protocols largely lack sensitivity to the full impact of coherent errors. This makes the prediction of the global performance of quantum computers unreliable and makes it difficult to validate the accuracy of future large-scale quantum computations. Fortunately, a protocol called randomized compiling has been proposed to overcome these limitations by converting coherent noise into stochastic noise. This protocol dramatically reduces unpredictable errors and allows us to accurately predict the performance of a quantum computer by measuring the error rates via cycle benchmarking.
Figure 1: Illustration of circuits required in Direct RB, Mirror RB, and Cycle RB.

Imagine a circuit that consists of single-qubit gate layers \(C_{k}\) and two-qubit gate layers \(G_{k}\), where \(k\) is the index of the layers; we then insert and compile random single-qubit twirling gates [52]: \(C_{k}\to T_{k}C_{k}T_{k-1}^{c}\), where \(T_{k}\) is randomly sampled from a set of tensor products of single-qubit Paulis, and \(T_{k-1}^{c}\) is chosen to undo the previous twirling gate, with \(T_{k}^{c}=G_{k}T_{k}^{\dagger}G_{k}^{\dagger}\). By this procedure [52], the original single-qubit gate layer is replaced, in a logically equivalent way, by a new layer without increasing the circuit depth.
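The following toy two-qubit example sketches this dressing rule in plain NumPy and checks that the dressed circuit is logically equivalent to the original one. The layer ordering, the absorption of the final correction into the last easy layer, and all gate choices are assumptions made purely for illustration.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
PAULIS = [I2, X, Y, Z]
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def random_1q_unitary(rng):
    q, r = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    return q * (np.diag(r) / np.abs(np.diag(r)))    # Haar-ish random 2x2 unitary

def easy_layer(rng):        # single-qubit gate layer C_k on two qubits
    return np.kron(random_1q_unitary(rng), random_1q_unitary(rng))

def twirl_layer(rng):       # T_k: tensor product of uniformly random Paulis
    return np.kron(PAULIS[rng.integers(4)], PAULIS[rng.integers(4)])

rng = np.random.default_rng(0)
C1, C2, C3, G = easy_layer(rng), easy_layer(rng), easy_layer(rng), CNOT
U = C3 @ G @ C2 @ G @ C1                            # original circuit C1 -> G -> C2 -> G -> C3

tw1, tw2 = twirl_layer(rng), twirl_layer(rng)
C1d = tw1 @ C1                                      # C_1 -> T_1 C_1
C2d = tw2 @ C2 @ (G @ tw1.conj().T @ G.conj().T)    # C_2 -> T_2 C_2 T_1^c
C3d = C3 @ (G @ tw2.conj().T @ G.conj().T)          # final layer only absorbs T_2^c
assert np.allclose(U, C3d @ G @ C2d @ G @ C1d)      # dressed circuit is logically equivalent
```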
There are several major advantages of tailoring coherent errors into stochastic Pauli noise: (1) the off-diagonal terms in the error process caused by coherent errors can be completely suppressed (assuming the Pauli twirling is perfect); (2) stochastic Pauli errors only grow linearly with circuit depth (in the small-error limit) because they occur with a finite probability in each gate cycle; (3) stochastic Pauli errors have a dramatically lower worst-case error rate than a coherent error with the same average rate; and (4) the known fault-tolerant thresholds for stochastic noise are orders of magnitude higher than the threshold for generic local errors.
## III Method
### Protocols of Randomized-framed benchmarking
As a practically simple and efficient scheme to estimate the noise level, randomized benchmarking (RB) conceptually proceeds as follows: (1) running random sequences of Clifford gates that are supposed to return the processor to its initial state (or to a known randomized state); (2) measuring the survival probability at the end of the circuit; (3) plotting the observed survival probabilities versus sequence length and fitting them to an exponential decay curve. The fitted decay rate of the survival probability, up to a dimensionality constant, yields the "RB number" \(r\), which is commonly used as a metric for estimating the processor's performance. Theoretically, RB can assess a processor at any scale; in practice, however, it hits a bottleneck because the Clifford group grows quickly with the number of qubits. Moreover, the procedure of compiling Clifford gates into native gates varies with the conditions of the hardware. These constitute the operational burden; another issue is the reliability of the RB number. It has been proven that the RB number precisely corresponds to the average gate infidelity for Clifford-group RB; however, this is not so clear for RB over a general group structure.
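As an illustration of step (3), the following minimal Python sketch fits the standard decay model \(A p^{m} + B\) to synthetic survival probabilities and converts the decay parameter into an RB error rate; the data values, the single-qubit dimension, and the fit settings are assumptions of this toy example, not data from the experiments reported here.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    # Standard RB model: average survival probability A * p**m + B at sequence length m.
    return A * p ** m + B

# Synthetic survival probabilities standing in for averages over many random sequences.
rng = np.random.default_rng(0)
lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
survival = rb_decay(lengths, 0.5, 0.995, 0.5) + 0.005 * rng.normal(size=lengths.size)

(A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.99, 0.5])
d = 2                                     # Hilbert-space dimension for one qubit
r = (d - 1) / d * (1 - p)                 # the fitted "RB number"
print(f"decay parameter p = {p:.4f}, RB error rate r = {r:.5f}")
```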
The drawbacks of RB in terms of operability and reliability motivated a method called direct RB (DRB). The authors of DRB claim that an RB-number-like \(r\) can be obtained by running many sequences of user-defined random circuits that consist of native gates directly instead of compiled ones. Moreover, the stabilizer state preparation that is used to encode the qubits brings another benefit: coherent errors can be converted into Pauli errors. Using this kind of technique, another method called cycle benchmarking (CRB) has been proposed, and its authors claim it to be reliable and efficient even for a non-Clifford gate set. DRB and CRB thus appear to improve efficiency and reliability; unfortunately, the stabilizer state preparation limits their applicability to larger-scale qubit systems.
In order to address the scalability issue of the previous protocols, the authors of MRB use 1-qubit Clifford state preparation instead of multi-qubit stabilizer state preparation. Besides this, they obtain the inversion of the user-defined random circuit by locally inverting each layer of the circuit, instead of the global inversion used in standard RB or DRB.
As benchmark protocols, DRB, CRB, and MRB, shown in Fig. 1, are structured around random circuits, and the most practical aspect is that users can define the random circuits according to their specific concerns. For instance, in order to determine the average error rate of a random circuit consisting of one- and two-qubit gates, one can define a density that determines the number of two-qubit gates. One can then assess the performance of circuits with this structure by calculating average error rates through sampling based on the defined probability distribution. These three protocols each give us a parameter that indicates the performance of the specially defined circuits. Such a parameter can be understood as a fidelity-like quantity, but the two are not the same thing at all.
### An instance defined to compare the performances of Randomized-framed Protocols
Since these protocols originally come from randomized benchmarking, the parameters extracted from their estimations are operationally related to the average gate fidelity; however, the two quantities are essentially not the same thing. Obviously, these protocols operate very differently in terms of the structures of the circuits and the formulas used to estimate the so-called error rates. Previous research has shown that these protocols are reliable for statistical noises and even coherent errors. However, their performance under more practical situations in superconductor qubit systems, for example T1 and T2 noise and 1- and 2-qubit coherent gate errors, is not so clear.
If we want to compare the performances of these three protocols, there will be several points that can not be ignored. Firstly, because there are a variety of parameters among these protocols, we have to control variables, such as (1) the structure that is defined by the density of two-qubit gates within the circuits; (2) the number of the repeated circuits that are required to guarantee the precision of estimations. Once the above two points have been satisfied, we can design an experiment to explore the performance of these protocols when the practical noises are considered.
In order to figure out the above concerns, we mainly choose the density of the 2-qubit gate as a significant changeable quantity to build a sequence of random circuits. Meanwhile, in order to make the comparison more practical, we also consider the connectivity constraints in
the generation of the designed random circuits because we know that the final structure of a superconductor quantum circuit is crucially determined by the topology structure of the hardware. Partially based on the definition in [29], we define the density \(\xi\) as \(\xi=2\alpha/wd\), where the \(\alpha\) is the total number of the 2-qubit gates, and \(w\) and \(d\) are the 'width'(the number of qubits) and 'length' (the depth of a circuit) of a \(w\times d\) lattice-shaped quantum circuit. With the above definitions, we demonstrate the procedure of generating random circuits as follows:
1) Assuming circuit with \(\xi=2\alpha/wd=0.75\), the total number of the 2-qubit gates \(\alpha\) equals \(w\times d\times\xi/2\);
2) For the circuit of the depth of \(d\), the number of the 2-qubit gate (here we choose CNOT gate) for each layer is determined randomly with the connectivity constraints and the calculated \(\alpha\) satisfied simultaneously;
3) Once the 2-qubit gates are determined, the remaining idle qubits are filled with random 1-qubit gates drawn from the set \(\{id,x,y,z,h,s,sdg,t,tdag\}\) (a simplified sketch of this generation procedure is given below).

So far we have clarified the procedure for generating a sequence of random circuits and defined the associated key concepts. Let us now explain why the designed instance allows a fair comparison of the performances of these benchmarking protocols. As the description above makes clear, we have identified the main gate-level factors of a quantum circuit, and we apply the control-variable method to those factors so that the comparison among the protocols is rigorous and logically sound.
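A simplified sketch of this generation procedure, assuming Qiskit, a linear coupling map, and the gate set listed above, might look as follows; it is an illustration of the recipe rather than the exact generator used in our experiments, and when a layer's CNOT budget exceeds the available disjoint edges it simply places fewer gates.

```python
import numpy as np
from qiskit import QuantumCircuit

def random_density_circuit(w, d, xi, rng=None):
    """Random w-qubit, depth-d circuit whose two-qubit-gate density is about xi."""
    rng = rng or np.random.default_rng()
    edges = [(i, i + 1) for i in range(w - 1)]        # linear connectivity (assumed)
    one_qubit = ["id", "x", "y", "z", "h", "s", "sdg", "t", "tdg"]
    total_cnots = int(round(xi * w * d / 2))
    per_layer = [total_cnots // d + (1 if k < total_cnots % d else 0) for k in range(d)]

    qc = QuantumCircuit(w)
    for k in range(d):
        used, placed = set(), 0
        for i, j in rng.permutation(edges):           # try allowed edges in random order
            if placed == per_layer[k]:
                break
            if int(i) not in used and int(j) not in used:
                qc.cx(int(i), int(j))
                used.update((int(i), int(j)))
                placed += 1
        for q in range(w):                            # fill the idle qubits
            if q not in used:
                getattr(qc, one_qubit[rng.integers(len(one_qubit))])(q)
        qc.barrier()
    return qc

print(random_density_circuit(w=4, d=8, xi=0.75))
```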
## IV Results of the simulation and experiment
### Simulation results and discussion
Even though we have clarified why MRB, DRB, and CRB can be compared, it is still necessary to test the comparison in a very simple situation, i.e., a depolarizing channel, for which the numerical calculation, the physical interpretation, and the consistency with the operational meaning are much clearer. It might be sufficient for an experimentalist to benchmark a quantum computer with only clear and efficient instructions, but this falls short of revealing the deeper operating mechanism of the procedure: why and how can a protocol convince us, and can it be generalized to more practical or more general situations? Thanks to the elegant results in [53], this family of protocols can be understood at a deeper theoretical level. The interpretation of RB is very clear in the depolarizing case with the noise assumptions satisfied: the average gate fidelity (infidelity) estimated through the original definition exactly equals the average error rate estimated through the whole RB procedure. Therefore, before investigating their performances in other situations, we use this clearly interpreted situation to test the validity of our designed instance. Based on the previous study, the validity of our approach is verified if the results obtained through these three protocols are consistent with those calculated through rigorous process tomography. As expected, we obtain this consistency in our simulation, as shown in Fig. 2.
As shown in Fig. 3, the three protocols behave quite similarly when the noise strength is kept below \(10^{-2}\) (note that we compare only at the level of orders of magnitude). However, DRB underestimates the noise level when the noise within a quantum circuit becomes stronger. Given this observation, we move on to more practical noisy situations, namely T1, T2, and 1- and 2-qubit gate coherent errors. Fig. 3 shows the results in this situation. The upper panels correspond to a single kind of noise, while the lower panels correspond to combinations of those noises. The first panel, with T1 and T2 errors only, shows that MRB, DRB, and CRB estimate similar error rates compared to the process-tomography estimations. The difference is that DRB gives a misleading estimation when the strength of the T1 and T2 errors grows beyond roughly \(10^{-2}\); in other words, it departs far from the real situation.
Figure 2: The average gate error rate estimated using MRB, DRB, CRB, and the process infidelity (which equals the average gate infidelity in the case of a depolarizing channel, based on [53]).
The next two panels of Fig. 3 show the results for the two kinds of coherent errors. If we regard the reaction to variations in the strength of one kind of noise as the sensitivity of a benchmarking protocol to that noise, we can conclude that the three protocols exhibit nearly the same sensitivity as process tomography; it is worth mentioning that we find no differences between DRB and the other protocols here. With these results in hand, we can move on to several possible combinations of these noises. The lower panels convey one important piece of information: the performance of DRB is dominated by the T1 and T2 errors rather than by the coherent errors, regardless of whether the coherent error acts on 1- or 2-qubit gates. Obviously, this special behavior of DRB requires a further, understandable interpretation. The DRB procedure requires preparing a stabilizer state in the first step, and, correspondingly, the last step performs the inverse of this state preparation. Unfortunately, if the circuit already suffers a devastating collapse at the first state-preparation stage, almost no information can be extracted from measurements in the computational basis alone.
Figure 4: A more intuitive display of the results shown in Fig. 3. The colorful dots represent the estimations of MRB, DRB, and CRB, and the gray dotted line, shown as a reference, is obtained by process tomography.
Figure 3: The average gate error rates estimated by MRB, DRB, and CRB, with process tomography as a reference, under T1, T2, and 1- and 2-qubit coherent errors.
This explanation is further supported by the simulation shown in Fig. 6, which tests the above conjecture. To be more precise, we introduce the purity to examine this suspicion. Firstly, as shown in Fig. 5, we find that the departure of DRB mainly results from the T1 errors rather than the T2 errors. Obviously, the T1 errors, which result from the interaction with the environment, scramble the expected initial state into a mixed one. Combined with Fig. 6, it is then not difficult to see that the final estimation extracts no more information about the quantum circuit than the first state-preparation stage provides. Therefore, for the DRB protocol, only a very precise initial stabilizer-state preparation would give an accurate estimation of the noise level of the circuit; however, this requirement is usually too harsh to be practical. Furthermore, we replot the results in Fig. 4 to make the comparison among these protocols more direct. Fig. 4 clearly shows that all three kinds of randomized-framed protocols underestimate the noise level compared to process tomography.
### Experimental results and discussion
So far, one may ask: since the displayed results are nearly the same except in extreme situations, can we be clearer about the resource consumption of these protocols, as well as the achievable precision of the estimations given only a very limited resource budget, such as the total number of quantum gates consumed? With these concerns, we test our instance on a real quantum computing platform, the Quafu quantum cloud platform. We choose the _ScQ-P136_ chip to run the instance that has been numerically simulated in _qiskit_.
Finally, we implement the experiment on the Quafu quantum cloud platform. As shown in [32], we find that these three protocols successfully estimate the error rate when the number of qubits is small. However, as the number of qubits grows, the error bars increase.
## V Conclusion
In conclusion, the field of quantum computing holds tremendous potential, fueling the anticipation of ground-breaking advancements in both theory and technology. However, the current quantum computing landscape, characterized by noisy intermediate-scale quantum (NISQ) computers, confronts a formidable obstacle
| Chip | \(T1\) | \(T2\) | \(Fidelity_{non}\) | Connectivity |
| --- | --- | --- | --- | --- |
| ScQ-P136 | Avg: 34.208; min: 15.17; max: 59.1 | Avg: 17.485; min: 0.95; max: 53.41 | Avg: 0.946; min: 0.83; max: 0.996 | Square matrix, neighbors |

Table 1: The details of the calibration information of ScQ-P136.
Figure 5: The performances of DRB under the T1 and T2 errors. It is obvious that the T1 dominates the final behavior of the DRB.
Figure 7: The experiments of quantum circuit benchmarking on the Quafu platform using MRB and CRB; the black triangles are the most optimistic estimation of the error rate.
in managing quantum machine noise. The complete characterization of these quantum devices remains a technical impossibility, even with the adoption of error mitigation technologies.
Addressing this challenge, there is a pressing necessity to develop more efficient and dependable benchmarking protocols capable of evaluating noise levels in quantum circuits designed for specific computational tasks. Established techniques, predominantly rooted in random circuit sequences such as randomized benchmarking (RB), have gained widespread acceptance due to their judicious resource utilization and satisfactory reliability compared to average gate fidelity.
To attain deeper insights into the performance of these benchmarking protocols, specialized random circuit sequences were meticulously crafted to assess three standard randomized-frame protocols under the influence of T1, T2, and coherent errors--circumstances more pertinent to superconducting quantum computers. Simulations disclosed that MRB, DRB, and CRB consistently overestimated the average error rate in the presence of T1 and T2 noise when juxtaposed with conventional circuit average errors. Furthermore, these methods displayed akin sensitivities to coherent errors. Remarkably, the reliability of DRB waned as T1 strengths increased.
To substantiate the practicality of these findings, the simulated conclusions were authenticated by implementing designed tasks for the three protocols on the Quafu quantum computation cloud platform. The outcomes underscored that MRB furnished a more precise assessment of a quantum circuit within resource constraints, while DRB proffered a more steadfast estimation at specific precision levels, notwithstanding its greater resource utilization.
These research findings elucidate the intricacies of noise mitigation in quantum computing and appraise the efficacy of benchmarking protocols under diverse error scenarios. They enrich our comprehension of the constraints and avenues for enhancement in NISQ quantum computers, thus paving the way for more dependable and efficient quantum computing technologies in the future.
###### Acknowledgements.
We would like to thank Dr. Bo Gao at the Beijing Institute of Technology for her careful revision of the manuscript. This work was supported by the Beijing Natural Science Foundation (No. Z220002).
|
2306.17410 | Simple proof of the global inverse function theorem via the Hopf--Rinow
theorem | We explain that Hadamard's global inverse function theorem very simply
follows from the Hopf--Rinow theorem in Riemannian geometry. | Shinobu Ohkita, Masaki Tsukamoto | 2023-06-30T05:53:23Z | http://arxiv.org/abs/2306.17410v1 | # Simple proof of the global inverse function theorem via the Hopf-finow theorem
###### Abstract.
We explain that Hadamard's global inverse function theorem very simply follows from the Hopf-Rinow theorem in Riemannian geometry.
Key words and phrases:Global inverse function theorem, Hopf-Rinow theorem 2020 Mathematics Subject Classification: 26B10, 53C22 M.T. was supported by JSPS KAKENHI JP21K03227.
Introduction
the global inverse function theorem. More general global inverse function theorems were developed by Gutierrez-Biasi-Santos [10] in the context of Riemannian geometry. Rabier [11] developed a global inverse function theorem for Finsler manifolds.
2. As far as the authors know, most known proofs of the global inverse function theorem use some topological argument (e.g. the covering space theory in [15] or homotopy arguments in [17, 18]). But the above proof does not use any topological argument. Moreover, the above proof of the surjectivity of \(f\) is constructive, at least in principle. If the map \(f\) is given by an explicit formula, then the geodesic \(\gamma(t)\) is a solution of an explicit ordinary differential equation. So we can solve it numerically. In particular we can approximately calculate the point \(\gamma(1)\) satisfying \(f\left(\gamma(1)\right)=x\); a minimal numerical sketch is given after these remarks.
3. The metric \(g\) was defined by \(g(u,v)=h\left(df(u),df(v)\right)\) in the above proof. The geodesic equation involves first order differentials of the metric \(g\) and hence second order differentials of the map \(f\). So we need to assume that \(f\) is at least a \(C^{2}\) map. However it is known that the global inverse function theorem holds for \(C^{1}\) maps \(f\) as well [17, p.16]. Therefore our proof does not provide an optimal regularity. But we think that the striking simplicity of the proof compensates for this drawback.
4. The proof of the Hopf-Rinow theorem is not very easy. So one can argue that our proof just translates one difficulty into another. (In some sense, our discovery is that the Hopf-Rinow theorem contains all the ingredients needed for the proof of the global inverse function theorem.) But the Hopf-Rinow theorem is certainly much more well-known than the global inverse function theorem. We think that it is nice to see that the Hopf-Rinow theorem has such an unexpected application.
5. Plastock [15, SS3] studied generalizations of the global inverse function theorem motivated by the Hopf-Rinow theorem. So our paper is conceptually similar to [15]. But our viewpoint is different from [15]. The point of our paper is that the global inverse function theorem is a corollary of the Hopf-Rinow theorem. On the other hand, Plastock [15, SS3] did not use the Hopf-Rinow theorem itself, but he developed a method motivated by the proof of the Hopf-Rinow theorem.
6. Probably the most useful "application" of this paper is to use it for education. It may be nice to explain the global inverse function theorem as a corollary of the Hopf-Rinow theorem in an introductory course on Riemannian geometry. Indeed the content of this paper grew out of an undergraduate course conducted by the second named author at Kyoto University. He posed a problem concerning the global inverse function theorem in an exercise course on geometry. Then the first named author found a proof using the Hopf-Rinow theorem. His idea looked so beautiful that the second named author thought that it should be published.
Therefore all the ideas of this paper are due to the first named author. The second named author just elaborated technical details and presentations.
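As mentioned in Remark 2, the surjectivity argument can be carried out numerically. The MATLAB sketch below is only an illustration: the map \(f\), the start point \(p\) and the target \(x\) are our own choices (the Jacobian of this \(f\) is everywhere invertible with uniformly bounded inverse), and we take \(h\) to be the standard Euclidean metric, so that \(f\) maps \(g\)-geodesics to straight segments and the geodesic with \(\gamma(0)=p\) reaching \(f(\gamma(1))=x\) solves \(\gamma^{\prime}(t)=Df(\gamma(t))^{-1}\left(x-f(p)\right)\).

```matlab
% Numerical sketch of the constructive surjectivity argument: integrate the
% geodesic ODE gamma'(t) = Df(gamma(t)) \ (x - f(p)) from t = 0 to t = 1,
% so that f(gamma(t)) traces the straight segment from f(p) to x.
f  = @(u) [u(1) + 0.3*sin(u(2)); u(2) + 0.3*sin(u(1))];   % illustrative map
Df = @(u) [1, 0.3*cos(u(2)); 0.3*cos(u(1)), 1];           % its Jacobian matrix

p   = [0; 0];                        % starting point gamma(0)
x   = [2; -1];                       % target value to be attained
rhs = @(t, u) Df(u) \ (x - f(p));    % geodesic equation in these coordinates

[~, U]   = ode45(rhs, [0 1], p);     % numerical integration of the geodesic
gamma1   = U(end, :).';              % approximation of gamma(1)
residual = norm(f(gamma1) - x);      % small: f(gamma(1)) is (approximately) x
fprintf('||f(gamma(1)) - x|| = %.2e\n', residual);
```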
|
2309.17295 | covXtreme : MATLAB software for non-stationary penalised piecewise
constant marginal and conditional extreme value models | The covXtreme software provides functionality for estimation of marginal and
conditional extreme value models, non-stationary with respect to covariates,
and environmental design contours. Generalised Pareto (GP) marginal models of
peaks over threshold are estimated, using a piecewise-constant representation
for the variation of GP threshold and scale parameters on the (potentially
multidimensional) covariate domain of interest. The conditional variation of
one or more associated variates, given a large value of a single conditioning
variate, is described using the conditional extremes model of Heffernan and
Tawn (2004), the slope term of which is also assumed to vary in a piecewise
constant manner with covariates. Optimal smoothness of marginal and conditional
extreme value model parameters with respect to covariates is estimated using
cross-validated roughness-penalised maximum likelihood estimation.
Uncertainties in model parameter estimates due to marginal and conditional
extreme value threshold choice, and sample size, are quantified using a
bootstrap resampling scheme. Estimates of environmental contours using various
schemes, including the direct sampling approach of Huseby et al. 2013, are
calculated by simulation or numerical integration under fitted models. The
software was developed in MATLAB for metocean applications, but is applicable
generally to multivariate samples of peaks over threshold. The software and
case study data can be downloaded from GitHub, with an accompanying user guide. | Ross Towe, Emma Ross, David Randell, Philip Jonathan | 2023-09-29T14:51:15Z | http://arxiv.org/abs/2309.17295v3 | covXtreme : MATLAB software for non-stationary penalised piecewise constant marginal and conditional extreme value models
###### Abstract
The covXtreme software provides functionality for estimation of marginal and conditional extreme value models, non-stationary with respect to covariates, and environmental design contours. Generalised Pareto (GP) marginal models of peaks over threshold are estimated, using a piecewise-constant representation for the variation of GP threshold and scale parameters on the (potentially multidimensional) covariate domain of interest. The conditional variation of one or more associated variates, given a large value of a single conditioning variate, is described using the conditional extremes model of Heffernan and Tawn (2004), the slope term of which is also assumed to vary in a piecewise constant manner with covariates. Optimal smoothness of marginal and conditional extreme value model parameters with respect to covariates is estimated using cross-validated roughness-penalised maximum likelihood estimation. Uncertainties in model parameter estimates due to marginal and conditional extreme value threshold choice, and sample size, are quantified using a bootstrap resampling scheme. Estimates of environmental contours using various schemes, including the direct sampling approach of Huseby et al. 2013, are calculated by simulation or numerical integration under fitted models. The software was developed in MATLAB for metocean applications, but is applicable generally to multivariate samples of peaks over threshold. The software can be downloaded from GitHub, with an accompanying user guide.
## 1 Introduction
Reasonable estimation of the characteristics of extreme environments is essential to quantify natural hazards, and assure the reliability and safe operability of man-made infrastructure. For example, the extreme ocean environment can be thought of as a multivariate random process in space and time, describing interacting wind, wave, current, tide and surge phenomena. Accurate theoretical modelling of _extremes_ from the ocean environment is problematic, because of the complexity of the numerical calculations involved. Practical risk assessment for marine and coastal structures and operations therefore typically requires the development of a statistical model from observational data for the joint behaviour of multiple meteorological-oceanographic ("metocean") variables.
Physical intuition and previous work has demonstrated that the marginal distribution of a variable such as ocean storm severity (quantified in terms of significant wave height, \(H_{S}\)) is dependent on covariates such as the direction in which the storm evolves, and the time of year in which the storm occurs (e.g. Randell et al. 2016). Over the long term, there is evidence that storm severity also evolves slowly in time due to anthropogenic climate change as well as natural climate cycles (e.g. Ewans and Jonathan 2023). It is important therefore that any statistical model for the extreme environment captures covariate non-stationarity. The nature and extent of covariate dependence varies across metocean variables. Experience also suggests that it is reasonable to assume that typical metocean variables are drawn from so-called "max-stable" distributions (e.g. Coles 2001). Therefore, typically conditional on covariates, the conditional distribution of threshold exceedances will be well approximated by the generalised Pareto (GP) distribution, provided that the threshold level is sufficiently high. This suggests that the marginal characteristics of metocean variables can be described reasonably using a non-stationary GP model (e.g. Davison and Smith 1990, Chavez-Demoulin and Davison 2005, Randell et al. 2016).
In addition we require that the statistical model also describes the _joint_ tail of all metocean variables in general. However the nature of extremal dependence between different metocean variables is generally unknown. The specification of the statistical model therefore needs to be sufficiently general to admit different extents of extremal dependence, the specifics of which are then estimated by fitting the model to data. The conditional extremes model of Heffernan and Tawn (2004) is an attractive candidate, because it admits different classes of extremal dependence, it has a simple form and is relatively easily interpretable. There is also evidence that the nature and extent of extremal dependence also varies systematically with covariates (e.g. Jonathan et al. 2013, Ross et al. 2018, Shooter et al. 2021).
The sizes of data samples available (e.g. from direct observation of the environment) for modelling are typically small relative to the time-scales of the inference task. For example, in engineering design we typically need to characterise events which occur on average once in 1,000 or 10,000 years based on a sample of at most the order of 100 years of observation. In these circumstances, our estimate of the joint tail of the distribution of metocean variables is typically uncertain. It is critical that this uncertainty is captured and incorporated appropriately in our risk assessment.
For individual metocean variables, the size of a rare occurrence is often quantified in terms of a _return value_ associated with some return period \(T\) (=1000 years, say). The return value is defined straightforwardly as the quantile of the distribution of the annual maximum of the metocean variable with exceedance probability \(1/T\), or (for large \(T\), see Jonathan et al. 2021) the quantile of the distribution of the \(T\)-year maximum with non-exceedance probability \(\exp(-1)\). In engineering design, the joint tail of the distribution of two (or sometimes three) metocean variables is often quantified in terms of an _environmental design contour_; points on the contour are "equally rare" with respect to some criterion, related to the joint cumulative distribution function or density of the variables (e.g. Ross et al. 2020, Haselsteiner et al. 2021). Therefore estimates for the distribution of the annual maximum or that of the \(T\)-year maximum for individual metocean variables, and environmental contours for pairs or triplets of variables, are key outputs from the statistical analysis.
There are many software tools available for applied extreme value analysis; Stephenson and Gilleland (2006), Gilleland et al. (2012), Gilleland (2016) and Belzile et al. (2023) provide reviews. The discussion of Belzile et al. (2023) notes the lack of good "off the shelf" software for practitioners, particularly incorporating more recent advances in methodology. Some software has been published for environmental contour estimation (e.g. Haselsteiner et al.2019). The purpose of the covXtreme software introduced here is to provide the environmental practitioner with a straightforward means to estimate marginal and joint tails of distributions of random variables, and quantify extremes in terms of return values and environmental design contours. covXtreme accommodates covariate non-stationarity in both marginal and dependence behaviour, provides flexibility in estimating extremal dependence, and careful uncertainty quantification in inferences.
The sophistication of the methodology underpinning covXtreme has been set deliberately, based on the authors' experience of metocean applications, to accommodate necessary effects in a pragmatic manner, whilst avoiding unnecessary complexity. Extreme value analysis is in some senses a less mature research area (e.g. Davison and Huser2015) with numerous pressing challenges. For example, there are many approaches to modelling non-stationary marginal extremes; the work of Biller (2000), Wood (2004), Brezger and Lang (2006), Huser and Genton (2016), Wood et al. (2016), Wood et al. (2017), Youngman (2019), Youngman (2020), Shao et al. (2022) provide examples. With sufficiently rich data, complex extreme value models can be well estimated. However, in practice, typical samples tend only to support the adoption of relatively simple covariate models; for example, Zanini et al. (2020) show that a simple piecewise constant covariate representation (used by covXtreme ) provides marginal inferences competitive with those from more sophisticated P-spline (e.g. Eilers and Marx2010) and Bayesian adaptive regression spline models (e.g. Biller2000) for a metocean application.
An early version of the covXtreme software was published alongside Ross et al. (2018) for modelling of storm surge, and Ross et al. (2020) for estimation of environmental contours. Bore et al. (2019) suggest that covXtreme is useful for analysis of extreme ocean current profiles with depth. Vanem et al. (2020) used covXtreme to compare different contour methods with response-based methods for extreme ship response analysis. Guerrero et al. (2023) have used their enhancement of an earlier version of covXtreme for analysis of electrical signals in the human brain. The software has motivated the development of analogous prototype software PPL for marginal extreme value modelling with penalised piecewise-linear covariate representations (Barlow et al. 2023). The software has also been used as a pre-processor for transformation of data to standard marginal scale, allowing joint and conditional extreme value analysis (e.g. Shooter et al. 2021; Shooter et al. 2022), and in metocean consultancy work.
_Objectives and layout of article_
The objective of the current article is to provide motivation and description of the covXtreme software, and illustrations of its use in the development of design conditions for ocean engineering. The layout of the article is as follows. Section 2 provides an overview of the software and the statistical methodology on which it is based. Sections 3 and 4 present case studies, involving a bivariate response and single covariate (Section 3), and trivariate response with 2-D covariate (Section 4). An accompanying user guide (available at Towe et al.2023b) provides a detailed step-by-step description of developing a covXtreme model for ocean engineering data sets provided with the software.
## 2 Overview of software and statistical methodology
covXtreme provides simple software for multivariate non-stationary extreme value analysis of peaks over threshold. The modelling framework of covXtreme (a) is statistically straightforward to understand and use without sacrificing
rigour, (b) is computationally efficient, (c) provides good estimation of key quantities of interest to the extreme value practitioner, and (d) provides realistic quantification of modelling uncertainties.
Extreme value analysis of peaks over threshold of metocean variables requires the satisfactory completion of a number of analysis steps. Inference typically involves using a sample of time-series data for a number of variables (corresponding to some period \(P\) years of observation) as the basis for estimating the characteristics of extreme environments. These might include return values, associated values (e.g. Towe et al. 2023a) and design contours (e.g. Haselsteiner et al. 2021) corresponding to a return period of \(T\) years, \(T\gg P\). For reasonable extreme value inference of a single variable (such as significant wave height), key features of the data must be accommodated carefully in the analysis, including the serial correlation of time-series and the dependence of values of metocean variables on covariates such as direction and season. Appropriate model forms for tails of distributions should also be adopted. For peaks over threshold analysis of a single variable, choice of threshold level can be a source of considerable uncertainty. Moreover, different metocean variables (such as wind speed and significant wave height) are likely to be intercorrelated, and the nature of this dependence must be characterised carefully, especially for extremes of one or more variables, using appropriate model forms; in these models, choice of threshold is again not always straightforward. Further, since samples of peaks over threshold are generally small (e.g. for high thresholds), it is critical to quantify uncertainty in return values and associated extremal quantities thoroughly using well-understood estimators.
The structure of the covXtreme software is written to reflect the sequential nature of practical extreme value analysis, as a set of MATLAB functions (see Section 2.6). For a given application, these are executed sequentially, leading the user through the inference, addressing important analysis considerations in order. A typical analysis stage involves the specification of tuning parameters, the appropriate choice of which is informed by diagnostic information generated at that stage; generally it will therefore be necessary to repeat the analysis stage until satisfactory diagnostics are achieved, before moving on to the next analysis stage.
Given a sample of multivariate time-series of metocean variables, a typical analysis would proceed as described below. Note, for ease of reference, that subsection numbering 2.1-2.5 here corresponds to the numbering of stages 1-5 of the analysis procedure, and also to the numbering of sections in the covXtreme user guide.
### Data preparation
The main aspect of data preparation is the isolation of storm peak events. Storm peak events correspond to peaks over threshold for a dominant variable (e.g. significant wave height for a wave-dominated metocean application), _associated_ values of all other variables of interest (e.g. wind speed) per event, and storm peak covariate values for all relevant covariates (e.g. direction and season). Storm peak events are isolated using the procedure described by Ewans and Jonathan (2008), and discussed in Section 1 of the covXtreme user guide. Storm peak events are then assumed to be conditionally independent given (storm peak) covariates.
Mathematically, consider a sample \(\breve{Y}_{1}(t_{i}),\breve{Y}_{2}(t_{i}),...,\breve{Y}_{D}(t_{i})\) of multivariate time-series for \(D\) metocean variables observed at regularly-spaced time points \(t_{i}\), \(i=1,2,3,...\) over some period. Data preparation requires the isolation of a sample \(\{\dot{y}_{i1},\dot{y}_{i2},...,\dot{y}_{iD}\}_{i=1}^{N}\) of \(N\) storm peak events (by convention for the first variable) and associated events (for the remaining variables) summarising the peak characteristics of each of \(N\) storms observed, and corresponding storm peak covariate values \(\{x_{i1},x_{i2},...,x_{iC}\}_{i=1}^{N}\) for \(C\) storm peak covariates \(X_{1},X_{2},...,X_{C}\) defined on some domain \(\mathcal{X}\). Storm peak and associated values are taken to be conditionally-independent given covariates, in the sense that \(\dot{y}_{id}\) can be viewed as an independent draw from \(\dot{Y}_{d}|(X_{1}=x_{i1},X_{2}=x_{i2},...,X_{C}=x_{iC})\), for \(i=1,2,...,N\), \(d=1,2,...,D\).
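A simplified sketch of this isolation step is given below; it is not the Ewans and Jonathan (2008) procedure itself (in particular no minimum separation between storms is imposed), and the function and argument names are illustrative rather than part of covXtreme.

```matlab
function [pk, assoc, idx] = stormPeaks(y1, Y, thr)
% Simplified storm-peak isolation: contiguous exceedances of thr in the
% dominant series y1 form one storm; the storm peak is the maximum of y1
% within the storm, and associated values are read off at the same index.
% y1  : n x 1 dominant variable (e.g. significant wave height)
% Y   : n x D matrix of all variables observed at the same time points
% thr : scalar threshold defining storm periods
exc = y1(:) > thr;
d   = diff([0; exc; 0]);
sts = find(d == 1);                     % first index of each storm
ens = find(d == -1) - 1;                % last index of each storm
nS  = numel(sts);
pk  = zeros(nS, 1); idx = zeros(nS, 1);
for s = 1:nS
    [pk(s), k] = max(y1(sts(s):ens(s)));
    idx(s) = sts(s) + k - 1;            % time index of the storm peak
end
assoc = Y(idx, :);                      % associated values per storm event
end
```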
Note that covXtreme also provides functionality to simulate data with known characteristics for checking of the performance of the statistical methodology.
### Specification of covariate bins
covXtreme adopts a piecewise-constant parameterisation for the distribution of peaks over threshold on a partition of the covariate domain comprised of elements referred to as "covariate bins". Covariate bins are not estimated as part of the analysis, but must be specified carefully by the user prior to the analysis, for reasonable inference. Briefly, the user is required to partition the domain of each covariate (independently) into a set of intervals. The resulting covariate bins over all covariates are then simply Cartesian products of these intervals. covXtreme assumes, _within_ a given covariate bin, that the statistical properties of a storm peak or associated variable are no longer dependent on covariates. Diagnostic plots illustrating features such as the variation of storm peak and associated variables with individual covariates, are provided to inform reasonable choice of marginal partitions. Illustrations of the specification of covariate bins are given in Section 2 of the covXtreme user guide.
Mathematically, each observation \(\dot{y}_{i1},\dot{y}_{i2},...,\dot{y}_{iD}\) is allocated to one of \(B\) covariate bins by means of an allocation vector \(A\), with \(A(i)=b\) mapping observation (with index) \(i\) to the bin with index \(\mathcal{B}=b\), \(i=1,2,...,N\), \(b=1,2,...,B\). All observations within a specified covariate bin are assumed to have common extreme value characteristics, specified in terms of the parameters of the marginal model for peaks over threshold from that bin for each of the \(D\) components
of the observation. Hence in particular, all threshold exceedances of storm peak variable \(\dot{Y}_{d}\), \(d=1,2,...,D\) from covariate bin \(b\) can be viewed as independent draws from a generalised Pareto distribution with common shape, scale and threshold parameter values.
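A minimal sketch of the allocation step for a single directional covariate is shown below; the bin edges correspond to the six directional bins of the Section 3 case study, and the simulated directions are purely illustrative.

```matlab
% Sketch of the bin allocation vector A for a single directional covariate:
% each storm-peak direction (degrees clockwise from north) is mapped to the
% index of the covariate bin containing it.
drcSP   = 360*rand(500, 1);                % illustrative storm-peak directions
edges   = [0 20 60 225 270 315 360];       % six directional bins, as in Section 3
A       = discretize(mod(drcSP, 360), edges);          % A(i) = b, bin of storm i
nPerBin = accumarray(A, 1, [numel(edges) - 1, 1]);     % bin occupancy counts
```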
### Marginal modelling
A two-stage modelling procedure is used to describe the marginal distribution of storm peak and associated variables. First a three-parameter gamma distribution is fitted independently to all the data for each covariate bin in turn, providing a good description of the bulk of the distribution of storm peak and associated variables on the covariate domain. The extreme value threshold for each covariate bin is then set to the quantile of the corresponding gamma distribution with pre-specified non-exceedance probability. Finally the distribution of threshold exceedances per covariate bin is estimated using a non-stationary generalised Pareto distribution. Since generalised Pareto shape parameter is typically more difficult to estimate than scale, its value is assumed fixed but unknown across all covariate bins. Moreover, variation of the generalised Pareto scale parameter on the covariate domain is regulated using roughness penalisation, set to maximise the predictive performance of the generalised Pareto model on a hold-out sample within a cross-validation scheme. We judge from prior experience that these assumptions regarding the variation of generalised Pareto parameters are reasonable, and confirm using diagnostics during analysis that the assumptions give reasonably well-fitting models. By reducing the number of degrees of freedom for model fitting, the assumptions are a possible source of estimation bias, but also of a reduction in variance in estimated model parameters and return values. Further details and illustrations of marginal modelling are given in Section 3 of the covXtreme user guide.
Mathematically, for each variable \(\dot{Y}_{d}\), \(d=1,2,...,D\), and covariate bin \(\mathcal{B}=b\), \(b=1,2,...,B\) independently, we estimate a three-parameter gamma distribution using maximum likelihood estimation with sample likelihood
\[\mathcal{L}_{\mathrm{Gmm}}\left(\omega_{bd},\kappa_{bd},l_{bd};\{\dot{y}_{id} \}_{i=1}^{N}\right)=\prod_{i;A(i)=b}f_{\mathrm{Gmm}}(\dot{y}_{id};\omega_{bd},\kappa_{bd},l_{bd})\]
for sample \(\{\dot{y}_{id}\}_{i=1}^{N}\), where \(f_{\mathrm{Gmm}}\) is the density of the gamma distribution with shape \(\omega_{bd}>0\), scale \(\kappa_{bd}>0\) and location \(l_{bd}\in\mathbb{R}\) given by
\[f_{\mathrm{Gmm}}\left(\dot{y}_{id};\omega_{bd},\kappa_{bd},l_{bd}\right)=\left(\kappa_{bd}^{-\omega_{bd}}/\Gamma(\omega_{bd})\right)\left(\dot{y}_{id}-l_{bd}\right)^{\omega_{bd}-1}\exp\left(-(\dot{y}_{id}-l_{bd})/\kappa_{bd}\right)\]
where \(\Gamma(\bullet)\) is the gamma function. In practice for fitting the gamma model, the location parameters \(\{l_{bd}\}\) are first estimated using a low empirical quantile of the sample per variate per covariate bin, and the remaining shape and scale parameters estimated using maximum likelihood. Next we calculate the extreme value threshold as \(\psi_{bd}=F_{\mathrm{Gmm}}^{-1}(\tau_{d};\hat{\omega}_{bd},\hat{\kappa}_{bd},\hat{l}_{bd})\) using the estimated gamma parameters, for the pre-specified non-exceedance probability \(\tau_{d}\), where \(F_{\mathrm{Gmm}}\) is the cumulative distribution function of the gamma distribution. The marginal sample likelihood for threshold exceedances of \(\psi_{bd}\) for variable \(\dot{Y}_{d}\) over all covariate bins can therefore be written
\[\mathcal{L}_{\mathrm{GP}}\left(\xi_{d},\{\nu_{bd}\}_{b=1}^{B};\{\dot{y}_{id} \}_{i=1}^{N},\{\psi_{bd}\}_{b=1}^{B}\right)=\prod_{b=1}^{B}\prod_{i;A(i)=b; \dot{y}_{id}>\psi_{bd}(\tau_{d})}f_{\mathrm{GP}}(\dot{y}_{id};\xi_{d},\nu_{bd },\psi_{bd})\]
where \(f_{\mathrm{GP}}\) is the density of the generalised Pareto distribution with shape \(\xi_{d}\in\mathbb{R}\) and scale \(\nu_{bd}>0\) given by
\[f_{\mathrm{GP}}(\dot{y}_{id};\xi_{d},\nu_{bd},\psi_{bd})=(1/\nu_{bd})[1+(\xi _{d}/\nu_{bd})\left(\dot{y}_{id}-\psi_{bd}\right)]_{+}^{-1/\xi_{d}-1}\]
where \([A]_{+}=A\) when \(A>0\), and \(=0\) otherwise. The corresponding cumulative distribution functions of the gamma and generalised Pareto distributions are
\[F_{\mathrm{Gmm}}(\dot{y}_{id};\omega_{bd},\kappa_{bd},l_{bd})=\left(1/\Gamma(\omega_{bd})\right)\,\gamma\left(\omega_{bd},(\dot{y}_{id}-l_{bd})/\kappa_{bd}\right)\]
where \(\gamma(\bullet,\bullet)\) is the lower incomplete gamma function, and
\[F_{\mathrm{GP}}(\dot{y}_{id};\xi_{d},\nu_{bd},\psi_{bd})=1-[1+(\xi_{d}/\nu_{ bd})\left(\dot{y}_{id}-\psi_{bd}\right)]_{+}^{-1/\xi_{d}}\,.\]
**Optimal predictive performance** for the marginal generalised Pareto model is achieved using roughness-penalisation for the scale parameters \(\{\nu_{bd}\}_{b=1}^{B}\) across covariate bins. The corresponding penalised negative log likelihood takes the form
\[-\log\mathcal{L}_{\mathrm{GP}}^{*}(\xi_{d},\{\nu_{bd}\}_{b=1}^{B};\{\dot{y}_{ id}\}_{i=1}^{N},\{\psi_{bd}\}_{b=1}^{B}) \tag{1}\] \[= -\log\mathcal{L}_{\mathrm{GP}}(\xi_{d},\{\nu_{bd}\}_{b=1}^{B};\{ \dot{y}_{id}\}_{i=1}^{N},\{\psi_{bd}\}_{b=1}^{B})+\lambda_{d}\left(\frac{1}{B} \sum_{b=1}^{B}\nu_{bd}^{2}-\left[\frac{1}{B}\sum_{b=1}^{B}\nu_{bd}\right]^{2} \right).\]
The smoothness penalty \(\lambda_{d}\) controls the extent to which the generalised Pareto scale varies across covariate bins. Parameters can be estimated to minimise the penalised negative log likelihood for each variable \(\dot{Y}_{d}\), \(d=1,2,...,D\) in turn for each of a set of pre-specified values for \(\lambda_{d}\). The optimal value \(\lambda_{d}^{\circ}\) of \(\lambda_{d}\) is chosen to maximise predictive likelihood for a hold-out sample within a \(k\)-fold cross-validation procedure. Gamma parameter estimates per covariate bin, and generalised Pareto parameter estimates evaluated using the full sample for \(\lambda_{d}=\lambda_{d}^{\circ}\), are carried forward to subsequent inference.
Since the numbers of parameters in the various marginal models above are relatively small, a simplex search procedure provides a straightforward approach to parameter estimation by minimisation of (penalised) negative log likelihoods.
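As a concrete illustration, the sketch below (not the covXtreme code; names are our own, the exceedances are assumed already collected into a cell array per bin, \(\nu\) is parameterised on the log scale to enforce positivity, and the \(\xi\to 0\) limit is not handled) evaluates the penalised negative log likelihood of Equation 1, which can then be minimised by simplex search for each candidate \(\lambda_{d}\).

```matlab
function nll = gpPenNLL(theta, exc, lambda)
% Roughness-penalised GP negative log likelihood of Equation (1).
% theta  : [xi, log(nu_1), ..., log(nu_B)] (log scale keeps nu_b > 0)
% exc{b} : threshold exceedances (ydot - psi_b) for covariate bin b
% lambda : roughness coefficient penalising variation of nu across bins
B   = numel(exc);
xi  = theta(1);                                     % xi = 0 limit not handled
nu  = exp(theta(2:end));
nll = 0;
for b = 1:B
    t = 1 + xi*exc{b}(:)/nu(b);
    if any(t <= 0), nll = Inf; return; end          % outside GP support
    nll = nll + sum(log(nu(b)) + (1/xi + 1)*log(t));
end
nll = nll + lambda*(mean(nu.^2) - mean(nu)^2);      % penalty term of Eq. (1)
end

% Estimation for a given lambda, e.g. by simplex search:
%   theta0   = [0.1, zeros(1, B)];
%   thetaHat = fminsearch(@(t) gpPenNLL(t, exc, lambda), theta0);
```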
**Threshold selection** for extreme value analysis of peaks over threshold is an important consideration (e.g. Northrop and Jonathan 2011, Scarrott and MacDonald 2012, Wadsworth 2016). Within a covXtreme analysis, multiple marginal models based on different random choices of threshold non-exceedance probabilities \(\tau_{d}\in\mathcal{I}_{\tau_{d}}\subseteq[0,1)\) are constructed, \(d=1,2,...,D\). The user's task is to specify the interval \(\mathcal{I}_{\tau_{d}}\) for each variable \(\dot{Y}_{d}\). As explained in Section 3 of the user guide, this choice is aided by numerous diagnostic plots, including examination of the stability of the estimated value of \(\xi_{d}\) as a function of \(\tau_{d}\).
**Uncertainty quantification** for marginal inference is performed using a non-parametric bootstrap procedure. The original sample of storm peak and associated values is resampled with replacement. For each variable \(\dot{Y}_{d}\), \(d=1,2,...,D\), the full marginal extreme value analysis is then repeated using the bootstrap resample together with a new random selection of \(\tau_{d}\). The outcome of the complete marginal inference is then quantified in terms of sets of parameter estimates ( \(\{\xi_{d}^{r},\{\nu_{bd}^{r},\psi_{bd}^{r},\omega_{bd}^{r},\kappa_{bd}^{r},l_ {bd}^{r}\}_{b=1}^{B};\tau_{d}^{r},\lambda_{d}^{\circ}\}_{d=1}^{D}\)) for each of \(R\) (typically \(=100\) or \(250\)) bootstrap resamples (where superscript \(r\) indicates an estimate from a resample). Typically, the value of \(\lambda_{d}^{\circ}\) estimated using the original sample is adopted for all bootstrap resamples also, although the option to estimate a new optimal roughness coefficient per bootstrap resample is provided.
The **distribution of the \(T\)-year maximum event** per covariate bin, and over all covariate bins can then be estimated (or sampled) under the fitted marginal model. From above, the full marginal model for storm peak or associated variate \(\dot{Y}_{d}\) in covariate bin \(\mathcal{B}=b\) is
\[F_{\text{GmmGP}}(y;\omega_{bd},\kappa_{bd},l_{bd},\xi_{d},\nu_{bd},\psi_{bd})=\begin{cases}F_{\text{Gmm}}(y;\omega_{bd},\kappa_{bd},l_{bd})&\text{for }y\leq\psi_{bd}\\ \tau_{d}+(1-\tau_{d})F_{\text{GP}}(y;\xi_{d},\nu_{bd},\psi_{bd})&\text{for }y>\psi_{bd}.\end{cases}\tag{2}\]
Then under the model, the distribution of a random occurrence of \(\dot{Y}_{d}\) from any covariate bin is
\[F_{\dot{Y}_{d}}(y;\{\omega_{bd},\kappa_{bd},l_{bd},\nu_{bd},\psi_{bd}\}_{b=1 }^{B},\xi_{d})=\sum_{b=1}^{B}p_{b}F_{\text{GmmGP}}(y;\omega_{bd},\kappa_{bd}, l_{bd},\xi_{d},\nu_{bd},\psi_{bd}) \tag{3}\]
where \(p_{b}\) is an empirical estimate of the probability of observing a storm event in covariate bin \(b\). If we further assume that the number \(N\) of storms in \(T\)-years is Poisson-distributed with mean \(T\rho\), where \(\rho\) is an empirical estimate for the number of storms per annum, and suppressing parameter dependence for brevity, the distribution of the \(T\)-year maximum is simply
\[F_{\dot{Y}_{d}T\text{-year}}(y) = \sum_{k=0}^{\infty}\mathbb{P}(N=k)F_{\dot{Y}_{d}}^{k}(y)=\sum_{k= 0}^{\infty}\left(\exp(-T\rho)(T\rho)^{k}/k!\right)F_{\dot{Y}_{d}}^{k}(y) \tag{4}\] \[= \exp\left(-T\rho\ (1-F_{\dot{Y}_{d}}(y))\right).\]
We can use a similar approach to estimate and sample from the distribution of the \(T\)-year maximum of \(\dot{Y}_{d}\) for any combination of covariate bins, by restricting the set of covariate bins over which the summation is performed in Equation 3 (and linearly scaling the values of \(\{p_{b}\}\) so that they sum to unity).
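The calculation in Equations 2-4 can be sketched as below; the struct mdl is an assumed container for the fitted per-bin parameters and bin probabilities (it is not the covXtreme data structure), and the \(\xi=0\) limit is again not handled.

```matlab
function F = tYearMaxCDF(y, T, mdl)
% Sketch of Equations (2)-(4): CDF of the T-year maximum of a storm-peak
% variable under the piecewise gamma-GP marginal model. The struct mdl is
% an assumed container with per-bin vectors omg, kpp, l, nu, psi, p and
% scalars xi, tau, rho (the annual rate of storms).
B = numel(mdl.p); Fy = 0;
for b = 1:B
    if y <= mdl.psi(b)                                    % gamma body, Eq. (2)
        Fb = gammainc(max(y - mdl.l(b), 0)/mdl.kpp(b), mdl.omg(b));
    else                                                  % GP tail, Eq. (2)
        t  = max(1 + mdl.xi*(y - mdl.psi(b))/mdl.nu(b), 0);
        Fb = mdl.tau + (1 - mdl.tau)*(1 - t^(-1/mdl.xi));
    end
    Fy = Fy + mdl.p(b)*Fb;                                % Equation (3)
end
F = exp(-T*mdl.rho*(1 - Fy));                             % Equation (4)
end
```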
### Extremal dependence modelling
Given non-stationary marginal models for storm peak and associated variables, we next seek to describe the nature of the dependence between them for large values of the storm peak variable. This is achieved using the conditional extremes model of Heffernan and Tawn (2004). Under the fitted conditional extremes model, we can then estimate the characteristics of the joint distribution of all associated variables given a large storm peak, and thereby estimate joint environmental design conditions and design contours. The conditional extremes model is specified for sets of variables on a common standard Laplace marginal scale, rather than their original physical scales. For this reason, a necessary first step is to transform the sample of storm peak and associated values to this scale, using the fitted marginal
models. Incorporation of covariate effects is generally important for extremal dependence modelling, and these can be accommodated using conditional extremes models of different complexities with respect to covariates. The covXtreme software allows any number of the parameters of the conditional extremes model to vary with covariates. Nevertheless, experience suggests that the data indicate the need for covariate non-stationarity of just one model parameter (the "slope" parameter, see below) for typical met-ocean applications. We therefore describe this specific model form as a recommended "default" approach here. Further details and illustrations of extremal dependence modelling are given in Section 4 of the covXtreme user guide.
Mathematically, covXtreme provides a model for the joint conditional tail \((\dot{Y}_{2},\dot{Y}_{3},\ldots,\dot{Y}_{D}\,|\,\dot{Y}_{1}=\dot{y})\) for large \(\dot{y}\), using an extension of the conditional extremes model of Heffernan and Tawn (2004) on standard Laplace (and optionally, as originally, on standard Gumbel) scale. Inference therefore requires that we transform the storm peak and associated value sample \(\{\dot{y}_{i1},\dot{y}_{i2},...,\dot{y}_{iD}\}_{i=1}^{N}\) of variables \(\{\dot{Y}_{1},\dot{Y}_{2},...,\dot{Y}_{D}\}\) to the corresponding Laplace scale sample \(\{y_{i1},y_{i2},...,y_{iD}\}_{i=1}^{N}\) for variables \(\{Y_{1},Y_{2},...,Y_{D}\}\).
#### 2.4.1 Marginal transformation to standard Laplace scale
The marginal transformation to standard Laplace scale is achieved using the probability integral transform such that for \(i=1,2,...,N\), \(d=1,2,...,D\), \(b=1,2,...,B\)
\[F_{\text{GmmGP}}(\dot{y}_{id};\omega_{bd},\kappa_{bd},l_{bd},\xi_{d},\nu_{bd},\psi_{bd})=F_{\text{Lpl}}(y_{id})\text{ for }A(i)=b\]
where \(F_{\text{GmmGP}}\) is the marginal cumulative distribution function of storm peak variable \(\dot{Y}_{d}\), for sets of parameters of marginal gamma and generalised Pareto models from Equation 2, and \(F_{\text{Lpl}}\) is the cumulative distribution function of the standard Laplace distribution, given by \(F_{\text{Lpl}}(y)=0.5\exp(y)\) for \(y\leq 0\) and \(=1-0.5\exp(-y)\) otherwise.
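The second half of this transformation is the inverse of the standard Laplace distribution function; a minimal sketch is given below, where the input \(u\) is assumed to have been computed from the fitted marginal model of Equation 2 for the storm's covariate bin.

```matlab
function y = toLaplaceScale(u)
% Inverse of the standard Laplace CDF: maps u = F_GmmGP(ydot) in (0,1),
% evaluated under the fitted marginal model for the storm's covariate bin,
% onto the standard Laplace scale used by the conditional extremes model.
if u <= 0.5
    y = log(2*u);
else
    y = -log(2*(1 - u));
end
end
```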
#### 2.4.2 Conditional extremes modelling
The Laplace-scale sample \(\{y_{1},y_{i2},...,y_{iD}\}_{i=1}^{N}\) from random variables \(\{Y_{1},Y_{2},...,Y_{D}\}\) is next characterised using the conditional extremes model, for values \(y\) of the conditioning variate \(Y_{1}\) above a dependence threshold \(\phi(\tilde{\tau})=F_{\text{Lpl}}^{-1}(\tilde{\tau})\), for which the conditional extremes model is assumed to hold, for carefully specified non-exceedance probability \(\tilde{\tau}\). For \(y>\phi(\tilde{\tau})\), in covariate bin indexed \(\mathcal{B}=b\), the recommended non-stationary model takes the form
\[(Y_{2},Y_{3}\ldots Y_{D})|(Y_{1}=y,\mathcal{B}=b)=(\alpha_{b2},\alpha_{b3}, \ldots,\alpha_{bD})y+y^{(\beta_{2},\beta_{3},...,\beta_{D})}\boldsymbol{Z} \tag{5}\]
with linear slope parameters \(\alpha_{bd^{\prime}}\in[-1,1]\), \(d^{\prime}=2,3,...,D\) varying across covariate bins, scalar exponent parameters \(\beta_{d^{\prime}}\in[-\infty,1]\) common to all covariate bins, and residual random variable \(\boldsymbol{Z}=(Z_{2},Z_{3},...,Z_{D})\in\mathbb{R}^{D-1}\) whose distribution is unknown.
For estimation of slope and exponent parameters, we assume that each component \(Z_{d^{\prime}}\) of \(\boldsymbol{Z}\) is independently distributed according to
\[Z_{d^{\prime}}=\mu_{d^{\prime}}+\sigma_{d^{\prime}}W_{d^{\prime}}\]
for mean \(\mu_{d^{\prime}}\in\mathbb{R}\), scale \(\sigma_{d^{\prime}}>0\) and random variable \(W_{d^{\prime}}\in\mathbb{R}\) following a generalised Gaussian (or delta-Laplace) distribution with zero mean, unit variance and shape \(\delta_{d^{\prime}}\). For general mean \(m\) and variance \(s^{2}\), the corresponding density \(f_{\text{GG}}\) of the generalised Gaussian distribution is
\[f_{\text{GG}}(w;m,s^{2},\delta_{d^{\prime}})=\frac{\delta_{d^{\prime}}}{2 \kappa(\delta_{d^{\prime}})s\Gamma(1/\delta_{d^{\prime}})}\exp\left\{-\left| \frac{w-m}{\kappa(\delta_{d^{\prime}})s}\right|^{\delta_{d^{\prime}}}\right\},\]
where \(\kappa(\delta_{d^{\prime}})^{2}=\Gamma(1/\delta_{d^{\prime}})/\Gamma(3/\delta_ {d^{\prime}})\). The value of exponent \(\delta_{d^{\prime}}\in\{1,2\}\) is user-specified in covXtreme. \(\delta_{d^{\prime}}=1\) imposes a Laplace distribution (with zero mean and unit variance) on \(W_{d^{\prime}}\), appropriate when the dependence between \(Y_{d^{\prime}}\) and \(Y_{1}\) is low. \(\delta_{d^{\prime}}=2\) imposes a standard Gaussian distribution on \(W_{d^{\prime}}\), the original assumption of Heffernan and Tawn (2004), appropriate otherwise. For estimation purposes therefore, from the properties of the generalised Gaussian distribution, \(Y_{d^{\prime}}|(Y_{1}=y)\) is assumed to follow a generalised Gaussian distribution with mean \(m_{bd^{\prime}}=\alpha_{bd^{\prime}}y+\mu_{d^{\prime}}y^{\beta_{d^{\prime}}}\) and standard deviation \(\zeta_{d^{\prime}}=y^{\beta_{d^{\prime}}}\sigma_{d^{\prime}}\).
The conditional dependence likelihood \(\tilde{\mathcal{L}}_{d^{\prime}}\) for the model \(Y_{d^{\prime}}|(Y_{1}=y)\) and \(y>\phi(\tilde{\tau})\) can then be written
\[\tilde{\mathcal{L}}_{d^{\prime}}\left(\{\alpha_{bd^{\prime}}\}_{b=1}^{B},\beta_{d^{\prime}},\mu_{d^{\prime}},\sigma_{d^{\prime}};\{y_{id}\}_{i=1,d=1}^{N,D},\delta_{d^{\prime}},\tilde{\tau}\right)=\prod_{b=1}^{B}\prod_{\begin{subarray}{c}i:\,A(i)=b\\ y_{i1}>\phi(\tilde{\tau})\end{subarray}}f_{\text{GG}}\left(y_{id^{\prime}};\alpha_{bd^{\prime}}y_{i1}+\mu_{d^{\prime}}y_{i1}^{\beta_{d^{\prime}}},(y_{i1}^{\beta_{d^{\prime}}}\sigma_{d^{\prime}})^{2},\delta_{d^{\prime}}\right).\]
As for marginal models, we regulate the smoothness of \(\{\alpha_{bd^{\prime}}\}_{b=1}^{B}\) optimally on the covariate domain using a cross-validation procedure, selecting a value \(\tilde{\lambda}_{d^{\prime}}^{\circ}\) for roughness coefficient \(\tilde{\lambda}_{d^{\prime}}\) in roughness-penalised negative log likelihood
\[-\log\tilde{\mathcal{L}}_{d^{\prime}}^{*}\left(\{\alpha_{bd^{ \prime}}\}_{b=1}^{B},\beta_{d^{\prime}},\mu_{d^{\prime}},\sigma_{d^{\prime}};\{y _{id}\}_{i=1,d=1}^{N,D},\delta_{d^{\prime}},\tilde{\tau}\right) \tag{6}\] \[= -\log\tilde{\mathcal{L}}_{d^{\prime}}\left(\{\alpha_{bd^{\prime}} \}_{b=1}^{B},\beta_{d^{\prime}},\mu_{d^{\prime}},\sigma_{d^{\prime}};\{y_{id} \}_{i=1,d=1}^{N,D},\delta_{d^{\prime}},\tilde{\tau}\right)+\tilde{\lambda}_{d ^{\prime}}\left(\frac{1}{B}\sum_{b=1}^{B}\alpha_{bd^{\prime}}^{2}-\left[\frac {1}{B}\sum_{b=1}^{B}\alpha_{bd^{\prime}}\right]^{2}\right).\]
to maximise predictive performance on cross-validation hold-out samples. Further, it is important to confirm reasonable choice of the dependence threshold \(\tilde{\tau}\) by inspection of diagnostic plots, for example of stability of estimated parameters as a function of threshold.
Two algorithms are provided to minimise the penalised negative log likelihood in Equation 6. The default is a Newton-Raphson approach exploiting gradients; the alternative is a simplex search procedure.
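For concreteness, the sketch below (illustrative names, not the covXtreme implementation, and with the roughness penalty of Equation 6 omitted) evaluates the generalised Gaussian negative log likelihood for a single conditioned variate in a single covariate bin, which is the per-bin contribution to the objective minimised above.

```matlab
function nll = ceNLL(par, y1, yd, delta)
% Generalised Gaussian negative log likelihood for one conditioned variate
% in one covariate bin (roughness penalty of Equation (6) omitted).
% par   : [alpha, beta, mu, sigma]
% y1,yd : Laplace-scale conditioning / conditioned values with y1 > phi
% delta : user-specified exponent, 1 (Laplace) or 2 (Gaussian)
alp = par(1); bet = par(2); mu = par(3); sig = par(4);
if sig <= 0 || abs(alp) > 1 || bet > 1, nll = Inf; return; end
kap = sqrt(gamma(1/delta)/gamma(3/delta));
m   = alp*y1 + mu*y1.^bet;                 % conditional mean
s   = sig*y1.^bet;                         % conditional standard deviation
nll = sum(log(2*kap*s*gamma(1/delta)/delta) + abs((yd - m)./(kap*s)).^delta);
end
```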
Once parameter estimates are available, we represent the distribution of the \((D-1)\)-dimensional residual random variable \(\mathbf{Z}\) using the set \(\mathcal{E}_{b}=\{e_{i^{\prime}bd^{\prime}}\}\), for all \(d^{\prime}=2,3,...,D\), and all \(i^{\prime}=1,2,...,N\) such that \(y_{i^{\prime}1}>\phi(\tilde{\tau})\), \(b=A(i^{\prime})\), of residuals from the fit per covariate bin, with elements
\[e_{i^{\prime}bd^{\prime}}=\left(y_{i^{\prime}d^{\prime}}-\hat{\alpha}_{bd^{\prime}}y_{i^{\prime}1}-\hat{\mu}_{d^{\prime}}y_{i^{\prime}1}^{\hat{\beta}_{d^{\prime}}}\right)/\left(\hat{\sigma}_{d^{\prime}}y_{i^{\prime}1}^{\hat{\beta}_{d^{\prime}}}\right).\]
During subsequent simulation under the fitted conditional extremes model, these residuals are resampled jointly as \(\{e_{i^{\prime}b2},e_{i^{\prime}b3},...,e_{i^{\prime}bD}\}\), thereby preserving the dependence between them, and hence the dependence between variables \(Y_{2},Y_{3},...,Y_{D}|(Y_{1}=y,\,y>\phi(\tilde{\tau}),\mathcal{B}=b)\) per covariate bin. Similarly, simulation below threshold \(\phi(\tilde{\tau})\) is achieved simply by resampling from the original Laplace-scale storm peak sample. We note that covXtreme also facilitates estimation of variants of this dependence model, for which any number of conditional extremes model parameters \(\alpha\), \(\beta\), \(\mu\) and \(\sigma\) are allowed to vary between covariate bins, and their overall roughness penalised for good performance by extending the penalty term in Equation 6 to include all "non-stationary" parameters; see Section 4.2 of the covXtreme user guide. It is also possible to pool estimates of residuals across covariate bins, useful when covariate bins with low occupancy are present.
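A minimal sketch of this simulation step is given below; the function and argument names are illustrative assumptions rather than the covXtreme interface.

```matlab
function yAssoc = simulateCE(y1, alphaB, beta, mu, sigma, Eb)
% Simulation under the fitted conditional extremes model in one covariate
% bin, for a Laplace-scale conditioning value y1 above the dependence
% threshold. Eb is the m x (D-1) matrix of jointly-saved residuals for the
% bin; alphaB, beta, mu, sigma are 1 x (D-1) fitted parameter vectors.
i      = randi(size(Eb, 1));              % resample one residual row jointly,
z      = mu + sigma.*Eb(i, :);            % preserving dependence between d'
yAssoc = alphaB*y1 + (y1.^beta).*z;       % Equation (5): associated values
end
```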
**Uncertainty quantification** for extremal dependence inference is performed by extending the bootstrap procedure described in Section 2.3. Conditional extremes models are estimated for each of the bootstrap resamples on which marginal models were estimated. The combined marginal and conditional extremes analysis therefore produces the set \(\{\{\xi_{d}^{r},\{\nu_{bd}^{r},\psi_{bd}^{r},\omega_{bd}^{r},\kappa_{bd}^{r},l_{bd}^{r}\}_{b=1}^{B}\}_{d=1}^{D},\{\{\alpha_{bd^{\prime}}^{r}\}_{b=1}^{B},\beta_{d^{\prime}}^{r},\mu_{d^{\prime}}^{r},\sigma_{d^{\prime}}^{r}\}_{d^{\prime}=2}^{D},\{\mathcal{E}_{b}^{r}\}_{b=1}^{B};\{\tau_{d}^{r},\lambda_{d}^{\circ}\}_{d=1}^{D},\{\tilde{\tau}_{d^{\prime}}^{r},\tilde{\lambda}_{d^{\prime}}^{\circ}\}_{d^{\prime}=2}^{D}\}\) of parameter estimates and residuals for each of \(R\) bootstrap resamples indexed by superscript \(r\). As for marginal modelling, the value of roughness coefficient \(\tilde{\lambda}_{d^{\prime}}^{\circ}\) is typically estimated using the original sample only, and adopted for all bootstrap resamples also, although the option to estimate a new optimal roughness coefficient per bootstrap resample is provided.
A sampling procedure is used to estimate the **conditional return value** distribution of the associated value for \(\hat{Y}_{d^{\prime}}\) given a \(T\)-year maximum event for \(\hat{Y}_{1}\) per covariate bin, and over unions of covariate bins. The approach combines importance sampling from the marginal \(T\)-year maximum distribution of \(\hat{Y}_{1}\) (exploiting Equation 4) with Monte Carlo sampling under the fitted conditional extremes model, managing transformations between marginal and standard Laplace scales. Further discussion of conditional return values can be found in Towe et al. (2023a).
### Estimation of design contours
Using the estimated marginal and conditional extremes models, covXtreme allows estimation of quantities such as return values for the dominant variable, conditional return values for associated variables, and environmental design contours. Haselsteiner et al. (2017), Ross et al. (2020) and Haselsteiner et al. (2021) provide recent reviews of contour estimation, encompassing a range of approaches. Algorithms to estimate three types of design contour are implemented in covXtreme. These are (a) constant exceedance contours; (b) direct sampling contours; and (c) conditional extremes (or "HT", in acknowledgement of the authors of the conditional extremes model, Heffernan and Tawn) constant density contours. Each of the three contour methods estimates a line (in 2-D) or surface (in 3-D) on which certain characteristics of the distributions of \(\hat{Y}_{2}|(\hat{Y}_{1}=y)\) (in 2-D, or \((\hat{Y}_{2},\hat{Y}_{3})|(\hat{Y}_{1}=y)\) in 3-D) for large \(y\) are preserved. For example, the constant exceedance contour in 2-D is a line on which the probability of exceedance in an "outward" sense (e.g. \(\mathbb{P}(\hat{Y}_{1}>\zeta_{1},\hat{Y}_{2}>\zeta_{2})\) in the first quadrant) is constant, for points \((\zeta_{1},\zeta_{2})\) on the contour; see Jonathan et al. (2014) for details. The direct sampling contour of Huseby et al. (2015) is similar, except that now the probability in the half plane defined by the tangent to the contour at any point of interest is constant, and the contour itself must be convex. In 2-D, the conditional extremes constant density contour defines a line on which the joint
density of the pair \((\dot{Y}_{1},\dot{Y}_{2})\) is constant. By construction, all contours pass through a so-called "lock point", defined as an extreme quantile (e.g. the \(T\)-year return value) of dominant variable \(Y_{1}\) and the corresponding conditional median of associated variate \(Y_{2}|(Y_{1}=T\)-year value). The lock point defines the value of the distributional characteristic preserved on the contour. Further discussion and illustrations are provided in Section 5 of the covXtreme user guide.
Mathematically, in 2-D, the constant exceedance contour \(\mathbf{\zeta}(\theta)=(\zeta_{1}(\theta),\zeta_{2}(\theta))\) for \(\theta\in\Theta\subseteq[0,360)\) can be defined by the equation
\[\mathbb{P}\left(\bigcap_{d=1}^{2}\left(r_{d}(\theta)\dot{Y}_{d}>r_{d}(\theta) \zeta_{d}(\theta)\right)\right)=p\]
where \(\mathbf{r}^{\circ}=(r_{1}^{\circ},r_{2}^{\circ})\) is a reference location (see e.g. Jonathan et al.2014), and \(r_{d}(\theta)=\zeta_{d}(\theta)-r_{d}^{\circ}\), \(d=1,2\), for some small probability \(p>0\). The direct sampling contour is defined analogously, using the equation
\[\mathbb{P}\left(\left(\dot{\mathbf{Y}}-\mathbf{\zeta}(\theta)\right)\cdot\mathbf{n}(\theta)>0\right)=p\]
where \(\dot{\mathbf{Y}}=(\dot{Y}_{1},\dot{Y}_{2})\) and \(\mathbf{n}(\theta)\) is the outward-pointing normal to the contour at \(\theta\), relative to \(\mathbf{r}^{\circ}\). The conditional extremes constant density contour consists of the set of points \(\mathbf{\zeta}\in\mathbb{R}^{2}\) for which the joint density \(f_{\dot{\mathbf{Y}}}(\mathbf{\zeta})=c\) for some small value \(c>0\). Algorithms for estimation of the three contour types exploit importance sampling (e.g. Section 3.4 of Towe et al.2021) for computational efficiency where possible. Contours can be estimated per covariate bin (e.g. as "directional" or "seasonal" contours), or integrated over covariate bins (e.g. to provide "omni" estimates over all bins).
The constant exceedance and direct sampling contours are convex curves by definition, in the sense that they enclose a convex region (of \(\mathbb{R}^{2}\) for 2-D contours), possibly with the help of the coordinate axes. Depending on the nature of the joint density, the conditional extremes constant density "contour" could exist in the form of a set of disjoint curves, some of which may be closed. The constant exceedance and direct sampling contours are invariant to transformations of variables, whereas the conditional extremes constant density contour is not (e.g. Ross et al. 2020).
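As a concrete illustration of the direct sampling construction, the sketch below (not the covXtreme implementation; it assumes a large bivariate sample already simulated under the fitted joint model, and omits the lock-point and covariate-bin handling) places a tangent line at the empirical \((1-p)\) quantile of the projection of the simulated cloud onto each direction, and intersects consecutive tangents to obtain contour vertices.

```matlab
function V = directSamplingContour(Ysim, p, nTheta)
% Minimal Monte Carlo sketch of a direct sampling (Huseby-type) contour.
% Ysim   : n x 2 sample simulated under the fitted joint model
% p      : exceedance probability defining the contour
% nTheta : number of tangent directions
th = linspace(0, 2*pi, nTheta + 1); th(end) = [];
C  = zeros(1, nTheta);
for j = 1:nTheta
    proj = Ysim*[cos(th(j)); sin(th(j))];       % projection onto direction j
    srt  = sort(proj);
    C(j) = srt(ceil((1 - p)*numel(srt)));       % empirical (1-p) quantile
end
V = zeros(nTheta, 2);
for j = 1:nTheta
    k = mod(j, nTheta) + 1;                     % next angle, wrapping round
    A = [cos(th(j)) sin(th(j)); cos(th(k)) sin(th(k))];
    V(j, :) = (A\[C(j); C(k)]).';               % tangent-line intersection
end
end
% e.g. V = directSamplingContour(Ysim, 1e-3, 180); plot(V(:,1), V(:,2))
```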
### Accessing the software, and performing a typical analysis
The covXtreme software is available for download from GitHub at Towe et al. (2023b). Software is written using MATLAB object-oriented programming. Full specifications of classes, properties and methods are therefore provided, and available for user inspection.
A typical analysis involves executing each of the MATLAB scripts Stage1_PeakPicking, Stage2_SetBinEdges, Stage3_FitMargin (for each variable marginally), Stage4_FitHeffernanTawn and Stage5_Contour in sequence. As explained in detail in the covXtreme user guide, each stage of the analysis involves the specification of control parameters for that stage. Default values for control parameters are provided, but it is necessary for the user at each stage to assess whether values are appropriate by confirming that diagnostic plots generated have reasonable characteristics. It may be necessary to repeat a given stage multiple times until acceptable diagnostic characteristics are found, before proceeding to the next stage.
Sections 3 and 4 below provide brief descriptions of the application of covXtreme to a pair of random variables (specifically significant wave height and spectral peak period) varying with a directional covariate (Section 3), and to four random variables (significant wave height, spectral peak period, wind speed and overturning moment) varying with a 2-D directional-seasonal covariate (Section 4).
## 3 Case study : Single covariate, bivariate response
This section outlines the application of covXtreme to the estimation of design contours for extreme storm peak significant wave height and associated spectral peak period, both varying with storm direction. The analysis follows the five stages discussed in Section 2. The data correspond to approximately 35 years of time-series output from a hindcast simulator for a location in the northern North Sea.
Following isolation of storm peak significant wave height \(H_{S}\), associated spectral peak period \(T_{P}\) and storm peak direction using Stage 1, Figure 1 illustrates the output of Stage 2 of the analysis, showing storm peak \(H_{S}\) and associated \(T_{P}\) as a function of the direction from which storms emanate, in degrees clockwise from north. The figure also shows the user-input bin edges at \(0^{\circ}\), \(20^{\circ}\), \(60^{\circ}\), \(225^{\circ}\), \(270^{\circ}\) and \(315^{\circ}\), specified so that the variation of \(H_{S}\) and \(T_{P}\) is approximately independent of direction within each bin, but different between bins. Thus, for example, the bin including \(45^{\circ}\) corresponds to the land shadow of Norway, with resulting low values of storm peak \(H_{S}\), whereas the bins including \(250^{\circ}\) and \(340^{\circ}\) contain large storms from the Atlantic Ocean and Norwegian Sea. Figure 2 provides scatter
plots of associated \(T_{P}\) on storm peak \(H_{S}\) for each of the six directional bins. The rates of occurrence of storms, and the marginal characteristics of \(H_{S}\) and \(T_{P}\), are clearly different between bins. In directional bin \([225,270)\), the dependence
Figure 1: Directional variation of storm peak significant wave height (\(H_{S}\), top) and associated spectral peak period (\(T_{P}\), bottom). Also shown are directional bin edges (red) for 6 bins. The variation of \(H_{S}\) and \(T_{P}\) is approximately independent of direction within each bin, but different between bins.
Figure 2: Associated spectral peak period on storm peak significant wave height per directional bin. Panel titles indicate that the covariate is direction “D”, and give the angular interval corresponding to the bin. It is apparent that the dependence between \(H_{S}\) and \(T_{P}\) varies between bins.
between \(T_{P}\) and \(H_{S}\) appears to be particularly strong.
Figure 3 illustrates marginal extreme value models for storm peak \(H_{S}\), the panels describing the variation of the estimates of generalised Pareto scale \(\nu\), gamma shape \(\omega\) and scale \(\kappa\) with direction, in terms of means (solid) and 95% uncertainty bands (dashed) over bootstrap resamples and random choices of marginal threshold non-exceedance probabilities drawn from the interval \([0.7,0.9]\). The empirical density of corresponding (stationary) estimates of generalised Pareto shape \(\xi\) is shown in Figure 9 of the user guide to be Gaussian-like with mean at approximately -0.2. The variation of \(\nu\) and \(\kappa\) with direction appears consistent with expectations given Figures 1 and 2. In particular, given estimated \(\nu\), the largest return values for storm peak \(H_{S}\) would be expected to emanate from the Atlantic and Norwegian Sea. Indeed, inspection of directional maxima for storm peak \(H_{S}\) for return periods of 10 and 100 years in Figure 4 confirms this: the largest contributors to the omni-directional maximum (in black) are the covariate bins corresponding to the Norwegian Sea (\([315,0)\), magenta) and Atlantic (\([225,270)\), cyan).
It is critical to assess the diagnostic plots generated by covXtreme to confirm that model fit is adequate. A number of illustrative diagnostic plots corresponding to this case study are shown in the user guide. For the current application, a plot of the predictive negative log likelihood from the cross-validation procedure (see Equation 1) suggests that the optimal choice \(\lambda^{\circ}\) of roughness penalty lies at around three. The goodness of fit of the marginal model is assessed by examining the stability of the estimate for generalised Pareto shape parameter \(\xi\) as a function of extreme value threshold \(\tau\). Comparison of empirical tails directly from the sample, with corresponding tails (and their uncertainties) estimated under the extreme value model, per covariate bin and omni-directionally, also suggests the marginal model is reasonable. The corresponding full marginal analysis must also be performed for associated \(T_{P}\) with direction.
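To make the cross-validation step concrete, the sketch below illustrates, in Python rather than the MATLAB of covXtreme, how a roughness penalty \(\lambda\) might be chosen by minimising a predictive negative log-likelihood over folds. The piecewise-constant generalised Pareto scale, the random fold assignment and the squared-difference roughness measure are schematic assumptions for illustration only, not the exact objective of Equation 1.

```python
import numpy as np
from scipy.optimize import minimize

def penalised_gp_nll(log_sigma, xi, exc, bin_idx, lam):
    """Penalised GP negative log-likelihood with a piecewise-constant scale per covariate bin.
    exc: threshold exceedances; bin_idx: covariate bin index of each exceedance."""
    sigma = np.exp(log_sigma)[bin_idx]
    z = 1.0 + xi * exc / sigma
    if np.any(z <= 0.0):
        return np.inf
    nll = np.sum(np.log(sigma) + (1.0 + 1.0 / xi) * np.log(z))
    roughness = np.sum(np.diff(np.exp(log_sigma)) ** 2)  # penalise scale differences between adjacent bins
    return nll + lam * roughness

def cv_predictive_nll(lam, exc, bin_idx, n_bins, xi=-0.2, n_folds=5, seed=1):
    """Predictive negative log-likelihood on held-out folds for a given penalty lam."""
    rng = np.random.default_rng(seed)
    fold = rng.integers(0, n_folds, size=exc.size)
    x0 = np.full(n_bins, np.log(exc.mean()))
    total = 0.0
    for k in range(n_folds):
        train, test = fold != k, fold == k
        fit = minimize(penalised_gp_nll, x0, args=(xi, exc[train], bin_idx[train], lam),
                       method="Nelder-Mead")
        total += penalised_gp_nll(fit.x, xi, exc[test], bin_idx[test], 0.0)  # unpenalised test score
    return total

# Choose lambda on a log grid by minimising the cross-validated predictive NLL, e.g.:
# lam_grid = np.logspace(-2, 2, 9)
# lam_star = lam_grid[np.argmin([cv_predictive_nll(l, exc, bin_idx, n_bins) for l in lam_grid])]
```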
Using the marginal models, Figure 5 illustrates parameter estimates for the conditional extremes model for associated \(T_{P}\) given storm peak \(H_{S}\) on standard Laplace margins. The non-stationary estimate for slope parameter \(\alpha\) suggests that dependence between the variables is high (near the maximum possible of unity), especially in the covariate bin corresponding to Atlantic storms. The value of exponent parameter \(\beta\) is slightly larger than zero, indicating that the sizes of residuals "\(y^{\beta}Z\)" from the conditional extremes model fit (Equation 5) grow very slowly with the conditioning value \(y\) on Laplace scale. The empirical densities for residual parameters \(\mu\) and \(\sigma\) are typical in our experience; unimodal densities, with approximately Gaussian shape.
Diagnostic plots are again assessed to confirm reasonable goodness of fit for the conditional extremes model. For the current application, parameter estimates for \(\alpha\) per covariate bin are reasonably stable as a function of dependence threshold \(\tilde{\tau}\) on the interval \([0.7,0.85]\) (see Figure 20 of the user guide). Further, the distribution of residuals \(\mathcal{E}\) from
Figure 3: Marginal directional extreme value model for storm peak significant wave height. Variation of parameter estimates for GP scale \(\sigma\), gamma shape \(\omega\) and scale \(\kappa\) with direction. Mean estimates in solid black, and bootstrap 95% uncertainty bands in dashed black. Also shown are directional bin edges (red). Directional dependence in particularly clear for \(\sigma\) and \(\kappa\).
the model fit does not appear to be obviously dependent on the directional covariate (see Figure 18 of the user guide).
Figure 4: Cumulative distribution functions for the 10-year (left) and 100-year (right) maximum of storm peak significant wave height per directional bin and over all bins (“omni”, black). Horizontal dashed lines drawn at the \(\exp(-1)\) quantile and median.
Figure 5: Parameter estimates from the conditional extreme value model for associated peak period given storm peak significant wave height. Top left: directional variation of \(\alpha\) summarised as mean and 95% bootstrap uncertainty band; top right: histogram of \(\beta\); bottom left: histogram of \(\mu\); bottom right: histogram of \(\sigma\).
The overall distribution of residuals from the model fit (see Figure 17 of the user guide) is typical; a Gaussian-like density with positive skew.
We simulate under the fitted models to generate the environmental design contours (see Section 2.5) shown in Figure 6 omni-directionally and Figure 7 per directional bin. The constant exceedance, direct sampling and HT density contours are labelled as "Exc", "Hus" and "HTDns" in the figures. The three different contour methods produce estimates which have similar characteristics in terms of describing the "extent" of the data cloud, reflecting the positive dependence between \(H_{S}\) and \(T_{P}\), and passing through the appropriate lock points (shown in green in the figures). However, the methods also use different definitions of the environmental contour; it is not surprising therefore that the contour estimates do not agree fully. The HTDns contour in particular produces a more variable estimate, especially when sample size is small (e.g. in the [20,60] directional bin in Figure 7). For engineering design, points on the contour with large values of \(H_{S}\) would typically be adopted to test the integrity of models for the offshore or coastal structure of interest.
Figure 6: Omni-directional environmental contours of associated peak period and storm peak significant wave height, for 10- and 100-year maximum values of storm peak significant wave height shown as solid and dashed lines respectively, corresponding to the Exceedance (Exc, blue), Heffernan-Tawn density (HTDns, orange) and Huseby (Hus, yellow) contours. Lock points for the respective return periods are shown in green.
## 4 Case study : 2-D covariate, multivariate response
The second case study is an extension of that reported in Section 3, which incorporates the over-turning moment (OTM) experienced by an offshore structure subjected to wind and wave loading. Specifically, we seek to characterise the joint distribution of (storm peak) \(H_{S}\) and wind speed (WS) conditional on extreme values of OTM, subject to directional and seasonal covariate effects. The analysis again follows the 5 stages described in Section 2.
Stage 1 of the analysis is isolation of storm peak events from the underlying storm \(H_{S}\) time-series data, following the same procedure as in Section 3. For Stage 2, Figure 8 shows the directional and seasonal variation of the conditioning variate OTM, and associated variates \(H_{S}\) and WS together with the directional and seasonal bin edges specified. Note that a smaller number of directional bins is used for this case study, to limit the total number of directional-seasonal bins and hence the number of parameters to be inferred in the analysis. Nevertheless, we are careful to allow for possible directional effects due to storms from the Atlantic and Norwegian Sea. There are clear directional and seasonal effects present.
The resulting scatter plots of WS on OTM per covariate bin are shown in Figure 9 (and the corresponding plot for \(H_{S}\) on OTM in Figure 28 of the user guide). Illustrative diagnostic plots for the estimation of marginal models for each of \(H_{S}\), OTM, and WS with directional and seasonal covariates are given in Figures 30-32 of the user guide. Marginal model parameters show clear directional and seasonal variation, and comparison of empirical and model-based tails suggests reasonable model fit. The fitted marginal models are then used to estimate the cumulative distribution functions for \(T\)-year maxima of interest; estimates for the 10- and 100-year maximum of WS are given in Figure 10. Unsurprisingly, the "omni" distribution (estimated over all directional and seasonal bins) is dominated by winter storms from the Atlantic sector, and the most probable 100-year maximum WS is approximately 29 ms\({}^{-1}\).
Figure 12 gives estimated conditional return value distributions for \(H_{S}\) and WS given 10-year and 100-year maximum OTM for individual directional-seasonal covariate bins, and "omni" over all covariate bins. For \(H_{S}\), conditional return values are again largest for Atlantic winter storms. For WS however, we observe an interesting transition involving winter storms from directional sectors [275,315] to [230,275]. The most probable _conditional_ value of WS given a 100-year maximum OTM is approximately 26.5 ms\({}^{-1}\), lower than the marginal most probable maximum 100-year wind speed.
The resulting omni-covariate environmental design contours for \(H_{S}\) and WS, given occurrences of the 10-year and 100-year maximum OTM, are shown in Figure 13. The general features of the contours are similar to those of Figure 6. Corresponding illustrative plots of contours per covariate bin for WS are given in Figure 40 of the user guide.
Figure 8: Directional (left) and seasonal (right) variation of overturning moment (OTM, top), storm peak significant wave height (\(H_{S}\), second row) and associated wind speed (WS, bottom). Also shown are bin edges for three directional bins coupled with two seasonal bins (and hence a total of \(6=3\times 2\) directional-seasonal bins). The variation of OTM, \(H_{S}\) and WS is approximately independent of covariates within bins, but different between bins.
Figure 10: Cumulative distribution functions for the 10-year (left) and 100-year (right) maximum of associated wind speed per covariate bin and over all bins (“omni”, black). Horizontal dashed lines drawn at the \(\exp(-1)\) quantile and median.
Figure 9: Associated wind speed on storm peak overturning moment per directional-seasonal bin. Panel titles indicate the angular intervals of direction “D” and season “S” for the bin. It is apparent that the dependence between WS and OTM varies between bins, with obvious non-linearity in some bins. A similar plot is generated for associated significant wave height on storm peak overturning moment.
Figure 11: Parameter estimates from the conditional extreme value model for associated wind speed given storm peak overturning moment. Top left: directional and seasonal variation of \(\alpha\) summarised as mean and 95% bootstrap uncertainty band; top right: histogram of \(\beta\); bottom left: histogram of \(\mu\); bottom right: histogram of \(\sigma\). Note the very high directional dependence for \(\alpha\) at around 280\({}^{\circ}\).
Figure 12: Conditional cumulative distribution functions of associated significant wave height (top) and associated wind speed (bottom) per directional bin and over all bins (“omni”, black), conditional on the 10-year (left) and 100-year (right) maximum of storm peak overturning moment. Horizontal dashed lines drawn at the exp(\(-1\)) quantile and median.
## 5 Discussion
This article introduces the covXtreme software for pragmatic multivariate extreme value analysis with covariate non-stationarity. The MATLAB software provides functionality to isolate temporal peaks from time-series, and to define an appropriate partition of the covariate domain. Using this partition, marginal generalised Pareto models are estimated for each variate independently assuming piecewise constant threshold and scale parameterisations within a penalised likelihood framework; optimal scale parameter roughness is estimated using cross-validation. Marginal models are then used to transform the sample to standard Laplace margins. A non-stationary conditional extremes model is then estimated with piecewise constant parameterisation for the slope ("\(\alpha\)") parameter, again using penalised likelihood estimation with cross-validation to estimate optimal roughness. Simulations and importance sampling are then used to estimate the distribution of \(T\)-year maxima for each variate, and environmental design contours conditional on a single conditioning variate. Uncertainty due to marginal and dependence threshold selection is quantified by fitting multiple models with randomly chosen thresholds within user-specified plausible intervals of threshold non-exceedance probability. Uncertainties in parameter estimates and subsequent inferences from model fitting are estimated using bootstrap resampling. As noted in Section 1, the software has already been used in a number of studies, mostly but not exclusively metocean-related.
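As an illustration of the marginal transformation step described above, the following Python sketch maps one variate onto standard Laplace margins using a fitted generalised Pareto tail above the threshold. For simplicity, the gamma body used by covXtreme below the threshold is replaced here by the empirical distribution function, and a single covariate bin is assumed, so the snippet is schematic rather than a reimplementation of the MATLAB code.

```python
import numpy as np

def gp_tail_cdf(x, u, sigma, xi, tau):
    """Marginal CDF above threshold u: non-exceedance mass tau below u, GP tail above."""
    return tau + (1.0 - tau) * (1.0 - np.maximum(1.0 + xi * (x - u) / sigma, 0.0) ** (-1.0 / xi))

def laplace_inv_cdf(p):
    """Inverse CDF of the standard Laplace distribution."""
    return np.where(p < 0.5, np.log(2.0 * p), -np.log(2.0 * (1.0 - p)))

def to_laplace_margin(x, sigma, xi, tau=0.8):
    """Transform one variate (single covariate bin) onto standard Laplace margins."""
    u = np.quantile(x, tau)                 # extreme value threshold at non-exceedance probability tau
    ranks = np.argsort(np.argsort(x))       # 0, 1, ..., n-1
    p = (ranks + 0.5) / x.size              # empirical body (covXtreme instead fits a gamma below u)
    above = x > u
    p[above] = gp_tail_cdf(x[above], u, sigma, xi, tau)
    return laplace_inv_cdf(np.clip(p, 1e-9, 1.0 - 1e-9))
```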
The covXtreme methodology makes a number of simplifying assumptions, motivated by the authors' experience of extreme value analysis applied to the ocean environment using a range of methodologies of different complexities. For example, covXtreme relies on sensible user-specified partitioning of the covariate domain into bins within which it is reasonable to assume common marginal tails and a common dependence structure; this simplifies inference considerably compared with competitor approaches. Moreover, we believe that inferences using covXtreme with good partitioning are competitive with alternatives using more sophisticated tools. Specifically, the marginal model in covXtreme is equivalent to a Voronoi set representation with pre-specified covariate partition, which was demonstrated by Zanini et al. (2020) to be competitive with P-spline and Bayesian adaptive regression spline covariate representations. covXtreme further assumes that the generalised Pareto shape parameter \(\xi\) in each marginal model is constant with covariate; because of the relative difficulty of estimating the shape parameter compared with the scale, this would appear reasonable in the absence of strong evidence to the contrary, especially for small samples of data. Likewise, the \(\beta\), \(\mu\) and \(\sigma\) parameters of the conditional extremes model are assumed stationary. This appears reasonable since \(\beta\) is an exponent, again
Figure 13: Omni-directional-seasonal environmental contours of associated significant wave height (left) and associated wind speed (right) against storm peak overturning moment. Contours are shown for 10- and 100-year maximum values of storm peak overturning moment as solid and dashed lines respectively, corresponding to the Exceedance (blue), Heffernan-Tawn density (HTDns, orange) and Huseby (yellow) contours. Corresponding contours per covariate bin are also generated.
difficult to estimate. Moreover, \(\mu\) and \(\sigma\) are essentially nuisance parameters; any model misspecification caused by the assumption of stationarity will be accommodated to some extent by the adoption of residuals from model fitting for inferences under the model. It might be appropriate to relax some of the assumptions for specific applications, for example (a) when there is strong evidence that generalised Pareto \(\xi\) is unlikely to be constant (e.g. due to land shadow and fetch limitation effects on \(H_{S}\)), or (b) since parameter estimates for conditional extremes \(\alpha\) and \(\mu\) are highly correlated when \(\beta\) is close to unity.
We anticipate that covXtreme might provide a pragmatic starting point to studies of spatial and temporal dependence of extremes, since the underlying methodology of both conditional spatial extremes (e.g. Shooter et al. 2019, Shooter et al. 2022, Wadsworth and Tawn 2019) and Markov extremal and similar time-series models (e.g. Winter and Tawn 2017, Tendijck et al. 2019, Tendijck et al. 2023) involves each of Stages 1-4 of the covXtreme approach. We also hope that covXtreme might be useful generally in estimating joint characteristics of extremes from multivariate time-series.
## 6 Acknowledgement
The original version of this software was developed as part of a project part-funded by the European Union ERANET entitled "Environmental Contours for SAfe DESign of Ships and other marine structures" (ECSADES), and was summarised in a review paper on the definition and application of environmental contours (Ross et al. 2020). covXtreme software, user guide and test data sets are available at Towe et al. (2023b).
|
2309.16835 | Hyperfine structure of the $\mathbf{A^{1}Π}$ state of AlCl and its
relevance to laser cooling and trapping | The majority of molecules proposed for laser cooling and trapping experiments
have $\Sigma$-type ground states. Specifically, $^2\Sigma$ states have cycling
transitions analogous to D1-lines in alkali-metal atoms while $^1\Sigma$ states
offer both strong and weak cycling transitions analogous to those in
alkaline-earth atoms. Despite this proposed variety, to date, only molecules
with $^2\Sigma$-type ground states have successfully been confined and cooled
in magneto-optical traps. While none of the proposed $^1\Sigma$-type molecules
have been successfully laser cooled and trapped, they are expected to have
various advantages in terms of exhibiting a lower chemical reactivity and an
internal structure that benefits the cooling schemes. Here, we present the
prospects and strategies for optical cycling in AlCl -- a $^1\Sigma$ molecule
-- and report on the characterization of the $A^{1}\Pi$ state hyperfine
structure. Based on these results, we carry out detailed simulations on the
expected capture velocity of a magneto-optical trap for AlCl. Finally, using
{\it ab initio} calculations, we identify the photodissociation via a $3^1\Pi$
state and photoionization process via the $3^1\Sigma^+$ state as possible loss
mechanisms for a magneto-optical trap of AlCl. | J. R. Daniel, J. C. Shaw, C. Wang, L. -R. Liu, B. K. Kendrick, B. Hemmerling, D. J. McCarron | 2023-09-28T20:29:40Z | http://arxiv.org/abs/2309.16835v2 | Hyperfine structure of the A\({}^{1}\Pi\) state of AlCl and its relevance to laser cooling and trapping
###### Abstract
The majority of molecules proposed for laser cooling and trapping experiments have \(\Sigma\)-type ground states. Specifically, \({}^{2}\Sigma\) states have cycling transitions analogous to D1-lines in alkali atoms while \({}^{1}\Sigma\) states offer both strong and weak cycling transitions analogous to those in alkaline earth atoms. Despite this proposed variety, to date, only molecules with \({}^{2}\Sigma\)-type ground states have successfully been confined and cooled in magneto-optical traps. While none of the proposed \({}^{1}\Sigma\)-type molecules have been successfully laser cooled and trapped, they are expected to have various advantages in terms of exhibiting a lower chemical reactivity and an internal structure that benefits the cooling schemes. Here, we present the prospects and strategies for optical cycling in AlCl - a \({}^{1}\Sigma\) molecule - and report on the first characterization of the \(A^{1}\Pi\) state hyperfine structure. Based on these results, we carry out detailed simulations on the expected capture velocity of a magneto-optical trap for AlCl. Finally, using _ab initio_ calculations, we identify the two-photon photo-ionization process via the \(3^{1}\Sigma^{+}\) state as a possible loss mechanism for a MOT of AlCl.
+
Footnote †: preprint: APS/123-QED
## I Introduction
The ability to control the rich internal and external degrees of freedom of polar molecules has the prospect of enabling a large number of novel applications, including the search for new physics beyond the Standard Model and precision measurements [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21], controlled chemistry [22; 23; 24; 25], and quantum simulation and computation [26; 27; 28; 29; 30; 31; 32]. Realizing the necessary control for such applications can realistically only be achieved at low temperatures, where only a small number of quantum states are occupied, and with trapped samples that allow for long interaction times. One way to produce ultracold molecules is to associate laser-cooled atoms with carefully controlled external fields. While this method has been successful, it is limited to molecules which consist of laser-coolable atoms [33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. On the other hand, over the past two decades, a growing number of molecules have been identified with internal structures that allow for photon cycling to an extent that renders these molecules amenable to direct laser cooling and trapping [43; 44; 45; 46; 47; 48]. Among those species, a diverse range of diatomic molecules have been explored both theoretically [49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64] and experimentally [65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86]. Furthermore, this experience has helped to guide recent efforts extending these techniques to polyatomic species [87; 88; 89; 90; 80; 82; 84; 85; 86].
Nevertheless, at present, only the diatomic molecules SrF [97; 98], CaF [99; 100; 101], YO [102] and the polyatomic CaOH [93] have successfully been laser cooled and confined in a magneto-optical trap (MOT), a crucial milestone towards the growing list of applications. All of these molecules possess an unpaired electronic spin and a \({}^{2}\Sigma\)-type ground state with optical cycling transitions analogous to D1 lines in alkali atoms. By contrast, molecules with \({}^{1}\Sigma\) ground states are expected to offer certain advantages for laser cooling, as their closed shells render them less reactive and their internal structures show similarities to atoms in a type-II MOT. For \({}^{1}\Sigma\) molecules, both strong and weak optical cycling transitions may be available, analogous to the \({}^{1}S_{0}\rightarrow\,^{1}P_{1}\) and \({}^{1}S_{0}\rightarrow\,^{3}P_{J}\) transitions regularly used in alkaline earth atoms. To the best of our knowledge, the only three \({}^{1}\Sigma\)-type species currently being experimentally studied for laser cooling are TlF [67; 103; 104], AlF [82; 83] and AlCl [43; 56; 105; 106].
In this work, we spectroscopically study the hyperfine and the magnetic sub-structure of AlCl, discuss the implications of its properties on laser cooling and trapping and present theoretical estimates of the expected capture velocities of a MOT for AlCl. The metal halide AlCl has been proposed as an excellent candidate for laser cooling due to its high photon scattering rate of \(\approx 2\pi\times 25\,\mathrm{MHz}\)[107] and its almost unity Franck-Condon factors [105; 106; 108; 56; 109]. A key challenge to laser cooling and trapping AlCl is producing sufficient laser light for the optical cycling transition at \(261.5\,\mathrm{nm}\) which connects the electronic ground \(X^{1}\Sigma^{+}\) state with the excited \(A^{1}\Pi\) state. However, recent developments in UV laser technology, including work done by the authors, have shown that robust systems capable of more than \(1\,\mathrm{W}\) of laser power at this wavelength are now within reach [110; 111; 112; 113; 114].
AlCl was first laboratory-confirmed in 1913 and has since undergone many spectroscopic and chemical studies [106; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143]. Moreover, these efforts have been complemented by many theoretical studies that explored the properties of AlCl in detail [105; 106; 108; 109; 144; 145; 146; 147; 148; 149; 150; 151; 152]. Its presence in the interstellar medium, carbon-rich stars, and as part of the models of exoplanets' atmospheres renders AlCl a molecule of astrophysical interest [145; 146; 148; 153; 154; 155; 156; 157; 158; 159; 160; 161]. Despite this extensive work, some fundamental properties, such as the dipole moment of AlCl, remain unknown and have only been estimated theoretically [146], or, in some models, values from similar molecules are substituted [154; 155]. AlCl also has many applications outside the laboratory; for instance, it is utilized in the production of photovoltaic grade silicon [162; 163; 164], is spectroscopically found in rocket plumes [165; 166; 167] and can be used as a probe for the detection of chlorides in drinking water [168]. Another interesting and unique aspect of AlCl is that the disproportionation reaction channel, which forms the stable compound AlCl\({}_{3}\), can be blocked below \(180\,\mathrm{K}\) and solid densities of AlCl have been isolated using this characteristic [169]. This property can potentially provide the ideal starting point for producing a large number of molecules in the gas phase by applying nanosecond-pulsed laser ablation to a fabricated thin precursor film of AlCl, adding to the list of advantages of AlCl as a candidate for laser cooling and trapping and the described applications.
## II The AlCl structure
AlCl has two main isotopologues, containing \({}^{35}\)Cl and \({}^{37}\)Cl, with natural abundances of \(\approx 76\%\) and \(\approx 24\%\), respectively. The large difference in electronegativity between Al (1.61) and Cl (3.16) results in a polar bond with a theoretically predicted electric dipole moment of \(1.6\,\mathrm{D}\) for the \(X^{1}\Sigma^{+}\) state [146]. The \(X^{1}\Sigma^{+}\) ground state is connected via a UV photon at \(261.5\,\mathrm{nm}\) to the excited, short-lived \(A^{1}\Pi\) state, which has a lifetime of \(\approx 6\,\mathrm{ns}\)[107], and via a blue photon at \(\approx 407\,\mathrm{nm}\) to the intermediate triplet \(a^{3}\Pi\) state. The meta-stable intermediate state is split into four \(\Omega\)-sub states whose linewidths have been estimated to be on the order of 3-90 Hz [105], but no precise experimental data exists to date. The \(A^{1}\Pi\gets X^{1}\Sigma^{+}\) excitation, which promotes a valence electron from the \(9\sigma\)- to the \(4\pi\)-orbital, closely resembles the S-P transition in atomic aluminum [132]. The similar vibrational constants and bond lengths of the \(X^{1}\Sigma^{+}\) and the \(A^{1}\Pi\) states result in a calculated Franck-Condon factor of 99.88% for the \(v^{\prime\prime}=0\) band [106]. The vibrational levels are split into rotational states with a rotational constant of \(\approx 7.3\,\mathrm{GHz}\). The presence of the nuclear spins of both the chlorine atom (\(I_{\mathrm{Cl}}=3/2\)) and the aluminum atom (\(I_{\mathrm{Al}}=5/2\)) adds complex cascaded hyperfine splittings to each state. The hyperfine splitting of the \(X^{1}\Sigma^{+}\) state, however, is smaller than the natural linewidth and remains unresolved.
Fig. 1 shows the corresponding detailed level scheme relevant to optical cycling in AlCl. The quantum numbers of the states are defined in Section II.2. All Q-type transitions (\(\Delta J=0\)) can be used for laser cooling since they are rotationally closed due to dipole and parity selection rules.
### Spectroscopy on AlCl
For the analysis in this work, two sets of data were used from two separate experiments, one in the Hemmerling group at the University of California, Riverside (UCR), and the other in the McCarron group at the University of Connecticut (UConn).
In the UCR group, AlCl is produced in a cryogenic helium buffer-gas beam source (CBGB) [171; 172] at \(3.4\,\mathrm{K}\) via short-pulsed (\(5\,\mathrm{ns}\)) laser ablation of an Al:KCl mixture target [173] with a Nd:YAG laser (Mini-Lite II, Continuum) of \(\approx 10\,\mathrm{mJ}\) per shot. More details on the source are described in Ref. [106]. The fluorescence data presented here was acquired \(\approx 40\,\mathrm{cm}\) downstream from the source. The molecules were excited with laser light aligned orthogonal to the molecular beam direction and the induced fluorescence was collected with a photomultiplier tube (H10722-04, Hamamatsu). The excitation laser light of a few mW at \(261.5\,\mathrm{nm}\) was produced by frequency-doubling the output of a \(522\,\mathrm{nm}\) laser (VALO Vecsel, Vexlum) with a custom-built second-harmonic generation cavity. The laser frequency was scanned and stabilized by using the frequency readout of a wavelength meter (WS-7, High-Finesse).
At UConn, the experimental approach is similar, with pulses of cold AlCl produced from a cryogenic buffer-gas beam source at \(2.7\,\mathrm{K}\) via laser ablation (\(\approx 20\,\mathrm{mJ}\) at \(532\,\mathrm{nm}\)). The source has been described previously in Ref. [174]. Molecules are optically addressed 94 cm downstream of the source below an EMCCD camera using \(\approx 100\,\mathrm{\mu W}\) of laser light at \(261.5\,\mathrm{nm}\). This light is picked-off from a homebuilt laser system that generates
Figure 1: Electronic and rovibrational energy level structure of AlCl. The Q transitions, which are used for laser cooling, are rotationally closed, unlike the P and R transitions. The manifolds of the \(X^{1}\Sigma^{+}\) and \(A^{1}\Pi\) states include 72 and 144 levels, respectively, all of which are involved in laser cooling AlCl. Adapted from [170].
\(>1\) W in the fourth-harmonic from an infra-red fiber amplifier seeded by an external cavity diode laser (ECDL) at 1046 nm [114]. The ECDL is frequency stabilized and scanned using a transfer cavity locked to a frequency stabilized HeNe laser.
### Hamiltonian of AlCl
In this section, the hyperfine energy level structure of the \(X\)- and \(A\)-electronic states is described in detail and the new molecular constants, which were acquired by using data from the spectroscopy setups, are discussed. Similar to the approach taken for AlF [82], we choose to describe both the \(X^{1}\Sigma^{+}\) and \(A^{1}\Pi\) states using a common Hund's case (a) coupling scheme. To describe the electronic state of AlCl, we define the total angular momentum \(\mathbf{J}=\mathbf{\Omega}+\mathbf{R}\), where \(\mathbf{R}\) is the rotational angular momentum and \(\mathbf{\Omega}=\mathbf{\Lambda}+\mathbf{\Sigma}\) is the sum of the projection of the electron orbital angular momentum \(\mathbf{L}\) and the electron spin angular momentum \(\mathbf{S}\) on the internuclear axis. For the \(X^{1}\Sigma^{+}\) state, the projections are \(\Lambda=0\) and \(\Sigma=0\). For the \(A^{1}\Pi\) state, \(\Lambda=\pm 1\) and \(\Sigma=0\). The hyperfine structure is accounted for by coupling \(\mathbf{J}\) to the nuclear spin of the aluminum atom, \(\mathbf{F_{1}}=\mathbf{J}+\mathbf{I_{Al}}\), which is in turn coupled to the nuclear spin of the chlorine atom, \(\mathbf{F}=\mathbf{F_{1}}+\mathbf{I_{Cl}}\).
_X-State._ The Hamiltonian for the \(X^{1}\Sigma^{+}\) state has the form [175]
\[H_{\mathrm{X}}=H_{0}^{\mathrm{X}}+H_{\mathrm{EQ}}\quad, \tag{1}\]
where \(H_{0}^{\mathrm{X}}\) includes the electronic, vibrational, and rotational energy terms, and \(H_{\mathrm{EQ}}\) is the electric quadrupole term. \(H_{0}^{\mathrm{X}}\) is expressed in terms of the Dunham expansion for \(E(\nu,J)\) with equilibrium constants that have previously been measured [137].
The electric quadrupole interaction has previously been found to be the dominant hyperfine interaction in the \(X^{1}\Sigma^{+}\) state [128], given as
\[H_{\mathrm{EQ}} = \sum_{\alpha}\frac{\sqrt{6}(eQq_{0})_{\alpha}}{4I_{\alpha}(2I_{ \alpha}-1)}T_{0}^{2}(\mathbf{I}_{\alpha},\mathbf{I}_{\alpha})\quad, \tag{2}\]
where \(\alpha\) indicates the nucleus of aluminum and chlorine.
Higher order terms, such as the nuclear-spin-rotation and the nuclear-spin-nuclear-spin interaction term are neglected, given the broad linewidth of the \(A^{1}\Pi\gets X^{1}\Sigma^{+}\) transition that is used in this study and the fact that these terms are expected to be two orders of magnitude smaller than the quadrupole terms, comparable to the case for the similar molecule AlF [82].
_A-State._ For the \(A^{1}\Pi\) state, the orbital angular momentum is non-zero. The orbital degeneracy of \(\Lambda=\pm 1\) is lifted due to the presence of the end-over-end rotation of the molecule and results in a splitting of the rotational states into two opposite parity states, also known as \(\Lambda\)-doubling, see Fig. 1. The Hamiltonian for the \(A^{1}\Pi\) state has the form [175]
\[H_{\mathrm{A}}=H_{0}^{\mathrm{A}}+H_{\mathrm{LI}}+H_{\Lambda}+H_{\mathrm{EQ}} +H_{\mathrm{Z}} \tag{3}\]
where \(H_{0}^{\mathrm{A}}\) includes the electronic, vibrational and rotational energy terms, \(H_{\mathrm{LI}}\) is the nuclear-spin-orbital hyperfine term, \(H_{\Lambda}\) is the \(\Lambda\)-doubling term, \(H_{\mathrm{EQ}}\) is the electric quadrupole term, and \(H_{\mathrm{Z}}\) is the Zeeman term. \(H_{0}^{\mathrm{A}}\) is expressed in the form of the Dunham expansion with equilibrium constants that have been measured in a previous study [106].
Due to the singlet nature of the \(A^{1}\Pi\) state, the \(\Lambda\)-doubling term can be expressed as
\[H_{\Lambda}=-\sum_{k=\pm 1}e^{-2ik\phi}qT_{2k}^{2}(\mathbf{J},\mathbf{J}) \tag{4}\]
and the nuclear-spin-orbital hyperfine term can be expressed as
\[H_{\mathrm{LI}}=\sum_{\alpha}a_{\alpha}T^{1}(\mathbf{L})\cdot T^{1}(\mathbf{I }_{\alpha})\quad. \tag{5}\]
The quadrupole term for the \(A^{1}\Pi\) state has both a component along and a component perpendicular to the internuclear axis
\[H_{\mathrm{EQ}} = \sum_{\alpha}\frac{eQ_{\alpha}}{4I_{\alpha}(2I_{\alpha}-1)}\left[ \sqrt{6}q_{0,\alpha}T_{0}^{2}(\mathbf{I}_{\alpha},\mathbf{I}_{\alpha})\right. \tag{6}\] \[\left.+\sum_{k=\pm 1}e^{(-2ik\phi)}q_{2,\alpha}T_{2k}^{2}( \mathbf{I}_{\alpha},\mathbf{I}_{\alpha})\right]\quad.\]
\begin{table}
\begin{tabular}{|c|c|} \hline Constant & Value (MHz) \\ \hline \((eQq_{0})_{Al}\) & -29.8(50) \\ \((eQq_{0})_{Cl}\) & -8.6(10) \\ \hline \end{tabular}
\end{table}
Table 1: Experimental electric quadrupole constants \(eQq_{0}\) for the \(X^{1}\Sigma^{+}\) state as measured in previous work [128].
Figure 2: Normalized fluorescence data (red circles: UCR, blue diamonds: UConn) and model (black solid line) of **(a)** R(0), **(b)** R(1), **(c)** R(2), and **(d)** R(3) of AlCl. The vertical black lines represent the different transitions predicted by our Hamiltonian model with their heights corresponding to their relative line strengths.
The electric quadrupole constants are defined in terms of nuclear quadrupole moment, \(eQ\), and electric-field-gradient at each nucleus, with \(q_{0}\) being equal to the \(V_{zz}\) component and \(q_{2}\) being equal to \(2\sqrt{6}(V_{xx}-V_{yy})\)[175]. Based on our previous _ab initio_ calculations [106], we performed additional _ab initio_ calculations of the electric field gradients to get a theoretical estimate of the quadrupole constants, as shown in Tab. 2 and Tab. 3. Using the quadrupole moments of 147.7 mb for the Al nucleus [147] and 85 mb for the \({}^{35}\)Cl nucleus [176], we find reasonable agreement between the theoretical and experimental values for the \(X^{1}\Sigma^{+}\) state shown in Tab. 1. This result increases our confidence in our _ab initio_ values for the \(A^{1}\Pi\) state.
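The quadrupole coupling constants in Tab. 2 follow directly from the computed field gradients and the nuclear quadrupole moments quoted above via the standard conversion factor of approximately 234.96 MHz per barn per atomic unit of field gradient. The short Python check below is only an illustrative sketch; the printed numbers are rounded reproductions of the Tab. 2 entries.

```python
# eQq0 [MHz] = 234.9647 * Q [barn] * V_zz [a.u.]  (standard conversion factor)
EQQ_CONV = 234.9647   # MHz per (barn * atomic unit of electric-field gradient)

Q_AL = 0.1477         # Al nuclear quadrupole moment in barn (147.7 mb, as used in the text)
Q_CL = 0.085          # 35Cl nuclear quadrupole moment in barn (85 mb)

vzz = {               # ab initio field gradients from Table 2, in atomic units
    ("Al", "X"): -0.809, ("Al", "A"): -0.220,
    ("Cl", "X"): -0.675, ("Cl", "A"): -2.554,
}

for (nucleus, state), v in vzz.items():
    q_moment = Q_AL if nucleus == "Al" else Q_CL
    print(f"eQq0({nucleus}, {state}) = {EQQ_CONV * q_moment * v:6.1f} MHz")
# -> approximately -28.1, -7.6, -13.5 and -51.0 MHz, matching Table 2
```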
Given the broad linewidth of the \(A^{1}\Pi\gets X^{1}\Sigma^{+}\) transition, we use the following procedure to estimate the new equilibrium constants for the \(A^{1}\Pi\) state. Starting with the R(0)-transition, we first use a least-squares fit to extract the hyperfine constants, \(a_{\rm Al}\) and \(a_{\rm Cl}\), since the structure of this line is dominated by these parameters and much less affected by others. Then, with the hyperfine parameters set to their optimum values, we use a least-square fit to determine an upper limit of the \(\Lambda\)-doubling constant \(q\) by fitting the R(0)-R(3) transitions simultaneously. During the whole procedure, we keep the quadrupole constants, \(eQq_{0}\) and \(eQq_{2}\), for both nuclei set to the values determined by the _ab initio_ calculations.
The resulting equilibrium constants are presented in Tab. 4. The errors of the constants correspond to a \(\approx 2.5\%\) deviation of the absolute value of the residuals of the fit and model from the optimal value. We note that the value for the \(\Lambda\)-doubling constant is an upper limit since it is mainly determined by the width of the broad features of R(2) and R(3). Our approach yields a good agreement between both independent data sets, UCR and UConn, and the Hamiltonian model using the combination of fitted and _ab initio_ values for the molecular constants, as shown in Fig. 2. We note a small discrepancy between the two data sets shown in Fig. 2 (a). We attribute this to the nonlinearity of the transfer cavity used in the UConn laser frequency stabilization scheme. Here, the frequency of the ECDL inherits the nonlinearity of the transfer cavity, which is exacerbated by the o-rings used in its design [177]. This nonlinearity depends on the cavity DC value and was not a significant effect in the other R-line scans. For this reason, only the UCR data were used to fit the R(0) transition. This highlights that, while transfer cavities are effective for frequency stabilization, care must be taken when using this approach for spectroscopy, especially where subsequent stages of second- or fourth-harmonic generation amplify these non-linearities.
Finally, overlaying the model with the parameters acquired through the R-transitions with the fluorescence measurements of the Q-branch at UCR and UConn yields a reasonable agreement, as shown in Fig. 3. We note that the Q-branch fit has no free parameters besides the overall frequency offset, the rotational temperature (2.5 K/1.6 K for the UCR/UConn data) and the overall amplitude. Finally, the density of the lines of the Q-transitions illustrates the similarity of the rotational constants of the \(X^{1}\Sigma^{+}\) and \(A^{1}\Pi\) states, which in part leads to the predicted highly diagonal Franck-Condon factors of AlCl [106].
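The compactness of the Q branch relative to the R branch can be illustrated with a simple rigid-rotor estimate of the line positions. In the sketch below, the ground-state rotational constant is taken as the \(\approx 7.3\,\mathrm{GHz}\) quoted above, while the excited-state value is an illustrative assumption chosen only to be close to it; centrifugal distortion and hyperfine structure are neglected.

```python
import numpy as np

B_X = 7.3e9   # X-state rotational constant in Hz (about 7.3 GHz, as quoted above)
B_A = 7.1e9   # A-state rotational constant: illustrative value only, assumed close to B_X

def q_line(J, nu0=0.0):
    """Q(J): J'' = J -> J' = J, rigid-rotor line position relative to the band origin nu0."""
    return nu0 + (B_A - B_X) * J * (J + 1)

def r_line(J, nu0=0.0):
    """R(J): J'' = J -> J' = J + 1."""
    return nu0 + B_A * (J + 1) * (J + 2) - B_X * J * (J + 1)

J = np.arange(6)
print("Q-branch offsets (GHz):", np.round(q_line(J) / 1e9, 1))  # bunched near the band origin
print("R-branch offsets (GHz):", np.round(r_line(J) / 1e9, 1))  # spread by roughly 2B per line
```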
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Nucleus & State & \(V_{zz}\) (a.u.) & \(eQq_{0}\) (MHz) \\ \hline Al & \(X^{1}\Sigma^{+}\) & -0.809 & -28.1 \\ & \(A^{1}\Pi\) & -0.220 & -7.6 \\ \hline Cl & \(X^{1}\Sigma^{+}\) & -0.675 & -13.5 \\ & \(A^{1}\Pi\) & -2.554 & -51.0 \\ \hline \end{tabular}
\end{table}
Table 2: _Ab initio_ calculations of the electric field gradients and the quadrupole constants \(eQq_{0}\).
\begin{table}
\begin{tabular}{|c|c|} \hline Constant & Value (MHz) \\ \hline \(a_{\rm Al}\) & \(131.9\,\binom{+3.6}{-3.3}\) \\ \(a_{\rm Cl}\) & \(42.0\,\binom{+7.0}{-7.0}\) \\ \(q\) & \(<-3.0\) \\ \hline \end{tabular}
\end{table}
Table 4: Molecular constants for the \(A^{1}\Pi\) state obtained in this work.
Figure 3: Normalized fluorescence data (red circles: UCR data, blue diamonds: UConn data) and model (black solid line) of the Q(0)-Q(5) transitions of AlCl. The models use the fitting parameters of the R-lines from Tab. 4 and only optimizes for the overall signal amplitude, the rotational temperature (\(T_{\rm rot}=2.5\) K solid line, \(T_{\rm rot}=1.6\) K dashed line) and the absolute frequency offset. The vertical black lines represent the different transitions predicted by our Hamiltonian model with their heights corresponding to their relative line strengths.
### Zeeman Splitting
To set up a magneto-optical trap for a new species, it is important to fully understand the Zeeman splitting of the cooling transition in order to design the magnetic field gradients appropriately. In the case of AlCl, the dominant Zeeman term in the \(A^{1}\Pi\) state is the interaction between the electron-orbital-angular-momentum, \(\mathbf{L}\), and the applied magnetic field, \(\mathbf{B}\). This interaction has the form
\[H_{\mathrm{Z}}=g_{L}\mu_{B}T^{1}(\mathbf{L})\cdot T^{1}(\mathbf{B}) \tag{7}\]
where \(g_{L}=1\) and \(\mu_{B}\) is the Bohr magneton. The dominant Zeeman interaction in the \(X^{1}\Sigma^{+}\) state is the nuclear-spin-Zeeman interaction, \(\mathbf{I}_{\alpha}\cdot\mathbf{B}\), which has a magnetic dipole moment that is smaller than that of the \(A^{1}\Pi\) state by a factor of \(m_{e}/m_{p}\), where \(m_{e}\) is the electron mass and \(m_{p}\) is the proton mass. Thus, the Zeeman splitting of the \(X^{1}\Sigma^{+}\) state is negligible and Zeeman shifts on the cycling transition are fully determined by the splitting of the \(A^{1}\Pi\) state. We show the calculated Zeeman splitting of the \(A^{1}\Pi,(v^{\prime}=0,J^{\prime}=1)\) state as a function of an external magnetic field in Fig. 4. This is our principal target excited state for optical cycling in AlCl using the \(Q(1)\) cycling transition. However, the close proximity of other Q-transitions means that a single laser frequency will likely address and lead to optical cycling for molecules in multiple low-lying rotational states within the electronic ground state, see Fig. 3.
In the low field regime, magnetic sublevels are shifted linearly according to Landé \(g\)-factors that vary in both magnitude and sign for different hyperfine states; see Tab. 5 for the \(g\)-factors of the target even parity \(A^{1}\Pi,(v^{\prime}=0,J^{\prime}=1)\) excited state. While in principle this structure can lead to magnetically tunable transitions for use in a Zeeman slower, the large hyperfine spread of the \(J^{\prime}\) states within the \(A^{1}\Pi\) state, combined with the lack of a type-I transition, can make addressing individual velocity classes challenging.
Realizing confining transitions in a MOT of AlCl to the \(F_{1}=5/2\) and \(7/2\) manifolds will require an orthogonal circular laser polarization compared to confining transitions to the \(F_{1}=3/2\) manifold since these states have \(g\)-factors with opposite sign. We emphasize that a MOT of AlCl appears similar in nature to atomic type-II MOTs. Here the magnetically tunable states that enable confinement are in the electronic excited state and decay rapidly via spontaneous emission to the unresolved and unperturbed \(X^{1}\Sigma^{+}\) ground state. By contrast, in \({}^{2}\Sigma\)-type molecules, the dominant Zeeman shift is in the ground state, which can lead to stationary magnetic dark states that require either static dual-frequencies [178] or rapid synchronous switching of the field gradient and laser polarizations [179; 180] to generate substantial confining forces. While the general level structure and Zeeman shifts within molecules with \({}^{1}\Sigma\) ground states may simplify magneto-optical trapping methods, coherent dark states may still need to be addressed, see Section III.4 and Ref. [181].
In Tab. 5, we list the Lande factors for low magnetic fields (\(<10\,\mathrm{Gauss}\)) of the different hyperfine states in the even parity \(A^{1}\Pi,(v^{\prime}=0,J^{\prime}=1)\) manifold. We note that a MOT of AlCl will require a large magnetic field gradient on the order of \(100\,\mathrm{G/cm}\) axially to realize confining forces due to the small excited state \(g\)-factors and the large transition linewidth. This is similar to MOTs using strong transitions in alkaline earth and alkaline earth-like atoms [182].
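The need for a comparatively steep field gradient can be checked against the linear Zeeman shifts implied by Tab. 5. The sketch below is a rough order-of-magnitude estimate only: the 0.5 cm distance from the trap centre (roughly a beam radius) is an assumed value, not one taken from this work, while the \(g\)-factors are representative entries from Tab. 5 and \(\mu_{B}\approx 1.4\,\mathrm{MHz/G}\).

```python
MU_B = 1.3996    # Bohr magneton in MHz/G
GAMMA = 25.0     # natural linewidth in MHz (Gamma / 2pi)

grad = 100.0     # axial field gradient in G/cm, as suggested in the text
r = 0.5          # assumed distance from the trap centre in cm (roughly a beam radius)
B = grad * r     # local field in G

for gF, F in [(0.23, 2), (0.15, 3), (0.10, 5)]:   # representative excited-state g-factors from Tab. 5
    shift = gF * MU_B * F * B                      # maximum linear Zeeman shift (m_F = F), in MHz
    print(f"gF = {gF:4.2f}, F = {F}: shift = {shift:5.1f} MHz  (= {shift / GAMMA:3.1f} Gamma)")
```

With these assumptions the stretched-state shifts only reach of order one linewidth at the beam edge, consistent with the requirement for gradients near \(100\,\mathrm{G/cm}\).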
Finally, in the high-field regime, the different spins decouple and the overall state structure simplifies into three manifolds, each of which has common Landé factors. This Paschen-Back regime has been proposed for a Zeeman slowing scheme for CaF at \(\approx 300\,\mathrm{Gauss}\)[183]. Though AlCl would require even higher fields and field gradients to completely isolate the manifolds, we note that MOTs with gradients of \(\approx 1\,\mathrm{kGauss/cm}\) have been realized [184; 185; 186].
## III Optical Cycling
The strong optical cycling \(A^{1}\Pi-X^{1}\Sigma^{+}\) (\(v=0,v^{\prime}=0\)) transition near \(261.5\) nm combines a large linewidth (\(2\pi\times 25\) MHz) with a large photon recoil velocity (\(2.5\) cm/s) to offer access to strong radiative forces.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \(F_{1}\) & \(F\) & \(g_{F}\) & \(F_{1}\) & \(F\) & \(g_{F}\) & \(F_{1}\) & \(F\) & \(g_{F}\) \\ \hline
 3/2 & & & 5/2 & & & 7/2 & & \\ \hline
 & 0 & - & & 1 & 0.15 & & 2 & 0.23 \\
 & 1 & -0.15 & & 2 & 0.07 & & 3 & 0.15 \\
 & 2 & -0.13 & & 3 & 0.05 & & 4 & 0.12 \\
 & 3 & -0.12 & & 4 & 0.03 & & 5 & 0.10 \\ \hline \end{tabular}
\end{table}
Table 5: Calculated Landé factors of the even parity \(A^{1}\Pi\) (\(v=0,J^{\prime}=1\)) states of AlCl.
Figure 4: The Zeeman splitting of the \(A^{1}\Pi,(v^{\prime}=0,J^{\prime}=1)\) manifold of AlCl as a function of the external magnetic field is shown.
These forces could potentially slow a molecular beam from a cryogenic source to below the capture velocity of a MOT in just a few centimeters, rather than the \(\approx 1\) m required for today's experiments with \({}^{2}\Sigma\) molecules [93; 97; 100; 102; 99]. Such an improvement would increase the solid-angle and trappable flux from a cryogenic source by several orders-of-magnitude and tackle inefficient MOT loading, which remains a key bottleneck in the field of molecular laser cooling and trapping. Alternatively, new slowing techniques, such as bichromatic slowing [187; 188; 189; 190; 191; 192], travelling-wave Stark deceleration [193; 194; 195; 196; 197; 198; 199] or Zeeman-Sisyphus deceleration [91], may offer solutions to this challenge. Strong, short-wavelength optical transitions, such as those in AlCl, AlF and MgF, are attractive for laser cooling and trapping but demand high laser intensity since the saturation intensity scales as \(I_{\rm sat}\propto\Gamma/\lambda^{3}\). The AlCl optical cycling transition at 261.5 nm is particularly fortuitous since ytterbium fiber lasers and amplifiers offer high power (10-100 W) at the fundamental of the fourth-harmonic (1046 nm) and 261.5 nm is close enough to the frequency quadrupled Nd:YAG that optics are well-developed and commercially available. In the following, we outline the prospects of optical cycling in the different ro-vibrational and hyperfine manifolds of AlCl and compare the effects to other molecules, AlF and TlF.
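As a quick consistency check of the numbers quoted above, the recoil velocity and the approximate photon budget for slowing can be evaluated directly; the assumed forward velocity of 150 m/s is only a typical cryogenic buffer-gas beam value, not a measurement from this work.

```python
import scipy.constants as const

m_alcl = (26.98 + 34.97) * const.atomic_mass   # mass of Al(35)Cl in kg
lam = 261.5e-9                                  # cycling-transition wavelength in m

v_recoil = const.h / (m_alcl * lam)             # single-photon recoil velocity
print(f"recoil velocity: {v_recoil * 100:.2f} cm/s")    # ~2.5 cm/s, as quoted in the text

v_forward = 150.0   # assumed forward beam velocity in m/s (typical cryogenic buffer-gas source)
print(f"photons to stop the beam: {v_forward / v_recoil:.0f}")   # of order 10^4
```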
### Vibrational branching
AlCl is expected to have highly diagonal Franck-Condon factors (FCFs) which limit decay into excited vibrational levels within the \(X^{1}\Sigma^{+}\) manifold during optical cycling. Previous work by multiple groups predict a FCF in the \(v^{\prime\prime}=0\) band of \(q_{00}>0.99\)[105; 106; 56; 149] with \(\sim 3\) lasers (one cycling laser and two repumps) being sufficient to scatter the \(\sim 10^{4}\) photons required to slow, laser cool and trap [90; 93]. Experimental work to directly confirm this diagonal vibrational branching is underway [200], similar to previous work with TlF [201], BH [202] and AlF [82], other candidate molecules for laser cooling experiments with \({}^{1}\Sigma\) ground states. We note in passing that the single unpaired valence electron within each \({}^{2}\Sigma\) molecule laser cooled to-date offers an intuitive picture behind the origin of diagonal FCFs, i.e. that the optically-addressable electron plays a negligible role in the binding of the molecule. By contrast, to the best of our knowledge, no similar intuition can be used to identify closed-shell \({}^{1}\Sigma\) molecules with diagonal FCFs.
### Rotational branching
Rotational branching within AlCl can be tamed using selection rules that dictate the allowed changes in angular momentum and wavefunction symmetry (parity) during electronic transitions [68]. Namely, the electric dipole transitions with nonzero transition dipole matrix elements (TDME) are limited to \(\Delta J=0,\pm 1\) and, because the dipole operator is a rank 1 tensor (odd), the wavefunctions of the connected states must have opposite parity. For \({}^{1}\Sigma\) ground state molecules, such as AlCl, these selection rules enable rotational closure for all Q-transitions (\(\Delta J=0\)) (see Fig. 1).
A loss channel via rotational branching can be introduced by a small electric field to mix the closely spaced opposite parity \(\Lambda\)-doublets in the \(A^{1}\Pi\) state and hence break the parity selection rule. In this case for AlCl, spontaneous emission would then populate dark rotational states via the P- and R-branches (\(\Delta J=-1\) and \(+1\), respectively). This loss mechanism was reported in a radio-frequency MOT of SrF molecules [180] and, notably, has been investigated for AlF [203], with this loss channel becoming negligible for stray fields below 1 V/cm. For AlCl, the level of electric field suppression required remains unclear as our spectra can only place an upper bound on the \(\Lambda\)-doubling parameter \(q\) that dictates the spacing between \(\Lambda\)-doublets, see Tab. 4.
An additional loss mechanism, enabling transitions with \(|\Delta J|>1\), can result from mixing between states with the same total angular momentum \(F\) from different rotational levels within the excited electronic state. In both AlCl and AlF, this mixing in the \(A^{1}\Pi\) state is predominantly due to the magnetic hyperfine interaction of the Al nuclear spin and is expected to result in a small loss channel of order \(10^{-6}\)[170; 82]. By contrast, this mixing and rotational branching in TlF can be substantial and poses a challenge to optical cycling in this molecule [67].
### Hyperfine Structure
The Al and Cl nuclear spins result in AlCl having a complex hyperfine structure, with 12 hyperfine states for \(J=1\), 18 for \(J=2\), 22 for \(J=3\) and 24 for \(J\geq 4\), respectively (see Fig. 1). In the \(X^{1}\Sigma^{+}\) state, the lack of spin-orbit coupling results in the hyperfine structure being small and unresolved to the strong \(A^{1}\Pi-X^{1}\Sigma^{+}\) transition (\(\Gamma\approx 2\pi\times 25\) MHz). For example, in \(J=1\) all 12 hyperfine states span just \(\approx 11\) MHz [128]. While this allows all ground state hyperfine levels for a given \(J\) to be conveniently addressed by a single laser frequency, it can also lead to the formation of slowly evolving dark states which prevent rapid optical cycling (see Section III.4).
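The quoted numbers of hyperfine states per rotational level follow from simple angular momentum addition, \(\mathbf{F_{1}}=\mathbf{J}+\mathbf{I_{Al}}\) and \(\mathbf{F}=\mathbf{F_{1}}+\mathbf{I_{Cl}}\), as the short counting sketch below verifies.

```python
from fractions import Fraction

I_AL = Fraction(5, 2)   # aluminum nuclear spin
I_CL = Fraction(3, 2)   # chlorine nuclear spin

def n_hyperfine(J):
    """Number of hyperfine levels (distinct |J, F1, F> states) for rotational level J."""
    count = 0
    F1 = abs(J - I_AL)
    while F1 <= J + I_AL:
        F = abs(F1 - I_CL)
        while F <= F1 + I_CL:
            count += 1
            F += 1
        F1 += 1
    return count

print([n_hyperfine(J) for J in range(1, 5)])   # -> [12, 18, 22, 24]
```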
In the \(A^{1}\Pi\) state, the hyperfine structure is at best only partially resolved for low-lying rotational states (see Fig. 2). Our analysis shows that, similar to AlF [82], this structure is primarily due to the nuclear spin-electron orbit interaction with the \(eQq_{0}\) and \(eQq_{2}\) constants dictating that the electric quadrupole interaction only plays a small role. Interestingly, the \(A^{1}\Pi\) (\(v^{\prime}=0,J^{\prime}=1\)) hyperfine structure spans \(\sim 500\) MHz, equivalent to a Doppler spread of \(\sim 130\) m/s at 261.5 nm. This, combined with power broadening, may enable a single laser frequency to address the decreasing Doppler shift of molecules during
laser slowing without the need for frequency chirping or phase modulating the slowing light.
For convenience, we use a Hund's case (a) basis to describe the AlCl ground and excited states using quantum numbers \(F\) and \(F_{1}\) (see above). However, we emphasize that \(F_{1}\) is not a good quantum number and states with common \(F\) from different \(F_{1}\) are mixed. In AlCl, this mixing arises in the \(X^{1}\Sigma^{+}\) state due to the quadrupole interaction with the Cl nuclear spin and in the \(A^{1}\Pi\) state due to the Cl nuclear spin-electron orbit interaction. While this mixing does not lead to loss from the optical cycle, it does skew hyperfine branching ratios away from the unmixed case [170].
### Dark States
Rotational structure in molecules dictates that optical cycling requires type-II transitions which naturally introduce dark states within the ground electronic state [204, 44]. In general, these can be either stationary angular momentum eigenstates, in which molecules accumulate and leave the optical cycle, or a coherent superposition of these eigenstates, which naturally precess between bright and dark states, limiting the maximum photon scattering rate [205, 206].
For \({}^{2}\Sigma\) molecules, it is common to remix dark states using a Zeeman shift to lift the degeneracy of ground state sublevels by \(\sim\Gamma\). This approach is impractical for \({}^{1}\Sigma\) molecules due to their small ground state magnetic moments which, coupled with their unresolved ground state hyperfine splittings, can result in robust, slowly-evolving coherent dark states which significantly limit photon scattering rates. Fortunately, rapid polarization switching offers an alternative method to address stationary dark states and also destabilize coherent dark states, provided that the number of ground states is less than three times the number of available excited states, i.e. a single laser polarization addresses more than \(\frac{1}{3}\) of ground states. While this can be the case for Q-transitions in AlCl and AlF [82], it is not the case for TlF [103, 104, 180] where the excited state hyperfine structure is well resolved. In this case, additional switched microwave fields linking rotational states in the electronic ground state are required.
The dark state composition in AlCl was calculated following the method in Refs. [204, 46], with the number of dark states for the \(Q(1)\) transition depending on the number of partially resolved excited states addressed. Assuming power broadening is adequate to address the entire \(A^{1}\Pi(v^{\prime}=0,J^{\prime}=1)\) state, we find that, in the absence of a magnetic field, \(\pi\)-transitions driven by linearly polarized light lead to 24 coherent dark states and no stationary dark states, indicating no leakage from the optical cycle but a constraint on the maximum photon scattering rate for a fixed laser polarization.
A similar calculation for the dark state composition in AlF is described in Ref. [203]. Here dark states are formed by linear superpositions of states with different values of \(F_{1}\) and the limit on the scattering rate is consistent with the smallest splitting among them. In AlCl, the dark state composition is more complicated, with dark states described instead by superpositions between states with different \(F_{1}\), \(F\), and \(m_{F}\) quantum numbers, leading to dark states between both magnetic and hyperfine levels. Ongoing experimental work will test these results by measuring photon scattering rates in AlCl with and without polarization modulation [200].
## IV Estimate of the Capture Velocity of a Magneto-Optical Trap for AlCl
To load AlCl into a magneto-optical trap, molecules need to be slowed to (or below) the MOT capture velocity. This step is necessary for molecular MOTs since, in contrast to their atomic counterparts, the beam sources that are bright enough for realistic experiments typically are not effusive in nature. Instead, the molecular sources, e.g. a cryogenic buffer gas beam or a supersonic beam, have a boosted forward velocity distribution with higher average values and widths that are too narrow to provide a sufficient flux of molecules below the MOT's capture velocity. Hence, currently a slowing stage is essential before molecules can be trapped in a MOT. In current MOT experiments, both white-light [207, 208] and chirped slowing [209, 100] are successfully used to prepare beams for trapping. Since the momentum imparted by each photon recoil is small, it is necessary to cycle many photons (\(\gtrsim 10^{4}\)), see Section III. We note that alternative slowing methods that avoid the need for repeated photon scattering are being explored in the community. Examples include the bichromatic force [187, 188, 189, 190, 191, 192], travelling-wave Stark deceleration [193, 194] and Zeeman-Sisyphus deceleration [91].
To estimate the MOT capture velocity, and threshold that needs to be reached by the slowing process, we numerically simulate the dynamics of AlCl molecules entering a MOT. Here, we use a standard 3D-MOT configuration comprised of a quadrupole magnetic field gradient of \(75\,\mathrm{G/cm}\) axially and three pairs of retro-reflected laser beams, each with a Gaussian beam profile. We then implement the Hamiltonian from Section II and the MOT configuration in the open source Python package PyLCP [210] and solve for the time evolution of the trajectories of AlCl.
The presence of both hyperfine spins in AlCl leads to a large number of quantum states in each rotational manifold that must be included to fully describe the system. In general, capturing the effects of coherences in simulations requires evaluating the optical Bloch equations [181, 211, 212]. However, the optical cycling transition \(X^{1}\Sigma^{+}\left|v=0,J=1\right\rangle\leftrightarrow A^{1}\Pi\left|v^{ \prime}=0,J^{\prime}=1\right\rangle\) involves 144 magnetic sublevels rendering a full simulation computationally challenging. As a result, we perform the following estimates of the capture velocity using rate
equations and therefore expect this approach to break down as laser intensity grows and coherent dark states begin to limit excitation. By trading intensity for MOT beam diameter, we will operate near the saturation intensity (\(\approx 232\,\mathrm{mW/cm^{2}}\) for AlCl), where other molecules have been shown to be well described by rate equations [178; 203].
We simulate the molecular trajectories for different initial velocities and laser powers and extract the maximum molecular velocity that is captured for a range of MOT beam diameters. The cooling lasers that address the \(X^{1}\Sigma^{+}\left|v=0,J=1\right\rangle\leftrightarrow A^{1}\Pi\left|v^{ \prime}=0,J^{\prime}=1\right\rangle\) transition are detuned by \(-\frac{\Gamma}{2}\) from the \(F_{1}^{\prime}=7/2\) level. We further assume that no vibrational branching occurs during the simulation, i.e. the repump lasers have been applied accordingly. We also reduce the calculated equilibrium force by a factor of two at each timestep of the simulation to account for the \(\Lambda\)-system created between the cycling and the first repump lasers [213].
The results of these simulations are shown in Fig. 5. We find that a MOT capture velocity \(v_{cap}\geq 30\) m/s requires a mean intensity in the range \(\sim 0.1-1\) W/cm\({}^{2}\) per beam, depending on the MOT beam diameter (\(d\)), with smaller beams requiring higher intensity (and higher scattering rates, \(R_{sc}\)) to account for shorter interaction times. These results follow the general results of a simplified two-level model, as discussed in Ref. [170]. In brief, for a fixed beam diameter \(d\), \(v_{cap}\propto\sqrt{d\cdot R_{sc}}\). At low intensity \(I\), for fixed laser power, \(R_{sc}\propto I\propto d^{-2}\) and so \(v_{cap}\propto d^{-1/2}\). At higher intensity towards saturation, this dependence is weakened since now \(R_{sc}\propto d^{-m}\) where \(0\leq m<2\), with \(m=0\) representing when the transition is fully saturated, and so \(v_{cap}\propto d^{(1-m)/2}\).
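The scaling behaviour described above can be reproduced with a crude two-level estimate, sketched below. It uses the saturation intensity quoted earlier, the factor-of-two reduction for the \(\Lambda\)-system, and illustrative values for detuning, laser power and beam diameter; because it ignores the 144-level substructure it overestimates the capture velocity relative to the rate-equation simulations of Fig. 5 and should be read only as an upper-bound illustration of the scaling, not as a result of this work.

```python
import numpy as np
import scipy.constants as const

GAMMA = 2 * np.pi * 25e6        # natural linewidth in rad/s
LAM = 261.5e-9                  # cycling-transition wavelength in m
I_SAT = 2320.0                  # saturation intensity in W/m^2 (232 mW/cm^2, quoted above)
M = (26.98 + 34.97) * const.atomic_mass   # mass of Al(35)Cl in kg

def capture_velocity(power, diameter, detuning=-GAMMA / 2):
    """Two-level estimate: v_cap = sqrt(2 a d) with a = (hbar k / M) * R_sc / 2."""
    intensity = power / (np.pi * (diameter / 2) ** 2)          # mean intensity of one beam
    s = intensity / I_SAT
    r_sc = (GAMMA / 2) * s / (1 + s + (2 * detuning / GAMMA) ** 2)
    a = const.hbar * (2 * np.pi / LAM) / M * r_sc / 2          # factor 1/2 for the Lambda-system
    return np.sqrt(2 * a * diameter)

for d in (0.5e-2, 1.0e-2, 2.0e-2):                             # illustrative beam diameters in m
    print(f"d = {d * 100:.1f} cm, 0.5 W per beam: v_cap ~ {capture_velocity(0.5, d):.0f} m/s")
```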
While a large MOT capture velocity is desirable, it is also important to consider the spatial overlap between the MOT volume and the slowed molecular beam, which has a solid angle \(\Omega_{s}\propto d^{2}\). In general, the number of trapped molecules \(N_{\mathrm{MOT}}\propto\Omega_{s}\cdot v_{cap}^{\kappa}\) where \(\kappa\) is determined by the slowed beam's velocity profile. Typically, \(\kappa>1\) since there can be many more molecules available for capture at higher velocities due to reduced transverse beam divergence [207] and this gives rise to two regimes. When below saturation, due to limited laser power, as could be the case in the deep UV for AlF, smaller MOT beams are desirable since here \(N_{\mathrm{MOT}}\propto d^{2-\kappa/2}\) with \((2-\kappa/2)<0\). By contrast, when transitions can be saturated using high laser power, \(N_{\mathrm{MOT}}\propto d^{2+\kappa(1-m)/2}\) with \(2+\kappa(1-m)/2>0\), and large MOT beams are optimal. This high power, large diameter regime is typically where today's molecular MOTs using \({}^{2}\Sigma\) molecules operate when loading. Our goal is to also operate towards this regime for AlCl using \(\sim 1\) cm \(1/e^{2}\) diameter beams each with \(\sim 0.5\) W. However, our ongoing work probing the scattering rate vs intensity [200] will ultimately guide our MOT beam parameters to use the available laser power [174] most efficiently to maximize \(N_{\mathrm{MOT}}\).
While high intensity and large scattering rates are beneficial for MOT loading, it is common to reduce the intensity immediately after loading. This reduces the scattering rate which both cools the trapped atoms by limiting Doppler heating and increases the trap lifetime for state preparation or transfer to a conservative trap. This step will also likely be important for an AlCl MOT since the Doppler temperature is 600 \(\mu K\). A blue-detuned MOT of AlCl, as recently demonstrated for YO [214] and SrF [48], could potentially cool below this limit towards the recoil temperature of 5 \(\mu K\), though substantial vibrational closure would be needed for efficient transfer from the red-detuned MOT since these blue-detuned MOTs operate at high intensity and require \(>20\) ms to load [48].
For short wavelength transitions at high intensity, one also needs to consider the possibility of significant loss via photo-dissociation and photo-ionization of the molecule. The latter effect has been known to dominate loss processes in atomic MOTs [215]. Here, we carry out _ab initio_ calculations on the cross-sections of AlCl for these processes to characterize their effect.
## V Photo-dissociation and ionization of AlCl
In this section, we analyze the two-photon process that can lead to photo-induced dissociation and ionization. Fig. 6 plots selected potential energy curves (PECs) for AlCl. The lowest three black curves are based on _ab initio_ calculations that have been fit to Morse potentials [106]. These PECs correlate asymptotically for large internuclear separation (\(r\)) to the dissociation energy of neutral Al + Cl. The two relevant excited electronic state PECs are plotted in blue and red for the \(2\,^{1}\Pi\) and \(3\,^{1}\Sigma^{+}\) states, respectively. Asymptotically for large \(r\) the \(2\,^{1}\Pi\) PEC approaches the dissociation energy of neutral Al + Cl whereas the \(3\,^{1}\Sigma^{+}\) PEC approaches the dissociation en
Figure 5: Simulated capture velocities plotted as a function of laser power per beam for different beam diameters. The beam diameter is defined as the \(1/e^{2}\) diameter of the Gaussian beams. Figure reproduced from Ref. [170].
ergy of ionic Al\({}^{+}\) + Cl\({}^{-}\). These PECs are based on fitting repulsive exponential functions (\(V=V_{o}\exp[\alpha(r-r_{o})^{2}]\)) to the _ab initio_ data reported in Ref. [109]. The \(V_{o}\), \(\alpha\), and \(r_{o}\) are adjustable fitting parameters and were optimized to minimize the root-mean-square error between the analytic curves and the _ab initio_ data.
A simplex fitting algorithm was used (AMOEBA [216]) and the optimal parameters were determined to be \(V_{o}=4.40249\,\times 10^{4}\,\mathrm{cm}^{-1}\), \(\alpha=0.174950\,\mathrm{\AA}^{-2}\), and \(r_{o}=3.73886\,\mathrm{\AA}\) for the \(2\,^{1}\Pi\) state and \(V_{o}=4.53576\times 10^{4}\,\mathrm{cm}^{-1}\), \(\alpha=6.76293\times 10^{-2}\mathrm{\AA}^{-2}\), and \(r_{o}=4.89511\,\mathrm{\AA}\) for the \(3\,^{1}\mathrm{\Sigma}^{+}\) state (the fitted parameters quoted above include several extra digits for numerical reasons to ensure that the potential curves can be accurately reproduced). The vertical black and red dashed arrows represent the two-photon excitation process that can lead to dissociation along the repulsive \(2\,^{1}\Pi\) PEC (blue) or ionization along the repulsive \(3\,^{1}\mathrm{\Sigma}^{+}\) PEC (red). From these PECs we can compute intensity profiles for the dissociation and ionization cross sections by calculating the Franck-Condon overlaps between the ground ro-vibrational eigenfunction of the \(A\,^{1}\Pi\) state with the continuum eigenstates of the \(2\,^{1}\Pi\) and \(3\,^{1}\mathrm{\Sigma}^{+}\) states, respectively.
The reflection technique is used where the continuum eigenfunctions are represented by delta functions located at the classical turning points along the repulsive PECs [217]. The intensity profiles are then simply proportional to \(\nu\,\psi_{0}^{2}\) where \(\nu\) is the excitation energy and \(\psi_{0}\) is the ground ro-vibrational wave function evaluated at the \(r\) corresponding to the classical turning point for the energy \(\nu\). We can derive the classical turning points \(r_{c}\) as a function of \(\nu\) from the exponential functions given above by setting \(V=\nu\) to obtain \(r_{c}=r_{o}-\sqrt{\ln(\nu/V_{o})/\alpha}\).
An energy grid in \(\nu\) was constructed using 100 points between \(6.0\times 10^{4}\) and \(8.5\times 10^{4}\) cm\({}^{-1}\). The ground ro-vibrational wavefunction for the \(A\,^{1}\Pi\) state (computed in our previous work [106]) was then evaluated at each of the corresponding \(r_{c}\) values and the resulting intensity profiles (normalized) are plotted in Fig. 7 for both dissociation (solid blue) and ionization (solid red). For reference, the energies of the two relevant laser wavelengths (265 and 261 nm) are plotted with black vertical lines. From these profiles, it is clear that these laser wavelengths could lead to photo-ionization via the excited \(3^{1}\Sigma^{+}\) state but dissociation via the \(2^{1}\Pi\) state is unlikely. The sensitivity of the profiles plotted in Fig. 7 on the exponential fits was quantified by increasing and decreasing the slopes of the PECs by approximately 1.5% (the dashed and long-short dashed red and blue curves in Fig. 6). The corresponding cross sections are plotted with dashed and long-short dashed red and blue curves in Fig. 7.
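A short numerical sketch of this reflection-approximation procedure is given below for the ionizing \(3\,^{1}\Sigma^{+}\) state, using the fitted parameters quoted above. The \(A\,^{1}\Pi\) ground ro-vibrational wavefunction is replaced here by a Gaussian whose center and width are illustrative assumptions; the published profiles use the actual wavefunction from Ref. [106].

```python
import numpy as np

# Reflection approximation: intensity(nu) ~ nu * psi0(r_c(nu))**2, with the
# classical turning point r_c from V(r) = V0 * exp[alpha * (r - r0)^2].
# Fitted parameters for the 3^1Sigma+ (ionizing) state, from the text:
V0, alpha, r0 = 4.53576e4, 6.76293e-2, 4.89511   # cm^-1, 1/Angstrom^2, Angstrom

def r_c(nu):
    """Classical turning point (Angstrom) for excitation energy nu (cm^-1)."""
    return r0 - np.sqrt(np.log(nu / V0) / alpha)

# Gaussian stand-in for the A^1Pi ground ro-vibrational wavefunction
# (equilibrium distance and width are illustrative assumptions only):
r_e, sigma = 2.15, 0.05                           # Angstrom
psi0 = lambda r: np.exp(-(r - r_e) ** 2 / (2 * sigma ** 2))

nu = np.linspace(6.0e4, 8.5e4, 100)               # energy grid from the text
profile = nu * psi0(r_c(nu)) ** 2
profile /= profile.max()                          # normalized cross-section profile
```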
## VI Conclusion
We have characterized the hyperfine structure of the \(A^{1}\Pi\) state in AlCl and reported the first measurement of the nuclear spin-electron orbit interaction strength of the excited state. We discuss strategies and possible loss mechanisms for efficient optical cycling and compare the advantages of AlCl to other molecules. Both AlCl and AlF have similar internal structures and are expected to allow for similar cooling schemes. While the larger linewidth and shorter wavelength of AlF results in access to larger optical forces, the AlF saturation intensity is \(4\times\) higher than AlCl which poses significant demands on the laser technology and optics required. An additional advantage of AlCl (and AlF) is that transitions
Figure 6: Five selected potential energy curves (PECs) are plotted for AlCl as a function of the internuclear distance \(r\). The three black and one blue PECs correlate to neutral Al + Cl dissociation for large \(r\) whereas the red PEC leads to ionization Al\({}^{+}\) + Cl\({}^{-}\). The two-photon process is indicated by the vertical dashed black and red arrows.
Figure 7: The normalized photo-dissociation (blue) and photo-ionization (red) cross sections are plotted as a function of the excitation energy. The energies of the two relevant laser wavelengths (261 nm: main cooling transition, and 265 nm: 1st vibrational repumper) are indicated by the vertical black lines. The dashed and short-long dashed curves quantify the sensitivity of the cross sections to the slopes of the potential energy curves of the repulsive excited electronic states (see Fig. 6).
can simultaneously be driven to multiple excited states, allowing polarization modulation to destabilize coherent dark states, in contrast to TlF. The larger ground state hyperfine splittings in AlCl and AlF also result in these dark states naturally precessing into bright states more rapidly than in the case of TlF.
Finally, the hyperfine structure in the \(A^{1}\Pi\) state studied in this work spans between \(\sim 250-500\) MHz for \(J^{\prime}=1-4\) while the \(X^{1}\Sigma^{+}\) hyperfine structure is unresolved for the cycling transition. While such a broad excited state spread makes addressing individual velocity classes challenging using a type-II transition in, for example, a Zeeman slower, it does potentially offer two advantages. First, increased scattering rates may be accessed by targeting different excited hyperfine states with the cycling and first repump lasers to avoid coupling these lasers and creating a \(\Lambda-\)system. This approach is similar to that used with alkali atoms. Second, laser slowing may be simplified since a single laser frequency can simultaneously address a broad range of velocities, similar to white light or frequency chirped slowing, but without the need to spectrally or temporally dilute the laser intensity applied to each velocity. This may allow a single laser frequency to slow molecules directly from a single-stage CBGB and reduce the technical complexity.
We use numerical simulation of the full AlCl Hamiltonian to estimate the capture velocity of a magneto-optical trap for AlCl. Our results yield capture velocities of up to \(30-40\) m/s when using \(\approx 1\) W of laser power per MOT beam, suggesting that a significant part of a CBGB source with a slowing cell [218] could be directly loaded into a MOT without slowing. This result highlights another advantage of AlCl, but should also be understood in the context of MOT capture velocities of other molecules, where magnitudes for CaF of \(5-20\) m/s [48; 101; 178; 219], for SrF of \(9-13\) m/s [48], and for MgF of \(26\) m/s [220] have been either calculated or measured. Optimization strategies for increasing these values to higher magnitudes have been explored as well [221].
Finally, a possible limit preventing a high-intensity MOT for AlCl is that two-photon excitation may lead to substantial trap loss. Using _ab initio_ calculations of the excitation cross-sections, we identified that photo-ionization via the repulsive \(3^{1}\Sigma^{+}\) state could be a non-negligible loss process when using the main cycling and the first vibrational repump transition for the MOT. To the best of our knowledge, as yet there are no available experimental data on this process, which are required to verify if photo-ionization will indeed limit the lifetime of an AlCl MOT.
###### Acknowledgements.
J.R.D., C.W., L.-R.L., and B.H. acknowledge funding from the NSF grant number 1839153 and from the AFRL grant number FA9550-21-1-0263. B.K.K. acknowledges that part of this work was done under the auspices of the U.S. Department of Energy under Project No. 20170221ER of the Laboratory Directed Research and Development Program at Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001). J.C.S. and D.J.M. gratefully acknowledge funding from the NSF (grant number 1848435), and the University of Connecticut College of Liberal Arts and Sciences and the Office of the Vice President for Research.
|
2309.07938 | An Assessment of ChatGPT on Log Data | Recent development of large language models (LLMs), such as ChatGPT has been
widely applied to a wide range of software engineering tasks. Many papers have
reported their analysis on the potential advantages and limitations of ChatGPT
for writing code, summarization, text generation, etc. However, the analysis of
the current state of ChatGPT for log processing has received little attention.
Logs generated by large-scale software systems are complex and hard to
understand. Despite their complexity, they provide crucial information for
subject matter experts to understand the system status and diagnose problems of
the systems. In this paper, we investigate the current capabilities of ChatGPT
to perform several interesting tasks on log data, while also trying to identify
its main shortcomings. Our findings show that the performance of the current
version of ChatGPT for log processing is limited, with a lack of consistency in
responses and scalability issues. We also outline our views on how we perceive
the role of LLMs in the log processing discipline and possible next steps to
improve the current capabilities of ChatGPT and the future LLMs in this area.
We believe our work can contribute to future academic research to address the
identified issues. | Priyanka Mudgal, Rita Wouhaybi | 2023-09-14T04:09:27Z | http://arxiv.org/abs/2309.07938v1 | # An Assessment of ChatGPT on Log Data
###### Abstract
Recent development of large language models (LLMs), such as ChatGPT has been widely applied to a wide range of software engineering tasks. Many papers have reported their analysis on the potential advantages and limitations of ChatGPT for writing code, summarization, text generation, etc. However, the analysis of the current state of ChatGPT for log processing has received little attention. Logs generated by large-scale software systems are complex and hard to understand. Despite their complexity, they provide crucial information for subject matter experts to understand the system status and diagnose problems of the systems. In this paper, we investigate the current capabilities of ChatGPT to perform several interesting tasks on log data, while also trying to identify its main shortcomings. Our findings show that the performance of the current version of ChatGPT for log processing is limited, with a lack of consistency in responses and scalability issues. We also outline our views on how we perceive the role of LLMs in the log processing discipline and possible next steps to improve the current capabilities of ChatGPT and the future LLMs in this area. We believe our work can contribute to future academic research to address the identified issues.
Keywords:log data, log analysis, log processing, ChatGPT, log analysis using LLM, large language model, deep learning, machine learning
## 1 Introduction
In recent years, the emergence of generative AI and large language models (LLMs) such as OpenAI's ChatGPT have led to significant advancements in NLP. Many of these models provide the ability to be fine-tuned on custom datasets [1], [2], [3] and achieve the state-of-the-art (SOTA) performance across various tasks. A few of the LLMs such as GPT-3 [4] have demonstrated in-context-learning capability without requiring any fine-tuning on task-specific data. The impressive performance of ChatGPT and other LLMs [81, 5, 6, 7, 8, 9] in zero-shot and few-shot learning scenarios is a major finding as this helps LLMs to be more efficient [80, 79, 78, 77, 76]. With such learning methodologies, the LLMs can be used as a service [10] to empower a set of new real-world applications.
Despite the impressive capability of ChatGPT in performing a wide range of challenging tasks, there remain major concerns about its ability to solve real-world problems like log analysis [95]. Log analysis is a vast area, and much
research has been done. It mainly comprises three major categories, namely, log parsing, log analytics, and log summarization. Log parsing is an important initial step of system diagnostic tasks. Through log parsing, the raw log messages are converted into a structured format while extracting the template [14, 13, 12, 11]. Log analytics can be used to identify the system events and dynamic runtime information, which can help the subject matter experts to understand system behavior and perform system diagnostic tasks, such as anomaly detection [18, 17, 16, 15], log classification [19], error prediction [21, 20], and root cause analysis [22, 23]. Log analytics can further be used to perform advanced operations e.g., identify user activities, and security analysis e.g., detect logged-in users, API/service calls, malicious URLs, etc. As logs are huge in volume, log summarization enables the operators to provide a gist of the overall activities in logs and empowers the subject matter experts to read and/or understand logs faster. Recent studies leverage pre-trained language models [17, 25, 24] for representing log data. However, these methods still require either training the models from scratch [26] or tuning a pre-trained language model with labeled data [17, 24], which could be impractical due to the lack of computing resources and labeled data.
More recently, LLMs such as ChatGPT [95] have been applied to a variety of software engineering tasks and achieved satisfactory performance [27, 28]. Given the lack of studies analyzing ChatGPT's capabilities on log processing, it is unclear whether it can perform well on log data. Although many papers have evaluated ChatGPT on software engineering tasks [29, 30, 33], specific research is required to investigate its capabilities in the system log area. We are aware that LLMs are evolving fast, with new models, versions, and tools being released frequently, each improving on the previous ones. However, our goal is to assess the current situation and to provide a set of experiments that enables researchers to identify possible shortcomings of the current version for analyzing logs and provides a variety of specific tasks to measure the improvement of future versions. Hence, in this paper, we conduct an initial evaluation of ChatGPT on log data. Specifically, we divide
Figure 1: An example of log code, log message, and structured log from [34]
the log processing [32] into three subsections: log parsing, log analytics, and log summarization. We design appropriate prompts for each of these tasks and analyze ChatGPT's capabilities in these areas. Our analysis shows that ChatGPT achieves promising results in some areas but limited outcomes in others, and faces several real-world challenges in terms of scalability. In summary, the major contributions of our work are as follows:
\(\bullet\) To the best of our knowledge, we are the first to study and analyze ChatGPT's ability to analyze the log data in multiple detailed aspects.
\(\bullet\) We design the prompts for multiple scenarios in log processing and record ChatGPT's response.
\(\bullet\) Based on the findings, we outline several challenges and prospects for ChatGPT-based log processing.
## 2 Related Work
### Log data
With the increasing scale of software systems, it is complex to manage and maintain them. To tackle this challenge, engineers enhance the system observability [31, 101] with logs.
Logs capture multiple system run-time information such as events, transactions, and messages. A typical piece of log message is a time-stamped record that captures the activity that happened over time (e.g., software update events or received messages). Logs are usually generated when a system executes the corresponding logging code snippets. An example of the code snippet and generated code is shown in Fig. 1. A system with mature logs essentially facilitates the system behavior understanding, health monitoring, failure diagnosis, etc. Generally, there are three standard log formats, i.e., structured, semi-structured, and
Figure 2: Various prompt designs to address the research questions.
unstructured logs [74]. These formats share the same components: a timestamp and a payload content.
Structured logs usually keep a consistent format within the log data and are easy to manage. Specifically, the well-structured format allows easy storing, indexing, searching, and aggregation in a relational database. The unstructured log data achieves its high flexibility at the expense of the ease of machine processing. The characteristic of free-form text becomes a major obstacle for efficient query and analysis on unstructured or semi-structured logs. For instance, to count how often an API version appears in unstructured logs, engineers need to design a complex query with ad-hoc regular expressions to extract the desired information. The manual process takes lots of time and effort and is not scalable.
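As a concrete illustration of that last point, the sketch below shows the kind of ad-hoc regular expression an engineer might write to count API versions in free-form log lines; the log format and pattern are hypothetical.

```python
import re
from collections import Counter

# Hypothetical unstructured log lines; the format is illustrative only.
lines = [
    "2023-03-01 10:15:02 INFO  request served by /api/v2/users in 12ms",
    "2023-03-01 10:15:04 WARN  slow response from /api/v1/orders (503ms)",
    "2023-03-01 10:15:09 INFO  request served by /api/v2/orders in 8ms",
]

# Ad-hoc pattern to pull the API version out of the free-form payload.
version = re.compile(r"/api/(v\d+)/")
counts = Counter(m.group(1) for line in lines if (m := version.search(line)))
print(counts)   # e.g. Counter({'v2': 2, 'v1': 1})
```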
### Log Processing
Logs have been widely adopted in software system development and maintenance. In industry, it is a common practice to record detailed software runtime information into logs, allowing developers and support engineers to track system behaviors and perform postmortem analysis. On a high level, log processing can be categorized in three types as discussed below.
#### 2.2.1 Log Parsing
Log parsing is generally the first step toward automated log analytics. It aims at parsing each log message into a specific log event/template and extracting the corresponding parameters. Although there are many traditional regular expression-based log parsers, they require predefined knowledge about the log template. To achieve better performance than traditional log parsers, many data-driven [41, 42, 40, 39, 38, 37, 12] and deep learning based approaches [24, 26] have been proposed to automatically distinguish template and parameter parts.
#### 2.2.2 Log Analytics
Modern software development and operations rely on log monitoring to understand how systems behave in production. There is an increasing trend to adopt artificial intelligence to automate operations. Gartner [99] refers to this movement as AIOps. The research community, including practitioners, has been actively working to address the challenges related to extracting insights from log data also being referred to as "Log Analysis" [98]. Various insights that can be gained are in terms of log mining [87], error detection and root cause analysis, security and privacy, anomaly detection, and event prediction.
**Log Mining** Log mining seeks to support understanding and analysis by abstracting log data and extracting useful insights. However, building such models is a challenging and expensive task. In our study, we confine ourselves to posing specific questions, e.g., about the most frequent API/service calls that can be extracted from raw log messages. This area is well studied from a deep learning perspective, and most of those approaches [50, 56, 53, 57, 51, 54, 55, 52] require first parsing the logs and then processing them to extract detailed knowledge.
**Error Detection and Root Cause Analysis** Automatic error detection from logs is an important part of monitoring solutions. Maintainers need to investigate what caused that unexpected behavior. Several studies [49, 48, 47, 46, 44, 43] attempt to provide their useful contribution to root cause analysis, accurate error identification, and impact analysis.
**Security and Privacy** Logs can be leveraged for security purposes, such as malicious behavior and attack detection, URL and IP detection, logged-in user detection, etc. Several researchers have worked towards detecting early-stage malware and advanced persistent threat infections to identify malicious activities based on log data [58, 62, 59, 60, 61].
**Anomaly Detection** Anomaly detection techniques aim to identify anomalous or undesired patterns in logs. The manual analysis of logs is time-consuming, error-prone, and unfeasible in many cases. Researchers have been trying several different techniques for automated anomaly detection, such as deep learning [66, 63, 64, 65], data mining, statistical learning methods, and machine learning [73, 72, 71, 70, 69, 68, 67].
**Event Prediction** Knowledge about the correlation of multiple events, when combined to predict a critical or interesting event, is useful in preventive maintenance or predictive analytics, which can reduce unexpected system downtime and result in cost savings [84, 82, 83]. Thus, event prediction is highly valuable in real-time applications. In recent years, many rule-based and deep learning based approaches [94, 93, 92, 91, 90, 85] have evolved and perform well.
#### 2.2.1 Log Summarization
Log statements are inserted in the source code to capture normal and abnormal behaviors. However, with the growing volume of logs, it becomes a time-consuming task to summarize the logs. There are multiple deep learning-based approaches [45, 98, 100, 19] that perform the summarization, but they require time and compute resources for training the models.
### ChatGPT
ChatGPT is a large language model developed by OpenAI [95, 96]. It is trained on a huge dataset containing a massive amount of internet text and can generate natural language responses on a wide range of topics. ChatGPT is based on the generative pre-trained transformer (GPT) architecture, which is highly effective for natural language processing tasks such as translation between multiple languages, summarization, and question answering (Q & A). It can be fine-tuned for specific tasks with a smaller dataset of specific examples. ChatGPT can be adopted in a variety of use cases including chatbots, language translation, and language understanding. It is a powerful tool with the potential to be used across a wide range of industries and applications.
### ChatGPT Evaluation
Several recent works on ChatGPT evaluation have been done, but most of the papers target the evaluations on general tasks [75, 33], code generation [27], deep learning-based program repair [28], benchmark datasets from various domains [29], software modeling tasks [30], information extraction [89], sentiment analysis of social media and research papers [86] or even assessment of evaluation methods [88]. The closest to our work is [35], but they focus only on log parsing.
We believe that the log processing area is huge and a large-level evaluation of ChatGPT on log data would be useful for the research community. Hence, in our work, we focus on evaluating ChatGPT by conducting an in-depth and wider analysis of log data in terms of log parsing, log analytics, and log summarization.
## 3 Context
In this paper, our primary focus is to assess the capability of ChatGPT on log data. In line with this, we aim to answer several research questions through experimental evaluation.
### Research Questions
#### 3.1.1 Log Parsing RQ1.
How does ChatGPT perform on log parsing?
#### 3.1.2 Log Analytics RQ2.
Can ChatGPT extract the errors and identify the root cause from raw log messages?
**RQ3.** How does ChatGPT perform on advanced analytics tasks e.g., most called APIs/services?
**RQ4.** Can ChatGPT be used to extract security information from log messages?
**RQ5.** Is ChatGPT able to detect anomalies from log data?
**RQ6.** Can ChatGPT predict the next events based on previous log messages?
#### 3.1.3 Log Summarization RQ7.
Can ChatGPT summarize a single raw log message?
**RQ8.** Can ChatGPT summarize multiple log messages?
#### 3.1.4 General RQ9.
Can ChatGPT process bulk log messages?
**RQ10.** What length of log messages can ChatGPT process at once?
To examine the effectiveness of ChatGPT in answering the research questions, we design specific prompts as shown in Fig 2. We append the log messages in each of the prompts (in place of the slot '[LOG]').
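The sketch below illustrates how such prompts can be assembled; the template wording is only indicative of the designs in Fig. 2, not the exact prompts used.

```python
# Illustrative prompt templates mirroring Fig. 2; the exact wording used in
# the experiments may differ.
PROMPTS = {
    "parse":   "Extract the log template and variables from the log below.\n[LOG]",
    "errors":  "Identify the errors/warnings and their likely root cause.\n[LOG]",
    "summary": "Summarize the following log messages.\n[LOG]",
}

def build_prompt(task: str, log_messages: list[str]) -> str:
    """Replace the '[LOG]' slot with the raw log messages for a given task."""
    return PROMPTS[task].replace("[LOG]", "\n".join(log_messages))
```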
### Dataset
To perform our experiments, we use the datasets provided by the Loghub benchmark [13, 34]. This benchmark covers log data from various systems, including Windows and Linux operating systems, distributed systems, mobile systems, server applications, and standalone software. Each system dataset contains 2,000 manually labeled and raw log messages.
### Experimental Setup
For our experiments, we are using the ChatGPT API based on the gpt-3.5-turbo model to generate the responses for different prompts [95]. As shown in Fig. 3, we send the prompts appended with log messages to ChatGPT from our system with Intel(r) Xeon(r) E3-1200 v5 processor and Intel(r) Xeon(r) E3-1500 v5 processor and receive the response. To avoid bias from model updates, we use a snapshot of gpt3.5-turbo from March 2023 [97].
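A minimal sketch of how such a request can be issued with the pre-1.0 openai Python client is shown below; the temperature setting and error handling are assumptions, as the paper does not list its exact client code.

```python
import openai

openai.api_key = "YOUR_API_KEY"   # assumed to be configured by the experimenter

def ask_chatgpt(prompt: str, model: str = "gpt-3.5-turbo-0301") -> str:
    """Send a single prompt (with appended log messages) and return the reply."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,            # assumed; reduces run-to-run variation
    )
    return response["choices"][0]["message"]["content"]
```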
### Evaluation Metrics
As our study demands a detailed evaluation and, in some cases, there was no state-of-the-art tool available, we evaluated the outputs manually.
## 4 Experiments and Results
Each of the subsections below describes the individual evaluation of ChatGPT in different areas of log processing.
### Log Parsing
In this experiment, we assess the capability of ChatGPT in parsing a raw log message and a preprocessed log message and find the answer to **RQ1**. For the first experiment, we provide a single raw log message from each of the sixteen publicly available datasets [34] and ask ChatGPT to extract the log template. We refer to it as first-level log parsing. ChatGPT performs well in extracting the specific parts of log messages for all sixteen log messages. One of the examples of ChatGPT's response for first-level log parsing is shown in Fig. 4. Next, we
Figure 3: Flow Diagram.
preprocess the log message, extract the content, and ask chatGPT to further extract the template from the log message. ChatGPT can extract the template and variables from the log message successfully on all sixteen log messages with a simple prompt. One of the examples of ChatGPT's response is shown in Fig. 5.
### Log Analytics
To evaluate ChatGPT's capability in log analytics, we perform several experiments in each of the categories described in section 2.2.
**Log Mining** In this experiment, we seek the answer to **RQ3** by investigating if ChatGPT can skim out the knowledge from raw logs without building an explicit parsing pipeline. We perform our experiments in several parts. We provide a subset of log messages containing 5, 10, 20, and 50 log messages from the Loghub benchmark [34] and ask ChatGPT to identify the APIs. Fig 6 shows an example of ChatGPT's response when a smaller set of log messages was passed. We notice that ChatGPT consistently missed identifying some APIs from the log messages irrespective of the count of log messages, but still shows 75% or more accuracy in all cases. Results are reported in Table 1.
Figure 4: Log parsing of raw log message.
**Error Detection and Root Cause Analysis** In this experiment, we explicitly ask ChatGPT [97] to identify the errors, warnings, and possible root causes of those in the provided log messages and address **RQ2**. In line with our study structure, we first provide five log messages from the Loghub dataset [34] and later increase the size of log messages to ten, twenty, and fifty. Fig 7 shows the identified errors from five log messages, and a detailed report for all the combinations with their response times is given in Table 1. It is evident from Table 1 that ChatGPT identifies the errors and warnings more successfully on a smaller set of log messages than on a larger set.
**Security and privacy** In this experiment, we focus on addressing **RQ4** and investigate if ChatGPT can identify the URLs, IPs, and logged users from the logs and extract knowledge about malicious activities. We use the open source dataset from Loghub [102] and follow the same approach of sending the set of
| Log Messages | API Count | APIs Captured | API Accuracy (%) | API Response Time (s) | Error Count | Errors Captured | Error Accuracy (%) | Error Response Time (s) |
|---|---|---|---|---|---|---|---|---|
| 5 | 5 | 4 | 80 | 2.48 | 2 | 2 | 100 | 18.49 |
| 10 | 10 | 8 | 80 | 3.96 | 3 | 3 | 100 | 27.61 |
| 20 | 20 | 15 | 75 | 6.44 | 5 | 3 | 60 | 36.38 |
| 50 | 50 | 46 | 92 | 5.66 | 13 | 5 | 38.46 | 46.46 |

Table 1: **ChatGPT's performance to identify the APIs, errors and root cause from the Loghub dataset [34].**
Figure 5: Log parsing of preprocessed log message.
five, ten, twenty, and fifty log messages to ChatGPT to detect the URLs, IPs, and users from them. We use 'Prompt 4' from Fig. 2 to ask if there are any malicious activities present in the logs. As shown in Table 2, ChatGPT extracts the IPs and logged-in users with high accuracy irrespective of the length of log messages. An example of ChatGPT's response is shown in Fig. 8. The detailed report is shown in Table 2.
**Anomaly Detection**
To evaluate ChatGPT's capability to detect anomalies in logs and to address **RQ5**, we use 'Prompt 5' from Fig. 2. As detecting anomalies through log messages requires context, we append 200 log message entries and ask ChatGPT to detect anomalies from them. Without showing ChatGPT any examples of what an anomaly might look like, it still tries to identify possible anomalies and provides its analysis at the end. One of the examples is shown in Fig. 9.
**Event Prediction**
It is interesting to evaluate ChatGPT's performance in predicting future events in log messages. Typically, for future event prediction, a context of past
| Log Messages | URL Count | URLs Captured | URL Accuracy (%) | User Count | Users Captured | User Accuracy (%) | Response Time (s) |
|---|---|---|---|---|---|---|---|
| 5 | 4 | 4 | 100 | 2 | 2 | 100 | 13.77 |
| 10 | 9 | 9 | 100 | 7 | 7 | 100 | 46.41 |
| 20 | 13 | 13 | 100 | 14 | 14 | 100 | 112.14 |
| 50 | 24 | 20 | 83.33 | 16 | 14 | 87.5 | 163.76 |

Table 2: **ChatGPT's performance to extract URLs, IPs, and users from the log messages from the Loghub dataset [34].**
Figure 6: ChatGPT response to extract the APIs from log messages.
event is required, hence, we append 200 log messages to 'Prompt 6' from Fig. 2 and ask ChatGPT to predict the next 10 messages for simplicity. This experiment addresses the **RQ6**. While ChatGPT predicts the next 10 events in log format, it fails to predict even a single log message correctly when compared with the ground truth. ChatGPT's response is shown in Fig. 10.
### Log Summarization
This experiment is designed to understand if ChatGPT could succinctly summarize logs. We perform this study in two steps. First, to address **RQ7**, we provide a single log message from each of the sixteen datasets of the open-source benchmark [34] to ChatGPT to understand its mechanics. This is useful to understand the log message in natural language. Fig. 11 shows one of the log messages from the Android subset of the Loghub dataset [34] and ChatGPT's response. It is evident from the response that ChatGPT provides a detailed explanation of the log message. Next, to address **RQ8**, we provide a set of ten log messages from each of the sixteen subsets of the Loghub dataset [34] to ChatGPT and ask it to
Figure 7: ChatGPT response to identify the errors and root cause from set of 5 log messages from Loghub dataset [34].
summarize the logs. ChatGPT generates a concrete summary collectively from the provided log messages as shown in Fig. 12. In Fig. 12, we only show a few log messages for visual clarity. ChatGPT generates an understandable summary for all the sixteen subsets.
## 5 Discussion
Based on our study, we highlight a few challenges and prospects for ChatGPT on log data analysis.
### Handling unstructured log data
For our experiments, we send the unstructured raw log messages to ChatGPT to analyze its capabilities on various log-specific tasks. Our study indicates that ChatGPT shows promising performance in processing the raw log messages. It is excellent at log parsing and at identifying security and privacy information, but encounters difficulty in the case of API detection, event prediction, and summarization. It misses out on several APIs and events from raw log messages.
Figure 8: ChatGPT response to extract urls, IPs, and users from set of 5 log messages from Loghub dataset [102].
### Performance with zero-shot learning
We perform our experiments with zero-shot learning. Our experimental results show that ChatGPT exhibits good performance in the areas of log parsing, security, and privacy, and average performance in the case of API detection, incident detection, and root cause identification. As ChatGPT supports few-shot learning, it remains important future work to select guidelines for setting effective examples and to evaluate ChatGPT's performance with them.
### Scalability - Message Cap For GPT
Most of the intelligent knowledge extraction from logs depends on processing a large amount of logs in a short period. As ChatGPT 3.5 can only process a limited number of tokens at once, this poses a major limitation on feeding it larger chunks of log data. For our experiments, we could only send 190 to 200 log messages appended (addressing **RQ9 and RQ10**) with the appropriate prompt at once. As most of
Figure 9: ChatGPT response for anomaly detection for a sample from Loghub dataset [34].
the real-time applications require continuously sending larger chunks of log messages to a system for processing, this limitation of ChatGPT 3.5 may pose a major hindrance in terms of scalability, making it less suitable for tasks that require up-to-date knowledge or rapid adaptation to changing contexts. With the newer versions of ChatGPT, the number of tokens may be increased, which would make them more suitable for application in the log processing area.
### Latency
The response time of ChatGPT ranges from a few seconds to minutes when the number of log messages in the prompt is increased. The details about response time are shown in Tables 1 and 2. Most of the intelligent knowledge extraction from logs depends on the processing time of large amounts of logs. With the current state of response time, ChatGPT would face a major challenge in real-time applications, where a response is required in a shorter period. As we currently have to call the OpenAI API to get ChatGPT's response, with the newer
Figure 10: ChatGPT response for event prediction from Loghub dataset [34].
versions of ChatGPT, it may be possible to deploy these models close to applications and reduce the latency significantly.
### Privacy
Log data often contains sensitive information that requires protection. It is crucial to ensure that log data is stored and processed securely to safeguard sensitive information. It is also important to consider appropriate measures to mitigate any potential risks.
## 6 Conclusion
This paper presents the first evaluation to give a comprehensive overview of ChatGPT's capability on log data from three major areas: log parsing, log analytics and log summarization. We have designed specific prompts for ChatGPT to reveal its capabilities in the area of log processing. Our evaluations reveal that the current state of ChatGPT exhibits excellent performance in log parsing, but has certain limitations in other areas, e.g., API detection, anomaly detection, log summarization, etc. We identify several grand challenges and opportunities that future research should address to improve the current capabilities of ChatGPT.
## 7 Disclaimer
The goal of this paper is mainly to summarize and discuss existing evaluation efforts on ChatGPT along with some limitations. The only intention is to foster a better understanding of the existing framework. Additionally, due to the swift
Figure 11: Summary generated by ChatGPT for single log message from Loghub dataset [34]. |
2309.13795 | Modelling and Search-Based Testing of Robot Controllers Using Enzymatic
Numerical P Systems | The safety of the systems controlled by software is a very important area in
a digitalized society, as the number of automated processes is increasing. In
this paper, we present the results of testing the accuracy of different lane
keeping controllers for an educational robot. In our approach, the robot is
controlled using numerical P systems and enzymatic numerical P systems. For
tests generation, we used an open-source tool implementing a search-based
software testing approach. | Radu Traian Bobe, Florentin Ipate, Ionuţ Mihai Niculescu | 2023-09-25T01:13:18Z | http://arxiv.org/abs/2309.13795v1 | # Modelling and Search-Based Testing of Robot Controllers Using Enzymatic Numerical P Systems
###### Abstract
The safety of the systems controlled by software is a very important area in a digitalized society, as the number of automated processes is increasing. In this paper, we present the results of testing the accuracy of different lane keeping controllers for an educational robot. In our approach, the robot is controlled using numerical P systems and enzymatic numerical P systems. For tests generation, we used an open-source tool implementing a search-based software testing approach.
**Keywords**: tests generation, numerical P systems, enzymatic numerical P systems, search-based software testing, cyber-physical systems, membrane computing
## 1 Introduction
Due to the remarkable technological progress of late years, software applications tend to have a considerable role in solving most problems of everyday life. The medical, financial or automotive fields are just three of the main areas in which software products are intensively used. Given the importance of these areas in every individual's life, ensuring product quality and functionality is an essential step in the development process. The safety of software systems for large-scale use is ensured by testing. Software testing aims to validate the fulfillment of the requirements defined for the developed product, as well as to identify possible unwanted behaviors triggered by simulating certain operational contexts.
In this paper, we propose an approach for testing two different lane keeping controllers designed to move an educational robot called _E-puck_[9]. Both controllers are based on numerical P systems, introduced by G. Paun and R. Paun in [12]. We also provide an equivalent version for the models using enzymatic numerical P systems, an extension of numerical P systems, defined by A. Pavel et al. in [10]. For this experiment we used some reliable tools which will be introduced in the following sections.
The paper is structured as follows: Section 2 presents the P system variants to be used in the paper. Section 3 introduces the working environment, including the tools used. Section 4 describes the models and the main differences between them, while Section 5 illustrates the testing approach along with the results. Finally, Section 6 presents the conclusions and future work.
## 2 Preliminaries
_Membrane computing_ is a field of research introduced by Gh. Paun in [11, 13]. The computational paradigm was originally inspired by the structure and functionality of the living cells. Several classes of membrane systems (P systems) have been later defined and investigated, being classified according to the structure of the membranes as _cell-like_, _tissue-like_ and _neural-like_ P systems. Membrane computing has
made significant breakthroughs in the last decades in fields like computer science, economics or biology. Depending on the requirements, extensions of the main concept were introduced and our experiment involves two of these types: numerical P systems and enzymatic numerical P systems.
### (Enzymatic) Numerical P System
The (enzymatic) numerical P systems [11, 9] are computational models that only inherit the membrane structure from the membrane systems, more exactly a cell-like membrane structure. The membranes contain _variables_ and their values are processed by the _programs_ on every time unit. The whole system is synchronized by a global clock in discrete time units.
The (enzymatic) numerical P system (EN P system) is defined by the tuple:
\[\Pi=(m,H,\mu,(\mathit{Var}_{1},\mathit{Pr}_{1},\mathit{Var}_{1}(0)),\ldots,(\mathit{Var}_{m},\mathit{Pr}_{m},\mathit{Var}_{m}(0))) \tag{1}\]
where:
* \(m\geq 1\) is the degree of the system \(\Pi\) (the number of membranes);
* \(H\) is an alphabet of labels;
* \(\mu\) is the membrane structure;
* \(\mathit{Var}_{i}\) is the set of variables from membrane \(i\), \(1\leq i\leq m\);
* \(\mathit{Var}_{i}(0)\) are the initial values of the variables from region \(i\), \(1\leq i\leq m\);
* \(\mathit{Pr}_{i}\) is the set of programs from membrane \(i\), \(1\leq i\leq m\).
The program \(\mathit{Pr}_{l_{i},i}\), \(1\leq l_{i}\leq m_{i}\), has one of the two following forms:
* non-enzymatic \[F_{l_{i},i}(x_{1,i},\ldots,x_{k,i})\to c_{1,i}|v_{1}+c_{2,i}|v_{2}+\cdots+c_{m_{i},i}|v_{m_{i}}\] where \(F_{l_{i},i}(x_{1,i},\ldots,x_{k,i})\) is the production function, \(c_{1,i}|v_{1}+c_{2,i}|v_{2}+\cdots+c_{m_{i},i}|v_{m_{i}}\) is the repartition protocol, and \(x_{1,i},\ldots,x_{k,i}\) are variables from \(\mathit{Var}_{i}\). The variables \(v_{1},v_{2},\ldots,v_{m_{i}}\) can belong to the region where the program is located, or to its upper and inner compartments, for a particular region \(i\). If a compartment contains more than one program, only one will be chosen, in a non-deterministic manner.
* enzymatic \[F_{l_{i},i}(x_{1,i},\ldots,x_{k,i})|_{e_{i}}\to c_{1,i}|v_{1}+c_{2,i}|v_{2}+\cdots+c_{m_{i},i}|v_{m_{i}}\] where \(e_{i}\) is an enzymatic variable from \(\mathit{Var}_{i}\), \(e_{i}\notin\{x_{1,i},\ldots,x_{k,i},v_{1},\ldots,v_{m_{i}}\}\). The program can be applied at time \(t\) only if \(e_{i}>\min(x_{1,i}(t),\ldots,x_{k,i}(t))\). The programs that meet this condition in a region will be applied in parallel.
When the program is applied by the system at time \(t\geq 0\), the computed value
\[q_{l_{i},i}(t)=\frac{F_{l_{i},i}(x_{1,i}(t),\ldots,x_{k,i}(t))}{\sum_{j=1}^{n_{i}}c_{j,i}}\]
represents the _unitary portion_ that will be distributed to the variables \(v_{1},\ldots,v_{n_{i}}\), proportionally to the coefficients \(c_{1,i},\ldots,c_{m_{i},i}\), where \(c_{j,i}\in\mathbf{N}^{+}\), and the received values will be \(q_{l_{i},i}(t)\cdot c_{1,i},\ldots,q_{l_{i},i}(t)\cdot c_{m_{i},i}\).
The values of the variables from time \(t-1\) that appear in production functions are _consumed_, i.e. reset to zero; the new value of each variable is the sum of the portions it receives through the repartition protocols, if it appears in them, or remains zero otherwise.
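As an illustration of these semantics (independent of the PeP implementation discussed later), the following sketch applies a single enzymatic program to a dictionary of variable values; membrane structure and program selection are omitted.

```python
def apply_program(variables, production, inputs, enzyme, coeffs, targets):
    """One step of an enzymatic program F(x_1..x_k)|_e -> c_1|v_1 + ... + c_n|v_n.

    variables : dict mapping variable names to current values
    production: callable computing F from the input values
    inputs    : names of x_1..x_k; enzyme: name of e (or None for non-enzymatic)
    coeffs    : [c_1..c_n]; targets: [v_1..v_n]
    """
    xs = [variables[x] for x in inputs]
    if enzyme is not None and not variables[enzyme] > min(xs):
        return variables                      # enzyme condition not met: skip
    unit = production(*xs) / sum(coeffs)      # unitary portion q
    for x in inputs:
        variables[x] = 0.0                    # production variables are consumed
    for c, v in zip(coeffs, targets):
        variables[v] = variables.get(v, 0.0) + c * unit
    return variables

# Example: F(x1, x2) = 2*x1 + x2, enzyme e, repartition protocol 1|a + 2|b
state = {"x1": 3.0, "x2": 1.0, "e": 5.0, "a": 0.0, "b": 0.0}
apply_program(state, lambda x1, x2: 2 * x1 + x2, ["x1", "x2"], "e", [1, 2], ["a", "b"])
# state -> x1 = x2 = 0, a receives 7/3, b receives 14/3
```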
## 3 Experimental environment
In this section we provide brief descriptions of the tools we integrated in our experiment. Firstly, we used an open-source software tool which allows the simulation of numerical P systems and enzymatic numerical P systems. The simulator is called PeP and will be introduced later in this section. Since we do not have the physical educational robot available for this study, we also used a dedicated platform for robot simulations, called Webots. For test generation, we used a tool which won the _SBST Tool Competition 2022_[4]. We will discuss later in this section the arguments for using a search-based testing tool.
### PeP simulator
PeP simulator [14] is an open-source product developed by A.Florea and C.Buiu, used for simulations based on numerical P systems and enzymatic numerical P systems. The program is written in Python and receives numerical P systems as an input file. The input file includes the membrane structure and the contents of each membrane, being stored in memory and executed.
PeP can be used as a stand-alone tool for simple simulations and run from the command line with several options, like the number of simulation steps or the generation of a CSV document containing the values at each step of the simulation. As observed in [14], the tool comes with a set of basic input file examples, both numerical P systems and enzymatic numerical P systems.
Besides the simplicity of running this tool, another advantage which can be taken into account when using PeP is that it can be used as an integrated module in complex projects. We used this approach in our experiment in order to make a controller accepted by the robot simulation platform and able to receive information from the platform. In our lane keeping experiments, the simulation ends when the robot drives off the generated lane or when the lane is kept until the end.
### Webots and E-puck
Webots is a robotics simulation software which allows the user to construct a complex environment for programming, modelling and simulating mobile robots. The environment can include multiple scene objects with different properties which can be set from the graphic interface or from the generation files [7]. In addition, the robots can be equipped with a large number of objects called nodes, like sensors, camera, GPS, LED, light sensor etc.
In our approach, in addition to the original equipment of E-puck, we used a GPS attached to the turret
slot in order to examine the coordinates at each step of the simulation. The scenes, called "worlds" in Webots, are containing the road that the robot will try to follow. The roads are generated with Ambiegen, a tool that will be described later in this section. Each world is defined by a _.wbt_ file. The objects can be edited in this file and we used this option in order to place the road object in the scene with coordinates exported from Ambiegen. Additional functionalities, like sensors or GPS can also be added to the robot by editing the world file which will be imported in Webots, obtaining the visual representation of the scene.
As mentioned before, for this experiment we used a robot widely known for educational and research purposes, called E-puck. At the moment, the robot has some capabilities that are not implemented in Webots, but considering the fact that both hardware and software components of E-puck are open source, this remains a challenging opportunity [1].
E-puck has eight infrared proximity sensors placed around the body [8]. For lane keeping simulation, we used just six of them: the two sensors placed in front and the four placed two on each side. This aspect can be easily adapted by changing the membrane structure and creating new membranes if more sensors are needed or deleting a few of them if required. Each sensor has a corresponding membrane in the numerical P system model and the association was made in the controller. The robot has two motors attached to the body along with two wheels, and the speed value is also changeable from the controller.
### Ambiegen
Ambiegen is an open-source tool that utilizes evolutionary search for the generation of test scenarios for autonomous systems. It can be used in experiments involving lane keeping assist systems and robots navigating a room with obstacles [6]. The software is developed in Python and uses evolutionary search [16] for tests generation. The main goal of Ambiegen in this approach is to generate roads as test cases in order to challenge E-puck to keep the lane. The tool exports the roads in separate text files as a sequence of points, representing the road spine. From this points, we can build the road with a proportional size to E-puck.
Challenging different LKAS (Lane Keeping Assist Systems) involves a large diversity of road topologies in order to detect the behavior in limit situations, such as narrow curves. Ambiegen addresses diversity by using a multi-objective genetic algorithm for search-based test generation, called Non-dominated Sorting Genetic Algorithm-II (NSGA-II) [2]. In the Ambiegen implementation, NSGA-II has two objectives: to increase the fault revealing power of test cases and to preserve their diversity [5]. This multi-objective approach, which combines road generation with a high attention to diversity, along with the remarkable results at the competition mentioned above, motivated us to integrate Ambiegen with Webots and test E-puck on the resulting roads.
### Experimental Procedure
Considering the above information, we will detail the way we worked with the presented tools. PeP and Ambiegen are developed in Python and so is the robot controller.
First of all, we could easily integrate PeP with the E-puck controller using the PeP module, which allowed us to parse the numerical P system model as an input file for the controller. Having achieved this, the model membranes were associated with controller variables. Names and constant values (e.g., robot cruise speed) were taken from a text file containing the membranes' variable values. The values were chosen empirically.
Next is a pseudocode version of the main loop in our controller, which performs the simulation steps.
```
1:repeat
2:for i=1 to number_of_sensors do
3:\(sensor\_membrane(i)\leftarrow\mathit{value}(i)\)
4: run one simulation step
5: read \(\mathit{lw}\), \(\mathit{rw}\) from P system
6:\(\mathit{leftMotor}\gets\mathit{lw}\)
7:\(\mathit{rightMotor}\gets\mathit{rw}\)
8:until the end of the road or E-puck goes out of the road
```
**Algorithm 1** Simulation steps performing algorithm
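A condensed Python sketch of such a controller loop is given below. The Webots device names follow the standard e-puck convention, the choice of the six proximity sensors is an assumption based on the description in Section 3.2, and the P system is represented by a stub class standing in for the actual PeP module interface.

```python
from controller import Robot   # Webots Python controller API


class PSystemStub:
    """Stand-in for the PeP-simulated (E)N P system; the real PeP module is
    loaded from an input file and exposes its membranes differently."""

    def step(self, proximity_values):
        # Placeholder: feed the sensor membranes, run one P system step,
        # then read the wheel-speed variables lw and rw back out.
        lw = rw = 3.0            # illustrative constant cruise speed
        return lw, rw


robot = Robot()
timestep = int(robot.getBasicTimeStep())

# Six of e-puck's eight proximity sensors (two front, two on each side);
# the exact indices used in the experiments are an assumption here.
sensors = [robot.getDevice(f"ps{i}") for i in (0, 1, 2, 5, 6, 7)]
for s in sensors:
    s.enable(timestep)

left_motor = robot.getDevice("left wheel motor")
right_motor = robot.getDevice("right wheel motor")
for m in (left_motor, right_motor):
    m.setPosition(float("inf"))   # velocity-control mode
    m.setVelocity(0.0)

p_system = PSystemStub()

while robot.step(timestep) != -1:
    lw, rw = p_system.step([s.getValue() for s in sensors])
    left_motor.setVelocity(lw)
    right_motor.setVelocity(rw)
```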
Another challenge for us was to move the robot on the roads exported from Ambiegen with the above presented approach. Ambiegen exports the roads as _json_ files along with information like test outcome, maximum curvature coefficient etc. We took the road points from the file and wrote them in the world file.
Webots provides the possibility to extend the set of scene nodes by adding custom nodes created by users. The mechanism is called PROTO and is described in [13]. After a node is extended with the PROTO interface, it can be instantiated from the Webots graphic interface.
We used this technique to retrieve the points forming the spines of the roads generated by Ambiegen and put them into the _wayPoints_ field of the _Road_ node. Using JavaScript, which serves as the scripting language for PROTO nodes, we constructed new nodes illustrating the roads from Ambiegen. Then, in the graphic interface of Webots, the road is represented in accordance with the road from Ambiegen. With minimal Python code additions we plotted each generated road with the corresponding spine to confirm that the shape illustrated in Webots respects the original one.
## 4 Models
In this section we present two models used to control the robot, the core of the controller. The controller receives data from proximity sensors, which measure distances to obstacles in the environment, to determine the direction of movement of a differential two-wheeled robot, E-puck in our case.
The proximity sensor has a range of 4 cm; if the obstacles are further than this limit the sensor returns the value of 0. The proximity sensors are placed on the left and right side of the robot in the direction of its movement at different angles.
The first model was taken from [3] and adapted. The equations that calculate the linear and angular velocity are shown below:
\[\mathit{leftSpeed}= \mathit{cruiseSpeed}+\sum_{i=1}^{n}\mathit{weightLeft}_{i}\cdot \mathit{prox}_{i}\] \[\mathit{rightSpeed}= \mathit{cruiseSpeed}+\sum_{i=1}^{n}\mathit{weightRight}_{i}\cdot \mathit{prox}_{i}\]
The _leftSpeed_ and _rightSpeed_ are the speeds of the two wheels of the robot. The enzymatic numerical P system described below encapsulates this behavior.
The first model is defined as follows:
\[\Pi_{M_{1}}=(m,H,\mu,(\mathit{Var}_{1},\mathit{Pr}_{1},\mathit{Var}_{1}(0)), \ldots,(\mathit{Var}_{m},\mathit{Pr}_{m},\mathit{Var}_{m}(0)))\]
where:
* \(m=k\cdot 3+3,k=6\), where \(k\) is the number of proximity sensors;
* \(H=\{s,s_{c}\}\cup\bigcup_{i=1}^{k}\{c_{i},s_{i},w_{i}\}\);
* \(\mu=[[[\,]_{s_{1}}[\,]_{w_{1}}]_{c_{1}}\ldots[[\,]_{s_{k}}[\,]_{w_{k}}]_{c_{k}}[\,]_{s_{c}}]_{s}\);
* \(\mathit{Var}_{s}=\{x_{s_{1}},x_{s_{2}}\},\mathit{Var}_{s_{c}}=\{x_{s_{c}}\}\), \(\mathit{Var}_{c_{i}}=\{x_{c_{i},s_{1}},x_{c_{i},s_{2}},x_{c_{i},w_{i}},x_{c_{i} },x_{c_{i}}\},I\leq i\leq k\), \(\mathit{Var}_{s_{i}}=\{x_{s_{i},i}\},I\leq i\leq k\), \(\mathit{Var}_{w_{i}}=\{x_{w_{i},w_{i}},x_{w_{i},w_{i}},x_{w_{i}}\},I\leq i\leq k\);
* \(\mathit{Var}_{i}(0)=0\), \(1\leq i\leq k\);
* \(\mathit{Pr}_{s}=\{0\cdot x_{s_{1}}\cdot x_{s_{2}}\to I|x_{s_{1}}+I|x_{s_{2}}\}\); \(\mathit{Pr}_{s_{c}}=\{3x_{s_{c}}\to I|x_{s_{c}}+I|x_{s_{1}}+I|x_{s_{2}}\}\); \(\mathit{Pr}_{c_{i}}=\{x_{c_{i},s_{1}}\cdot x_{c_{i},w_{i}}|e_{c_{1}}\to I|x_{s_{ 1}},\) \(\mathit{x}_{c_{i},s_{2}}\cdot x_{c_{i},w_{i}}|e_{c_{2}}\to I|x_{s_{2}}\},I \leq i\leq k\); \(\mathit{Pr}_{s_{1}}=\{3x_{s_{1},i}\to I|x_{s_{1}},j+I|x_{c_{i},s_{1}}+I|x_{c _{i},s_{2}}\},I\leq i\leq k\); \(\mathit{Pr}_{w_{i}}=\{2x_{w_{i},w_{i}}|e_{w_{i}}\to I|x_{w_{i},w_{i}}+I|x_{c _{i},w_{i}},\) \(\mathit{2x}_{w_{i},w_{i}}|e_{w_{i}}\to I|x_{w_{i},w_{i}}+I|x_{c _{i},w_{i}}\},I\leq i\leq k\);
The meaning of the variables from the model is the following:
* \(x_{s_{l}}\) and \(x_{s_{r}}\) from the region \(s\) represent _leftSpeed_ and _rightSpeed_; the sums of the products are accumulated in \(s\);
* \(x_{s_{c}}\) from the compartment \(s_{c}\) is _cruiseSpeed_;
* each pair of weights, _weightLeft\({}_{i}\)_ and _weightRight\({}_{i}\)_, resides in the region \(w_{i}\), \(1\leq i\leq k\);
* for each proximity sensor, _prox\({}_{i}\)_, a compartment is defined, namely \(s_{i}\), containing a single variable, \(x_{s_{i}}\), \(1\leq i\leq k\);
* the products, _weightLeft\({}_{i}\cdot\mathit{prox}_{i}\)_ and _weightRight\({}_{i}\cdot\mathit{prox}_{i}\)_, are calculated by two distinct programs in the compartments \(c_{i}\), \(1\leq i\leq k\).
The second model is an improvement on the first one. First we define the function
\[f(x)=\begin{cases}1,&\text{if }x=0\\ 0,&\text{otherwise}\end{cases}\]
This function will be used in the equations describing the behavior of the model and in the production functions from the programs.
The equations describing the behavior are:
\[\begin{split}\mathit{weightLeft}&=\sum_{i=1}^{n}\mathit{weightLeft}_{i}\cdot\mathit{prox}_{i}\\ \mathit{weightRight}&=\sum_{i=1}^{n}\mathit{weightRight}_{i}\cdot\mathit{prox}_{i}\\ \mathit{leftSpeed}&=\mathit{cruiseSpeed}\cdot\mathit{weightLeft}+f(\mathit{weightLeft})\cdot\mathit{cruiseSpeed}\\ \mathit{rightSpeed}&=\mathit{cruiseSpeed}\cdot\mathit{weightRight}+f(\mathit{weightRight})\cdot\mathit{cruiseSpeed}\end{split}\]
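A minimal Python sketch of this second control law is shown below, highlighting the role of \(f\): when no obstacle is sensed, all weighted sums are zero and the robot keeps moving at the cruise speed, while near an obstacle the speeds become the cruise speed scaled by the weighted sums, which lets the robot rotate away without creeping forward. The weight values are again illustrative.
```python
CRUISE_SPEED = 2.0
WEIGHT_LEFT = [0.5, 0.4, 0.3, -0.3, -0.4, -0.5]   # illustrative values
WEIGHT_RIGHT = [-0.5, -0.4, -0.3, 0.3, 0.4, 0.5]  # illustrative values

def f(x):
    # f(x) = 1 if x == 0 and 0 otherwise, as defined above.
    return 1.0 if x == 0 else 0.0

def wheel_speeds_model2(prox):
    """Second model: the cruise speed is multiplied by the weighted sums,
    and f() restores plain cruising when no obstacle is detected."""
    weight_left = sum(w * p for w, p in zip(WEIGHT_LEFT, prox))
    weight_right = sum(w * p for w, p in zip(WEIGHT_RIGHT, prox))
    left = CRUISE_SPEED * weight_left + f(weight_left) * CRUISE_SPEED
    right = CRUISE_SPEED * weight_right + f(weight_right) * CRUISE_SPEED
    return left, right
```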
The model is defined as follows:
\[\Pi_{M_{2}}=(m,H,\mu,(\text{Var}_{1},Pr_{1},\text{Var}_{1}(0)),\dots,(\text{Var}_{m},Pr_{m},\text{Var}_{m}(0)))\]
where:
* \(m=3k+3,k=6\);
* \(H=\{s,w,s_{c}\}\cup\bigcup_{i=1}^{k}\{c_{i},s_{i},w_{i}\}\);
* \(\mu=[\,[\,[\,[\,]_{s_{1}}[\,]_{w_{1}}]_{c_{1}}\ldots[\,[\,]_{s_{k}}[\,]_{w_{k}}]_{c_{k}}[\,]_{s_{c}}]_{w}]_{s}\);
* \(\text{Var}_{s}=\{x_{s_{l}},x_{s_{r}}\}\), \(\text{Var}_{w}=\{x_{w_{l}},x_{w_{r}},e_{w}\}\), \(\text{Var}_{s_{c}}=\{x_{s_{c}}\}\), \(\text{Var}_{c_{i}}=\{x_{c_{i},s_{l}},x_{c_{i},s_{r}},x_{c_{i},w_{l}},x_{c_{i},w_{r}},e_{c_{i}}\},1\leq i\leq k\), \(\text{Var}_{s_{i}}=\{x_{s_{i}}\},1\leq i\leq k\), \(\text{Var}_{w_{i}}=\{x_{w_{i},w_{l}},x_{w_{i},w_{r}},e_{w_{i}}\},1\leq i\leq k\);
* \(\text{Var}_{i}(0)=0,1\leq i\leq k\);
* \(Pr_{s}=\{0\cdot x_{s_{l}}\cdot x_{s_{r}}\to 1|x_{s_{l}}+1|x_{s_{r}}\}\); \(Pr_{w}=\{x_{s_{c}}\cdot x_{w_{l}}+f(x_{w_{l}})\cdot x_{s_{c}}|_{e_{w}}\to 1|x_{s_{l}},\ x_{s_{c}}\cdot x_{w_{r}}+f(x_{w_{r}})\cdot x_{s_{c}}|_{e_{w}}\to 1|x_{s_{r}}\}\); \(Pr_{s_{c}}=\{x_{s_{c}}\to 1|x_{s_{c}}\}\); \(Pr_{c_{i}}=\{x_{c_{i},s_{l}}\cdot x_{c_{i},w_{l}}|_{e_{c_{i}}}\to 1|x_{w_{l}},\ x_{c_{i},s_{r}}\cdot x_{c_{i},w_{r}}|_{e_{c_{i}}}\to 1|x_{w_{r}}\},1\leq i\leq k\); \(Pr_{s_{i}}=\{3x_{s_{i}}\to 1|x_{s_{i}}+1|x_{c_{i},s_{l}}+1|x_{c_{i},s_{r}}\},1\leq i\leq k\); \(Pr_{w_{i}}=\{2x_{w_{i},w_{l}}|_{e_{w_{i}}}\to 1|x_{w_{i},w_{l}}+1|x_{c_{i},w_{l}},\ 2x_{w_{i},w_{r}}|_{e_{w_{i}}}\to 1|x_{w_{i},w_{r}}+1|x_{c_{i},w_{r}}\},1\leq i\leq k\);
As we see from the definition of \(\Pi_{M_{1}}\) and its equations, in the proximity of an obstacle (the roadside), the speed of the wheel on the side of the obstacle increases according to the weights. In certain circumstances, particularly for tight road curves, the sum of the cruise speed and the weighted sensor readings produces a rotation away from the obstacle while a slight forward motion still continues, causing the robot to leave the road.
The second model, \(\Pi_{M_{2}}\), overcomes this problem by multiplying the cruise speed by the sum of the weighted readings. In a similar situation, the second model performs an angular rotation away from the obstacle and, once the proximity sensors no longer detect obstacles, continues moving forward with a constant velocity. This behavior is modeled by compartment \(w\).
In the next section it can be seen that, in such situations, the \(\Pi_{M_{2}}\) model behaves better than the first model.
## 5 Simulation Results
Having presented the way we integrated the tools and the models, we detail in this section the testing stage. Ambiegen offers the possibility to configure the setup of its genetic algorithms by choosing the values of parameters such as population size, number of generations, mutation rate, or crossover rate. Aspects like the amount of time allocated for test generation, the map size, and the out-of-bounds percentage above which a test is considered failed can also be set easily from the command line when executing the main Python file used in [4].
After a series of trials we kept the default values: a population size of 100 with 75 generations, a mutation rate of 0.4, and a crossover rate of 1. These values can be changed in the internal configuration file of Ambiegen. From the command line we set the time budget allocated to generation and execution to 1800 seconds and the map size to 200x200 meters, which is the default value. For the out-of-bounds percentage we also kept the default value (95%). After each simulation, Ambiegen exported the road spine coordinates as text files and we also plotted each generated road, as mentioned before. We could then easily choose different roads, using the number of curves and their angles as the main criteria to diversify the tests given to the E-puck controller in Webots.
Next we report some roads along with the simulation results. The road is marked in grey, the trajectory produced by the first model is drawn in red, and the trajectory of the second (improved) model is drawn in green.
In Figure 1 we can see the results of different road tests. Test 1, the simplest of the tests presented, is passed by both models. In the following scenarios the complexity of the road increases and only the improved model passes.
From all the experiments, we noticed that the first model (the one marked in red in the results) fails the vast majority of tests with curves, whilst the improved model performs a rotation movement when the road is curved and for this reason manages to advance until the end of the road. Additional tests were added to [15].
Figure 1: Illustration of different road tests
## 6 Conclusions and Future Work
In this paper we presented an approach to test different enzymatic numerical P system models using modern tools and search-based generated tests. We evaluated our approach on a research and educational robot called E-puck, virtually represented in the Webots simulator. We set up our working environment by incorporating the tools with different scripts created to ensure a smooth integration between them and better data processing. We formally described each model involved and then showed the differences between the lane-keeping controllers obtained when using each of them. As future work, we will investigate the possibility to dynamically calculate the values of the weights, which are at the moment assigned empirically. Based on the controller behavior during previous tests, the weight values will be adapted automatically. We will also try to develop a method to generate more complex roads in order to better challenge the controllers.
|
2309.03586 | Programmable access to microresonator solitons with modulational
sideband heating | Dissipative Kerr solitons formed in high-$Q$ optical microresonators provide
a route to miniaturized optical frequency combs that can revolutionize
precision measurements, spectroscopy, sensing, and communication. In the last
decade, a myriad of integrated material platforms have been extensively studied
and developed to create photonic-chip-based soliton combs. However, the
photo-thermal effect in integrated optical microresonators has been a major
issue preventing simple and reliable soliton generation. Several sophisticated
techniques to circumvent the photo-thermal effect have been developed. In
addition, instead of the single-soliton state, emerging applications in
microwave photonics and frequency metrology prefer multi-soliton states. Here
we demonstrate an approach to manage the photo-thermal effect and facilitate
soliton generation. The approach is based on a single phase-modulated pump,
where the generated blue-detuned sideband synergizes with the carrier and
thermally stabilizes the microresonator. We apply this technique and
demonstrate deterministic soliton generation of 19.97 GHz repetition rate in an
integrated silicon nitride microresonator. Furthermore, we develop a program to
automatically address to target $N-$soliton state, in addition to the
single-soliton state, with near 100% success rate and as short as 10 s time
consumption. Our method is valuable for soliton generation in essentially any
platforms even with strong photo-thermal effect, and can promote wider
applications of soliton frequency comb systems for microwave photonics,
telecommunication and frequency metrology. | Huamin Zheng, Wei Sun, Xingxing Ding, Haoran Wen, Ruiyang Chen, Baoqi Shi, Yi-Han Luo, Jinbao Long, Chen Shen, Shan Meng, Hairun Guo, Junqiu Liu | 2023-09-07T09:27:28Z | http://arxiv.org/abs/2309.03586v1 | # Programmable access to microresonator solitons with modulational sideband heating
###### Abstract
Dissipative Kerr solitons formed in high-\(Q\) optical microresonators provide a route to miniaturized optical frequency combs that can revolutionize precision measurements, spectroscopy, sensing, and communication. In the last decade, a myriad of integrated material platforms have been extensively studied and developed to create photonic-chip-based soliton combs. However, the photo-thermal effect in integrated optical microresonators has been a major issue preventing simple and reliable soliton generation. Several sophisticated techniques to circumvent the photo-thermal effect have been developed. In addition, instead of the single-soliton state, emerging applications in microwave photonics and frequency metrology prefer multi-soliton states. Here we demonstrate an approach to manage the photo-thermal effect and facilitate soliton generation. The approach is based on a single phase-modulated pump, where the generated blue-detuned sideband synergizes with the carrier and thermally stabilizes the microresonator. We apply this technique and demonstrate deterministic soliton generation of 19.97 GHz repetition rate in an integrated silicon nitride microresonator. Furthermore, we develop a program to automatically address the target \(N-\)soliton state, in addition to the single-soliton state, with near 100% success rate and as short as 10 s time consumption. Our method is valuable for soliton generation in essentially any platform even with strong photo-thermal effect, and can promote wider applications of soliton frequency comb systems for microwave photonics, telecommunication and frequency metrology.
Dissipative Kerr solitons formed in high-\(Q\) optical microresonators [1; 2] constitute miniaturized optical frequency combs with broad bandwidths and repetition rates in the microwave to millimeter-wave domain. Commonly referred to as "soliton microcombs", they have already been used in many system-level information and metrology applications, such as coherent telecommunication [3; 4; 5], ultrafast ranging [6; 7], astronomical spectrometer calibration [8; 9], dual-comb spectroscopy [10; 11], low-noise microwave generation [12; 13], photonic neural networks [14; 15], datacenter circuit switch [16], microwave photonics [17; 18], frequency synthesizers [19], and optical atomic clocks [20]. Critical to the rapid progress of soliton microcomb technology is the development and continuous optimization of various photonic integrated platforms including silica [21; 22], silicon nitride (Si\({}_{3}\)N\({}_{4}\)) [23; 24], high-index doped silica (Hydex) [25; 26], aluminium nitride (AlN) [27; 28], lithium niobate (LiNbO\({}_{3}\)) [29; 30], tantalum pentoxide (Ta\({}_{2}\)O\({}_{5}\)) [31], silicon carbide (SiC) [32; 33], chalcogenide [34], aluminium gallium arsenide (AlGaAs) [35; 36] and gallium phosphide (GaP) [37]. Besides, the introduction and successful implementation of hybrid and heterogeneous integration [38; 39; 40; 41; 42; 43; 44] further enable complex control schemes, extra nonlinearity and efficient amplification for integrated soliton microcombs.
Despite these advances, one issue that currently prevents a wider deployment of soliton microcombs is to deterministically access and control the soliton state. This issue is caused by the photo-thermal effect, arising from light absorption and the thermo-optic effect in optical microresonators [45; 46], particularly for those built on integrated platforms. When the CW pump's frequency is scanned through a microresonator resonance, from the blue-detuned side to the red-detuned side, a self-organized pulse waveform (i.e., a soliton state) is formed [1]. However, the photo-thermal effect leads to serious thermal instability that often annihilates the soliton state immediately. Therefore, sophisticated techniques to manage this effect have been developed, such as power kicking [47; 48], single-sideband suppressed-carrier frequency shifters [49], dual-laser pump [50; 51; 52], pump modulation [53; 54; 55; 56], pulse pumping [58], or laser self-cooling [59; 60]. Besides, cryogenic operation can be helpful [36]. In addition, instead of the single-soliton state, efforts have also been made to employ multi-soliton states for reconfigurable photonic microwave filters [18] and synthesis of terahertz frequency [61]. For frequency metrology applications, the local comb line power enhancement in the multi-soliton state further facilitates the realization of self-referencing [62].
Here we demonstrate an approach to manage the photo-thermal effect, and to automatically and deterministically access a soliton state. This method, employing single-sideband heating, overcomes the photo-thermal effect in integrated microresonators. We apply |
2310.20228 | Reconstructing Human Pose from Inertial Measurements: A Generative
Model-based Compressive Sensing Approach | The ability to sense, localize, and estimate the 3D position and orientation
of the human body is critical in virtual reality (VR) and extended reality (XR)
applications. This becomes more important and challenging with the deployment
of VR/XR applications over the next generation of wireless systems such as 5G
and beyond. In this paper, we propose a novel framework that can reconstruct
the 3D human body pose of the user given sparse measurements from Inertial
Measurement Unit (IMU) sensors over a noisy wireless environment. Specifically,
our framework enables reliable transmission of compressed IMU signals through
noisy wireless channels and effective recovery of such signals at the receiver,
e.g., an edge server. This task is very challenging due to the constraints of
transmit power, recovery accuracy, and recovery latency. To address these
challenges, we first develop a deep generative model at the receiver to recover
the data from linear measurements of IMU signals. The linear measurements of
the IMU signals are obtained by a linear projection with a measurement matrix
based on the compressive sensing theory. The key to the success of our
framework lies in the novel design of the measurement matrix at the
transmitter, which can not only satisfy power constraints for the IMU devices
but also obtain a highly accurate recovery for the IMU signals at the receiver.
This can be achieved by extending the set-restricted eigenvalue condition of
the measurement matrix and combining it with an upper bound for the power
transmission constraint. Our framework can achieve robust performance for
recovering 3D human poses from noisy compressed IMU signals. Additionally, our
pre-trained deep generative model achieves signal reconstruction accuracy
comparable to an optimization-based approach, i.e., Lasso, but is an order of
magnitude faster. | Nguyen Quang Hieu, Dinh Thai Hoang, Diep N. Nguyen, Mohammad Abu Alsheikh | 2023-10-31T07:13:11Z | http://arxiv.org/abs/2310.20228v3 | # Reconstructing Human Pose from Inertial Measurements:
###### Abstract
The ability to sense, localize, and estimate the 3D position and orientation of the human body is critical in virtual reality (VR) and extended reality (XR) applications. This becomes more important and challenging with the deployment of VR/XR applications over the next generation of wireless systems such as 5G and beyond. In this paper, we propose a novel framework that can reconstruct the 3D human body pose of the user given sparse measurements from Inertial Measurement Unit (IMU) sensors over a noisy wireless environment. Specifically, our framework enables reliable transmission of compressed IMU signals through noisy wireless channels and effective recovery of such signals at the receiver, e.g., an edge server. This task is very challenging due to the constraints of transmit power, recovery accuracy, and recovery latency. To address these challenges, we first develop a deep generative model at the receiver to recover the data from linear measurements of IMU signals. The linear measurements of the IMU signals are obtained by a linear projection with a measurement matrix based on the compressive sensing theory. The key to the success of our framework lies in the novel design of the measurement matrix at the transmitter, which can not only satisfy power constraints for the IMU devices but also obtain a highly accurate recovery for the IMU signals at the receiver. This can be achieved by extending the set-restricted eigenvalue condition of the measurement matrix and combining it with an upper bound for the power transmission constraint. Our framework can achieve robust performance for recovering 3D human poses from noisy compressed IMU signals. Additionally, our pre-trained deep generative model achieves signal reconstruction accuracy comparable to an optimization-based approach, i.e., Lasso, but is an order of magnitude faster.
Compressive sensing, generative models, inertial measurement units, human pose estimation, edge computing.
## I Introduction
### _Motivation_
The ability to estimate human body movements plays a key role in emerging human-computer interaction paradigms such as virtual reality (VR) and extended reality (XR) [1]. By correctly estimating the 3D position and orientation of the human body, VR/XR applications such as gaming, virtual offices, and smart factories can offer a more interactive and immersive experience for users. Highly accurate solutions for estimating 3D human movements usually rely on images or videos, which typically require multi-camera calibrated systems [1, 2]. However, multi-camera systems are limited in capturing outdoor human activities (e.g., due to sensitive information conveyed in the images/videos) and are severely degraded under poor lighting conditions [1]. Specifically, for VR/XR applications deployed over wireless systems, e.g., 5G and beyond, leveraging such images and videos from multi-camera systems for human body estimation purposes is costly in terms of bandwidth utilization and computing efficiency [3, 4]. This demands a more effective approach to achieve highly accurate estimation of human body movements in VR/XR applications deployed over wireless systems [5].
Fortunately, the inertial measurement unit (IMU) (i.e., accelerometer, gyroscope, and magnetometer) offers a promising solution to this problem. IMU-based systems do not suffer from the limitations of camera-based systems. The IMU sensors can track human movements by measuring the acceleration and orientation of human body parts, e.g., head orientation or arm/leg movement, regardless of image sensitivity concerns and lighting conditions, making them more suitable for indoor and outdoor VR/XR applications [6, 7]. As the IMU sensors are typically worn on the body, e.g., wrists, head, or ankles, the information measured from the IMU can help to track the movement of the body segments relative to each other. For example, utilizing IMU information such as the orientation of VR headsets can help the system better predict the user preferences in VR streaming applications [8, 9]. Moreover, acceleration readings from the IMU sensors can help to track user step count, thereby increasing the accuracy of outdoor pedestrian localization [10]. Furthermore, combining IMU information with a kinematic model of the human body can simulate the entire body movement of the user in a complete positioning and sensing system [7, 11]. With such enormous potential, the IMU sensors have been widely deployed as a standard setting inside mobile phones, tablets, VR headsets, and VR controllers.
### _Related Works_
Unlike solutions for reconstructing movements of independent parts of the human body, e.g., head or arm, estimating a full body movement of the user usually requires a set of IMU sensors placed on different parts of the body or attached to a suit [7]. With a set of IMU sensors, ranging from 3 to 17 sensors, the full body movements can be fully reconstructed with the help of optimization-based techniques [6, 12] and learning-based techniques [13, 14, 15]. In [12], a Kalman Filter was utilized to correct the kinematics of the 3D human model, given the joint uncertainties of sensor noise, angular velocity, and acceleration of the IMU sensors. In [6], the authors proposed a new optimization approach based on exponential mapping, which transforms the orientation and
acceleration values into equivalent energy functions. After carefully calibrating between the IMU sensors' coordinate frames and the 3D human body's coordinate frames, the optimization objective can be formulated as minimizing the set of energy functions over the entire sequence of collected data.
Different from offline optimization approaches [6, 12], learning-based approaches can achieve real-time estimation based on pre-trained deep learning models [13, 14, 15]. In [13], the authors proposed a deep learning approach based on a recurrent neural network that trains on the entire sequence of data at the training phase. During the testing phase, the pre-trained model can estimate the corrected body pose of the user in a shorter time window. In [14], the authors extended this idea by using a recurrent neural network combined with a physics-aware motion optimizer that enhances the tracking accuracy for a longer time window. The authors in [15] reported similar advantages of using a physics-aware motion optimizer with fewer IMU sensors being used.
### _Challenges and Proposed Solutions_
Although there has been significant effort in improving the precision of human pose estimation in 3D environments with IMU sensors [6, 12, 13, 14, 15], there is a lack of human pose estimation approaches for VR/XR applications deployed over wireless networks, where the estimation ability can be strictly constrained by channel quality, power transmission, and the tradeoff between latency and accuracy of the solution. Deploying human pose estimation frameworks over wireless networks is a non-trivial problem as the transmitted data is more exposed to channel noise, e.g., due to channel quality and channel interference. The existing works overlook the presence of noise in the received IMU data, even though such noise may significantly degrade the reconstruction accuracy of the human pose estimation task, resulting in a poor quality of experience for the user in a virtual environment. In addition, the current approaches do not consider the potential redundancy of the IMU data before transmitting it to the receiver. The redundancy of information from each IMU sensor, such as orientation and acceleration values at high frequencies (see Fig. 2), means that the number of data samples that need to be transmitted can be further reduced. As a result, exploiting this data redundancy can enhance channel utilization and reduce power consumption for wireless systems [16].
To the best of our knowledge, there is a lack of studies that address the above problems, i.e., noisy IMU data transmission and the potential redundancy of the IMU data, in reconstructing human pose over wireless systems. To address these problems, we propose a novel framework based on compressive sensing and generative modeling. On the one hand, the compressive sensing technique is utilized to downsample the IMU signals before transmitting them over the wireless channel [17]. Based on rigorous down-sampling (or projection) techniques, the compressive sensing framework is a promising way to remove redundancy from the IMU signals. As a result, this approach can not only reduce the energy consumption associated with IMU data acquisition and transmission but also enhance channel utilization as the transmitted data is in a more compressed form. On the other hand, a generative model (e.g., a variational auto-encoder [18]) deployed at the receiver can help the receiver make a robust estimation of the human body pose (e.g., through data denoising and data recovery capabilities), given the noisy compressed IMU measurements transmitted over a wireless channel. Unlike the optimization-based techniques [6, 12] and learning-based techniques [13, 14, 15], our proposed generative model can handle noisy data more effectively and can also exploit the potential sparsity patterns in the data. To summarize, the main contributions of our work are as follows:
* We propose an innovative framework based on compressive sensing and generative modeling techniques for human pose estimation from IMU sensors toward VR/XR applications deployed over wireless systems. The proposed framework can accurately recover the original IMU signals from noisy compressed signals transmitted over a wireless channel. The combination of compressive sensing and generative modeling generates potential benefits to the system such as enhancing channel utilization, effective data sensing, and power-efficient wireless communications.
* We develop a novel design for the measurement matrix at the transmitter, which helps to transform the high-dimensional signal into a compressed form. Our measurement matrix design extends the set-restricted eigenvalue condition of existing generative model-based compressive sensing approaches to a general setting, which considers the impacts of a wireless communication channel on data recovery. With rigorous analysis, we prove that our proposed measurement matrix enables our proposed framework to outperform other deep learning and optimization approaches, in terms of accuracy and latency of signal reconstruction process.
* We show that the proposed framework can achieve signal reconstruction accuracy comparable to the optimization-based approach, i.e., Lasso, with an order of magnitude faster than Lasso. The fast reconstruction ability of the generative model makes it a promising solution for VR/XR applications with stringent latency requirements.
* We demonstrate a practical use case of the generative model that can generate missing IMU signals, thus creating synthetic body movements for the users without using input IMU signals. This ability of the generative model is very useful for potential VR/XR applications over wireless systems as the missing input data usually happens due to the lossy nature of the wireless environment.
The organization of the paper is as follows. Section II describes the overview of the system model and preliminaries of compressive sensing and generative modeling. In Section III, we formulate the problem as a reconstruction error minimization problem, subject to a power transmission constraint. In Section IV, we extensively evaluate the performance of the proposed framework with other baselines, such as an optimization-based approach and a deep learning-based approach. We also show that the proposed generative
model can generate missing IMU data features, thus directly creating smooth synthetic body movements for the users. Finally, Section V concludes the paper.
## II System Overview and Preliminaries
The proposed system model is illustrated in Fig. 1. At the transmitter side, the user's body is equipped with a set of 17 IMU sensors placed on standard positions as in commercial systems [7]. The set of synchronized IMU sensors produces a sequence of data, e.g., orientation and acceleration, which is usually aggregated together at a central IMU node (e.g., IMU sensor placed on the user's spine). Compressive sensing downsamples the data sequence into a shorter sequence through matrix multiplication. The down-sampled data sequence is transmitted over a wireless noisy channel, e.g., a Gaussian channel. On the receiver side, the edge server uses a deep generative model to recover the original data sequence from the noisy down-sampled data sequence. From the recovered IMU data and a kinematic human body model (e.g., SMPL [19]), the generative model can further generate the 3D avatar model with the corrected pose.
As described, the proposed framework consists of two main components that are (i) compressive sensing for the transmitter-receiver communication and (ii) a generative model for recovering the signals at the receiver, i.e., the edge server. In the following, we describe the fundamentals of compressive sensing, generative models, and generative model-based compressive sensing for an end-to-end learning system.
### _Compressive Sensing_
As illustrated in Fig. 1, we have a sequence of data that is a real-valued, finite-length, one-dimensional signal \(\mathbf{x}^{*}\in\mathbb{R}^{n}\). With compressive sensing, we want to down-sample the signal \(\mathbf{x}^{*}\) before transmitting it to the receiver. For that, we have a measurement matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) to make a linear projection from a higher dimensional vector \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) to a lower dimensional vector \(\mathbf{y}\in\mathbb{R}^{m}\) (\(m<n\)). Usually, \(n\) is referred to as the length of the original vector and \(m\) is the number of measurements taken from that vector. In particular, the \(m\)-dimensional signal being transmitted over the channel is:
\[\mathbf{y}=\mathbf{A}\mathbf{x}^{*}. \tag{1}\]
The received signal at the receiver can be corrupted by noise. In the case of a Gaussian channel, the received signal at the receiver is [17, 20]:
\[\mathbf{\hat{y}}=\mathbf{A}\mathbf{x}^{*}+\boldsymbol{\eta}, \tag{2}\]
where \(\boldsymbol{\eta}\in\mathbb{R}^{m}\) is a Gaussian noise vector with zero mean and \(\sigma_{N}\) standard deviation, i.e., element \(\eta_{i}\) (\(i=1,2,\ldots,m\)) of \(\boldsymbol{\eta}\) follows a Gaussian distribution \(\eta_{i}\sim\mathcal{N}(0,\sigma_{N}^{2})\). As observed from equation (2), the signal \(\mathbf{\hat{y}}\in\mathbb{R}^{m}\) is a compressed form of \(\mathbf{x}^{*}\in\mathbb{R}^{n}\). To recover the signal \(\mathbf{x}^{*}\) from the received signal \(\mathbf{\hat{y}}\), the receiver needs to solve the following quadratically constrained optimization problem [20]:
\[\mathcal{P}_{0}: \quad\min_{\mathbf{x}}\|\mathbf{x}\|_{1},\] (3a) subject to \[\quad\|\mathbf{A}\mathbf{x}-\mathbf{\hat{y}}\|_{2}\leq\|\boldsymbol{ \eta}\|_{2}, \tag{3b}\]
where the term \(\|\mathbf{x}\|_{p}\) denotes the \(l_{p}\) norm (\(p=0,1,2,\ldots\)) of the vector \(\mathbf{x}\), i.e., [20]
\[\|\mathbf{x}\|_{p}=(\sum_{j=1}^{n}|x_{j}|^{p})^{1/p}. \tag{4}\]
The problem \(\mathcal{P}_{0}\) in (3) forms an underdetermined system, that is, a system that admits multiple solutions. To guarantee the unique recovery of the signal, compressive sensing relies on two main assumptions about the signal \(\mathbf{x}^{*}\) and the measurement matrix \(\mathbf{A}\). First, the signal \(\mathbf{x}^{*}\) has the sparsity property. Second, the measurement matrix \(\mathbf{A}\) satisfies specific conditions, namely the Restricted Isometry Property (RIP) or the Restricted Eigenvalue Condition (REC). The definitions of sparsity, RIP, and REC are as follows [17, 20].
**Definition 1** (Sparsity).: _The support of a vector \(\mathbf{x}\in\mathbb{R}^{n}\) is the index set of its nonzero entries, i.e.,_
\[\sup(\mathbf{x}):=\big{\{}j\in\{1,2,\ldots,n\}:x_{j}\neq 0\big{\}}.\]
_The vector \(\mathbf{x}\in\mathbb{R}^{n}\) is called \(k\)-sparse if at most \(k\) of its entries are nonzero, i.e., if_
\[\|\mathbf{x}\|_{0}=\mathbf{card}\big{(}\sup(\mathbf{x})\big{)}\leq k,\]
_where \(\mathbf{card}(\cdot)\) is the cardinality (number of elements) and \(\|\cdot\|_{0}\) is \(l_{0}\) norm._
In practice, the sparsity property of the signal of interest \(\mathbf{x}\) is usually relaxed to nearly \(k\)-sparse, meaning that \(n-k\) entries of the vector \(\mathbf{x}\) are approximately zero. In Fig. 2, we illustrate the nearly \(k\)-sparse acceleration data from the IMU dataset in [13]. Note that we use the Fast Fourier Transform (FFT) in Fig. 2 for illustration purposes only, and our proposed learning algorithm will not utilize the FFT.
Fig. 1: An illustration of our proposed system model. A set of synchronized IMU sensors produces a sequence of data, e.g., orientation and acceleration, and compressive sensing down-samples the data sequence into a shorter sequence. The down-sampled sequence of IMU data is transmitted over a noisy channel. The receiver uses a deep generative model to recover the original data sequence from received signals.
**Definition 2** (Restricted Isometry Property).: _Let \(S_{k}\subset\mathbb{R}^{n}\) be the set of \(k\)-sparse vectors. For some parameter \(\delta\in(0,1)\), a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is said to satisfy RIP\((k,\delta)\) if \(\forall\mathbf{x}\in S_{k}\),_
\[(1-\delta)\|\mathbf{x}\|_{2}\leq\|\mathbf{A}\mathbf{x}\|_{2}\leq(1+\delta)\| \mathbf{x}\|_{2}.\]
**Definition 3** (Restricted Eigenvalue Condition).: _Let \(S_{k}\subset\mathbb{R}^{n}\) be the set of \(k\)-sparse vectors. For some parameter \(\gamma>0\), a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is said to satisfy REC\((k,\gamma)\) if \(\forall\mathbf{x}\in S_{k}\),_
\[\|\mathbf{A}\mathbf{x}\|_{2}\geq\gamma\|\mathbf{x}\|_{2}.\]
Intuitively, RIP implies that \(\mathbf{A}\) approximately preserves Euclidean norms (\(l_{2}\) norms) for sparse vectors, and REC implies that sparse vectors are far from the nullspace of \(\mathbf{A}\) [21]. Given the sparsity of \(\mathbf{x}^{*}\) and the RIP/REC property of the chosen matrix \(\mathbf{A}\), it has been shown that the recovered signal \(\mathbf{\hat{x}}\) is the unique solution of the problem \(\mathcal{P}_{0}\) in (3), i.e., \(\mathbf{\hat{x}}\approx\mathbf{x}^{*}\) [20]. As the sparsity of the signal depends on the natural domain of the signal, the solution for \(\mathcal{P}_{0}\) depends on two aspects: (i) the choice of measurement matrix \(\mathbf{A}\) and (ii) the choice of recovery method, i.e., the optimization solver for \(\mathcal{P}_{0}\). In conventional compressive sensing methods, the common choices for these aspects are (i) a Gaussian matrix and (ii) a convex optimization solver like Lasso (Least absolute shrinkage and selection operator) [17, 22]. Note that in this work, we do not explicitly analyze the sparsity of the IMU signal but rely on approximation methods, such as Lasso and generative models, to solve the optimization problem. In this way, we do not need to pay the extra cost of signal processing through transformations, such as the Fourier transform or Wavelet transform, at the transmitter [17, 20, 21, 23].
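For intuition, the short NumPy sketch below mimics the check behind Fig. 2: it keeps only the \(k\) largest FFT coefficients of an acceleration-like trace and measures how much of the signal energy they retain. The trace here is synthetic and the sampling rate and \(k\) are illustrative; in our experiments the signals come from the DIP-IMU dataset and, as noted above, the FFT is not used by the learning algorithm itself.
```python
import numpy as np

# Synthetic acceleration-like trace: slow oscillation plus small noise.
t = np.linspace(0, 10, 600)                      # roughly 60 Hz for 10 s
acc = np.sin(2 * np.pi * 0.5 * t) + 0.05 * np.random.randn(t.size)

coeffs = np.fft.rfft(acc)
k = 10
idx = np.argsort(np.abs(coeffs))[::-1][:k]       # indices of the k largest coefficients
sparse_coeffs = np.zeros_like(coeffs)
sparse_coeffs[idx] = coeffs[idx]
acc_k = np.fft.irfft(sparse_coeffs, n=acc.size)  # reconstruction from k coefficients only

energy_kept = np.linalg.norm(acc_k) ** 2 / np.linalg.norm(acc) ** 2
print(f"Fraction of signal energy kept by {k} coefficients: {energy_kept:.3f}")
```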
By using the definition of the \(l_{p}\) norm in (4), \(\|\mathbf{x}\|_{1}\) is a convex function, and \(\mathcal{P}_{0}\) is an \(l_{1}\) minimization problem with a quadratic constraint. The solution of \(\mathcal{P}_{0}\) is equivalent to the output of the Lasso, which consists of solving \(\mathcal{P}_{1}\), for some parameter \(\tau\geq 0\) [20]:
\[\mathcal{P}_{1}: \min_{\mathbf{x}}\|\mathbf{A}\mathbf{x}-\mathbf{\hat{y}}\|_{2},\] (5a) subject to \[\|\mathbf{x}\|_{1}\leq\tau. \tag{5b}\]
In practice, the solution of Lasso is equivalent to solving the Lagrangian of the problem \(\mathcal{P}_{1}\) above, for some parameter \(\lambda\geq 0\), i.e., [20, Equation 3.1]:
\[\min_{\mathbf{x}}\|\mathbf{A}\mathbf{x}-\mathbf{\hat{y}}\|_{2}^{2}+\lambda\| \mathbf{x}\|_{1}. \tag{6}\]
Intuitively, the \(l_{1}\) penalty term \(\lambda\|\mathbf{x}\|_{1}\) in (6) enforces sparsity (Definition 1) by adding a penalty proportional to the absolute values of the coefficients of \(\mathbf{x}\). As a result, the sparsity assumption (\(k\)-sparse or nearly \(k\)-sparse) on the structure of the signal has an impact on the performance of the Lasso solver. Recall that in this work, we do not explicitly analyze the sparsity of the IMU signal, which usually causes extra costs through the Fourier/Wavelet transform at the transmitter. In addition, solutions relying on the sparsity assumption are known to yield poor recovery performance when the number of linear measurements is not sufficient (i.e., the value of \(m\) is too small), or the considered signal has a small number of dimensions (i.e., the value of \(n\) is not sufficiently large) [21, 23]. This motivates us to utilize generative models, such as variational auto-encoders (VAEs) [18] and generative adversarial networks (GANs) [24], as an alternative to the sparsity assumption with a convex optimization solver like Lasso.
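As a concrete illustration of this baseline, the sketch below recovers a synthetic \(k\)-sparse signal from noisy compressed measurements with scikit-learn's Lasso, which solves the Lagrangian form (6) (its alpha parameter plays the role of \(\lambda\)). The dimensions, noise level, and alpha value are illustrative only.
```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 204, 100, 10

# Ground-truth k-sparse signal and a Gaussian measurement matrix.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Noisy compressed measurements, as in (2).
y = A @ x_true + 0.01 * rng.standard_normal(m)

lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000)
lasso.fit(A, y)
x_hat = lasso.coef_

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```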
### _Generative Models_
Generative models are a class of machine learning models that can be used for modeling the complex distributions of large-scale datasets. In the context of compressive sensing, a generative model can be used to estimate the distribution of the input signals. After the generative model is trained with a training set, it can generate new data samples that are similar to the samples drawn from the original set [25]. Intuitively, the generative model can learn and synthesize the underlying distribution of high dimensional and complex data, which eliminates the sparsity assumption about the data structure made by conventional compressive sensing techniques.
A generative model describes a probability density function \(p\): \(\mathcal{X}\rightarrow\mathbb{R}\) (\(\mathcal{X}\) is a finite set) through an unobserved, or "latent", variable \(\mathbf{z}\). The probability density function is then calculated by:
\[p(\mathbf{x})=\int_{\mathbf{z}}p(\mathbf{x}|\mathbf{z})p(\mathbf{z})d\mathbf{ z}, \tag{7}\]
Fig. 2: Illustration of acceleration reading from an IMU sensor placed on the left wrist of the user (top figure) and the Fast Fourier Transform (FFT) of the x-axis acceleration data (bottom figure). The FFT reveals the nearly \(k\)-sparse property of the IMU signal, in which a few low-frequency coefficients have dominant values. As a result, the redundancy of the data can be approximated by considering the \(k\) largest coefficients and assuming the remaining coefficients are zero.
where \(\forall\mathbf{x}\in\mathcal{X}\), the probability \(p(\mathbf{z})\) is the prior, and the forward probability \(p(\mathbf{x}|\mathbf{z})\) is the likelihood [25]. In practice, this probability density function is usually parameterized by a model \(\mathbf{\theta}\) (e.g., a deep neural network). In such a case, equation (7) can be rewritten as follows [25]:
\[p_{\mathbf{\theta}}(\mathbf{x})=\int_{\mathbf{z}}p_{\mathbf{\theta}}(\mathbf{x}|\mathbf{z })p(\mathbf{z})d\mathbf{z}. \tag{8}\]
The integral in (8) cannot easily be computed as the likelihood \(p(\mathbf{x}|\mathbf{z})\) is computationally expensive with conventional methods such as maximum likelihood, especially for large-scale datasets. In this work, we develop our generative model based on a popular class of generative models, which are called variational auto-encoders (VAEs), first introduced in [18]. As opposed to other generative models such as GANs [24], VAEs can generate more dispersed samples over the data and can learn complex data distributions [25]. In addition, VAEs are better for data inference, which is suitable for our generative model that wants to exploit the hidden "sparsity" patterns in the IMU data.
In VAEs, besides the likelihood parameterized by a decoder (deep neural network), the probability density function \(p_{\mathbf{\theta}}(\mathbf{x})\) is conditioned through an encoder parameterized by another deep neural network \(q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x})\). The encoder approximates the true but intractable posterior \(p_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x})\). To train a VAE, we optimize a variational lower bound on \(\log p_{\mathbf{\theta}}(\mathbf{x})\), called evidence lower-bound (ELBO). It is defined as follows [25]:
\[\mathcal{L}_{\mathbf{\theta},\mathbf{\phi}}(\mathbf{x})=\int_{\mathbf{z}}q_{\mathbf{\phi} }(\mathbf{z}|\mathbf{x})\log\frac{p_{\mathbf{\theta}}(\mathbf{x}|\mathbf{z})p( \mathbf{z})}{q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x})}d\mathbf{z}. \tag{9}\]
The newly introduced density function \(q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x})\) is referred to as the variational (approximate) posterior with \(\mathbf{\phi}\) defined as the variational parameters.
As the encoder \(q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x})\) is used to approximate the posterior \(p_{\mathbf{\theta}}(\mathbf{z}|\mathbf{x})\), exact sampling from the posterior is straightforward through an unbiased Monte Carlo estimate of \(\mathcal{L}\)[25]:
\[\hat{\mathcal{L}}_{\mathbf{\theta},\mathbf{\phi}}(\mathbf{x})=\log\frac{p_{\mathbf{\theta }}(\mathbf{x}|\mathbf{z})p(\mathbf{z})}{q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x})}, \text{ where }\mathbf{z}\gets q_{\mathbf{\phi}}(\cdot|\mathbf{x}). \tag{10}\]
The notation \(\mathbf{z}\gets q_{\mathbf{\phi}}(\cdot|\mathbf{x})\) means that \(\mathbf{z}\) is sampled from the approximate posterior distribution \(q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x})\). If the process used to generate \(\mathbf{z}\) from \(q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{x})\) is differentiable with respect to \(\mathbf{\phi}\), the function \(\hat{\mathcal{L}}\) can be differentiated with respect to \(\mathbf{\theta}\) and \(\mathbf{\phi}\) by using a stochastic gradient decent estimator. Once \(\mathcal{L}\) in (9) is optimized, we can approximate the true probability density function \(p(\mathbf{x})\) through the learned neural network with parameters \(\mathbf{\theta}\), i.e., \(p_{\mathbf{\theta}}(\mathbf{x})\approx p(\mathbf{x})\). In other words, we can generate the new data samples from the learned probability density function.
### _Generative Model-based Compressive Sensing_
In the context of compressive sensing, the data sample \(\mathbf{x}\) from the training set is, however, not fully observable, i.e., our model can only observe the noisy compressed or down-sampled version \(\mathbf{\hat{y}}\). Replacing \(\mathbf{x}\) with \(\mathbf{y}=\mathbf{A}\mathbf{x}\), the unbiased Monte Carlo estimation of the ELBO in (10) is rewritten as:
\[\hat{\mathcal{L}}_{\mathbf{\theta},\mathbf{\phi}}(\mathbf{y})=\log\frac{p_{\mathbf{\theta }}(\mathbf{A}\mathbf{x}|\mathbf{z})p(\mathbf{z})}{q_{\mathbf{\phi}}(\mathbf{z}| \mathbf{A}\mathbf{x})},\text{ where }\mathbf{z}\gets q_{\mathbf{\phi}}(\cdot| \mathbf{y}=\mathbf{\hat{y}}). \tag{11}\]
As observed from the above equation, the generative model cannot directly generate the data sample \(\mathbf{x}\) from the compressed observation \(\mathbf{\hat{y}}\) without prior knowledge of the measurement matrix \(\mathbf{A}\). In other words, the measurement matrix \(\mathbf{A}\) is assumed to be known by the generative model [21, 23]. Recall that the generative model is deployed at the receiver; the assumption of sharing prior information, e.g., a codebook, between the transmitter and the receiver is commonly used in source and channel coding methods [26, 27]. Given the setting above, the solution of (5) is equivalent to the output of the generator \(G(\mathbf{z})\) of the problem \(\mathcal{P}_{2}\), for some \(\upsilon\geq 0\), as follows [21]:
\[\mathcal{P}_{2}: \quad\min_{\mathbf{z}}\|\mathbf{A}G(\mathbf{z})-\mathbf{\hat{y} }\|_{2},\] (12a) subject to \[\quad\|G(\mathbf{z})\|_{1}\leq\upsilon. \tag{12b}\]
The generator \(G(\mathbf{z})\) is defined as a function \(G:\mathbb{R}^{k}\rightarrow\mathbb{R}^{n}\) mapping a latent vector \(\mathbf{z}\) to the mean of the conditional distribution \(p_{\mathbf{\theta}}(\mathbf{x}|\mathbf{z})\). Given the observation \(\mathbf{\hat{y}}\) as the input of the model, the latent vector \(\mathbf{z}\) is obtained by sampling from the posterior distribution \(q_{\mathbf{\phi}}(\cdot|\mathbf{y})\) in (11). After that, the generator \(G(\mathbf{z})\) produces the output vector \(\mathbf{\hat{x}}\) from this latent vector \(\mathbf{z}\), i.e., \(G(\mathbf{z})=\mathbf{\hat{x}}\). As a result, the minimizer \(\mathbf{z}^{*}\) of the optimization problem of \(\mathcal{P}_{2}\) in (12) makes \(G(\mathbf{z}^{*})\approx\mathbf{x}^{*}\)[23].
In comparison with (5), the variable vector \(\mathbf{x}\) is now replaced by the generative function \(G(\mathbf{z})\) in (12). As a result, when one can optimize the objective \(\mathcal{P}_{2}\) in (12), the generator \(G(\mathbf{z})\) can generate the new samples which are similar to the original vector \(\mathbf{x}^{*}\). Recall that under the compressive sensing setting, our generative model does not observe the full observation of the signals as in the conventional setting of generative modeling, i.e., learning directly \(p_{\mathbf{\theta}}(\mathbf{x})\approx p(\mathbf{x})\). In the compressive sensing setting, the generative model can only observe the noisy and compressed signal \(\mathbf{\hat{y}}\). Therefore, the optimization objective defined in (12) is to indirectly optimize the generative model via the observation \(\mathbf{\hat{y}}\), given the measurement matrix \(\mathbf{A}\). Thereafter, the space of signals that can be recovered with the generative model is given by the range of the generator function, i.e.,
\[S_{G}=\{G(\mathbf{z}):\mathbf{z}\in\mathbb{R}^{k}\}. \tag{13}\]
As the range of the signals is now transformed into the latent space \(\mathbf{z}\in\mathbb{R}^{k}\), the RIP and REC properties of the measurement matrix \(\mathbf{A}\) no longer guarantee the accuracy of the recovered signals. With the generative model-based compressive sensing, the measurement matrix \(\mathbf{A}\) is required to satisfy a Set-Restricted Eigenvalue Condition (S-REC), which is a generalized version of REC [23], i.e.,
**Definition 4** (Set-Restricted Eigenvalue Condition).: _Let \(S\subseteq\mathbb{R}^{n}\), for some parameters \(\gamma>0\) and \(\kappa\geq 0\), a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is said to satisfy the S-REC\((S,\gamma,\kappa)\) if \(\forall\mathbf{x}_{1},\mathbf{x}_{2}\in S\),_
\[\|\mathbf{A}(\mathbf{x_{1}}-\mathbf{x_{2}})\|_{2}\geq\gamma\|\mathbf{x_{1}}- \mathbf{x_{2}}\|_{2}-\kappa.\]
Intuitively, the S-REC property generalizes the REC property to an arbitrary set of vectors \(S\) instead of considering the set of approximately sparse vectors \(S_{k}\)[21]. This generaliza
tion makes S-REC a nice property for solving a compressive sensing problem with a stochastic gradient estimator via deep neural networks.
## III Problem Formulation and Proposed Learning Algorithm
In this section, we utilize the generative model-based compressive sensing framework for our system model in Fig. 1. The presence of the communication channel between the transmitter and the receiver makes the reconstruction of IMU signals with generative model-based compressive sensing much more challenging. In particular, using the measurement matrices in [23] and [21] cannot guarantee the power constraint at the transmitter. Conventional normalization techniques like \(l_{2}\) normalization [28] are not applicable as they yield a nonlinear projection from \(\mathbf{x}\) to \(\mathbf{y}\), thus preventing the receiver from recovering the original signal. For this, we propose a new measurement matrix that (i) ensures the power constraint for the transmitter and (ii) satisfies the S-REC property of generative model-based compressive sensing. The learning algorithm with the newly designed measurement matrix is described in the following.
The proposed learning process, which we refer to as "CS-VAE" (Compressive Sensing-based Variational Auto-Encoder), is illustrated in Fig. 3. At the transmitter, we have the vector \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) and a measurement matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\). The output of the measurement matrix is the signal \(\mathbf{y}\in\mathbb{R}^{m}\) in which \(m<n\). The signal \(\mathbf{y}=\mathbf{A}\mathbf{x}^{*}\) is subjected to the power constraint at the transmitter, i.e., \(\frac{1}{m}\|\mathbf{y}\|_{2}^{2}\leq P_{T}\), where \(P_{T}\) is the transmission power constraint on a single channel use [27, 29]. Details of the power constraint for our framework are further discussed in Appendix A. In particular, the optimization problem of the proposed learning model is similar to (12) with an additional power constraint as follows:
\[\mathcal{P}_{3}: \min_{\mathbf{z}}\|\mathbf{A}G(\mathbf{z})-\mathbf{\hat{y}}\|_{2},\] (14a) subject to \[\frac{1}{m}\|\mathbf{y}\|_{2}^{2}\leq P_{T}, \tag{14b}\] \[\|G(\mathbf{z})\|_{1}\leq\upsilon. \tag{14c}\]
The power constraint in (14b) poses additional challenges in designing the measurement matrix \(\mathbf{A}\) to ensure that the recovered signal is unique and similar to the original signal. Specifically, this is a very challenging quadratically constrained problem [20], and designing a measurement matrix that satisfies the duo-constraint, i.e., S-REC property and power constraint, has not been investigated in the literature. Existing generative model-based compressive sensing approaches [21, 23, 30, 31] cannot be directly applied to this problem. To address this duo-constraint optimization problem \(\mathcal{P}_{3}\) in (14), we first design a new measurement matrix \(\mathbf{A}\) in Proposition 1 that makes \(\mathbf{y}=\mathbf{A}\mathbf{x}\) satisfy the power constraint \(\frac{1}{m}\|\mathbf{y}\|_{2}^{2}\leq P_{T}\). After the power constraint is eliminated, we use the Lagrangian of \(\mathcal{P}_{3}\) as a loss function to train the generative model in a similar manner as in [23].
The proposed measurement matrix for the problem \(\mathcal{P}_{3}\) is stated as follows.
**Proposition 1** (S-REC with power constraint).: _The recovered signal obtained by the generative model-based compressive sensing method under the power constraint is guaranteed to be a unique solution if_
* \(\mathbf{A}\) _satisfies S-REC property, and_
* _Each element_ \(A_{ij}\) _(element_ \(j\)_-th of the_ \(i\)_-th row) of_ \(\mathbf{A}\) _is drawn i.i.d from a Gaussian distribution with zero mean and variance_ \(\sigma_{a}^{2}=\frac{P_{T}}{n^{2}d^{2}(d\sigma_{x}+\mu_{x})^{2}}\)_, i.e.,_ \[A_{ij}\sim\mathcal{N}\Big{(}0,\frac{P_{T}}{n^{2}d^{2}(d\sigma_{x}+\mu_{x})^{2}} \Big{)},\]
_where \(\sigma_{x}^{2}\) and \(\mu_{x}\) are the statistical variance and mean of the source signals \(\mathbf{x}\in\mathbb{R}^{n}\), respectively, and \(d>0\) is a real number derived from the Chebyshev's inequality._
The proof of Proposition 1 can be found in Appendix A.
**Remark**.: _The normal distribution used to generate the random matrix \(\mathbf{A}\) in Proposition 1 contains the mean and variance values of the source signals \(\mathbf{x}\). This assumption of knowing the statistical variance and mean of the source signals is common in source-channel coding schemes [26]. For example, in the case of an i.i.d Gaussian source with power constraint \(P\), these values are \(\sigma_{x}^{2}\approx P\) and \(\mu_{x}=0\) [26, Section 9.1]. Viewing compressive sensing as a source-channel coding scheme, \(\mathbf{A}\) can be considered as an encoding function [27]. In addition, as training deep learning models usually requires access to the training set for pre-processing and learning, the assumption of knowing the mean and variance values of the signals is more reasonable and practical than using the i.i.d Gaussian source. Another parameter in Proposition 1 is \(d>0\), a real number that restricts the random variable \(x_{j}\) (the \(j\)-th element of the source signal \(\mathbf{x}\)) to the interval \([\mu_{x}-d\sigma_{x},\mu_{x}+d\sigma_{x}]\) with probability at least \(1-\frac{1}{d^{2}}\). Details of the parameters are discussed in Appendix A._
Fig. 3: The proposed CS-VAE learning algorithm with a novel measurement matrix at the transmitter and the generative model, i.e., a VAE, at the receiver. The transmitted signal at the transmitter is the \(m\)-dimensional vector \(\mathbf{y}\), which is a compressed version of the original \(n\)-dimensional vector \(\mathbf{x}^{*}\). At the receiver, the VAE recovers the original signal, i.e., \(\hat{\mathbf{x}}\approx\mathbf{x}^{*}\), from a noisy and compressed measurement \(\mathbf{\hat{y}}\).
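A minimal NumPy sketch of drawing \(\mathbf{A}\) according to Proposition 1 is given below; the values of \(P_{T}\), \(d\), and the signal statistics \((\mu_{x},\sigma_{x})\) are illustrative placeholders rather than the settings used in our experiments.
```python
import numpy as np

def measurement_matrix(m, n, P_T, mu_x, sigma_x, d=3.0, seed=0):
    """Draw A with i.i.d. zero-mean Gaussian entries whose variance follows
    Proposition 1, so that y = A x satisfies (1/m)||y||_2^2 <= P_T with
    high probability when x stays within mu_x +/- d*sigma_x."""
    var = P_T / (n ** 2 * d ** 2 * (d * sigma_x + mu_x) ** 2)
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(var), size=(m, n))

# Example: (mu_x, sigma_x) would be estimated from the training set in practice.
A = measurement_matrix(m=100, n=204, P_T=1.0, mu_x=0.0, sigma_x=0.5)
print(A.shape)
```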
As the measurement matrix \(\mathbf{A}\) is designed based on the Proposition 1, the power constraint in (14) is guaranteed and thus can be reduced. As a result, the optimization problem in (14) is equivalent to solving the following problem:
\[\mathcal{P}_{4}:\min_{\mathbf{z}}\|\mathbf{A}G(\mathbf{z})-\mathbf{\hat{y}} \|_{2}^{2}+\lambda\|G(\mathbf{z})\|_{1}, \tag{15}\]
where \(\lambda\) is the Lagrange multiplier. As \(\mathbf{z}\) is differentiable with respect to the generative model's parameters (e.g., using reparameterization trick [25]), one can use the loss function based on (15) to train the generative model [21]. Once the generative model is trained to obtain the solution for (14), denoted by \(\mathbf{z}^{*}\), the reconstruction error can be bounded with probability \(1-e^{-\Omega(m)}\) by [23]:
\[\|G(\mathbf{z}^{*})-\mathbf{x}^{*}\|_{2}\leq 6\min_{\mathbf{z}\in\mathbb{R}^{ 4}}\|G(\mathbf{z})-\mathbf{x}^{*}\|_{2}+3\|\boldsymbol{\eta}\|_{2}+2\epsilon, \tag{16}\]
where \(\epsilon\) is an additive error term caused by the use of gradient decent-based optimizers.
The pseudo code of our training algorithm is given in Algorithm 1 and can be described as follows. We first initialize the measurement matrix \(\mathbf{A}\) following Proposition 1, together with random parameters for the inference network, i.e., the encoder \(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{y})\), and the generative model, i.e., the decoder \(p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})\) (lines 1-3 of Algorithm 1). In each training iteration, a batch of samples \(\mathbf{x}^{*}\) is sampled i.i.d from the training set (line 5). The input of the VAE's encoder is \(\mathbf{\hat{y}}\), obtained by using (2) (line 6). The latent vector \(\mathbf{z}\) is obtained in line 7, and the output of the VAE's decoder is \(\mathbf{\hat{x}}\) in line 8. After that, the training loss \(L(\mathbf{z})\) of the VAE is computed as in line 9, and the problem \(\mathcal{P}_{4}\) is optimized using the Adam optimization solver. After training, the pre-trained VAE can reconstruct the signal \(\mathbf{\hat{x}}=G(\mathbf{z}^{*})\) (\(\mathbf{z}^{*}\) is fixed during testing with the test set) with reconstruction error bounded by (16) (line 14).
```
1   Input: Initialize measurement matrix \(\mathbf{A}\) that satisfies Proposition 1.
2   Initialize encoder of the VAE with inference model \(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{y})\).
3   Initialize decoder of the VAE with generative model \(p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})\).
4   for t = 0, 1, 2, ... do
5       Sample \(\mathbf{x}^{*}\) from the training set.
6       Obtain \(\mathbf{\hat{y}}\) from (2).
7       Obtain latent vector \(\mathbf{z}\gets q_{\boldsymbol{\phi}}(\cdot|\mathbf{y})\), with \(\mathbf{y}=\mathbf{\hat{y}}\).
8       Obtain \(\mathbf{\hat{x}}\gets p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})\) at the output of the generator \(G(\mathbf{z})\).
9       Compute the loss based on (15), i.e.,
            \(L(\mathbf{z})=\|\mathbf{A}G(\mathbf{z})-\mathbf{\hat{y}}\|_{2}^{2}+\lambda\|G(\mathbf{z})\|_{1}\),   (17)
10      Update the neural network parameters by using backpropagation with the loss \(L(\mathbf{z})\).
11  end for
12  Output: \(\mathbf{z}\rightarrow\mathbf{z}^{*}\).
13  Reconstructed signal: \(\mathbf{\hat{x}}=G(\mathbf{z}^{*})\).
14  Reconstruction error: \(\|G(\mathbf{z}^{*})-\mathbf{x}^{*}\|_{2}\) bounded by (16).
```
**Algorithm 1** CS-VAE: Training the VAE to reconstruct signals from noisy compressed measurements
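For concreteness, a minimal PyTorch sketch of the loss in line 9 / Eq. (17) is shown below; it assumes batched tensors, averages over the batch, and omits the optimizer step and the optional KL regularizer from (9) that standard VAE training would add.
```python
import torch

def cs_vae_loss(A, G_z, y_hat, lam=1e-3):
    """Loss (17): measurement-consistency term plus an l1 penalty on the
    generated signal. Shapes: A is (m, n), G_z is (batch, n), y_hat is (batch, m)."""
    residual = G_z @ A.T - y_hat                  # A G(z) - y_hat for each sample
    fidelity = (residual ** 2).sum(dim=1).mean()  # ||A G(z) - y_hat||_2^2
    sparsity = G_z.abs().sum(dim=1).mean()        # ||G(z)||_1
    return fidelity + lam * sparsity

# Toy usage with random tensors matching the paper's dimensions (n = 204).
A = torch.randn(100, 204) / 100
loss = cs_vae_loss(A, torch.randn(60, 204), torch.randn(60, 100))
print(loss.item())
```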
## IV Performance Evaluation
### _Dataset and Simulation Settings_
#### IV-A1 Dataset and VAE's Parameters
We use the IMU data from the DIP-IMU dataset [13], designed specifically for capturing 3D body human motion with calibrated IMU sensors. The dataset contains acceleration and orientation information of 17 IMU sensors placed on the participants. The entire dataset consists of 64 data sequences of 10 participants, equivalent to 330,178 frames of motion under various activities. The frames are recorded at the rate of 60 frames per second. The output of the \(j\)-th IMU is a combination of orientation information, denoted by \(\mathbf{o}_{t}^{(j)}\in\mathbb{R}^{9}\), and acceleration information, denoted by \(\mathbf{a}_{t}^{(j)}\in\mathbb{R}^{3}\). One frame in the dataset at time step \(t\) is denoted by \(\mathbf{x}_{t}=\left[\mathbf{o}_{t}^{(1)},\mathbf{a}_{t}^{(1)},\mathbf{o}_{t} ^{(2)},\mathbf{a}_{t}^{(2)},\ldots,\mathbf{o}_{t}^{(17)},\mathbf{a}_{t}^{(17)}\right]\). As a result, one frame \(\mathbf{x}_{t}\) in the dataset has \(17\times 9+17\times 3=204\) features. We use the data sequences collected from 8 participants as the training set, and the data sequences collected from the other 2 participants as the test set. After removing all the data samples that have missing features (i.e., NaN values), the training set and test set contain 220,076 and 56,990 data samples, respectively. To stabilize the training process, we normalize the data \(\mathbf{x}_{t}\) in the training and test sets within the range \((-1,1)\).
By following the training process illustrated in Fig. 3, the desired signal \(\mathbf{x}^{*}\) is represented as \(\mathbf{x}_{t}\in\mathbb{R}^{204}\). Hereafter, we remove the time step \(t\) notation for the sake of simplicity. The signal \(\mathbf{x}^{*}\) is then multiplied with the measurement matrix \(\mathbf{A}\) to get the signal \(\mathbf{y}\) with fewer features, i.e., \(m<204\). The signal \(\mathbf{y}\) is then passed through a simulated channel with Gaussian noise \(\boldsymbol{\eta}\in\mathbb{R}^{m}\). The noise vector \(\boldsymbol{\eta}\) has \(m\) elements, and each element follows a Gaussian distribution \(\mathcal{N}(0,\sigma_{N}^{2})\).
The noisy signal \(\mathbf{\hat{y}}\in\mathbb{R}^{m}\) is then used as the input of the VAE. The reconstructed signal at the output of the VAE is \(\mathbf{\hat{x}}\in\mathbb{R}^{204}\). For this, we design the network architecture of the VAE as follows. The encoder is a fully connected network with an input layer having \(m\) neurons, one hidden layer having 64 neurons, and one latent layer having 10 neurons. The decoder is a fully connected network that has two hidden layers, each of which has 64 neurons. Finally, the output layer has 204 neurons. The activation function used for the hidden layers is ReLU, and the activation function used for the output layer is Tanh. We train the model for 50 epochs with a batch size of 60. We then use the trained model to evaluate the performance on the test set with the same batch size. All the parameter settings are described in Table I.
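A possible PyTorch realisation of this architecture is sketched below. The layer sizes follow the description above, while the split of the encoder output into mean and log-variance heads and other details are standard VAE assumptions rather than specifics stated here.

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Inference model q_phi(z | y): m -> 64 -> (mu, log_var), with a 10-dimensional latent space."""
    def __init__(self, m, latent_dim=10):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(m, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.log_var = nn.Linear(64, latent_dim)

    def forward(self, y):
        h = self.hidden(y)
        return self.mu(h), self.log_var(h)

class Decoder(nn.Module):
    """Generative model p_theta(x | z): 10 -> 64 -> 64 -> 204, Tanh output matching the (-1, 1) data range."""
    def __init__(self, latent_dim=10, n=204):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)
```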
#### IV-A2 3D Avatar Model
Based on the reconstructed signals from the proposed CS-VAE model, we further transform
the signals into a 3D human avatar. For this, we use the non-commercial Skinned Multi-Person Linear Model (SMPL) model in [19]. SMPL is a parametrized model of a 3D human body template that takes 72 pose parameters and 10 shape parameters, denoted by \(\mathbf{p}\in\mathbb{R}^{72}\) and \(\mathbf{s}\in\mathbb{R}^{10}\), respectively, and returns a mesh with 6,890 vertices in a 3D space. By adjusting the pose and shape parameters, i.e., \(\mathbf{p}\) and \(\mathbf{s}\), we can animate the 3D avatars that mimic the physical shapes and movements of human users. Further details of the SMPL model can be found in [19].
To transform the IMU signals \(\mathbf{\hat{x}}\in\mathbb{R}^{204}\) into a pose parameter \(\mathbf{p}\in\mathbb{R}^{72}\), we use another VAE to learn the mapping function \(F(\mathbf{\hat{x}}):\mathbb{R}^{204}\rightarrow\mathbb{R}^{72}\). This mapping function helps us to transform IMU signals into the input of the SMPL model [13]. For example, the changes of acceleration and orientation of the IMU data (i.e., \(\mathbf{\hat{x}}\) varies) from the left wrist and left elbow of the user will make the 3D avatar move its left arm (i.e., \(\mathbf{p}\) varies). In other words, given the reconstructed signals, we can create a human avatar in a virtual 3D space with a specific pose. Note that we keep the shape parameters \(\mathbf{s}\) of the SMPL model as constant numbers for the sake of simplicity as body shape modeling is not our focus. In the following, we first present the performance evaluation of the signal reconstruction with our proposed framework. Secondly, we show that our reconstructed signals are more robust in creating 3D human avatar poses. Lastly, we show a simple but effective method to make smooth animated motions for the avatars without using input signals, thanks to the ability of the generative model.
#### IV-A3 Baseline Approaches
We evaluate the performance of our proposed framework, denoted by CS-VAE, and other baseline approaches. The considered baseline approaches are (i) Lasso (Least absolute shrinkage and selection operator) [22] and (ii) DIP (Deep Inertial Poser) [13].
**Lasso** is a widely adopted algorithm for solving \(l_{1}\) penalized optimization problem in compressive sensing [17, 20]. It is a regression analysis technique that incorporates an \(l_{1}\) penalty term into the optimization problem. In particular, the solution of Lasso is obtained by solving the optimization problem in (5), which is equivalent to solving the Lagrangian in (6). As observed from equation (6), Lasso solves an optimization problem that involves the \(l_{1}\) penalty term \(\lambda\|\mathbf{x}\|_{1}\), which is similar to the term \(\lambda\|G(\mathbf{z})\|_{1}\) in the loss function \(L(\mathbf{z})\) in the proposed Algorithm 1. For a fair comparison, we use the same value for the \(l_{1}\) penalty \(\lambda=10^{-5}\) for both Lasso and CS-VAE approaches.
Note that by using the measurement matrix \(\mathbf{A}\) that follows Proposition 1, the setting of Lasso in (6) is now constrained by the transmit power in equation (14b). In our later experiments, we empirically show that the power constraint in (14b) makes the optimization problem much more challenging, resulting in Lasso's failures in reconstructing the original signals. For a more comprehensive evaluation, we also introduce a relaxed version of Lasso, denoted by the notation "Lasso w.o.\(P_{T}\)" (Lasso without power constraint \(P_{T}\)). In this relaxed version of Lasso, we remove the power constraint in (14b) and use Lasso to recover the original signals. This can be done by replacing \(\mathbf{A}\) in (6) with another unconstrained measurement matrix \(\mathbf{B}\) with Gaussian entries. We use a similar measurement matrix as in [21, 23] in which elements of \(\mathbf{B}\in\mathbb{R}^{m\times n}\) are \(B_{ij}\sim\mathcal{N}(0,1/m)\). Without the power constraint, "Lasso w.o.\(P_{T}\)" is expected to produce the optimal results as upper bounds for our evaluation. As shown later, our proposed CS-VAE approach achieves comparable results to the optimal solutions obtained by "Lasso w.o.\(P_{T}\)". Notably, these competitive results also come with decoding that is an order of magnitude faster than Lasso, where decoding time is the time to find the solution for the optimization problem given a batch of input data.
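As a reference, a minimal scikit-learn sketch of the relaxed "Lasso w.o. \(P_{T}\)" baseline is shown below. The random data, noise level and solver settings are illustrative assumptions, and scikit-learn's penalty scaling differs from (6) by a constant factor.

```python
import numpy as np
from sklearn.linear_model import Lasso

m, n, lam = 120, 204, 1e-5
rng = np.random.default_rng(0)

# Unconstrained Gaussian measurement matrix B with B_ij ~ N(0, 1/m), as in the baseline description.
B = rng.normal(0.0, np.sqrt(1.0 / m), size=(m, n))
x_true = rng.uniform(-1.0, 1.0, size=n)              # one normalised frame (illustrative)
y_hat = B @ x_true + rng.normal(0.0, 1e-3, size=m)   # noisy measurements

# scikit-learn's Lasso minimises (1/(2m))||y - Bx||_2^2 + alpha*||x||_1 over x,
# so alpha plays the role of the l1 penalty lambda only up to a constant factor.
solver = Lasso(alpha=lam, max_iter=10000)
solver.fit(B, y_hat)
x_rec = solver.coef_
mse = np.mean((x_rec - x_true) ** 2)
print(f"MSE: {mse:.4f}")
```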
**DIP** is a deep learning approach for reconstructing body pose from a fixed set of IMU sensors. The main idea of DIP and other approaches in this line of work, e.g., [13, 14], is to reconstruct the full body pose, e.g., using SMPL model, from measurements of IMU sensors placed on the important joints of the human body, i.e., head, spine, two wrists, and two knees. The common approaches of such frameworks are using recurrent neural networks to access the entire training set during the training time and using a shorter time window at the test time. The number of IMU sensors can be reduced to 3 sensors, e.g., sensors placed on the head and two wrists, but such an approach needs an extensive motion database and physics-based simulation engine [15].
In comparison with our proposed CS-VAE framework, the aforementioned works in [13, 14, 15] can be viewed as an underdetermined system in which the deep learning models try to recover the full body pose given signals from a few IMU sensors. As such, instead of using a matrix multiplication for downsampling the data, we can emulate the training process of DIP by manually selecting a subset of the 17 IMU sensors in the dataset. Note that in the following results, we use the same VAE's architecture and training loss in Fig. 3 for the DIP baseline approach, rather than using the recurrent neural network in [13]. The main reason is that the recurrent neural network needs to see the entire training set during the training process, which would give it an unfair advantage and make the comparison with CS-VAE and Lasso inappropriate. With
\begin{table}
\begin{tabular}{c|l|l}
\hline
**Notation** & **IMU Parameters** & **Values** \\ \hline
\(n\) & Dimension of signals & \(204\) features \\ \hline
\(m\) & Number of linear measurements & \([48:192]\) features \\ \hline
 & Frames per second & 60 fps [32] \\ \hline
\(P_{T}\) & Transmit power & \(0.1\) Watt [32, 33] \\ \hline
\(\sigma_{N}\) & Standard deviation of noise & \([1:500]\times 10^{-4}\) \\ \hline
\(d\) & Parameter of \(\mathbf{A}\) in Proposition 1 & 2 \\ \hline
 & **Algorithmic Parameters** & **Values** \\ \hline
 & Learning rate (Adam optimizer) & \(10^{-4}\) \\ \hline
 & KL divergence weight & \(10^{-5}\) \\ \hline
\(\lambda\) & \(l_{1}\) penalty & \(10^{-5}\) \\ \hline
 & Number of training epochs & 50 \\ \hline
\end{tabular}
\end{table} TABLE I: Parameter settings.
similar architecture and training with batches of signals, it is a more reasonable comparison for DIP against CS-VAE and Lasso. The details of simulation parameters are described in Table I.
### _Simulation Results_
#### IV-B1 Impacts of the number of measurements
We evaluate the performance of the proposed framework when the number of measurements \(m\) increases from 48 measurements to 192 measurements, which are approximately \(23\%\) and \(94\%\) of the total number of the IMU orientation and acceleration features, respectively. We select the number of measurements in Fig. 4 to be divisible by 12. The only reason for this selection is that it is easier for the DIP baseline framework as the DIP framework needs to work with the set of IMU sensors in which each IMU has 12 features of measurements (9 features for the orientation and 3 features for the acceleration). Unlike DIP, the CS-VAE and Lasso approaches are flexible with any arbitrary number of measurements. The error bars in Figs. 4, 5, and 6 are equivalent to half of the standard deviations from the mean values.
As observed from Fig. 4, the Mean Square Error (MSE) values of most approaches decrease when the number of measurements increases, except for Lasso as it fails to recover the signals. This observation about Lasso shows that the power constraint makes the reconstruction problem much more difficult. When we remove the power constraint, the relaxed optimization problem can be effectively solved with Lasso, as illustrated by the Lasso w.o.\(P_{T}\) baseline. Recall that we use Lasso w.o.\(P_{T}\) as an upper bound for comparison. The results show that our CS-VAE approach achieves the best performance among the constrained methods, i.e., the lowest MSE values, and is closest to the performance of the upper-bound solution, i.e., Lasso w.o.\(P_{T}\).
The reason for the inferior performance of DIP to CS-VAE can be explained as follows. The linear projection of DIP from 204 measurements into a lower number of measurements only preserves the completeness of orientation and acceleration features. For example, the linear projection of DIP with \(m=120\) measurements makes it equivalent to the data from 10 IMU sensors in which each IMU preserves the full 12 features of orientation and acceleration. With compressive sensing technique in other approaches, the completeness of such 12 features no longer holds as the linear projection is performed by the measurement matrix \(\mathbf{A}\), which may preserve the features with the sparse values rather than keeping the full orientation and acceleration values of certain IMU sensors. We observe that our newly proposed design of the measurement matrix \(\mathbf{A}\) is the key factor making the CS-VAE approach work well with noisy and sparse signals, given a simple neural network's architecture of VAE.
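To make this contrast concrete, the sketch below builds a DIP-style selection matrix that keeps whole 12-feature IMU blocks next to a dense compressive-sensing projection that mixes all features. The chosen sensor subset and the unit-variance entries of the random matrix are illustrative assumptions (Proposition 1 fixes the actual variance of \(\mathbf{A}\)).

```python
import numpy as np

n_sensors, feats = 17, 12              # 17 IMUs x 12 features = 204 = n
n = n_sensors * feats
rng = np.random.default_rng(0)

# DIP-style down-sampling: keep all 12 features of a chosen subset of sensors (here 10 of 17, so m = 120).
kept_sensors = np.arange(10)           # hypothetical subset of IMU indices
rows = np.concatenate([np.arange(s * feats, (s + 1) * feats) for s in kept_sensors])
S = np.eye(n)[rows]                    # selection matrix, shape (120, 204)

# CS-style down-sampling: a dense random matrix mixes all 204 features into m = 120 measurements,
# so no single IMU keeps its complete orientation/acceleration block.
A = rng.normal(0.0, 1.0, size=(len(rows), n))

x = rng.uniform(-1.0, 1.0, size=n)     # one normalised frame (illustrative)
y_dip, y_cs = S @ x, A @ x             # both are 120-dimensional measurement vectors
```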
#### IV-B2 Impacts of the channel noise
Next, we evaluate the performance of the approaches under different channel noise power values. As we consider the Gaussian channel, the noise's power is equivalent to the variance of the Gaussian noise, i.e., \(\sigma_{N}^{2}\) in (2) [26]. We fix the power \(P_{T}\) and increase the standard deviation of the noise from \(10^{-4}\) to \(500\times 10^{-4}\) to obtain the results in Fig. 5. As observed from Fig. 5, the proposed CS-VAE approach achieves better performance under most of the considered scenarios, compared to the Lasso and DIP approaches. Similar to the observation in the previous setting, Lasso fails to reconstruct the signals regardless of the noise level. The relaxed version of Lasso, i.e., Lasso w.o.\(P_{T}\) achieves the highest performance as it is not constrained by the power transmission.
We observe that under a high noise power value, i.e., \(\sigma_{N}=500\times 10^{-4}\), CS-VAE performs worse than the DIP approach. The reason is that the design of matrix \(\mathbf{A}\) can bound the signal \(\mathbf{y}=\mathbf{A}\mathbf{x}\) with \(\frac{1}{m}\|\mathbf{y}\|_{2}^{2}\leq P_{T}\), but this also restricts the signal \(\mathbf{y}\) to a suboptimal power region. With DIP, we adopt the power normalization strategy in [28] in which the value of \(\mathbf{y}\) is normalized by its \(l_{2}\) norm \(\|\mathbf{y}\|_{2}\). As the results suggest, this power normalization can be effective at high noise levels but it becomes less robust at low noise values. This is also the reason we design a new measurement matrix \(\mathbf{A}\) rather than following this power normalization scheme, as it yields a nonlinear projection from \(\mathbf{x}\) to \(\mathbf{y}\), which is in contrast to the linear projection idea in compressive sensing. Nevertheless, CS-VAE achieves overall more robust and better performance than Lasso and DIP, and its results are closest to Lasso w.o.\(P_{T}\).
#### IV-B3 Decoding latency with respect to the number of input samples
Next, we investigate the decoding time at the receiver under different sizes of the input samples in Fig. 6. The
Fig. 4: Mean square error of reconstructed signals when the number of measurements \(m\) increases.
Fig. 5: Mean square error of reconstructed signals when channel noise power increases.
number of input samples \(\mathbf{\hat{y}}\) fed into the VAE is equivalent to the number of measurements \(m\) multiplied by the batch size. Note that in Fig. 6, we use the approximate values for the number of input samples in the x-axis for ease of illustration. The exact values are calculated by \(m\times b\), where \(m\) is the number of measurements and \(b\) is the batch size. For example, with \(10\times 10^{3}\) input samples in Fig. 6, we use \(m=168\) and \(b=60\) to produce \(10,080\) input samples. These \(10,080\) input samples are actually \(14\times 12\times 60\) samples, where \(14\) is the number of IMU sensors, \(12\) is the number of features generated by each IMU sensor per frame, and \(60\) is the number of frames per second. In other words, a sequence of \(10\times 10^{3}\) input samples is equivalent to data generated by the IMU sensors in one second. Similarly, a sequence of \(70\times 10^{3}\) input samples is equivalent to data generated by the IMU sensors in 7 seconds.
The transmission of a batch of data samples is equivalent to the case when we collect a batch of samples after a period of time and then transmit them to the receiver. For Lasso and Lasso w.o.\(P_{T}\), this batch of samples can be effectively used as the input of the optimization solver as the Lasso model can handle matrix-like input data. The decoding time is measured as the single forward operation from the input to the output of the pre-trained VAE models to obtain the reconstructed signals. With Lasso, the decoding time is measured as the time to find the solution for the optimization problem. We use the same central processing unit (CPU) to compute the decoding time for all approaches.
As observed from Fig. 6, the decoding time values of the CS-VAE and DIP approaches are significantly lower than that of the Lasso approaches. The main reason is that the search space of Lasso's solver increases as the number of input samples increases. In contrast, the decoding time values of CS-VAE and DIP slowly increase with the number of input samples because the pre-trained VAE models only need to forward the received signals through the deep neural network layers to obtain the reconstructed signals. Notably, the cost of finding high accuracy for reconstructed signals of Lasso w.o.\(P_{T}\) makes it significantly slower than the deep learning approaches.
It is worth mentioning that the CS-VAE and DIP approaches need to be pre-trained before being applied to get the results in Fig. 6 while the Lasso approach does not have this pre-training process. However, the pre-training process can be greatly facilitated through modern graphics processing unit (GPU) training. The observation in this experiment also suggests the real-time decoding capability of the CS-VAE approach with lower decoding latency. For example, with a number of input samples of \(10^{4}\), which is equivalent to a sequence data of one second, the decoding time of the CS-VAE approach is approximately \(8\times 10^{-2}\) second. We observe that although sharing the same network architecture with the CS-VAE approach, the DIP approach experiences slightly higher decoding time. The main reason is that the implementation of the linear projection matrix of DIP requires a few extra matrix transformation steps and de-transformation steps. However, this difference in the decoding time is not significant.
#### IV-B4 3D pose estimation from reconstructed signals
In Fig. 7, we draw random poses from the reconstructed signals in the test set. The poses are generated from the SMPL model and the pre-trained VAE model (modeled as a function \(F(\mathbf{\hat{x}}):\mathbb{R}^{204}\rightarrow\mathbb{R}^{72}\)) as described in Section IV-A2. In this experiment, we reconstruct the poses from 168 measurements (i.e., 82% of the total signals) and the channel noise power \(\sigma_{N}=10\times 10^{-4}\). As observed from Fig. 7, our reconstructed poses are more accurate compared with the Lasso and DIP approaches. As Lasso w.o.\(P_{T}\) is considered as the upper bound of all the signal reconstruction scenarios, Fig. 7 clearly shows that the reconstructed 3D poses by Lasso w.o.\(P_{T}\) (green poses in the fourth column) achieve the most similarity to the ground truth reference poses (red poses in the first column). The 3D poses obtained by CS-VAE can accurately mimic the body poses of the references with slight errors in arm movements, e.g., the pose in the second row. With Lasso, as suggested by the poor signal reconstruction performance, the 3D poses obtained by Lasso fail to mimic the reference poses. In our experiment, the received signals are down-sampled and contain additive noise and the models do not have access to the full
Fig. 6: Decoding time at the receiver when the number of input samples increases.
Fig. 7: Reconstructed 3D poses from noisy and compressed \(m=168\) measurements and noise \(\sigma_{N}=10\times 10^{-4}\).
training set as in [13], resulting in poor performance of the DIP approach.
#### IV-B5 Pose interpolation without input signals
We further use the pre-trained latent space and the decoder of the VAE to generate novel synthesized poses to animate the avatar. The ability to generate synthetic data is one of the most important features of generative models, which has not been well demonstrated in the wireless systems literature. In particular, we consider a simple pose interpolation task as follows. Given two key poses, illustrated by the left pose and right pose in red color in Fig. 8, we aim to create a smooth transition between these two key poses by generating the intermediate light gray colored poses. Creating these intermediate poses is important because the IMU signals might be lost during transmission over severely lossy wireless channels. In such a case, if the receiver cannot fill the missing poses or the transmitter does not retransmit the data, the user may experience motion sickness in the virtual 3D environment, thus decreasing the quality of experience.
Given the above setting, as illustrated in Fig. 8, we obtain the two key poses by reconstructing the IMU signals similar to the previous experiments. Let us denote by \(\mathbf{\hat{y}}_{1}\) and \(\mathbf{\hat{y}}_{2}\) the received signals corresponding to the two key poses, respectively. The VAE's encoder can map the signals into two corresponding vectors in the latent space, namely \(\mathbf{z}_{1}=q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{y}=\mathbf{\hat{y}}_{1})\) and \(\mathbf{z}_{2}=q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{y}=\mathbf{\hat{y}}_{2})\). As a result, the reconstructed signals corresponding to the two key poses are \(\mathbf{\hat{x}}_{1}=p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z}=\mathbf{z }_{1})\) and \(\mathbf{\hat{x}}_{2}=p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z}=\mathbf{z }_{2})\). To simply interpolate between the two latent vectors \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\), we use the intermediate latent vectors \(\mathbf{z}_{j}=\varrho\mathbf{z}_{2}+(1-\varrho)\mathbf{z}_{1}\), where \(\varrho\in[0,1]\) is the interpolation parameter. Intuitively, \(\varrho=1\) makes the intermediate latent vector \(\mathbf{z}_{j}=\mathbf{z}_{2}\), resulting in a pose similar to the key pose on the right of Fig. 8. Similarly, \(\varrho=0\) yields \(\mathbf{z}_{j}=\mathbf{z}_{1}\). An arbitrary value of \(\varrho\in[0,1]\) creates a latent vector that is a linear combination of the two vectors \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\). As a result, we can make a smooth transition of the avatar poses by simply increasing the value of \(\varrho\), as shown in Fig. 8. As described, there is no need for the input signals when interpolation takes place in the learned latent space \(\mathbf{z}\). Similar linear interpolation techniques have been explored to create synthesized data in various domains [34]. Our experiment shows the potential extension of the proposed framework to future VR/XR applications in conjunction with generative modeling where the synthesized data can be utilized.
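A minimal sketch of this interpolation with the pre-trained CS-VAE is given below; using the posterior mean as the latent code and the number of interpolation steps are assumptions made for illustration.

```python
import torch

def interpolate_poses(encoder, decoder, y_hat_1, y_hat_2, num_steps=8):
    """Generate intermediate reconstructions between two key poses by blending their latent codes."""
    with torch.no_grad():
        z1, _ = encoder(y_hat_1)   # latent code of the first key pose (posterior mean)
        z2, _ = encoder(y_hat_2)   # latent code of the second key pose
        poses = []
        for rho in torch.linspace(0.0, 1.0, num_steps):
            z = rho * z2 + (1.0 - rho) * z1        # z_j = rho * z_2 + (1 - rho) * z_1
            poses.append(decoder(z))               # x_hat_j, later mapped to SMPL pose parameters
    return poses
```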
## V Conclusion
In this paper, we have developed a novel framework for 3D human pose estimation from IMU sensors with generative model-based compressive sensing. The proposed framework helps the IMU sensors reduce the amount of information exchanged with the receiver, thus further enhancing channel utilization and encoding-decoding efficiency. At the receiver's side, we have employed a deep generative model, i.e., a VAE, that can recover the original signals from noisy compressed samples. With the ability of the generative model at the receiver, we have achieved decoding latency an order of magnitude lower than Lasso. We have further demonstrated that the proposed framework can learn a latent representation space and generate synthetic data samples, making it possible to fill in missing data features (e.g., due to lossy transmissions) without using input data from the IMU sensors. Interesting findings suggest that the proposed generative model-based compressive sensing framework can achieve state-of-the-art performance in challenging scenarios with severe noise and fewer measurements, compared with other optimization and deep learning approaches.
## Appendix A Proof of Proposition 1
The purpose of the Proposition 1 is to construct a measurement matrix that satisfies the S-REC property in Definition 4 and the power constraint in (14) so that the recovered signal of the generative model is unique and accurate with high probability. In the following, we prove that if each element \(A_{ij}\) (element \(j\)-th of the \(i\)-th row of matrix \(\mathbf{A}\)) follows a normal distribution \(A_{ij}\sim\mathcal{N}\Big{(}0,\frac{P_{T}}{n^{2}d^{2}(d\sigma_{x}+\mu_{x})^{2} }\Big{)}\), the matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) satisfies the S-REC property and power constraint. The first part of this section proves the guarantee of the power constraint. The second part proves the S-REC property of the measurement matrix.
_Proof of \(\mathbf{y}=\mathbf{A}\mathbf{x}\) guaranteeing the power constraint of a Gaussian channel_
First, we need to prove that the matrix \(\mathbf{A}\) satisfies the power constraint
\[\frac{1}{m}\|\mathbf{y}\|_{2}^{2}\leq P_{T}. \tag{18}\]
By using \(l_{2}\) norm definition in (4), the power constraint can be rewritten as [26, Equation 9.2, Chapter 9]:
\[\frac{1}{m}\sum_{i=1}^{m}|y_{i}|^{2}\leq P_{T}. \tag{19}\]
Fig. 8: Interpolation between key poses (red avatars) without using input IMU signals. The gray avatars represent the poses generated by the VAE when the input IMU signals are missing, e.g., due to transmission loss.
Recall that we have a \(n\)-dimensional vector \(\mathbf{x}=[x_{1},x_{2},\ldots,x_{j},\ldots,x_{n}]^{\top}\) and the \(m\)-dimensional vector \(\mathbf{y}=[y_{1},y_{2},\ldots,y_{i},\ldots,y_{m}]^{\top}\), where the superscript \(\top\) denotes the transpose. The measurement matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is defined by:
\[\mathbf{A}=\left[\begin{array}{cccc}A_{11}&A_{12}&\cdots&A_{1n}\\ A_{21}&A_{22}&\cdots&A_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ A_{m1}&A_{m2}&\cdots&A_{mn}\end{array}\right].\]
Using matrix calculation, we have \(y_{i}=\sum_{j=1}^{n}A_{ij}x_{j}\) with \(i=1,2,\ldots,m\). Following the definition of the \(l_{p}\) norm in (4), we have
\[\frac{1}{m}\|\mathbf{y}\|_{2}^{2} =\frac{1}{m}\sum_{i=1}^{m}|y_{i}|^{2} \tag{20a}\] \[=\frac{1}{m}\sum_{i=1}^{m}\Big{(}\sum_{j=1}^{n}A_{ij}x_{j}\Big{)} ^{2}\] (20b) \[\leq\frac{1}{m}\sum_{i=1}^{m}\Big{[}\big{(}\sum_{j=1}^{n}A_{ij}^ {2}\big{)}\big{(}\sum_{j=1}^{n}x_{j}^{2}\big{)}\Big{]}, \tag{20c}\]
where (20c) directly applies Cauchy-Schwarz inequality, i.e., \(\Big{(}\sum_{j=1}^{n}A_{ij}x_{j}\Big{)}^{2}\leq\big{(}\sum_{j=1}^{n}A_{ij}^{2} \big{)}\big{(}\sum_{j=1}^{n}x_{j}^{2}\big{)}\). In the following, we derive the bounds for the two terms inside (20c) which are \(\sum_{j=1}^{n}A_{ij}^{2}\) and \(\sum_{j=1}^{n}x_{j}^{2}\). For this purpose, we use Chebyshev's inequality [26, Chapter 3, Equation 3.32], which can be stated as follows.
**Definition 5** (Chebyshev's inequality).: _Let \(X\) be a random variable with mean \(\mu\) and variance \(\sigma^{2}\). For any \(\varepsilon>0\),_
\[\mathbb{P}\big{(}|X-\mu|>\varepsilon\big{)}\leq\frac{\sigma^{2}}{\varepsilon^ {2}}.\]
In our setting, we are more interested in how far the random variables \(x_{j}\) and \(A_{ij}\) deviate from their mean values. Letting \(\varepsilon=d\sigma\) for a real number \(d>0\), Chebyshev's inequality can be rewritten as
\[\mathbb{P}\big{(}|X-\mu|>d\sigma\big{)}\leq\frac{1}{d^{2}}. \tag{21}\]
Equivalently, the bound \(|X-\mu|\leq d\sigma\) holds with probability at least \(1-\frac{1}{d^{2}}\), i.e., \(\mathbb{P}\big(|X-\mu|\leq d\sigma\big)\geq 1-\frac{1}{d^{2}}\). By choosing the value of \(d\), we can bound the value of \(X\) within a certain distance away from its mean with known probability. For sufficiently large \(d\), we have the following inequalities
\[\mu-d\sigma\leq X\leq\mu+d\sigma, \tag{22}\]
with high probability.
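For instance, with the value \(d=2\) used in Table I, Chebyshev's inequality gives
\[
\mathbb{P}\big(|X-\mu|\leq 2\sigma\big)\;\geq\;1-\frac{1}{2^{2}}\;=\;0.75,
\]
so each bound in (22) holds with probability at least 75%.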
Now, by using inequalities in (22) for random variables \(x_{j}\) associated with mean \(\mu_{x}\) and variance \(\sigma_{x}^{2}\), and \(A_{ij}\) associated with zero-mean and variance \(\sigma_{a}^{2}\), we have the following inequalities
\[-d\sigma_{a} \leq A_{ij}\leq d\sigma_{a}, \tag{23a}\] \[\mu_{x}-d\sigma_{x} \leq x_{j}\leq\mu_{x}+d\sigma_{x}, \tag{23b}\]
with high probabilities. By taking the square of \(A_{ij}\) and \(x_{j}\), we have
\[A_{ij}^{2}\leq d^{2}\sigma_{a}^{2}, \tag{24a}\] \[x_{j}^{2}\leq\max\big[(d\sigma_{x}+\mu_{x})^{2},(-d\sigma_{x}+\mu_{x})^{2}\big], \tag{24b}\]
with high probabilities. Taking the sum over \(n\) samples, we have
\[\sum_{j=1}^{n}A_{ij}^{2}\leq nd^{2}\sigma_{a}^{2}, \tag{25}\]
and
\[\sum_{j=1}^{n}x_{j}^{2}\leq\max\big[n(d\sigma_{x}+\mu_{x})^{2},n(-d\sigma_{x}+\mu_{x})^{2}\big], \tag{26}\]
with high probabilities. As we empirically observe from the dataset that \(\mu_{x}>0\), the bound for the above equation can be simplified as
\[\sum_{j=1}^{n}x_{j}^{2}\leq n(d\sigma_{x}+\mu_{x})^{2}. \tag{27}\]
Replacing (25) and (27) in (20), we finally have
\[\frac{1}{m}\|\mathbf{y}\|_{2}^{2} \leq\frac{1}{m}\sum_{i=1}^{m}nd^{2}\sigma_{a}^{2}n(d\sigma_{x}+ \mu_{x})^{2} \tag{28a}\] \[=n^{2}d^{2}(d\sigma_{x}+\mu_{x})^{2}\sigma_{a}^{2}. \tag{28b}\]
As we want to have a power constraint \(\frac{1}{m}\|\mathbf{y}\|_{2}^{2}\leq P_{T}\), by choosing \(n^{2}d^{2}(d\sigma_{x}+\mu_{x})^{2}\sigma_{a}^{2}=P_{T}\), we have the variance of \(A_{ij}\) is
\[\sigma_{a}^{2}=\frac{P_{T}}{n^{2}d^{2}(d\sigma_{x}+\mu_{x})^{2}}. \tag{29}\]
The derivation above for \(\sigma_{a}\) proves Proposition 1. In other words, by generating the measurement matrix \(\mathbf{A}\) from the normal distribution with zero mean and variance \(\sigma_{a}^{2}\), the transmit power constraint \(\frac{1}{m}\|\mathbf{y}\|_{2}^{2}\leq P_{T}\) can be satisfied with high probability.
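The construction can also be checked numerically. The sketch below draws \(\mathbf{A}\) with the variance from (29) and verifies the power constraint (18) for one random signal; the assumed signal statistics \(\mu_{x}\) and \(\sigma_{x}\) are illustrative, not estimated from the IMU dataset.

```python
import numpy as np

n, m, d, P_T = 204, 120, 2, 0.1          # dimensions and settings as in Table I
mu_x, sigma_x = 0.1, 0.5                 # assumed mean/std of the normalised signal entries

sigma_a = np.sqrt(P_T / (n**2 * d**2 * (d * sigma_x + mu_x)**2))   # Eq. (29)

rng = np.random.default_rng(0)
A = rng.normal(0.0, sigma_a, size=(m, n))
x = rng.normal(mu_x, sigma_x, size=n)
power = np.sum((A @ x) ** 2) / m         # (1/m) * ||y||_2^2, cf. Eq. (18)
print(power, power <= P_T)               # the constraint holds with high probability
```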
### _Proof of \(\mathbf{Ax}\) satisfying S-REC property_
Next, we prove that the measurement matrix \(\mathbf{A}\) constructed following Proposition 1 satisfies the S-REC property in Definition 4. In particular, the matrix \(\mathbf{A}\) is said to satisfy S-REC\((S_{G},\gamma,\kappa)\), i.e., the set-restricted eigenvalue condition on the set \(S_{G}\) (defined in (13)) with parameters \(\gamma>0\) and \(\kappa\geq 0\), if \(\forall\mathbf{x}_{1},\mathbf{x}_{2}\in S_{G}\),
\[\|\mathbf{A}\big{(}G(\mathbf{z}_{1})-G(\mathbf{z}_{2})\big{)}\|_{2}\geq\gamma\| G(\mathbf{z}_{1})-G(\mathbf{z}_{2})\|_{2}-\kappa. \tag{30}\]
The range of the generator \(G(\mathbf{z})\) can be easily bounded by the output layer of the deep neural network. In our experiments, we use the Tanh activation function as the output of the neural network. Therefore, we have a simple bound \(-1\leq G(\mathbf{z})\leq 1\), which is similar to the bound of processed signal vectors \(\mathbf{x}^{*}\).
Let us define a vector \(\mathbf{v}=G(\mathbf{z}_{1})-G(\mathbf{z}_{2})\), where each element \(v_{j}\) (\(j=1,2,\ldots,n\)) of \(\mathbf{v}\) is bounded by \(-2\leq v_{j}\leq 2\). Then (30) can be rewritten as
\[\|\mathbf{Av}\|_{2}\geq\gamma\|\mathbf{v}\|_{2}-\kappa. \tag{31}\]
By using the definition of \(l_{p}\) norm in (4), we have
\[\gamma\|\mathbf{v}\|_{2} =\gamma\sqrt{\sum_{j=1}^{n}v_{j}^{2}} \tag{32a}\] \[\leq\gamma\sqrt{4n}\] (32b) \[=2\gamma\sqrt{n}, \tag{32c}\]
where the inequality in the second line is obtained by the bound \(-2\leq v_{j}\leq 2\). As a result, we have the following inequality for the term on the right-hand side of (30):
\[\gamma\|\mathbf{v}\|_{2}-\kappa\leq 2\gamma\sqrt{n}-\kappa. \tag{33}\]
To find a possible lower bound for \(\|\mathbf{Av}\|_{2}\), we use the inequalities between the \(l_{1}\) and \(l_{2}\) norm, and then find the probabilistic lower bound of the \(l_{1}\) norm based on Bernstein inequality. In particular, we apply the following inequality (see equations (A.3) and (A.4) in Definition A.2 of [20]):
\[\|\mathbf{Av}\|_{2}\geq\frac{1}{\sqrt{m}}\|\mathbf{Av}\|_{1}. \tag{34}\]
Using the definition of \(l_{1}\) norm in (4), (34) can be rewritten as
\[\|\mathbf{Av}\|_{2} \geq\frac{1}{\sqrt{m}}\sum_{j=1}^{n}\big{|}A_{ij}v_{j}\big{|} \tag{35a}\] \[\geq\frac{1}{\sqrt{m}}\Big{|}\sum_{j=1}^{n}A_{ij}v_{j}\Big{|}, \tag{35b}\]
where the inequality in the second line is obtained by using the generalized triangle inequality. Next, we derive a probabilistic bound for (35b) by applying the Bernstein inequality (see Theorem 7.27 - Chapter 7 of [20]), i.e., given the measurement matrix \(\mathbf{A}\), whose elements \(A_{ij}\) are zero-mean sub-gaussian random variables, we have
\[\mathbb{P}\Big{(}\big{|}\sum_{j=1}^{n}A_{ij}v_{j}\big{|}\geq t\Big{)}\leq 2 \exp\Big{(}\frac{-t^{2}}{4c\|\mathbf{A}\|_{2}^{2}}\Big{)}, \tag{36}\]
for all \(t>0\), where \(c\) is a sub-gaussian parameter. Letting \(t=t_{0}\sqrt{m}\) for any \(t_{0}>0\), (36) can be rewritten as
\[\mathbb{P}\Big{(}\big{|}\sum_{j=1}^{n}A_{ij}v_{j}\big{|}\geq\sqrt {m}t_{0}\Big{)}\leq 2\exp\Big{(}\frac{-t_{0}^{2}m^{2}}{4c\|\mathbf{A}\|_{2}^{2}} \Big{)} \tag{37a}\] \[\Rightarrow \mathbb{P}\Big{(}\frac{1}{\sqrt{m}}\big{|}\sum_{j=1}^{n}A_{ij}v_ {j}\big{|}\geq t_{0}\Big{)}\leq 2\exp\Big{(}\frac{-t_{0}^{2}m^{2}}{4c\|\mathbf{A}\|_{2 }^{2}}\Big{)}. \tag{37b}\]
Using the inequality in (35), (37b) becomes
\[\mathbb{P}\Big{(}\|\mathbf{Av}\|_{2}\geq t_{0}\Big{)}\leq 2\exp\Big{(} \frac{-t_{0}^{2}m^{2}}{4c\|\mathbf{A}\|_{2}^{2}}\Big{)}. \tag{38}\]
By choosing \(t_{0}=2\gamma\sqrt{n}-\kappa\), (31) can be written as
\[\|\mathbf{Av}\|_{2}\geq 2\gamma\sqrt{n}-\kappa, \tag{39}\]
with probability \(1-2\exp\Big(\frac{-(2\gamma\sqrt{n}-\kappa)^{2}m^{2}}{4c\|\mathbf{A}\|_{2}^{2}}\Big)\). Applying the inequality in (33), i.e., \(2\gamma\sqrt{n}-\kappa\geq\gamma\|\mathbf{v}\|_{2}-\kappa\), we have
\[\|\mathbf{Av}\|_{2}\geq\gamma\|\mathbf{v}\|_{2}-\kappa \tag{40}\]
with probability at least \(1-2\exp\Big(\frac{-(2\gamma\sqrt{n}-\kappa)^{2}m^{2}}{4c\|\mathbf{A}\|_{2}^{2}}\Big)\). As \(\mathbf{v}\) is defined by \(\mathbf{v}=G(\mathbf{z}_{1})-G(\mathbf{z}_{2})\), we finally have
\[\|\mathbf{A}\big{(}G(\mathbf{z}_{1})-G(\mathbf{z}_{2})\big{)}\|_{2}\geq\gamma \|G(\mathbf{z}_{1})-G(\mathbf{z}_{2})\|_{2}-\kappa \tag{41}\]
with probability \(1-2\exp\Big(\frac{-(2\gamma\sqrt{n}-\kappa)^{2}m^{2}}{4c\|\mathbf{A}\|_{2}^{2}}\Big)\). This proves the S-REC(\(S_{G},\gamma,\kappa\)) property in (30).
By combining (41) and (29), the proof is now completed.
2309.13781 | Explainable Machine Learning for ICU Readmission Prediction | The intensive care unit (ICU) comprises a complex hospital environment, where
decisions made by clinicians have a high level of risk for the patients' lives.
A comprehensive care pathway must then be followed to reduce patient complications.
Uncertain, competing and unplanned aspects within this environment increase the
difficulty in uniformly implementing the care pathway. Readmission contributes
to this pathway's difficulty, occurring when patients are admitted again to the
ICU in a short timeframe, resulting in high mortality rates and high resource
utilisation. Several works have tried to predict readmission through patients'
medical information. Although they have some level of success while predicting
readmission, those works do not properly assess, characterise and understand
readmission prediction. This work proposes a standardised and explainable
machine learning pipeline to model patient readmission on a multicentric
database (i.e., the eICU cohort with 166,355 patients, 200,859 admissions and
6,021 readmissions) while validating it on monocentric (i.e., the MIMIC IV
cohort with 382,278 patients, 523,740 admissions and 5,984 readmissions) and
multicentric settings. Our machine learning pipeline achieved predictive
performance in terms of the area of the receiver operating characteristic curve
(AUC) up to 0.7 with a Random Forest classification model, yielding an overall
good calibration and consistency on validation sets. From explanations provided
by the constructed models, we could also derive a set of insightful
conclusions, primarily on variables related to vital signs and blood tests
(e.g., albumin, blood urea nitrogen and hemoglobin levels), demographics (e.g.,
age, and admission height and weight), and ICU-associated variables (e.g., unit
type). These insights provide an invaluable source of information during
clinicians' decision-making while discharging ICU patients. | Alex G. C. de Sá, Daniel Gould, Anna Fedyukova, Mitchell Nicholas, Lucy Dockrell, Calvin Fletcher, David Pilcher, Daniel Capurro, David B. Ascher, Khaled El-Khawas, Douglas E. V. Pires | 2023-09-25T00:16:43Z | http://arxiv.org/abs/2309.13781v4 | # Explainable Machine Learning for ICU Readmission Prediction
###### Abstract
The intensive care unit (ICU) comprises a complex hospital environment, where decisions made by clinicians have a high level of risk for the patients' lives. A comprehensive care pathway must then be followed to reduce patient complications. Uncertain, competing and unplanned aspects within this environment increase the overall difficulty in uniformly implementing the care pathway. Readmission contributes to this pathway's difficulty, occurring when patients are admitted again to the ICU in a short timeframe, resulting in high mortality rates and high resource utilisation. Several works have tried to predict readmission through patients' medical information. Although they have some level of success while predicting readmission, those works do not properly assess, characterise and understand readmission prediction. This work proposes a standardised and explainable machine learning pipeline to model patient readmission on a multicentric database (i.e., the eICU cohort with 166,355 patients, 200,859 admissions and 6,021 readmissions) while validating it on monocentric (i.e., the MIMIC IV cohort with 382,278 patients, 523,740 admissions and 5,984 readmissions) and multicentric settings. Our machine learning pipeline achieved predictive performance in terms of the area of the receiver operating characteristic curve (AUC) up to 0.7 with a Random Forest classification model, yielding an overall good calibration and consistency on validation sets. From explanations provided by the constructed models, we could also derive a set of insightful conclusions, primarily on variables related to vital signs and blood tests (e.g., albumin, blood urea nitrogen and hemoglobin levels), demographics (e.g., age, and admission height and weight), and ICU-associated variables (e.g., unit type). These insights provide an invaluable source of information during clinicians' decision-making while discharging ICU patients.
keywords: Readmission, Intensive Care Unit, Machine Learning, Explainable Predictions.
unit (ICU) is crucial. In a given ICU, the likelihood of long-term length of stay (LOS), organ failures, and mortality tends to increase if adequate management is not taken into consideration [2; 3; 4]. Clinicians who are caring for these patients also have multiple competing activities to consider [5; 6], including the potential deterioration of the patient's condition after discharge, emergency admissions, elective admissions for high-risk surgery, staffing and resources, and evidence for appropriate allocation of resources. These aspects highlight the challenges across an ICU setting.
Readmission is one of the factors that extends the challenges within an ICU setting. It occurs when patients are admitted again to the ICU in a short timeframe (between 48 hours and 30 days), resulting in high mortality rates, increased LOS and, consequently, high resource utilisation [7; 8; 9]. In summary, when readmission occurs, it disrupts the care pathway for the patients and poses additional challenges to the clinical team and hospital caring for them.
Reducing ICU readmission rates might not only improve the care pathway, leading to better patient outcomes but also affect the hospital's bottom line [10; 11]. Predicting patients at high risk of readmission would not only allow early intervention to reduce the risk but also reduce mortality rates, reduce resource utilisation (based on LOS) and, potentially, hospital costs.
The emergence of abundant hospital electronic health record (EHR) data [12; 13; 14] allowed the use of machine learning (ML) models targeting ICU readmission [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. These readmission models have been summarised in Table S1. Despite their undeniable importance, these models lack proper interpretation, generalisation and validation in different ICU settings (e.g., monocentric _versus_ multicentric environments).
Our main contention is that predicting ICU readmission in a generalisable way with hospital-based datasets is still a challenge to overcome, even after taking into account these univariate predictive models and best practice ML techniques [21; 26; 27]. The main reason for this difficulty is that medical data is usually noisy, uncertain, and characterised by a high degree of missingness, as it relies on different human inputs. Further exacerbating this complexity is data imbalance, with only a small proportion of patients being readmitted. ICU patients who happen to be readmitted are also very heterogeneous [26], meaning that finding novel patterns across an ICU readmission data cohort tends to be challenging. As a result, an appropriate translation from ML models to the clinical environment is still limited.
In addition, readmission's association with healthcare quality has been debated in recent years [28; 29]. Many readmissions also emerge from factors inherent to patients, not only the quality of care, and may be reduced but not entirely eliminated. In these cases, better healthcare quality might not prevent readmission [30]. Still, predicting readmission accurately is valuable for post-discharge monitoring and planning. Machine learning models can connect clinical data to readmission, aiding pre-discharge decisions and informing readmission reduction programs [16; 18; 31; 32; 33].
This work proposes a standardised method for a comprehensive assessment of readmission modelling and prediction on ICU data [34; 35] with explainable machine learning [36]. Our method resulted in a generalisable predictive model for both multicentric and monocentric data settings [34; 35; 37]. This is the first study to provide such a large and robust validation on readmission models. In addition, explanations from the constructed model also provided a better understanding of the main relevant patient variables associated with 30-day ICU readmission, which can potentially help in guiding clinicians while treating and discharging patients from the ICU.
### Statement of Significance
**Problem** Intensive Care Unit (ICU) Readmission Prediction with Explainable Machine Learning.
**What is Already Known** Although there has been a range of related works aiming to predict ICU readmission with a variable level of success, those studies do not properly assess, characterise and understand readmission prediction, limiting their overall applicability to patient data.
**What This Paper Adds** This study proposes a standardised and explainable machine learning (ML) pipeline to model, predict and understand 30-day readmission. With a robust validation procedure, the developed ML model achieved generalisable performance on completely new patient data while deriving insightful (literature-supported) conclusions to contrast readmission and non-readmission within an ICU.
## 2 Material and Methods
Figure 1 presents the main methodological workflow followed in this work. Collecting data from monocentric (i.e., MIMIC IV) and multicentric databases (i.e., eICU) presents itself as the first step. Next, from both databases, the creation of variables describing the patient's information takes
place considering their admission timeframe. These variables include patient demographics, pre-admission information, vital signs and blood test results, and take into account the first and the last 24 hours, as well as their average values. Data preprocessing includes several steps, such as imputation, imbalance learning, and feature standardisation. The inclusion of variables outlining the hospitals and ICUs also plays an important role in this characterisation. Machine learning algorithms take these features as input to build a readmission model based on a timeframe of 30 days. The employed machine learning algorithms use multicentric data from eICU to build and internally validate the predictive model for readmission, while the final model is externally tested and validated on monocentric data from MIMIC IV. The readmission model predicts whether a given patient will be readmitted or not and supports explanations for its predictions, providing insights to clinicians while discharging a given patient from the ICU.
### Data Sources
For assessing, analysing and understanding ICU readmission, we utilise two datasets: (i) the eICU Collaborative Research Database (eICU) [34; 35; 38], and (ii) the Medical Information Mart for Intensive Care (MIMIC IV) [37; 38], which are described in the next subsections.
Figure 1: **The methodological workflow followed by this work. Data comes from monocentric and multicentric databases (i.e., MIMIC IV and eICU, respectively), where the characterisation of ICU patients, hospitals, and ICU takes place. Preprocessing filters out, standardises and imputes data for the development of a 30-day readmission machine learning model, which is built and validated on eICU, and tested on MIMIC-IV. This model drives explanations from variables and predictions, potentially assisting and guiding clinicians while treating and discharging new patients from the ICU.**
Their access relied on PhysioNet [38].
#### 2.1.1 eICU
The eICU contains data collected from 335 ICUs at 208 US hospitals in 2014 and 2015. This cohort has information about 166,355 unique patients and 200,859 admissions, accounting also for readmissions for those listed patients.
The eICU cohort is a de-identified multicentric dataset with information about patient demographics, admission and discharge details, APACHE IV scores and variables used for its calculation, vital signs, laboratory test results, medications, and care plan information.
#### 2.1.2 MIMIC IV
The MIMIC cohort (version IV) emerged from two in-hospital systems of the Beth Israel Deaconess Medical Center (BIDMC): Electronic Health Record (EHR) and ICU-specific Clinical Information System. MIMIC IV has data about patients' admissions through the emergency department or one of the intensive care units at BIDMC between 2008 and 2019. It includes data about 382,278 patients and 523,740 admission records.
Like eICU, this database is also de-identified and comprises demographics, hospital billing information, data related to patients' admissions, transfers, and discharges, information about medication administration, and laboratory measurements. MIMIC IV also contains data generated in the ICU, such as intravenous and fluid inputs, procedures and services, charted information, and patients' outputs.
### Readmission Outcome
Patients were considered to have met the primary readmission outcome if they had one or more admission episodes within 30 days of discharge from their first ICU admission (see Supplementary Material and Methods for more details). If a patient is readmitted twice or more times, only the first readmission within 30 days counts for their data. It is worth noting that this range of days to define readmission follows different methodologies [17; 21; 22; 23].
Within the 30 days, the eICU cohort had a total of 149,009 patients, where 6,021 and 142,988 were admitted and non-readmitted, respectively. On the other hand, MIMIC IV had a total of 68,505 patients, where 5,984 underwent readmission and 62,521 did not.
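As an illustration of this outcome definition, the sketch below derives the 30-day label from a table of ICU stays; the frame and the column names (`patient_id`, `icu_admit_time`, `icu_discharge_time`) are hypothetical stand-ins for the actual eICU and MIMIC IV schemas, and details such as the 48-hour lower bound are omitted.

```python
import pandas as pd

def label_30day_readmission(stays: pd.DataFrame) -> pd.DataFrame:
    """Derive a binary 30-day readmission label for each patient's first ICU stay.

    `stays` holds one row per ICU admission with the hypothetical columns
    `patient_id`, `icu_admit_time` and `icu_discharge_time` (datetimes).
    """
    stays = stays.sort_values(["patient_id", "icu_admit_time"])
    first = stays.groupby("patient_id", as_index=False).first()

    labels = []
    for _, row in first.iterrows():
        later = stays[
            (stays["patient_id"] == row["patient_id"])
            & (stays["icu_admit_time"] > row["icu_discharge_time"])
        ]
        gap_days = (later["icu_admit_time"] - row["icu_discharge_time"]).dt.days
        # Readmitted if any later ICU admission starts within 30 days of the
        # first discharge; only this first readmission is counted.
        labels.append(int((gap_days <= 30).any()))

    first["readmitted_30d"] = labels
    return first
```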
### Predictor Variable Selection
The same potential predictor variables were extracted from both datasets, which resulted in 168 common variables. Patient-level data included demographics (age, sex, weight, and height), admission diagnosis, highest and lowest physiological and biochemical values from the first and last 24 hours of ICU admission, treatments provided in ICU (ventilation, dialysis, and vasopressor) and overall illness severity scores (i.e., APACHE IV scores [39]). Institutional-level variables included hospital-related (e.g., hospital size and location) and ICU-related variables (e.g., unit type and admission source). Supplementary Materials (Table S2) detail and distinguish these common variables in both databases across the following six categories: (i) demographics, (ii) vital sign- and blood test-related, (iii) pathology- and treatment-based, (iv) APACHE severity-related, (v) hospital-related, and (vi) ICU-related.
In addition, Figure S1 and Supplementary Material and Methods cover more details regarding the data collection criteria and the prediction window. Furthermore, categorical variables were encoded in a one-hot encoding scheme (a.k.a, one-of-K or dummy). Given the transformation in the categorical feature space, the number of features increased from 168 to 186 variables.
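As a minimal sketch of this encoding step (with a hypothetical data frame and hypothetical column names standing in for the variables listed in Table S2):

```python
import pandas as pd

# Hypothetical categorical columns among the 168 common variables.
categorical_cols = ["unit_type", "admission_source", "hospital_teaching_status"]

encoded_df = pd.get_dummies(admission_df, columns=categorical_cols, dummy_na=False)
# Each category becomes its own binary column (e.g. unit_type_Med-SurgICU),
# which is how the feature space grows, as reported above, from 168 to 186 variables.
```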
It is also worth noting that eICU contains an extra level of information on variables when compared to MIMIC IV. The variables from the aforementioned categories, which are detailed in Table S2, are only those that match across both cohorts or do not have linkage to the target readmission variable (e.g., length of stay or mortality). Nevertheless, as we used eICU to train and validate the machine learning models, we have included important eICU variables (missing in MIMIC IV) to analyse readmission. This step has been done only for the sake of readmission explainability. This list of complementary variables is detailed in Table S3.
### Data Preprocessing
We performed a standardised approach to preprocess both datasets (i.e., eICU and MIMIC IV). This approach included basic steps to filter patients' information and yield valid and reliable information for readmission prediction.
#### 2.4.1 General Filtering
First, this step removed patients who are younger than 18 years old from the cohort. In addition, the proposed filter did not consider patients who died during their first ICU admission. Furthermore, it removed from the datasets patients who spent less than four hours in the ICU. This preprocessing step also excluded patients with abnormal values in the generated variables, such as having an admission weight greater than 260 kg or having an admission height greater than 240 cm. Supplementary Material and Methods and Figure S2 detail these patients' inclusion and exclusion criteria on both eICU and MIMIC IV cohorts.
#### 2.4.2 Handling Missing Data
The employed imputation method took into consideration a threshold of 20% of missingness, meaning that our machine learning pipeline will only use variables with less than 20% of missing values. The imputation method handled missing values by replacing them with population means (in the case of continuous variables) or a constant (in the case of categorical variables).
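The sketch below illustrates these two steps with scikit-learn; the 20% threshold and the mean/constant imputation strategies follow the description above, while the function and column names are hypothetical, and in practice the imputer statistics should be estimated on training folds only.

```python
import pandas as pd
from sklearn.impute import SimpleImputer

MISSING_THRESHOLD = 0.20  # keep only variables with less than 20% missing values

def drop_and_impute(df: pd.DataFrame, continuous_cols, categorical_cols) -> pd.DataFrame:
    # 1) Discard variables whose fraction of missing values reaches the threshold.
    keep = df.columns[df.isna().mean() < MISSING_THRESHOLD]
    df = df[keep].copy()

    cont = [c for c in continuous_cols if c in df.columns]
    cat = [c for c in categorical_cols if c in df.columns]

    # 2) Impute continuous variables with the population mean and categorical
    #    variables with a constant placeholder value.
    df[cont] = SimpleImputer(strategy="mean").fit_transform(df[cont])
    df[cat] = SimpleImputer(strategy="constant", fill_value="missing").fit_transform(df[cat])
    return df
```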
#### 2.4.3 Dealing with Imbalance
Given the low proportion of readmitted patients in the ICU, the proposed pipeline considered several methods from the Imbalanced-learn toolbox [40] to tackle the hardness of learning on imbalanced data [41]. Nevertheless, random undersampling over the majority class (i.e., the non-readmitted class) yielded similar results to other methods and was therefore selected for its simplicity and performance.
Random undersampling took into account different data distributions, from 1:1 (one readmitted patient to one non-readmitted patient) to 1:9 (one readmitted patient to nine non-readmitted patients), up to the real distribution (approximately one readmitted patient to 25 non-readmitted patients in eICU, and one readmitted patient to 10 non-readmitted patients in MIMIC IV).
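A hedged sketch of how such ratios can be produced with Imbalanced-learn's `RandomUnderSampler` follows; `X_train` and `y_train` are hypothetical names for the training features and binary readmission labels.

```python
from collections import Counter
from imblearn.under_sampling import RandomUnderSampler

# sampling_strategy is the desired ratio of minority (readmitted) to majority
# (non-readmitted) samples after resampling: 1.0 -> 1:1, 1/9 -> 1:9, and so on.
for name, ratio in [("1:1", 1.0), ("1:3", 1 / 3), ("1:9", 1 / 9)]:
    rus = RandomUnderSampler(sampling_strategy=ratio, random_state=42)
    X_res, y_res = rus.fit_resample(X_train, y_train)
    print(name, Counter(y_res))
```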
#### 2.4.4 Standardisation
In keeping with best practices to improve the predictive performance of variables [42], standardisation was undertaken by transforming each value of a given variable: the variable's mean is subtracted from the value, and the result is then divided by the variable's standard deviation. Although standardisation does not matter for some machine learning models, it does for several others. We therefore applied the same pipeline to all of them, ensuring the use of properly scaled data [42].
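Equivalently, a minimal scikit-learn sketch (with the scaler fitted on training data only; variable names are hypothetical):

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()                    # z = (x - mean) / standard deviation
X_train_std = scaler.fit_transform(X_train)  # statistics estimated on the training data
X_test_std = scaler.transform(X_test)        # the same statistics reused on held-out data
```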
### Machine Learning Modelling, Evaluation and Feature Selection
Machine learning (ML) modelling followed traditional development and evaluation guidelines [43]. The proposed modelling included the use of seven ensemble-based ML algorithms for patient readmission prediction: (i) explainable boosting machine (EBM) [44], (ii) extremely randomised trees (ERT) [45], (iii) random forest (RF) [46], (iv) extreme gradient boosting (XGBOOST) [47], (v) adaptive boosting (ADABOOST) [48], (vi) gradient boosting (GB) [49], and (vii) light gradient boosting machine (LightGBM) [50]. Table S4 presents the hyper-parameter optimisations for these ML algorithms.
It is important to emphasise that these algorithms have been chosen not only because of their robustness based on ensemble properties (i.e., by combining the results of several ML models) but also for their interpretation easiness. They all have (at some level) pre-built functions to describe feature importance for a given assessed dataset and can be straightforwardly used with other methods/tools for explainability.
With these algorithms as base classifiers, a bottom-up greedy feature selection ensued to ensure low ML complexity while maintaining generalisable predictive performance. Greedy feature selection starts its iterative process with an empty bucket of features, adding one feature at each round. At each round, the method adds to its bucket the feature that achieves the best predictive performance on a 10-fold cross-validation procedure, which relies on a model built by a classification algorithm. The evaluation of each feature relies on an average of the Matthews correlation coefficient (MCC) [51] and the area under the receiver operating characteristic (AUC) curve [42; 43; 52], which therefore considers resilience to class imbalance. In addition, other evaluation measures complemented the decision process in our ML pipeline, including the area under the precision-recall curve (APR), balanced accuracy, precision, recall (sensitivity), specificity, and F1 score [42; 43; 52].
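A minimal sketch of this bottom-up procedure is given below, assuming `X` is a pandas DataFrame of candidate predictors and `y` the binary readmission label. The Random Forest base classifier and the 47-feature budget mirror what is reported later in the paper, while the stopping rule and other implementation details are our own simplifications.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

def greedy_forward_selection(X, y, candidate_features, max_features=47):
    """Bottom-up greedy selection scored by the mean of MCC and ROC AUC (10-fold CV)."""
    clf = RandomForestClassifier(n_estimators=80, random_state=42, n_jobs=-1)
    selected, best_overall = [], -np.inf

    while len(selected) < max_features:
        best_feat, best_score = None, -np.inf
        for feat in candidate_features:
            if feat in selected:
                continue
            cols = selected + [feat]
            cv = cross_validate(clf, X[cols], y, cv=10,
                                scoring={"mcc": "matthews_corrcoef", "auc": "roc_auc"})
            score = (cv["test_mcc"].mean() + cv["test_auc"].mean()) / 2
            if score > best_score:
                best_feat, best_score = feat, score
        if best_score <= best_overall:  # stop when adding a feature no longer helps
            break
        selected.append(best_feat)
        best_overall = best_score
    return selected
```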
### Statistical Analyses
Data are presented as number/proportion (percentage), mean (\(\pm\) standard deviation) or median (inter-quartile range) depending on the type and distribution of data. Chi-squared, ANOVA, Kruskal-Wallis and Wilcoxon rank-sum tests were used to compare groups depending on the type and
distribution of data and the number of groups examined [53]. Statistical significance was defined as a p-value lower than 0.001. This threshold was chosen in light of the large sample sizes of the eICU and MIMIC IV datasets, which consequently strengthens the robustness of this study.
It is also worth noting that statistical analyses performed readmission comparisons not only within the same population (e.g., eICU) but also between populations (i.e., eICU _versus_ MIMIC IV), strengthening this methodology. Furthermore, APACHE scores are provided only for the eICU database and are not directly available in MIMIC IV; they therefore enrich the characterisation of the former relative to the latter.
### Model's Diagnostic Interpretation, Analysis, and Explanations
The proposed ML pipeline for readmission also assesses its final model with _a posteriori_ interpretation and explanation methods, which are described next.
#### 2.7.1 Likelihood Ratios
Likelihood ratio (LR) [54] provides a measure for expressing the diagnostic accuracy of a clinical finding of the readmission model from this work. The positive LR represents the probability of a positive prediction among patients who undergo the medical event of interest (e.g., being readmitted) divided by the probability of a positive prediction among patients who do not undergo that event (e.g., not being readmitted). Positive LRs in readmission prediction estimate the model's capabilities of identifying readmitted against non-readmitted patients. When LRs are greater than one, they count in favour of the readmission diagnosis. In other words, the greater the LR is, the more certain the predictive model is in finding readmitted patients against non-readmitted patients.
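A hedged sketch of how these quantities can be computed from a confusion matrix at a chosen threshold is shown below; the numbers in the final comment are illustrative only and do not reproduce the study's figures.

```python
def positive_likelihood_ratio(tp, fp, tn, fn):
    """LR+ = sensitivity / (1 - specificity) at a given classification threshold."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity / (1.0 - specificity)

def post_test_probability(pre_test_prob, lr):
    """Update a pre-test readmission probability with a likelihood ratio (Bayes' rule on odds)."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Illustrative example: with a 4% baseline readmission prevalence, an LR+ of 2.9
# raises the post-test probability to roughly 11%, i.e. a shift of about 7 points.
```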
#### 2.7.2 Calibration
The calibration technique visually evaluates the predictive readmission model by inspecting the calibration curves and also numerically estimating the calibration slope and intercept, as well as the integrated calibration index (ICI) and associated metrics (E50, E90, Emax ) [55]. The ideal calibration slope is equal to one. The ideal calibration intercept, as well as the ideal value for the ICI and its related metrics, is zero.
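The sketch below shows one way to approximate these quantities from predicted probabilities, following the common logistic-recalibration and loess-based ICI recipes rather than the exact implementation used in this work; `y_true` and `y_prob` are hypothetical arrays of observed outcomes and predicted readmission risks.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.nonparametric.smoothers_lowess import lowess

def calibration_report(y_true, y_prob, eps=1e-6):
    """Calibration slope/intercept and the ICI family (ICI, E50, E90, Emax)."""
    p = np.clip(y_prob, eps, 1 - eps)
    logit_p = np.log(p / (1 - p))

    # Logistic recalibration: regress the outcome on the logit of the prediction.
    # (Simplification: the intercept is often re-estimated with the slope fixed at 1.)
    fit = sm.Logit(y_true, sm.add_constant(logit_p)).fit(disp=0)
    intercept, slope = fit.params

    # ICI metrics: absolute distance between predicted probabilities and a
    # loess-smoothed estimate of the observed event rate.
    smoothed = lowess(y_true, p, frac=0.75, return_sorted=False)
    abs_diff = np.abs(p - smoothed)
    return {
        "slope": slope, "intercept": intercept,
        "ICI": abs_diff.mean(), "E50": np.median(abs_diff),
        "E90": np.quantile(abs_diff, 0.9), "Emax": abs_diff.max(),
    }
```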
#### 2.7.3 Shapley Additive Explanations
Our pipeline also utilised Shapley Additive Explanations (SHAP) [56], which is a game theory-based method to derive explanations for individual predictions [36], in which combined results yield a global interpretation of the model's behaviour. In readmission prediction, SHAP assists with model analysis, leading to a better understanding of readmission through a range of explanations. As a result, it can be used to recommend a set of personalised ICU guidelines while treating/discharging patients in the ICU.
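In practice, such explanations can be produced with the `shap` package roughly as follows; `model` and `X_train` are hypothetical handles to the fitted tree-based classifier and the eICU training features, and the exact shape returned by `shap_values` varies with the installed `shap` version.

```python
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_train)

# For binary classifiers, shap_values is typically a list with one array per
# class; index 1 corresponds to the readmission class.
shap.summary_plot(shap_values[1], X_train, max_display=20)
```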
## 3 Results
### eICU Readmission Group Characteristics, Outcomes and Comparison to MIMIC IV
Table 1 provides the analyses contrasting readmitted and non-readmitted population groups. Out of 149,009 ICU admissions in the eICU database, readmission happens in 6,021 (4.04%) patients within 30 days of their first ICU admission. We have observed that the readmitted group includes older and sicker patients and a higher proportion of males. These results are corroborated by statistical tests with a significance level lower than 1%. The readmitted group also had a higher level of gastrointestinal, respiratory and sepsis admission diagnoses when compared to the non-readmitted group (with a p-value \(<\) 1%).
The combined medical-surgical ICU and cardiothoracic surgical ICU types reached the lowest proportion of readmissions, with statistical significance. The readmitted group received more vasopressors and renal replacement therapies when contrasted with the non-readmitted group. The readmitted group commonly had a longer ICU stay and, consequently, a longer hospital length of stay (p-value \(<\) 1%). Finally, they were more than twice as likely to die in a hospital (16.0% versus 7.1% with a p-value \(<\) 1%), with survivors needing more rehabilitation, home nursing or skilled nursing facilities (p-value \(<\) 1%).
The Medical-Surgical Intensive Care Unit (MSICU) accounted for the largest proportion of readmissions at 49.1%, followed by the Medical Intensive Care Unit (MICU) with 10.9% and the Critical Care Unit-Cardio-Thoracic Intensive Care Unit (CCU-CTICU) with 10.1%. However, the absolute difference between the readmitted and non-readmitted group proportions in most ICU types does not exceed 2.7%, except for MSICU, with a 6.6% difference between the two groups. The lowest proportion of readmissions was found in the Cardio-Surgical Intensive Care Unit (CSICU).
| Characteristic | Readmitted (value with % or mean/median) | Non-Readmitted (value with % or mean/median) | p-value |
|---|---|---|---|
| **Readmission Numbers** | 6,021 | 142,988 | - |
| **Gender:** | | | |
| Male | 3,417 (56.8%) | 77,144 (54.0%) | 2.198e-05* |
| Female | 2,602 (43.2%) | 65,771 (46.0%) | 2.198e-05* |
| **Age** | 65.48 / 67.00 | 62.70 / 65.00 | < 2.2e-16* |
| **Body Mass Index (BMI)** | 28.69 / 27.30 | 29.05 / 27.51 | < 0.001* |
| **APACHE IV** | 59.11 / 56.00 | 53.86 / 50.00 | < 2.2e-16* |
| **APACHE Diagnosis:** | | | |
| Cardiovascular | 1,789 (29.7%) | 45,423 (31.8%) | < 0.001* |
| Neurological | 881 (14.6%) | 27,068 (18.9%) | < 2.2e-16* |
| Gastrointestinal | 798 (13.3%) | 13,755 (9.6%) | < 2.2e-16* |
| Trauma | 230 (3.8%) | 6,470 (4.5%) | 0.011 |
| Respiratory | 1,045 (17.4%) | 22,158 (15.5%) | < 0.001* |
| Sepsis | 972 (16.1%) | 17,404 (12.2%) | < 2.2e-16* |
| Other | 412 (6.8%) | 13,810 (9.7%) | 3.8e-13* |
| **Unit Type:** | | | |
| Critical Care Unit-Cardio-Thoracic Intensive | 610 (10.1%) | 12,322 (8.6%) | 4.8e-05* |
| Care Unit (CCU-CTICU) | 90 (1.5%) | 5,335 (3.7%) | < 2.2e-16* |
| Cardio-Surgical Intensive Care Unit (CSICU) | 291 (4.8%) | 4,750 (3.3%) | 2.7e-10* |
| Cardiac Intensive Care Unit (CICU) | 440 (7.3%) | 9,392 (6.6%) | 0.025 |
| Medical Intensive Care Unit (MICU) | 657 (10.9%) | 11,710 (8.2%) | 7.6e-14* |
| Surgical Intensive Care Unit (SICU) | 503 (8.4%) | 8,854 (6.2%) | 1.5e-11* |
| Medical-Surgical Intensive Care Unit (MSICU) | 2,955 (49.1%) | 79,613 (55.7%) | < 2.2e-16* |
| Neurological Intensive Care Unit (NICU) | 475 (7.89%) | 11,012 (7.7%) | 0.610 |
| **Hospital Capacity:** | | | |
| Extra-Large Hospital (> 500 beds) | 2,427 (40.3%) | 51,538 (36.0%) | 1.7e-11* |
| Large Hospital (250 - 499 beds) | 1,431 (23.8%) | 33,108 (23.2%) | 0.277 |
| Medium-sized Hospital (100 - 249 beds) | 1,052 (17.5%) | 31,539 (22.1%) | < 2.2e-16* |
| Small Hospital (< 100 beds) | 231 (3.8%) | 9,438 (6.6%) | < 2.2e-16* |
| Unknown number of beds | 880 (14.6%) | 17,365 (12.1%) | 1.1e-08* |
| **Hospital Type:** | | | |
| Teaching | 1,687 (28.0%) | 35,715 (25.0%) | 1.1e-07* |
| Non-teaching | 4,334 (72.0%) | 107,273 (75.0%) | 1.1e-07* |
| **Patient Origin:** | | | |
| ICU | 152 (2.5%) | 3,261 (2.3%) | 0.232 |
| Operating Room | 564 (9.4%) | 14,675 (10.3%) | 0.026 |
| Emergency Department (ED) | 2,089 (34.7%) | 59,728 (41.8%) | < 2.2e-16* |
| Recovery Room | 222 (2.7%) | 4,929 (3.4%) | 0.3359 |
| Unknown | 1,545 (25.7%) | 33,334 (23.3%) | 2.7e-05* |
| Other | 1,449 (24.1%) | 27,061 (18.9%) | < 2.2e-16* |
| **Therapies in ICU:** | | | |
| Mechanical Ventilation | 1,252 (20.8%) | 27,651 (19.3%) | 0.005 |
| Vasopressors | 761 (12.6%) | 16,060 (11.2%) | < 0.001* |
| Renal Replacement Therapies (RRT) | 232 (3.9%) | 4,406 (3.1%) | < 0.001* |
| **Length of Stay (LOS):** | | | |
| ICU LOS, days | 3.72 / 2.18 | 2.86 / 1.75 | < 2.2e-16* |
| Hospital LOS, days | 17.86 / 14.64 | 6.92 / 5.00 | < 2.2e-16* |
| **Hospital Mortality** | 965 (16.0%) | 10,162 (7.1%) | < 2.2e-16* |
| **Discharge destination:** | | | |
| Home | 2,290 (38.0%) | 87,841 (61.4%) | < 2.2e-16* |
| Care Facility | 2,409 (40.0%) | 38,220 (26.7%) | < 2.2e-16* |
| **Care Facility (Discharge destination):** | | | |
| Skilled Nursing Facility | 1,150 (19.1%) | 18,578 (13.0%) | < 2.2e-16* |
| Other Hospital | 374 (6.2%) | 5,698 (4.0%) | < 2.2e-16* |
| Rehabilitation | 428 (7.1%) | 6,213 (4.3%) | < 2.2e-16* |
| Nursing Home | 128 (2.1%) | 1,604 (1.1%) | 1.7e-12* |
| Other External | 329 (5.5%) | 6,127 (4.3%) | 1.2e-05* |

\* Presence of statistical significance.

Table 1: Characteristics of discharged readmitted and non-readmitted patients in eICU.
Figure S3 also highlights patients readmitted to an ICU type different from the one where they were originally admitted. Several ICUs in the eICU database have more than 30% of readmissions to other ICU types, such as the Surgical Intensive Care Unit (SICU), Medical Intensive Care Unit (MICU) and Cardiac Intensive Care Unit (CICU).
While aiming to compare MIMIC IV and eICU readmitted patients, we noticed several differences in the level of information these cohorts provide, including patients for whom length of stay (LOS) and discharge location are not recorded. As a result, MIMIC IV and eICU were reduced from 5,984 and 6,021 to 5,980 and 5,403 readmitted patients, respectively. This step was performed only to compare both cohorts directly. Matching the characteristics of MIMIC IV and eICU is not straightforward overall. MIMIC IV does not include several characteristics contained in eICU, and vice-versa, meaning that the comparison across the readmission groups from eICU and MIMIC IV was restricted to only a few characteristics.
Apart from that, we could observe they have similar baseline characteristics (Table 2). However, MIMIC IV readmitted patients received more ventilation and vasopressors but fewer renal replacement therapies than the readmitted eICU patients. In addition, MIMIC IV readmitted patients had longer ICU and hospital stays. Mortality was also slightly higher in the MIMIC IV readmitted patients, although no statistical difference has been found between the two databases. Among survivors, the readmission group in the eICU was more often discharged home, while readmitted patients in MIMIC IV had a higher rate of further treatment in a rehabilitation care facility (p-value \(<\) 1%).
### Machine Learning Validation
The statistical analysis provided an insightful characterisation of the readmission group, contrasting it with the non-readmission group and across populations. With this information, we now move to a machine learning analysis and validation.
Table S5 shows the 47 selected features by the greedy feature selection approach, considering a balanced readmission dataset. The details of these variables representing machine learning features can be found in Table S2. Although we employed several different undersampling techniques to tackle the high level of readmission imbalance, using a balanced readmission rate presented itself as the most successful technique, which was then applied in both eICU and MIMIC IV cohorts at first. Hyper-parameter optimisation
has also been applied after feature selection, aiming to improve the predictive performances of the resultant models (Table S4).
When bringing all these ML components together, our proposed pipeline achieved an area under the ROC curve (AUC) of 0.68 on multicentric eICU data under a 10-fold cross-validation procedure with a Random Forest classifier with 80 trees in a balanced readmission scenario. On the blind test (stratified 10% from eICU data), which internally validates the proposed model, consistent results were reached (AUC of 0.672) when compared to 10-fold cross-validation. When externally validated on MIMIC IV's data, our model reached an AUC of 0.616, which demonstrates the overall generalisation capabilities of the proposed model to predict 30-day readmission. Figure 2 shows the AUC plots for these three validation schemes. The results for other performance measures are also summarised in Table S6.
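The sketch below illustrates the corresponding training and validation protocol; `X_eicu`/`y_eicu` and `X_mimic`/`y_mimic` are hypothetical names for the preprocessed feature matrices restricted to the selected variables, and the undersampling described in Section 2.4.3 is assumed to have been applied to the development data.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

# Hold out a stratified 10% of the eICU data as the internal blind test set.
X_dev, X_blind, y_dev, y_blind = train_test_split(
    X_eicu, y_eicu, test_size=0.10, stratify=y_eicu, random_state=42)

clf = RandomForestClassifier(n_estimators=80, random_state=42, n_jobs=-1)

# 10-fold cross-validation on the development data.
cv_auc = cross_val_score(clf, X_dev, y_dev, cv=StratifiedKFold(10), scoring="roc_auc")

clf.fit(X_dev, y_dev)
blind_auc = roc_auc_score(y_blind, clf.predict_proba(X_blind)[:, 1])      # internal validation
external_auc = roc_auc_score(y_mimic, clf.predict_proba(X_mimic)[:, 1])   # external validation
print(cv_auc.mean(), blind_auc, external_auc)
```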
### Calibration Analysis
Calibration metrics are presented in Table S7 for our readmission model on 10-fold cross-validation and blind testing on eICU, and external validation on MIMIC IV. Given these metrics, we present the calibration curves for these evaluation sets in Figures S4-S6.
| Characteristic | eICU (value with % or mean/median) | MIMIC IV (value with % or mean/median) | p-value |
|---|---|---|---|
| **Readmission Numbers** | 5,403 | 5,980 | - |
| **Gender:** | | | |
| Male | 3,077 (56.9%) | 3,382 (56.6%) | 0.700 |
| Female | 2,326 (43.1%) | 2,598 (43.4%) | 0.685 |
| **Body Mass Index (BMI)** | 28.7 / 27.3 | 28.9 / 27.8 | 0.010 |
| **Age** | 65.5 / 67.0 | 65.7 / 67.0 | 0.620 |
| **Patient Origin:** | | | |
| Emergency Department (ED) | 1,882 (34.8%) | 2,824 (47.2%) | < 2.2e-16* |
| **Therapies in ICU:** | | | |
| Mechanical Ventilation | 1,146 (21.2%) | 1,410 (23.6%) | 0.003 |
| Vasopressors | 703 (13.0%) | 1,111 (18.6%) | 6.6e-16* |
| Renal Replacement Therapies (RRT) | 215 (4.0%) | 66 (1.1%) | < 2.2e-16* |
| **Length of Stay (LOS):** | | | |
| ICU LOS, days | 3.7 / 2.2 | 3.5 / 2.0 | 6.2e-05* |
| Hospital LOS, days | 17.9 / 14.6 | 20.2 / 15.0 | 9.5e-06* |
| **Hospital Mortality** | 882 (16.3%) | 1,089 (18.2%) | 0.009 |
| **Discharge destination:** | | | |
| Home | 2,055 (38.0%) | 631 (10.6%) | < 2.2e-16* |
| Care Facility (Rehabilitation) | 407 (7.5%) | 697 (11.7%) | 1.5e-13* |
| Care Facility (Skilled Nursing Facility) | 1,052 (19.5%) | 1,132 (18.9%) | 0.479 |

\* Presence of statistical significance.

Table 2: Comparisons of characteristics between eICU and MIMIC IV readmission groups.
Figure 2: The predictive performance of the proposed readmission model on 10-fold cross-validation and blind test on eICU data. External validation was made utilising MIMIC IV data.
Figure S4 depicts the calibration curve for our proposed readmission model on 10-fold cross-validation on eICU data. The cross-validation calibration curve indicates a slight overestimation of risk at predicted probabilities between 0.2 and 0.4, and a slight underestimation of risk at predicted probabilities between 0.6 and 0.7. Figure S5, in turn, shows the calibration curve for our model on a blind test set from eICU data. The calibration curve indicates a slight overestimation of risk at predicted probabilities around 0.5. Finally, Figure S6 outlines the calibration curve for our model on an external validation set, i.e., MIMIC IV. Its calibration curve indicates a slight underestimation of risk at predicted probabilities below 0.2, and an overestimation of risk at predicted probabilities above 0.4 but most significantly above 0.6. Our main conclusion is that our model is well-calibrated in all sets, demonstrating its clinical usability in an ICU setting.
### Likelihood Ratio Analysis
Figure S7 summarises the likelihood ratios (LRs) for our readmission model, which performs using a classification probability threshold of 0.5. With this threshold, the model's LRs for cross-validation on eICU, blind testing on eICU, and external validation on MIMIC IV are 1.660 (estimated change of probability of 9.628%), 1.636 (estimated change of probability of 5.494%), respectively. These model's results indicate its reasonable diagnostic expression.
Based on Figure S7, a threshold between 0.70 and 0.82 benefits the model the most regarding the LRs. The local and global optima for the classification threshold, based on 10-fold cross-validation, would be 0.712 and 0.813, respectively. With 0.712, the model's LR for 10-fold cross-validation on eICU, blind testing on eICU and external validation on MIMIC IV would be 2.905 (estimated change of probability of 20.26%), 3.326 (estimated change of probability of 22.831%) and 1.942 (estimated change of probability of 12.615%), respectively. These estimated changes show small but relevant shifts in the post-test probabilities.
With 0.813, the model reaches a maximum LR on 10-fold cross-validation on eICU (i.e., 4.256), resulting in a change of probability of readmission of 27.517%. Nevertheless, with this threshold, we have a lower LR for blind testing and external validation. In the blind test on eICU, an LR of 1.075 would be identified (with an estimated change of probability of 1.373%), while for the external validation on MIMIC IV, the model would result in an LR of 1.400 (with an estimated change of probability of readmission of 6.393%).
This analysis shows that properly setting the classification threshold is beneficial for obtaining a better diagnostic readmission profile for ICU patients. In summary, the model's diagnostic power depends on a proper analysis comparing and contrasting machine learning and clinical predictiveness.
### Explainable Machine Learning for ICU Readmission
We used SHAP to demonstrate how individual features included in the model (see Tables S2 and S5) influence overall readmission predictions. Figure 3 summarises and highlights the 20 most important features, based on SHAP values, we employed to characterise ICU readmission in eICU and MIMIC IV.
Most of the features in the SHAP plot of Figure 3 are vital sign- and blood test-related (see Table S2). The three most important in this category are the minimum value of albumin in the blood during the first 24 hours, the maximum level of blood urea nitrogen (BUN) during the last 24 hours and the minimum level of hemoglobin during the last 24 hours.
We identified through the SHAP tree explanation model that high values for the variables Min. Albumin (First 24hs) and Min. Hemoglobin (Last 24hs) are usually more linked to the non-readmission of patients, while low values for them are more present in readmitted patients. Max. Blood Urea Nitrogen (Last 24hs) reveals the opposite trend, where higher values for this variable are more associated with ICU patient readmission, and there is clear evidence that low values for the Max. Blood Urea Nitrogen (Last 24hs) result are associated with non-readmission.
In addition, the SHAP tree explanation model resulted in only one hospital-related variable (see Figure 3 and Table S2), i.e., the number of minutes from unit admission time that the patient has been admitted to the hospital (i.e., the Hospital Admission Offset). In this case, there is a high correlation between lower values of Hospital Admission Offset and non-readmission, whereas relatively higher values for this variable are more related to readmission.
From ICU-related variables, the SHAP model outputs a category of unit type as important - i.e., the Unit Type Med-SurgICU, which stands for a unit type of Medical-Surgical ICU. As the variable Unit Type corresponds to a categorical variable and Med-SurgICU is one of its options, we transformed its value via one-hot encoding. In the SHAP summary plot of Figure 3, this basically means that most patients coming from Med-SurgICU are not facing readmission, while patients that do not pass through such a unit type undergo readmission.

Figure 3: The SHAP summary plot for our proposed readmission model on eICU training data. We show the 20 features with the highest SHAP values, i.e., those that have the highest impact on the model's predictive outputs. From top to bottom: Min. Albumin (First 24hs), Max. Blood Urea Nitrogen (Last 24hs), Min. Hemoglobin (Last 24hs), Hospital Admission Offset, Age, Unit Type (Med-SurgICU), Min. Respiratory Rate (Last 24hs), Avg. Heart Rate (Last 24hs), Max. White Blood Cells (Last 24hs), Min. Creatinine (First 24hs), Max. Creatinine (Last 24hs), Min. Heart Rate (First 24hs), Admission Weight, Admission Height, Min. Partial Thromboplastin Time (First 24hs), Avg. Non-Invasive Systolic Blood Pressure (Last 24hs), Min. Bicarbonate (Last 24hs), Min. Bilirubin (First 24hs), Max. Temperature (First 24hs), and Max. Non-Invasive Systolic Blood Pressure (First 24hs).
Three important demographics-related variables were also tagged in the SHAP summary from Figure 3: Age, Admission Weight and Admission Height. As seen in this figure, older people were often associated with readmission, while low values of age are noticed in non-readmitted patients. This factor is actually expected, as elderly patients tend to present more comorbidities [57], resulting in more readmissions. Contrarily, admission weight followed a distinct pattern, where readmitted patients had low-to-medium values for their admission weight. We also noticed that readmission was more associated with high values of admission height, whereas non-readmission was more associated with low values.
Finally, pathology-related variables, such as the Min. Partial Thromboplastin Time (First 24hs), are also among the top 20 features ranked by SHAP values. As seen in Figure 3, high Min. Partial Thromboplastin Time (First 24hs) values are more connected with readmission.
We believe these explanations, derived by SHAP for the proposed readmission model, regarding the top 20 most impactful variables might serve as a new source of guidelines while assessing patients through ICU, especially while discharging patients. Although clinical practice and experience will always be compulsory in such conditions, we believe machine learning models might provide proper support for them.
## 4 Discussion
### Key Findings
Although it is challenging to map and match common variables or characteristics across different cohorts, our study yielded important clinical insights while analysing eICU and comparing it to MIMIC IV (Section 3.1). We discovered how heterogeneous the readmission monocentric and multicentric populations are in general. For example, the readmitted patients in MIMIC IV were treated more frequently with ventilation and vasopressors, whereas renal replacement therapies were more frequent among readmitted eICU patients. Such contrasts in the cohort populations highlight the ability of our standardised pipeline to deal with heterogeneous readmission data.
Our proposed machine learning pipeline learned proper patterns on multicentric ICU data, consequently generalising well on an independent blind
test set over the same data (Section 3.2). Our proposed machine learning model captured the essence of readmission, being able to transfer similar predictive performance to external validation on monocentric ICU data (Section 3.2). Overall, this shows learning on data coming from multiple ICUs has its limitations but may lead to generalisable predictive performance if done adequately.
A reasonable calibration level is also an essential aspect of the proposed readmission model (Section 3.3). As a result, our proposed model shares meaningful clinical decision-making due to its ability to calculate individualised probabilistic estimates of readmission [58]. For example, clinicians may decide that patients with a risk of readmission at least double that of the average risk in their ICU population should be flagged for routine monitoring following discharge from ICU. If the patient is moved to the ward, this may take the form of a brief clinical review by an intensivist each day following transfer to identify early signs of clinical deterioration. This could potentially facilitate the escalation of ward-based measures to optimise the patient's management and avoid ICU readmission, or it could facilitate better preparation and planning such that unplanned readmissions are minimised.
A good diagnostic level, when we increase the classification probability threshold, complements the model's predictive performance (Section 3.4). At the threshold(s) of 0.712 (and 0.813), our model returns the most reliable likelihood ratio among cross-validation, blind testing and external validation, yielding trustworthy readmission predictions.
Finally, we highlight how the 20 most impactful features are linked with readmission prediction through their explainability (Section 3.5). These explanations might be used to support the clinical team. In fact, some of the insights provided by the interpretation of the readmission model's explanation are corroborated by the literature. We have highlighted these analyses and observations for some of the important variables displayed by SHAP values (Figure 3) next:
* Low serum albumin levels in ICU patients have already been linked with death, length of stay, and readmission [16, 59]. These results from previous work are congruent with our analysis based on Figure 3.
* High values of blood urea nitrogen (BUN) have also been associated with readmission and mortality in past studies [16, 60, 61], representing an important feature to characterise critically ill patients admitted to ICU.
* Previous studies also mapped low hemoglobin levels preceding ICU discharge, which may provide important insights for avoiding readmission [20; 62; 63].
* Elderly patients usually are observed to be more associated with ICU readmission [64; 65; 66].
* The patients' weight is strongly correlated to readmission. While weight gain during an ICU admission is linked to non-readmission in patients with heart failure [67], weight and weight change are common factors associated with mortality, duration of mechanical ventilation, and length of stay in the ICU [68; 69]. Therefore, looking at the patient's admission weight and tracking their respective weight during ICU admission is a relevant aspect before discharging a patient from the ICU.
Overall, this shows our proposed model was able to learn and capture good insights on eICU data, being compatible with results and discussion in the ICU readmission literature.
### Strengths and Limitations
One of the limitations of this work is the lack of comparable variables between multicentric eICU and monocentric MIMIC IV, including complete APACHE IV scores, which are not present in MIMIC IV. Nevertheless, we matched 168 common variables between the two cohorts, constituting both patient- and institutional-level information, building a key strength of our characterisation.
Another aspect is that several variables presented a large portion of missing data (more than 80%), bringing an extra level of difficulty to perform our methodology. We handled missing data by excluding columns with more than 20% of missing values and performing imputation in the other cases. This decision was taken because we considered we would include a high proportion of bias and noise if we impute columns with a great frequency of missingness. Although missingness might be informative [70], the generalisation reached by the proposed predictive readmission model showed the robustness of our ensemble-based methodology to deal with different noise levels - from both missingness and utilised imputation approach. We plan in future work to better understand missingness across eICU and MIMIC IV as a way to improve machine learning modelling.
In addition, we observed the challenge of finding comparable metrics for the severity of the illness to describe the population between eICU and MIMIC. Therefore, we resorted to using the variables included in most severity-of-illness scores in our data description. Given data restrictions, we believe this was the best way to compare the severity of illness between both cohorts.
Finally, we noticed an overall heterogeneity of the studied populations, including imbalance due to rare occurrences of readmission, where the eICU cohort presents 4.04% of readmission cases while MIMIC IV has more than double (8.74%). We successfully dealt with the imbalance by employing appropriate performance metrics on top of the feature selection method [51]. Our choice for the set of metrics considered the four confusion matrix categories (true positives, false negatives, true negatives, and false positives), setting proportional weights based on the size of the readmission group and the size of the non-readmission group in the datasets. Undersampling techniques also assisted with diminishing the imbalance hardness in our study.
### Relationship to Previous Studies
Multiple predictive models were developed for readmission with different AUC performances (see Table S1). Nevertheless, we believe that most of these models lack clinical applicability. The main reason behind this is that those related works prefer highlighting their good predictive performances rather than understanding readmission well and bridging meaningful clinical and machine learning knowledge.
Although performing similarly to the literature, most of the works are focused on monocentric data, except for Badawi and Breslow [15] and Mohanty et al. [17]. However, both works lack support for clinical decision-making and are not well-validated with internal and external data sets. Although Mohanty et al. employ explainable machine learning in their proposed models, providing insights into their results, these remain local to the Epic EHR database they employed.
Hegselmann et al.'s work [24] is the closest to our method. Their readmission model is built from University Hospital Munster (UKM) data and externally tested on MIMIC IV. Even though it presents a certain level of similarity to our work, their work is set in a controlled scenario, where UKM data is employed to build the model. Furthermore, both UKM and MIMIC IV are monocentric, presenting, therefore, a lower level of difficulty than
multicentric eICU. In our method, we learn a readmission model from multicentric data to generalise on monocentric data. We show our method works well in translating from one to another.
## Funding
D.B.A. was supported by an Investigator Grant from the National Health and Medical Research Council (NHMRC) of Australia (GNT1174405 to D.B.A.). Supported in part by the Victorian Government's Operational Infrastructure Support Program.
## Conflict of Interest
None declared.
## Contributions
**Alex G. C. de Sa:** Conceptualisation, Draft figure proposals, Literature review, Methodology, Machine Learning - Experimental planning and execution, and results analysis, Visualisation, Writing - Original draft preparation and supplementary materials, Reviewing, Editing, and Validation (machine learning perspective);
**Daniel Gould:** Conceptualisation, Literature review, Visualisation, Investigation, Writing - introduction and results, Analysis, Reviewing, Editing, and Validation (clinical and machine learning perspectives);
**Anna Fedyukova:** Data collection, Machine Learning - Experimental planning and execution, and results analysis, Visualisation, Writing- Methodology and results, supplementary materials, and Validation (machine learning perspective);
**Mitchell Nicholas:** Literature review, Reviewing, Editing, and Validation (clinical perspective);
**Calvin Fletcher:** Writing - introduction, Validation (clinical perspective), Reviewing, and Editing;
**Lucy Dockrell:** Writing, Validation (clinical perspective), Reviewing, and Editing;
**David Pilcher:** Supervision, Writing, Validation (clinical perspective), Reviewing, and Editing;
**Daniel Capurro**: Conceptualisation, Supervision, Writing, Reviewing and Editing;
**David B. Ascher:** Supervision, Writing, Reviewing, Editing and Validation (clinical and machine learning perspectives);
**Khaled El-Khawas:** Conceptualisation, Supervision, Writing - Methodology and Results, and Validation (clinical perspective);
**Douglas E. V. Pires:** Conceptualisation, Supervision, Writing, Reviewing, Editing, and Validation (clinical and machine learning perspectives);
## Supplementary Information
Supplementary Materials are available at [https://tinyurl.com/36f333sc](https://tinyurl.com/36f333sc).
|
2309.09901 | The role of causality in explainable artificial intelligence | Causality and eXplainable Artificial Intelligence (XAI) have developed as
separate fields in computer science, even though the underlying concepts of
causation and explanation share common ancient roots. This is further enforced
by the lack of review works jointly covering these two fields. In this paper,
we investigate the literature to try to understand how and to what extent
causality and XAI are intertwined. More precisely, we seek to uncover what
kinds of relationships exist between the two concepts and how one can benefit
from them, for instance, in building trust in AI systems. As a result, three
main perspectives are identified. In the first one, the lack of causality is
seen as one of the major limitations of current AI and XAI approaches, and the
"optimal" form of explanations is investigated. The second is a pragmatic
perspective and considers XAI as a tool to foster scientific exploration for
causal inquiry, via the identification of pursue-worthy experimental
manipulations. Finally, the third perspective supports the idea that causality
is propaedeutic to XAI in three possible manners: exploiting concepts borrowed
from causality to support or improve XAI, utilizing counterfactuals for
explainability, and considering accessing a causal model as explaining itself.
To complement our analysis, we also provide relevant software solutions used to
automate causal tasks. We believe our work provides a unified view of the two
fields of causality and XAI by highlighting potential domain bridges and
uncovering possible limitations. | Gianluca Carloni, Andrea Berti, Sara Colantonio | 2023-09-18T16:05:07Z | http://arxiv.org/abs/2309.09901v1 | # The role of causality in explainable artificial intelligence
###### Abstract
Causality and eXplainable Artificial Intelligence (XAI) have developed as separate fields in computer science, even though the underlying concepts of causation and explanation share common ancient roots. This is further enforced by the lack of review works jointly covering these two fields. In this paper, we investigate the literature to try to understand how and to what extent causality and XAI are intertwined. More precisely, we seek to uncover what kinds of relationships exist between the two concepts and how one can benefit from them, for instance, in building trust in AI systems. As a result, three main perspectives are identified. In the first one, the lack of causality is seen as one of the major limitations of current AI and XAI approaches, and the "optimal" form of explanations is investigated. The second is a pragmatic perspective and considers XAI as a tool to foster scientific exploration for causal inquiry, via the identification of pursue-worthy experimental manipulations. Finally, the third perspective supports the idea that causality is propaedeutic to XAI in three possible manners: exploiting concepts borrowed from causality to support or improve XAI, utilizing counterfactuals for explainability, and considering accessing a causal model as explaining itself. To complement our analysis, we also provide relevant software solutions used to automate causal tasks. We believe our work provides a unified view of the two fields of causality and XAI by highlighting potential domain bridges and uncovering possible limitations.
keywords: causality, explainable artificial intelligence, causal discovery, counterfactuals, structural causal models +
Footnote †: journal: Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery on April 18\({}^{\text{th}}\), 2023
Figure 1: Graphical Abstract: Reviewing the literature to uncover how eXplainable Artificial Intelligence (XAI) and causality are related - the three main perspectives.
## 2 Introduction
Causation and explanation are not new concepts, since they have always drawn humans' attention. They are, indeed, highly intertwined since the ancient Greeks and throughout the philosophy of science (Sec. 3.1). Unfortunately, it seems that these concepts have had a diverse evolution in the field of Artificial Intelligence (AI). Regarding explanations, the eXplainable AI (XAI) research field has been formalized in the past few years to overcome the limitations of conventional black-box machine learning (ML) and deep learning (DL) models (Sec. 3.2). Regarding the field of causality (Sec. 3.3), some seminal works have been investigating its integration within ML and DL systems (Scholkopf et al., 2021; Berrevoets et al., 2023). What seems to emerge from the current literature is that there is no clear vision of whether there is a dependent relationship between the two fields.
In this review, we investigate the interdisciplinary literature regarding causality and XAI from both theoretical and methodological viewpoints to try to gain a clearer understanding of this question. Our results show three main perspectives can be identified. The first way to relate the two fields is to raise some criticisms of XAI under a causal lens, serving as a watch-out. In this regard, a non-negligible subset of publications recognizes causality as a missing component of current XAI research to achieve robust and explainable systems. Other works highlight how the field of XAI (and AI by extension) suffers from certain innate issues, making the problem itself ill-posed. In a similar light, a further branch of works investigates different forms and desiderata of the XAI-produced explanations and their link with the causal theory. The second perspective tries to relate XAI and causality in a pragmatic way and sees the former as a means to get to the latter. Such works believe XAI has the potential to foster scientific exploration for causal inquiry. Indeed, by means of approaches able to identify pursue-worthy experimental manipulations, XAI may help scientists generate hypotheses about possible causal relationships to be tested. The third perspective turns the previous one around, claiming that causality is propaedeutic to XAI. Causal tools and metrics are exploited to implement XAI, and specific XAI approaches are brought back to their formal causal definition to improve generalization capabilities. Among the distinctive ideas of this perspective, getting access to the causal model of a system is a way to intrinsically explain the system itself.
We argue that the third of the perspectives is the one to be preferred to correctly combine the two areas of causality and XAI to advance the research toward reliable systems that are truly useful to humans. Overall, the novelty of our work lies in bridging the XAI-causation gap rigorously, highlighting areas of future development, and exposing limitations.
## 3 Rationale and Objective
### Ancient roots
The study of causation and explanation can be traced back to the ancient Greek philosophers. Aristotle, for instance, introduced causality as the foundation of explanation and argued that there must be a necessary and sufficient reason for every event (Hankinson, 1998).
As early as the 18th century, the empiricist David Hume formalized causation in terms of sufficient and necessary conditions: an event \(c\) causes an event \(e\) if and only if there are event-types \(C\) and \(E\) such that \(C\) is necessary and sufficient for \(E\). He, however, remained skeptical about humans' ability to explain and truly know any event. Indeed, he argued that we cannot perceive any necessary connection between cause and effect, but only events occurring in regular succession based on habit (Hume, 2003).
From the 1950s onward, some others also investigated scientific explanations. Initially, the "standard model" of explanation was deductive, following the Deductive-Nomological (DN) model by Hempel and Oppenheim (1948). An outcome was implied logically from universal laws plus initial conditions via deductive inference (e.g., explaining the volume of gas via the ideal gas law and some observations such as pressure). Regarding Hempel's viewpoint on causality, causal explanations are special cases of DN explanations, but not all laws and explanations are causal.
Later, Salmon (1984) developed a model in which good scientific explanations must be statistically relevant to the outcome to get explained. He argued that, in attempting to explain probabilistic phenomena, we seek not merely a high probability but screen for causal influence by removing system components to find ones that alter the probability. Salmon found causality ubiquitous in scientific explanation and was convinced that the time had come to put the "cause" back into "because". Although remaining vague as to how to attain it, he invited scientists to reconsider the role of causal relations as potentially fundamental constituents of adequate explanations.
### The need for XAI
Given the rapidly increasing interest in data mining for knowledge discovery, AI is becoming pervasive in our lives, and understanding and trusting its decisions has become imperative. This is further enforced, for instance, by the current guidelines for trustworthy AI by the European Commission1. Indeed, opacity in such decisions can lead to reluctance when adopting AI in a product, a decision process, or research. This, therefore, can result in missed opportunities in the use of AI to its fullest potential. To prevent this scenario, the research field of XAI aims to provide humans with explanations to understand the reasoning behind an AI system and its decision-making process. In other words, the goal of XAI is to enable end-users to understand the underlying explanatory factors of why an AI decision is taken. The term XAI was first introduced in Van Lent et al. (2004), but its popularity has spread across the literature only after the DARPA's XAI program (Gunning and Aha, 2019), reaching a certain degree of maturity to date (Guidotti et al., 2018; Du et al., 2019; Carvalho et al., 2019; Rudin, 2019; Arrieta et al., 2020; Molnar, 2020).
Footnote 1: [https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai](https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai)
XAI systems have been prioritized in different fields, such as healthcare, finance, education, and legal. In healthcare, XAI has been utilized for medical image analysis, acute critical illness prediction, intraoperative decision support systems, drug discovery, and treatment recommendations (Van der Velden et al., 2022; Lauritsen et al., 2020; Gordon et al., 2019; Jimenez-Luna et al., 2020). Regarding finance, popular applications of XAI are credit risk management and prediction, loan underwriting automation, and investment advice (Bussmann et al., 2021; Moscato et al., 2021; Sachan et al., 2020; Yang et al., 2021). In education, XAI has been applied in automatic essay scoring systems, educational data mining, and adaptive learning systems (Kumar and Boulanger, 2020; Alonso and Casalino, 2019; Khosravi et al., 2022), while digital forensics for law enforcement context represents an example in the legal domain (Hall et al., 2022).
Regardless of the application field, XAI is driven by the idea of making the reasoning process of AI transparent and, therefore, AI models more intelligible to humans. Accordingly, when it comes to explaining the logic of an inferential system or a learning algorithm, four aspects can be identified as the main driving motivations for XAI (Adadi and Berrada, 2018): (i) explain to justify (i.e., provide justifications for particular decisions to make sure they are not unfairly yielded by bias), (ii) explain to control (i.e., understand the system behavior for debugging vulnerabilities and potential flaws), (iii) explain to improve (i.e., understand the system behavior for enhancing its accuracy and efficiency), and (iv) explain to discover (i.e., learn from machines their knowledge on relationships and patterns).
### A causal perspective
Even though the wide literature on causality spans different interpretations, such as the causal potential theory (Xu, 2018) and Wiener-Granger causality (Granger, 1969), the one by computer scientist Judea Pearl is the one most popularly associated with AI. Pearl identifies some major obstacles that still undermine the ability of AI systems to reason in a way akin to humans, and argues that they can be overcome by equipping machines with causal modeling tools (Pearl, 2019). Among those obstacles is the lack of robustness of AI systems in recognizing or responding to new situations without being specifically programmed (i.e., adaptability), as well as their inability to grasp cause-effect relationships. Those abilities, instead, are innate features of human beings, who can communicate with, learn from, and instruct each other since all their brains reason in terms of cause-effect relationships (Pearl, 2018).
Pearl argues humans organize their knowledge of the world according to three distinct levels of cognitive ability, which he embodies in distinct rungs of the _Ladder of Causation_ (Pearl and Mackenzie, 2018). As Tab. 1 shows, the first rung is _Association_ and involves passive observation of data. Reasoning on this level cannot distinguish the cause from the effect and, although this might come as a surprise to some, Pearl argues that it is where conventional AI approaches to classification or regression stand today. The second rung is _Intervention_ and involves not just viewing what exists, but also changing it. However, reasoning on this rung cannot reveal what will happen in an imaginary world where some observed facts are bluntly negated. To this end, we need to climb to the third rung, i.e., _Counterfactuals_ (CF). It involves imagination since, to answer counterfactual queries, one needs to go back in time and change history. For instance, we may wonder whether it was, indeed, _turning the heating system on_ that caused a _warm apartment_ or whether it was rather, say, the outdoor weather.
Note that, in a somewhat confusing way, the term "counterfactual" may be encountered also in the XAI literature, where it applies to any instance with an alternative outcome. There, a _counterfactual explanation_ (CFE)
refers to the smallest change in an input that changes the prediction of an ML classifier (Wachter et al., 2017; Mothilal et al., 2020). This concept is quite distinct from the causal meaning of the term. In this regard, as a piece of clarification, we utilize _CFE_ and _CF_ to address, respectively, the XAI method and the causality concept.
In general, building models that represent causal relationships among variables from observations may be challenging without relying on assumptions that are hard to verify in practice, such as the absence of unmeasured confounding between the variables (Robins and Wasserman, 1999; Greenland and Mansournia, 2015). Nevertheless, Pearl's work was revolutionary in that it transformed causality from a notion clouded in mystery into a concept with logical foundations and defined semantics. The formalization of causality in mathematical terms within an axiomatic framework allowed the development of automatic computational systems for causal modeling. We refer the reader to Appendix A for some notations and terminology regarding Pearl's causality (and related concepts).
### Objective
This review investigates the role(s) of causality in the world of XAI today or, more broadly, the relationship between causality and XAI. Throughout the paper, we aimed to address an interdisciplinary audience, which is reflected in the use of an accurate (yet not overly technical) register, leaving the more technical parts (e.g., mathematical notations and supplementary details) to Appendices A and B.
Three main pieces of information led us to believe that those could be complementary fields, and, thus, motivated us to start our investigation. First, the concepts of causality and explanation have been jointly investigated since ancient times (Sec. 3.1). Second, even though they were born separately in the field of AI (_explanation_ as XAI and _causation_ as Pearl's causality theory), they share a common goal. Indeed, both fields feature human-centricity in AI systems and aim to ensure true usefulness to humans, be it by explaining in a human-comprehensible way what an AI system did, or by designing the system in such a way that it reasons like humans (Sec. 3.2 -- 3.3). Third, another "canary in the coal mine" for us was the presence of the same "counterfactual" term in both fields (Sec. 3.3).
To the best of our knowledge, Chou et al. (2022) are the only ones investigating a somewhat similar question, albeit with a narrower scope. They systematically review current counterfactual model-agnostic approaches (i.e., CFEs) studying how they could promote _causability_. Causability is a relatively new term representing "the extent to which an explanation of a statement to a human expert achieves a specified level of causal understanding with effectiveness, efficiency, and satisfaction in a specified context of use." (Holzinger et al., 2019). Since causability differs from causality, this is the first (and major) difference with our study, which covers the wide notion of causality itself. Our work also departs from Chou et al. (2022) in that they solely investigate CFE methods, while, in our analysis, we consider the whole corpus of XAI literature, which also includes (but is not limited to) CFEs.
## 4 Methods
This review aims at exploring the literature surrounding the relationship between causality and XAI, from both theoretical and methodological viewpoints. We conducted our work by adopting a structured process that involved the following: (i) specifying the eligibility criteria; (ii) detailing the information sources; (iii) illustrating the search strategy on specified databases; (iv) describing the selection process; (v) conducting a high-level analysis on the cohort of selected studies; (vi) extracting relevant data and information from studies; and (vii) synthesizing results.
| Level (rung) | Cognitive ability (activity) | Typical questions | Examples |
| --- | --- | --- | --- |
| Association | Seeing, observing (i.e., recognizing regularities in what is observed) | "What if I see...?" | "What is the probability that an apartment is warm if I see the heating system being on?" |
| Intervention | Doing (i.e., predicting the effect(s) of multiple intentional actions on the environment and choosing the best to produce a desired outcome) | "What if I do...?" | "What is the probability that the apartment will get warm if I turn on the heating system?" |
| Counterfactuals | Imagining, reasoning in retrospection, and understanding | "What if I had done...?" | "What would have happened to the indoor comfort of the apartment if I had kept the heating system off?" |

Table 1: The Ladder of Causation by Pearl and Mackenzie (2018).
We carried out our search on four popular bibliographic databases, _Scopus2_, _IEEE Xplore digital library_ (s. IEEE)3, _Web of Science_ (s. WoS)4, and _ACM Guide to Computing Literature_ (s. ACM)5, utilizing the following query:
Footnote 2: [https://www.scopus.com/](https://www.scopus.com/)
Footnote 3: [https://ieeexplore.ieee.org/](https://ieeexplore.ieee.org/)
Footnote 4: [https://clarivate.com/webofsciencegroup/solutions/web-of-science/](https://clarivate.com/webofsciencegroup/solutions/web-of-science/)
Footnote 5: [https://dl.acm.org/browse](https://dl.acm.org/browse)
(causal*) **AND** (expla*) **AND** ("Xai" OR "explainable artificial intelligence" OR "explainable AI") **AND** ("machine learning" OR "ai" OR "artificial intelligence" OR "deep learning")
Elements within brackets had to be present within at least one of the title, abstract, or keywords of the manuscript. Terms ending with the wildcard "*" matched all the terms with the specified common prefix. Among the obtained publications, we ensured that only peer-reviewed papers from conference proceedings and journals were included. Upon completion of the process of identification, screening, eligibility, and inclusion of articles, \(51\) publications formed the basis of our review. We describe the technical details of the whole study collection process in Appendix B.
In our study, we first performed a high-level analysis of the final cohort of records regarding keywords co-occurrence, then, we extracted information from the publications to answer our research question, and, finally, we collected any cited software solutions in a structured way.
### Keywords' co-occurrence analysis
Regarding the high-level analysis of the final cohort of records, we constructed a bibliometric network of articles' keywords co-occurrence, by utilizing the Java-based application _VOS Viewer6_. Bibliometric networks are methods to visualize, in the form of graphs, the collective interconnection of specific terms or authors within a corpus of written text. In our setting, we applied such networks to study the paired presence of articles' keywords within a corpus of scientific manuscripts.
Footnote 6: [https://www.vosviewer.com/](https://www.vosviewer.com/)
### Research question analysis
For each of the papers that were included in the review, we identified the most relevant aspects on a conceptual level. According to the research question, we searched for any theoretical viewpoints and comments on the possible ways in which causality and XAI may relate, including formalization frameworks and insights from AI, cognitive, and philosophical perspectives.
Based on the collected information, we performed a topic clustering procedure to organize the literature in related concepts and gain a global view of the field. Selecting cluster topics for a multidisciplinary field as that of causality in the broad field of XAI proved challenging. Topics that are too general would result in an excessively vague and superficial division of papers and therefore be of little use in answering the research question. On the other hand, topics that are too specific would create many quasi-empty clusters, resulting in an improper division, which lacks abstraction capabilities and prevents an overall view of the field. Therefore, we iteratively refined the clusters during a trial-and-error process.
### Software tools collection
During the analysis of the full-text manuscripts, we kept track, in a structured collection, of any cited software solutions (e.g., tools, libraries, packages), whenever they were used to automate causal tasks. Specifically, for each one, we analyzed: (i) the URL of the corresponding web-page; (ii) whether the software was commercial or with an open-source license, according to the Open Source Initiative7; (iii) the name of the company for cases of commercial software; (iv) the eventual release publication that launched the software; (v) whether the frontend consisted in a command line interface (CLI) or a graphical user interface (GUI); and, finally, (vi) the main field of application and purpose.
Footnote 7: [https://opensource.org](https://opensource.org)
## 5 Results of the keywords' co-occurrence analysis
As a result of the high-level analysis of the final cohort of records, we obtained the bibliometric network shown in Fig. 2. The items (i.e., nodes) of the network represent terms (specifically, articles' keywords); the link (i.e., edge) between two items represents a co-occurrence relation between two keywords; the strength of a link indicates the number of articles in which two keywords occur together; and, finally, the importance of an item is given by the number of links of that keyword with other keywords and by the total strength of the links of that keyword with other keywords. Accordingly, more important keywords are represented by bigger circles in the network visualization, and more prominent links are represented by larger edges between keywords.
This visualization provides insight into how and to what extent the literature relates different research concepts, and it helped us to appreciate the multidisciplinary nature of our research question. Moreover, it is possible to marginalize the scope of specific keywords by identifying the terms to which they relate, as shown in Figs. 3a-b for the keywords _causality_ and _counterfactual_, respectively. The relevance and wide scope of the first are justified by the structure of our query, where it was an obligatory search term. Regarding the latter, its scope and relevance represent the central role of the term in both the research fields of causality and XAI.
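To make this construction concrete, the following minimal sketch (in Python, using networkx) builds a keyword co-occurrence network from per-paper keyword lists. The keyword sets are invented purely for illustration, and VOSviewer's clustering and visual layout are omitted; only the notions of link strength and item importance described above are reproduced.

```python
# Minimal sketch of a keyword co-occurrence network (keyword sets are invented).
from itertools import combinations
import networkx as nx

papers = [  # hypothetical per-paper keyword sets
    {"xai", "causality", "counterfactual"},
    {"xai", "machine learning", "counterfactual"},
    {"causality", "bayesian network", "machine learning"},
]

G = nx.Graph()
for keywords in papers:
    for a, b in combinations(sorted(keywords), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # link strength = number of co-occurrences
        else:
            G.add_edge(a, b, weight=1)

for node in G.nodes:
    n_links = G.degree(node)                       # number of links
    strength = G.degree(node, weight="weight")     # total link strength
    print(f"{node}: {n_links} links, total strength {strength}")
```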
## 6 Results of the research question analysis
This review allowed us to understand how the theory of causality could intertwine with the XAI literature and, specifically, which methodologies and theoretical frameworks could be adopted to approach the bridge between these two fields. We conceived three main topic clusters of studies, which are presented together with their possible sub-clusters in Fig. 4. Specifically, they embody the following perspectives:
* _critics to XAI under the causality lens_;
* _XAI for causality_;
* _causality for XAI_.
This procedure led us to identify which of the three possible perspectives is the preferable one in order to correctly combine the two areas of causality and XAI. We discuss them in Sec. 6.1 -- 6.3.
Figure 2: Bibliometric network of papers’ keywords for the cohort of publications included in the review.
### Critics to XAI under the causality lens
This first perspective utilizes a causal viewpoint to identify some issues in current XAI. The focus of such papers is either: (i) to point out the inability of XAI to consider causality, (ii) to highlight the profound limitations of current (X)AI both on a methodological and a conceptual level, or (iii) to investigate the forms of the produced explanations.
#### 6.1.1 Lack of causality
A fundamental aspect that hinders the value of classical AI models' inference and explainability methods is the lack of a foundation in the theory of causality. Indeed, classical ML and DL predictive models are based on the correlation found among training data instead of true causation. This might be of particular concern in specific fields, such as epidemiology, that have always been grounded in the theory of causation (Broadbent and Grote, 2022). Moreover, this lack of causality makes models more easily affected by adversarial attacks and less valuable for decision-making (Molnar et al., 2020). Since the parameters and predictions of classical data-driven AI models cannot be interpreted causally, they should not be used to draw causal conclusions.
As Naser (2021) points out, meeting specific performance metrics does not necessarily mean that an AI/ML model captures the physics behind a phenomenon. In other words, there is no guarantee that the found correlations map to causal relations between input data and final decisions. For this reason, determining whether such models reflect the true causal structure is crucial (Ryo et al., 2021). This inability of today's ML/DL to grasp causal links reflects also on XAI, constituting a major broad challenge to the ability of AI systems to provide sound explanations.
Hamon et al. (2022) stress how this poses serious challenges to the possibility of satisfactory, fair, and transparent explanations. Regarding the soundness of the generated explanations, Watson et al. (2022) demonstrate that they are volatile under changes in model training that are orthogonal to the classification task and model structure. This raises further questions about trust in DL models which just rely on spurious correlations that are made visible via explanation methods. Since causal explanations cannot be provided for AI yet, explanatory methods are fundamentally limited for the time being.
#### 6.1.2 Pitfalls of (X)AI
In addition to the weaknesses due to the lack of causality, some works highlight how the fields of AI and XAI may suffer from some innate issues. On a methodological level, Molnar et al. (2022) present a number of pitfalls of
Figure 3: The isolated connections from Fig. 2 for the terms _causality_ (a) and _counterfactual_ (b).
local and global model-agnostic interpretation techniques, such as in cases of poor model generalization, interactions between features, or unjustified causal interpretations. At a deeper level, some researchers raise concerns about XAI based on its very nature. For instance, Landgrebe (2022) argues that the human inability to interpret the behavior of deep models in a more objective manner still restricts XAI methods to providing merely a partial, subjective interpretation. Undeniably, deep neural networks solve their classification tasks in a manner that differs completely from the way humans interpret text, language, sounds, and images. For instance, convolutional neural networks (CNNs) use features of the input space to perform their classifications, which are different from those humans use. What is more, we do not understand how humans themselves classify texts or images or conduct conversations. Indeed, as of now, human or physical behavior can only be emulated by creating approximations, but approximations cannot be understood any more than complex systems can be.
Under similar considerations, Leventi-Peetz et al. (2022) study the scope and sense of explainability in AI systems. In their view, it is impossible or unwise to follow the intention of making every ML system explainable. Indeed, even domain experts cannot always provide explanations for their decisions and, furthermore, on AI systems much higher demands are made than on humans when they have to make decisions.
#### 6.1.3 Form of the explanations
These works explore different forms, qualities, and desiderata of the explanations produced by XAI methods and their link with causality. Depending on the application domain, less accurate yet simpler explanations may be preferable to convey a proper understanding of an AI decision. For instance, in Natural Language Generation,
Figure 4: The included studies are classified according to the three main perspectives on how causality and XAI may be related: _Critics to XAI under the causality lens, XAI for causality_, and _Causality for XAI_. Next to each of them are the possible sub-clusters.
a narrative explanation where facts are linked with causal relations is probably a better explanation for narrative-inclined individuals, even though it may not be the most accurate way to describe how the model works (Reiter, 2019). Similarly, in image classification via CNNs, a simpler visualization (e.g., natural dataset examples) may lead to an equal causal understanding of unit activation instead of using complex activation maximization approaches (Zimmermann et al., 2021).
Shimojo et al. (2020) examine what a good explanation is by drawing on psychological evidence regarding two explanatory virtues: (i) the number of causes enforced in an explanation8, and (ii) the number of effects invoked by cause(s) in an explanation9. The authors report that, in a user study, the two virtues had independent effects, with a higher impact for the first one. Similarly, Kim et al. (2021) discuss several desiderata of XAI systems, among which is that they should adjust explanations based on the knowledge of the explainee, to match their background knowledge and expectations. This is further stated by Kovalerchuk et al. (2021), who define as "quasi-explanations" those explanations using terms that are foreign to a certain application domain (e.g., medicine, finance, law), such as distances, weights, and hidden layers, and that consequently make sense only to data scientists. Kim et al. (2021) further state that explanations are considered to be _causal_ when they arise from the construction of causal models, serving as the basis for recreating a causal inference chain to (i.e., a "recipe" for reconstructing) a prediction. According to the authors, intelligent systems must be able to provide causal explanations for their actions or decisions when they are critical or difficult to understand. When a causal explanation answers a "why" question, it can be referred to as a _scientific_ explanation. In general, answers to questions such as "How does a personal computer work?" are not considered to be scientific explanations. Such answers are still part of a scientific discipline, but they are descriptive rather than explanatory.
Footnote 8: This is sometimes referred to as _simplicity_ and is conforming with the _Occam’s razor_ principle, according to which, an event should not be explained by more causes than necessary (Jeffreys and Berger, 1992).
Some other works argue that causal explanations are not the only useful ones: many types of non-causal explanations (e.g., semantic, contrastive, justificatory) may also help (Sovrano et al., 2019). A pilot user study from Taschdjian (2020) supports this idea, revealing that participants preferred causal explanations over the others only when presented in chart form, whilst they were the least favorite choice when in text form.
### XAI for causality
Only three papers openly support a pragmatic line of thinking according to which XAI is a basis for causal inquiry. Indeed, such works recognize certain limits of current XAI methods but approach the discussion pragmatically.
Zednik and Boelsen (2022) discuss the role of post-hoc analytic techniques from XAI in scientific exploration. The authors show that XAI techniques, such as CFEs, can serve as a tool for identifying potentially pursuit-worthy experimental manipulations within a causal framework and, therefore, for recognizing causal relationships to investigate. In this regard, the authors remark on an asymmetry between the role of CFEs in _industry_ and in _science_. The following two hypothetical scenarios clarify this idea:
* _industry_: a bank decides whether to accept or reject a loan application based on an AI agent. A CFE for a rejection case has revealed that doubling the client's income would have led to the acceptance of the loan. Here, the AI agent is not trying to model reality, but it is reality itself. Indeed, a change in the client's income would actually change the application outcome, meaning that CFEs are _perfect_ guides to causal inference.
* _science_: an AI agent determines the probability of type-2 diabetes based on patients' features. A CFE for a high-probability case has revealed that losing weight would decrease that probability. Here, the AI agent is trying to model the biological reality of the problem, but still, it remains an approximation. Indeed, it is still possible that losing weight does not actually reduce the probability of type-2 diabetes. That is to say that a change in the model's behavior does not actually change the way the world works, but at best constitutes a changed representation of how the world could possibly work. In this light, CFEs are _imperfect_ guides to causal inference.
All in all, it is just because the relevant ML models might not perfectly adhere to reality that the generated XAI explanations only foster scientific _exploration_ rather than scientific _explanation_. At most, products of XAI may be thought of as starting points to study potentially causal relationships that have yet to be confirmed.
Similarly, Medianovskyi and Pietarinen (2022) consider the outputs of the current XAI methods, such as CFEs, to be far from conclusive explanations. Rather, they are initial sketches of possible explanations and invitations to explore further. Those sketches must go through validation processes and experimental procedures before satisfactorily answering the "why" questions, long sought after by XAI.
According to the review by Antoniadi et al. (2021), XAI can help to shed some light on causality. Indeed, since causation involves correlation, an explainable ML model could validate the results provided by causality inference techniques. Additionally, XAI can provide a first intuition of (i.e., generate hypotheses about) possible causal relationships that scientists could then test (Arrieta et al., 2020; Lipton, 2018).
### Causality for XAI
This third perspective is driven by the idea that causality is propaedeutic to XAI. Indeed, these works either: (i) exploit causality-based concepts to support XAI, (ii) restore the causal foundation of CFEs, or (iii) argue that accessing the causal model of a system is intrinsically explaining the system itself.
#### 6.3.1 Causal tools for supporting XAI
Such papers interpret the role of causality in XAI in the sense that some causal concepts, such as structural causal model (SCM) and do-operator (A.3) and causal metrics, may bring useful tools for explainability and for finding the causes of AI predictions. Regarding the use of **Structural causal models** to foster XAI, Reimers et al. (2020) reduce DL to a basic level and frame the constitutional structure of a CNN model into an SCM. In this setting, the random variables represent, for instance, the network's weights and the final prediction, while the functions linking the variables are the _training function_ (from labeled images to the network's weights), and the _inference function_ (from unlabeled images and weights to the prediction). By doing so, the authors aim to establish whether a feature is relevant to a CNN prediction by leveraging causal inference and Reichenbach's Common Cause Principle10.
Footnote 10: According to Reichenbach (1956), if two variables A and B are dependent, then there exists a variable C that causes A and B. In particular, C can be identical to A or B meaning that A causes B or B causes A.
Lazzari et al. (2022), in order to predict employee turnover, utilize the concept of SCM to revisit and equip the Partial Dependence Plot (PDP)11 method with causal inference properties. Their SCM-based PDP can now go beyond correlation-based analyses and reason about causal interventions, allowing one to test causal claims around factors. This, in turn, provides an intuitive visual tool for interpreting the results and achieving the explainability of automatic decisions.
Footnote 11: A visual tool introduced by Friedman (2001), commonly used for model-agnostic XAI, that shows the marginal effect of one feature on the predicted outcome of a system.
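For reference, the following is a minimal sketch of the standard, purely observational partial dependence computation described in the footnote above (Friedman, 2001), not the SCM-equipped causal variant of Lazzari et al. (2022). The names `fitted_model`, `X`, and the grid are placeholders for any fitted predictor with a `.predict` method and its training data.

```python
# Minimal sketch of a standard (purely observational) partial dependence computation.
import numpy as np

def partial_dependence(model, X, feature_idx, grid):
    """Average prediction when column `feature_idx` is forced to each grid value."""
    values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = v              # overwrite the feature for every sample
        values.append(np.mean(model.predict(X_mod)))
    return np.array(values)

# Usage (placeholders): `fitted_model` is any estimator exposing .predict, `X` a 2-D array.
# grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
# pdp_curve = partial_dependence(fitted_model, X, feature_idx=0, grid=grid)
```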
Regarding _do-operator_, some authors employ this concept to bring the theory of Shapley values a step further. A fundamental component of Shapley values is to evaluate the reference distribution of dropped (i.e., 'out-of-coalition') features, which has implications on how Shapley values are estimated since this helps define the value function. Based on this distribution, the following variants of Shapley values exist (Watson, 2022; Heskes et al., 2020): _marginal_ Shapley values (they ignore relations among features and are used to discover the model's decision boundary), _conditional_ Shapley values (they consider feature dependencies and condition by observation), and _interventional_ Shapley values. The latter was introduced by Janzing et al. (2020) who replaced conventional _conditioning by observation_ with _conditioning by intervention_ (_do-operator_).
Extending this concept, Heskes et al. (2020) introduce _causal_ Shapley values by explicitly considering the causal relationships between the data in the real world to enhance the explanations. Using the interventional distribution is optimal when, with access to the underlying SCM, one seeks explanations for causal data-generating processes. These methods are required when seeking to use XAI for discovery and/or planning, as they seem to provide sensible, human-like explanations that incorporate causal relationships in the real world.
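To illustrate how these variants differ only in the treatment of "out-of-coalition" features, the sketch below computes exact Shapley values with a pluggable value function. The marginal variant shown draws dropped features from background rows; conditional and interventional/causal variants would instead replace that sampling with conditioning by observation or by intervention on an SCM, respectively. All function and variable names are illustrative and not taken from any specific library.

```python
# Minimal sketch: exact Shapley values with a pluggable value function. The marginal,
# conditional, and interventional/causal variants differ only in how value_fn treats
# the features outside the coalition S.
from itertools import combinations
from math import factorial

def shapley_values(n_features, value_fn):
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                S = set(subset)
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[i] += w * (value_fn(S | {i}) - value_fn(S))
    return phi

def marginal_value(model, x, background, S):
    """Marginal variant: out-of-coalition features are drawn from background rows,
    ignoring any dependence on the fixed features (no conditioning, no do-operator)."""
    total = 0.0
    for row in background:
        z = [x[j] if j in S else row[j] for j in range(len(x))]
        total += model(z)
    return total / len(background)

# Usage (placeholders): `model` is any callable, `x` the instance to explain,
# `background` a list of reference rows.
# phi = shapley_values(len(x), lambda S: marginal_value(model, x, background, S))
```

With more than a handful of features the exact enumeration grows exponentially, which is why practical implementations rely on sampling approximations.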
Finally, some other works borrow **metrics from the causal theory** to aid XAI, and, specifically, Probability of Necessity (PN) and Probability of Sufficiency (PS) from Glymour et al. (2016) and the metric of _responsibility_ from Chockler and Halpern (2004). Regarding PN and PS, two works investigate their implications for XAI. Indeed, such probabilities, often addressed as "probabilities of causation", play a major role in all "attribution" questions. Watson et al. (2021) formalize the relationship between existing XAI methods and the probabilities of causation. For instance, they highlight the role of PN and PS in feature attribution methods and CFEs. Regarding the former, the authors reformulate the theory of Shapley values in their framework and show how the value function (i.e.,
the payoff associated with a feature subset) precisely corresponds to the PS of a factor. Regarding the latter, the authors rewrite the CFE optimization problem with an objective based on the PS of the factor with respect to the opposite of the outcome. Moreover, Tan et al. (2022) borrow PN and PS and adapt them to evaluate the necessity and sufficiency of the explanations extracted for a graph neural network (GNN). This makes it possible to conduct a quantitative evaluation of GNN explanations even without ground-truth explanations for real-world graph datasets.
On the other hand, regarding the metric of responsibility, Chockler et al. (2021) propose DC-Causal, a greedy, compositional, perturbation-based approach to computing explanations for image classification. It leverages causal reasoning in its feature masking phase with the goal of finding causes in input images by causally ranking parts of the input image (i.e., superpixels) according to their responsibility for the classification. In addition to responsibility, Debbi (2021) borrows from Chockler and Halpern (2004) the concept of blame to compute visual explanations for CNN decisions. The author abstracts the CNN model into a causal model by virtue of similarity in a hierarchical structure, and filters are considered as actual causes for a decision. First, each filter is assigned a degree of responsibility (i.e., weight) as a measure of its importance to the related class. Then, the responsibilities of these filters are projected back to compute the blame for each region in the input image. The regions with highest blame are returned then as the most important explanations.
PN is the probability that the garden would not have got wet had the sprinkler not been activated (\(Y_{0}=0\)), given that, in fact, the garden did get wet (\(Y=1\)) and the sprinkler was activated (\(X=1\)). Mathematically, this becomes: \(PN=P(Y_{0}=0|X=1,Y=1)\). In other words, this probability quantifies to what extent activating the sprinkler is necessary to get the garden wet, and consequently if other factors (e.g., rain) may have caused the wet garden.
PS is the probability that the garden would have got wet had the sprinkler been activated (\(Y_{1}=1\)), given that the sprinkler had not in fact been activated (\(X=0\)), and the garden did not get wet (\(Y=0\)). Mathematically, this becomes: \(PS=P(Y_{1}=1|X=0,Y=0)\). In other words, this probability quantifies to what extent activating the sprinkler is sufficient to wet the garden, and consequently, if there may exist scenarios (e.g., hardware malfunctioning) where activating the sprinkler does not wet the garden.
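To make these two quantities concrete, the following toy Monte Carlo sketch estimates PN and PS for the sprinkler example by simulating the exogenous noise and applying the interventions defined above. The structural equation and the probabilities of rain and of a hardware fault are invented purely for illustration and are not taken from any of the reviewed papers.

```python
# Toy Monte Carlo estimate of PN and PS for the sprinkler example. The structural
# equation and the probabilities of rain and of a hardware fault are invented.
import random

def wet(sprinkler_on, rain, fault):
    return int((sprinkler_on and not fault) or rain)

random.seed(0)
pn_num = pn_den = ps_num = ps_den = 0
for _ in range(100_000):
    rain = random.random() < 0.3        # exogenous: it rains
    fault = random.random() < 0.1       # exogenous: the sprinkler malfunctions
    x = random.random() < 0.5           # observed sprinkler activation

    y = wet(x, rain, fault)
    if x and y:                         # factual: sprinkler on, garden wet
        pn_den += 1
        pn_num += 1 - wet(False, rain, fault)   # counterfactual do(X=0): dry only if no rain
    if (not x) and (not y):             # factual: sprinkler off, garden dry
        ps_den += 1
        ps_num += wet(True, rain, fault)        # counterfactual do(X=1): wet unless faulty

print("PN ~", pn_num / pn_den)   # about 0.63 / 0.93, i.e. roughly 0.68 with these numbers
print("PS ~", ps_num / ps_den)   # about 0.9 (the probability of no hardware fault)
```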
Responsibility is a quantification of causality, attributing to each actual cause its degree of responsibility \(\frac{1}{1+k}\), which is based on the size \(k\) of the smallest contingency feature set required to obtain a change in the prediction (i.e., creating a counterfactual dependence). The degree of responsibility is always between \(0\), for variables that have no causal influence on the outcome (\(k\rightarrow\infty\)), and \(1\), for counterfactual causes (\(k=0\)). For instance, if one other feature must be changed before the outcome counterfactually depends on the cause, then \(k=1\) and the responsibility is \(\frac{1}{2}\). Responsibility extends the actual causality framework of Halpern and Pearl (2005).
#### 6.3.2 Causal counterfactual explanations
As noted in Sec. 3.3, the _counterfactual_ concept seems to belong both to the XAI literature and to the causality literature. Some authors remark on how CFEs and CF are two separate concepts (Crupi et al., 2022) and, strictly speaking, some would not even call the former _counterfactuals_, precisely to contrast the causal perspective (Dash et al., 2022). Interestingly, however, these two seemingly separate concepts may be bridged in what we could name structural causal explanations. Indeed, the papers in this sub-cluster present methods for generating CF based on their formal causal definition, restoring the causal underpinning to CFEs by using the concept of SCM and Pearl's CF three-step "recipe" (Appendix A.3).
In their quest to explain an image classifier's output and its fairness using counterfactual reasoning, Dash et al. (2022) propose ImageNetGGen, a system that combines knowledge from an SCM over image attributes and uses an inference mechanism in a generative adversarial network-like framework to generate counterfactual images. The proposed architecture directly maps to Pearl's three steps: (i) for _abduction_, an encoder infers the latent vector of an input image coupled with its attributes; (ii) for _action_, a subset of desired attributes is changed and, accordingly, the values of their descendants in the SCM are updated; (iii) for _prediction_, a generator takes the latent vector together with the modified set of attributes and produces a counterfactual image. A subset of work focuses on a specific aim of the XAI research tightly bound with counterfactual reasoning, i.e., _recourse_. Recourse can be seen as the act of recommending a set of feasible actions to assist an individual to achieve a desired outcome. Karimi et al. (2021) argue that the conventional, non-causal CFEs are unable to convey a relevant recourse to the end-user of AI algorithms since they help merely understand rather than act (i.e., inform an individual to where they need to get, but not how to get there). Shifting from explanation to _minimal intervention_,
the authors leverage causal reasoning (i.e., tools of SCMs and structural interventions) to incorporate knowledge of the causal relationships governing the world in which actions will be performed. This way, the authors are able to compute what they refer to as _structural CF_ by performing the _abduction-action-prediction_ steps and provide _algorithmic recourse_. Galhotra et al. (2021) introduce Lewis, a principled causality-based approach for explaining black-box decision-making systems. They propose to achieve _counterfactual recourse_ by solving an optimization problem that searches for minimal interventions on a pre-specified set of actionable variables that have a high probability of producing the algorithm's desired future outcome. Notably, the authors propose a GUI that implements Lewis, of which they show a demo in Wang et al. (2021). Crupi et al. (2022) also contribute to the recourse objective by proposing Cells, a new post-hoc method to generate causality-grounded CFEs and recommendations. It involves the creation of an SCM in the latent space, the generation of causality-grounded CFEs, and their translation to the original feature space.
#### 6.3.3 Accessing the causal model is explaining
Part of the work relates to the common thought that accessing the causal model of a system intrinsically explains the system itself. Under this view, two fundamental observations are supported:
* when a model is built on a causal structure, it is inherently an interpretable model;
* making the inner workings of a causal model directly observable, such as through a directed acyclic graph (DAG) (A.1), makes the model inherently interpretable.
Much of the causality theory focuses on explaining observed events, that is, inferring causes from effects. According to its retrospective attribution, causality lies at the heart of explanation-based social constructs such as explainability and, therefore, causal reasoning is an important component of XAI (Wu et al., 2021).
Ibrahim et al. (2020) try to fill the lack in the causality literature of automatic and explicit operationalizations to enable explanations. The authors propose an extensible, open-source, interactive tool (Actual Causality Canvas) able to implement three main activities of causality (causal modeling, context setting, and reasoning) in a unifying framework. According to the authors, what Canvas can provide, through answers to causal queries, largely overlaps with the ultimate goal of XAI, which is providing the end-user with explanations of why particular factors occurred. Hoque and Mueller (2021) propose Outcome Explorer, an interactive framework guided by causality, that allows expert and non-expert users to select a dataset, choose a causal discovery (CD) algorithm for structure discovery (A.2), generate (and eventually refine) a causal diagram, and interpret it by setting values to the input features to observe the changes in the outcome. Katz et al. (2017) propose an XAI system that encodes the causal relationships between actions, intentions, and goals from an autonomous system and explains them to a human end-user with a cause-effect reasoning mechanism (i.e., causal chains). Chatterjee and Dethlefs (2020) exploit the representational power of CNNs with attention, to discover causal relationships across multiple features from observed time-series and historical error logs. The authors believe causal reasoning can enhance the reliability of decision support systems making them more transparent and interpretable.
A subset of publications sees CD as the most appropriate way of operationalizing the idea that accessing the causal model of a system intrinsically explains the system itself. In this regard, all of them utilize **Bayesian networks (BNs)** (A.2) as the methodological tool. Since establishing unique directions for edges based on passive evidence alone may be challenging, knowledge-based constraints can help orient arrows to reflect causal interpretations (Cox Jr, 2021). In line with this, some works perform CD with BNs in a mixed approach: on the one hand, they leverage knowledge from domain-experts to outline the causal structure of the system (i.e., finding nodes and related edges); on the other hand, they fit the model parameters on observed, real-world data.
Sahoh and Choksuriwong (2022) propose a new system to support emergency management (e.g., terrorist events) based on the Deep Event Understanding perspective, introduced in an earlier work of theirs (Sahoh and Choksuriwong, 2021). Deep Event Understanding aims to model expert knowledge based on the human learning process and offers explanation abilities that mimic human reasoning. Their model utilizes BNs based on social sensors as an observational resource (i.e., text data from Twitter), with prior knowledge from experts to infer and interpret new information. Their approach helps in recognition of an emergency event and in the uncovering of its possible causes, contributing to the explanation of "why" questions for decision-making.
Sahoh et al. (2022) propose discovering cause-effect ML models for indoor thermal comfort in Internet of Things (IoT) applications. They employ five different CD algorithms and show how these may converge to the ground-truth SCM of the problem variables obtained from domain experts. Kliangkhlao et al. (2022) introduce
a BN model for agricultural supply chain applications, initially constructed from causal assumptions from expert qualitative knowledge, which conventional ML cannot reasonably conceive. Therefore, a data-driven approach using observational evidence is employed to encode these causal assumptions into quantitative knowledge (i.e., parameter fitting). The authors report their system constitutes a framework that is able to provide reasonable explanations of events for decision-makers.
In Zapaishchykova et al. (2021) the authors leverage the respective strengths of DL for feature extraction and BNs for causal inference, achieving an automatic and interpretable system for grading pelvic fractures from CT images. The BN model is constructed upon variables extracted with the neural network, together with a variable from the clinical practice (i.e., patient age). By doing so, the authors believe that the framework provides a transparent inference pipeline supplying fracture location and type, by establishing causal relationships between trauma classification and fracture presence.
Yang et al. (2022) propose a new process monitoring scheme based on BNs to explain (diagnose) a detected fault and promote decision-making. Their system allows the identification of the root cause (i.e., labeling the abnormal variables) so that the result of the analysis can be linked to the repairing action, reducing the investigation time. Among one of their use cases, the authors fit a BN model on observed, real-world data for manufacturing fault events. During this CD process, they employ a blacklist obtained from domain experts to exclude causally-unfeasible relationships.
## 7 Results of software tools collection
We hereby present a summary of the main data mining software tools collected within the cohort of papers. Table 2 comprises tools for performing CD with BNs (i.e., PySMILE12, CausalNex13, bnlearn14, CompareCausalNetworks15, CaMML16, Python Causal Discovery Toolbox17, and Tetrad18), creating and analyzing SCMs (i.e., IBM SPSS Amos19, lavaan20, and semopy21), and editing and analyzing DAGs (i.e., DAGitty22). We believe this list of software solutions may be of interest to AI practitioners in helping them save valuable time when choosing the right tool to automate causal tasks.
Footnote 12: [https://www.bayesfusion.com/smile/](https://www.bayesfusion.com/smile/)
Footnote 13: [https://causalnex.readthedocs.io/en/latest](https://causalnex.readthedocs.io/en/latest)
Footnote 14: [https://www.bnlearn.com](https://www.bnlearn.com)
Footnote 15: [https://cran.r-project.org/web/packages/CompareCausalNetworks/](https://cran.r-project.org/web/packages/CompareCausalNetworks/)
Footnote 16: [https://baysesian-intelligence.com/software/](https://baysesian-intelligence.com/software/)
Footnote 17: [https://fentechositions.github.io/CausalDiscoveryToolbox/html/index.html](https://fentechositions.github.io/CausalDiscoveryToolbox/html/index.html)
Footnote 18: [https://htmlperview.github.io/?https://github.com/cmu-phil/tetrad/blob/development/docs/manual/index.html](https://htmlperview.github.io/?https://github.com/cmu-phil/tetrad/blob/development/docs/manual/index.html)
Footnote 19: [https://www.ibm.com/products/structural-equation-modeling-sem](https://www.ibm.com/products/structural-equation-modeling-sem)
Footnote 20: [https://cran.r-project.org/web/packages/lavaan/index.html](https://cran.r-project.org/web/packages/lavaan/index.html)
Footnote 21: [https://semopy.com/](https://semopy.com/)
Footnote 22: [http://www.dagitty.net/](http://www.dagitty.net/)
The most popular choice is an open-source license type, and this reflects the great interest in sharing code and information across the AI research community. The first benefit of that is flexibility. Researchers often need to access the source code of software implementations to eventually customize its functionalities according to a desired (yet not implemented) purpose. This would be highly unfeasible with closed and commercial software. Another advantage of having open-source implementations is software security. According to Linus's law, "given enough eyeballs, all bugs are shallow" (Raymond, 1999). That is, when all the source code for a project is made open to professionals worldwide, it is more likely that security checks could discover eventual flaws.
Furthermore, Table 2 shows that the CLI is the preferred frontend interface across such solutions. This aspect also reflects the AI research community viewpoint. Opting for CLI over the GUI brings some advantages, such as faster and more efficient computing, easier handling of repetitive tasks, lighter memory usage, and availability of the history of commands. On the other hand, using CLI involves a steeper learning curve associated with memorizing commands and complex arguments, together with the need for correct syntax. This may explain why GUI is preferred in cases where the end-user does not have a programming background. Typical examples of that include physicians in healthcare facilities or product managers in finance companies, who prefer, in general, a more user-friendly product.
## 8 Conclusion
The concepts of causation and explanation have always been part of human nature, from influencing the philosophy of science to impacting the data mining process for knowledge discovery of today's AI. In this study, we investigated the relationship between causality and XAI, by exploring the literature from both theoretical and methodological viewpoints, to reveal whether a dependent relationship between the two research fields exists. We provided a unified view of the two fields by highlighting which methodologies could be adopted to approach the bridge between these two fields and uncovering possible limitations. As a result of the analysis, we found and formalized three main perspectives.
The _Critics to XAI under the causality lens_ perspective analyses how the lack of causality is one of the major limitations of current (X)AI approaches, as well as the "optimal" forms to provide explanations. Regarding the former, traditional AI systems are only able to detect correlation instead of true causation, which affects the robustness of models against adversarial attacks and of the produced explanations. This is of concern since pure associations are not enough to accurately describe causal effects. Regarding the latter, optimal explanations may be characterized by being expressed according to the explainee's knowledge and domain terminology and being able to explain many effects with few causes. However, it is debated whether causal explanations (i.e., causal inference chains to a prediction) are the only useful ones in the XAI landscape. This first perspective states the problem and serves as a cautionary note.
The _XAI for causality_ perspective openly claims that XAI may be a basis for further causal inquiry. Despite the recognized limits of XAI explanations, they may be pragmatically thought of as starting points to generate hypotheses about possible causal relationships that scientists could then confirm. That is, XAI can only foster scientific exploration, rather than scientific explanation. Although underrepresented in the final cohort, this perspective suggests a really thoughtful idea in our opinion.
The _Causality for XAI_ perspective supports the idea that causality is propaedeutic to XAI. This is realized in three manners. First, some causal concepts (i.e., SCM and _do_-operator) are leveraged to revisit existing XAI methods to empower them with causal inference properties. Second, the formal causal definition of CF (Sec. 3.3) is invoked to generate causal-CFEs using the SCM tool, which may also enable recourse. Third, and lastly, it is argued that, when a model is built on a causal structure, it is inherently an interpretable model. In a related way, making the inner workings of a causal model directly observable (e.g., through a DAG) makes the model inherently interpretable.
| Name | License | Release | Frontend | Main purpose |
| --- | --- | --- | --- | --- |
| _bnlearn_ | Open-source (GPL) | Scutari (2010) | CLI (R) | BNs for CD |
| _CaMML_ by Bayesian Intelligence Pty Ltd | Open-source (BSD) | n.a. | CLI (Bash) and GUI | BNs for CD |
| _CausalNex_ by QuantumBlack, AI by McKinsey | Open-source (Apache 2.0) | n.a. | CLI (Python) | BNs for CD |
| _CompareCausalNetworks_ | Open-source (GPL) | Heinze-Deml et al. (2018) | CLI (R) | BNs for CD |
| _DAGitty_ | Open-source (GPL) | Textor et al. (2016) | CLI (R) and GUI | Create and analyze causal DAGs |
| _IBM SPSS Amos_ by IBM Corp. | Commercial | n.a. | GUI | Create and analyze SCMs |
| _lavaan_ | Open-source (GPL) | Rosseel (2012) | CLI (R) | Create and analyze SCMs |
| _PySMILE_ by BayesFusion LLC | Commercial | n.a. | CLI (Python) | BNs for CD |
| _Python Causal Discovery Toolbox_ by Fentech | Open-source (MIT) | Kalainathan et al. (2020) | CLI (Python) | BNs for CD |
| _semopy_ | Open-source (MIT) | Igolkina and Meshcheryakov (2020); Meshcheryakov et al. (2021) | CLI (Python) | Create and analyze SCMs |
| _Tetrad_ | Open-source (GPL) | Ramsey et al. (2018) | GUI | BNs for CD |

Table 2: Software tools within the cohort of papers useful to automate causal tasks. BSD: Berkeley Software Distribution, CD: causal discovery, CLI: command line interface, GPL: General Public License, GUI: graphical user interface.
Among the three main perspectives, we believe _Causality for XAI_ to be the most promising one. Naturally, it comes with limitations. Much work in causal modeling is based on specific and (by far) non-unique causal views of the problems at hand. Interventions and CF make sense as long as the specified causal graph makes sense, which may hinder the generalization of their results. Overall, their causal claims depend on strong and often non-testable assumptions about the underlying data-generating process. On the other hand, however, this may be in line with what already happens in our life, and we should not request from AI more than we request from human beings. Another weak point is the interpretability of a causal model with hundreds of variables. In this scenario, a DAG would encode too much information and the complexity of the underlying SCM would rise exponentially with the number of modeled variables. This, however, is common to other simpler and more traditional approaches such as Decision Trees with hundreds of nodes.
We acknowledge three main limitations that may have led us to miss publications that could have potentially been included in the review: (i) the exclusion of non-peer-reviewed e-prints, (ii) the usage of only four databases, and (iii) not having extracted any references from the collected papers to enrich our search. The latter was motivated by the fact that, this being an unexplored field, the papers we collected were sufficient and significant enough to produce a first scenario. Obviously, as with any human-made assignment, the search process for relevant material may have been affected by the cognitive bias of the authors, who have brought their knowledge and assumptions in the study.
We believe our results could be useful to a wide spectrum of readers, from upper-level undergraduate students to research managers in the industry, and have implications for practice, policy, and future research. Indeed, having a clear view of how the two concepts of causality and XAI are related can benefit both areas individually, as well as the joint research field. Considering our conceptual framework, future publications may be framed in a precise and rigorous way and have the potential to expand (or generate new flavors of) one of the identified perspectives.
All in all, our work disclosed how causality and XAI may be related in a profound way. In our opinion, the _Causality for XAI_ perspective has great potential to produce significant scientific results and we expect the field to flourish the most soon.
## 9 Funding Information
This work was partially funded by: the European Union's Horizon 2020 research and innovation programme under grant agreement No 952159 (ProCAncer-I), and the Regional Projects PAR FAS Tuscany - PRAMA and NAVIGATOR. The funders had no role in the design of the study, collection, analysis and interpretation of data, or writing of the manuscript.
## Appendix A Background notions
### Directed acyclic graphs
From graph theory, a _graph_ consists of a set \(V\) of vertices (i.e., variables) and a set \(E\) of edges (i.e., relationships) that connect some pairs of vertices. A graph is _directed_ when all the edges are directed (i.e., marked by a single arrowhead). In a directed graph, an edge goes from a _parent_ node to a _child_ node. A _path_ in a directed graph is a sequence of edges such that the ending node of each edge is the starting node of the next edge in the sequence (e.g., nodes \(A\), \(B\), \(D\) in Fig. A.5). A _cycle_ is a path in which the starting node of its first edge equals the ending node of its last edge (e.g., nodes \(C\), \(E\), \(F\) in Fig. A.5), and this represents mutual causation or feedback processes. When a directed graph does not include directed cycles, it is called a _directed acyclic graph_ (DAG), and much of the discussion of causality and qualitative modeling is occupied by it (Pearl, 2009).
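As a minimal illustration of these definitions, the snippet below uses networkx to distinguish a directed cyclic graph from a DAG; the node names mirror the examples above and the edges are invented for illustration.

```python
# Minimal sketch: distinguishing a directed cyclic graph from a DAG with networkx.
import networkx as nx

cyclic = nx.DiGraph([("C", "E"), ("E", "F"), ("F", "C")])  # contains a directed cycle
dag = nx.DiGraph([("A", "B"), ("B", "D"), ("A", "C")])     # no directed cycles

print(nx.is_directed_acyclic_graph(cyclic))   # False
print(nx.find_cycle(cyclic))                  # the edges forming the cycle C -> E -> F -> C
print(nx.is_directed_acyclic_graph(dag))      # True
print(list(nx.topological_sort(dag)))         # a valid ordering, e.g. A, B, C, D
```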
### Bayesian networks
A Bayesian network (BN) is a probabilistic graphical model that consists of two parts, a qualitative one based on a DAG, representing a set of variables and their dependencies, and a quantitative one based on local probability distributions for specifying the probabilistic relationships (Pearl, 1985). Let \(\textbf{X}=[X_{1},X_{2},\ldots,X_{m}]\) be a data matrix with \(n\) samples and \(m\) variables. In the DAG \(G=(V,E)\) of a BN, each node \(V_{k}\in V\) represents the random variable \(X_{k}\) in **X**, \(k\in\{1,2,\ldots,m\}\), and each edge \(e\in E\) describes the conditional dependency between pairs of variables. The absence of an edge implies the existence of conditional independence.
The structure of the DAG can be constructed either manually, with expert knowledge of the underlying domain (knowledge representation), or automatically learned from a large dataset. In this regard, causal discovery (CD) denotes a broad set of methods aiming at retrieving the topology of the causal structure governing the data-generating process, using the data generated by this process. CD algorithms are commonly divided into two families: _constraint_-based and _score_-based.
_Constraint_-based methods begin with fully-connected edges between random variables and leverage conditional independence tests to identify a set of edge constraints for the graph. By deleting relations if there is no statistical significance between variables, they narrow down the candidate graphs that explain the data and then try to determine the direction of the found relationships. Popular examples include the PC algorithm (Spirtes and Glymour, 1991), assuming no latent confounders (i.e., variables that are not directly observed but interact with the observables), and the Fast Causal Inference (FCI) algorithm (Spirtes et al., 2000), whose results are asymptotically correct even in the presence of (possibly unknown) confounders. Although constraint-based methods can handle various types of data distributions and causal relations, they do not necessarily provide complete causal information, since they output a set of causal structures satisfying the same conditional independence.
On the other hand, _score_-based methods iteratively generate candidate graphs, assign them a relevance score to evaluate how well each one explains the data (i.e., "model fit"), and select the best one. Since enumerating (and scoring) every possible graph among the given variables is computationally expensive, these algorithms apply greedy heuristics to restrict the number of candidates. Among them, Greedy Equivalence Search (GES) (Chickering, 2002) is a well-known two-phase procedure that directly searches over the space of equivalence classes. Starting with an empty graph, at each step, it adds currently needed edges (if that increases fit), and then eliminates unnecessary edges in a pattern.
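To illustrate the constraint-based idea, the sketch below implements a simplified skeleton phase in the spirit of the PC algorithm for continuous data, using a Fisher-z test of partial correlation as the conditional independence test. Edge orientation and the many practical refinements of real implementations are omitted; this is an illustrative sketch, not the algorithm as implemented in any of the cited tools.

```python
# Illustrative sketch of the skeleton phase of a constraint-based algorithm (in the
# spirit of PC): start from a fully connected graph and delete edges between variables
# that test as conditionally independent. Edge orientation is omitted.
from itertools import combinations
import numpy as np
from scipy.stats import norm

def independent(data, i, j, cond, alpha=0.05):
    """Fisher-z test of the partial correlation between columns i and j given `cond`."""
    cols = [i, j] + list(cond)
    prec = np.linalg.pinv(np.corrcoef(data[:, cols], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))
    stat = np.sqrt(data.shape[0] - len(cond) - 3) * abs(z)
    return 2 * (1 - norm.cdf(stat)) > alpha          # True -> cannot reject independence

def pc_skeleton(data, alpha=0.05):
    m = data.shape[1]
    adj = {i: set(range(m)) - {i} for i in range(m)}
    level = 0
    while any(len(adj[i]) - 1 >= level for i in range(m)):
        for i in range(m):
            for j in list(adj[i]):
                for cond in combinations(adj[i] - {j}, level):
                    if independent(data, i, j, cond, alpha):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
        level += 1
    return {(i, j) for i in range(m) for j in adj[i] if i < j}
```

On data generated from a chain X → Y → Z, for instance, the spurious X–Z edge is typically removed once the test conditions on Y.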
Regarding the quantitative part of which a BN consists, the local probability distributions can be either _marginal_, for nodes without parents (root nodes), or _conditional_, for nodes with parents. In the latter case, the dependencies are quantified by Conditional Probability Tables (CPTs) for each node given its parents in the graph. These quantities can be estimated from data in a process known as Parameter Estimation, two popular examples of which are the Maximum Likelihood approach and the Bayesian approach.
Once the DAG and CPTs are determined, a BN is fully specified and compactly represents the Joint Probability Distribution (JPD). An example of a fully specified BN is shown in Fig. A.6. According to the _Markov condition_, each node is conditionally independent of its non-descendants, given its parents. As a result, the JPD can be expressed in a product form:
\[p(X_{1},X_{2},\ldots,X_{m})=\prod_{k=1}^{m}p(X_{k}|\mathbb{X}_{pa(k)})\] (A.1)
where \(\mathbb{X}_{pa(k)}\) is the set of parent nodes of \(X_{k}\) and \(p(X_{k}|\mathbb{X}_{pa(k)})\) is the conditional probability of \(X_{k}\) given \(\mathbb{X}_{pa(k)}\). Thus, such a BN can be used for prediction and inference, that is, computing the posterior probabilities of any subset of variables given evidence about any other subset.
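As a concrete instance of Eq. A.1, the sketch below evaluates the joint probability of one configuration of the "wet grass" network of Fig. A.6 by multiplying the root marginal and the CPT entries along the DAG; the numerical CPT values used here are illustrative placeholders rather than the ones shown in the figure.

```python
# Illustrative CPTs for Cloudy -> Sprinkler, Cloudy -> Rain, {Sprinkler, Rain} -> WetGrass.
p_cloudy = {True: 0.5, False: 0.5}
p_sprinkler = {True: {True: 0.1, False: 0.9}, False: {True: 0.5, False: 0.5}}   # P(S | C)
p_rain = {True: {True: 0.8, False: 0.2}, False: {True: 0.2, False: 0.8}}        # P(R | C)
p_wet = {(True, True): 0.99, (True, False): 0.9,                                 # P(W=1 | S, R)
         (False, True): 0.9, (False, False): 0.0}

def joint(c, s, r, w):
    """P(C=c, S=s, R=r, W=w) = P(c) P(s|c) P(r|c) P(w|s,r), following Eq. A.1."""
    pw = p_wet[(s, r)] if w else 1 - p_wet[(s, r)]
    return p_cloudy[c] * p_sprinkler[c][s] * p_rain[c][r] * pw

print(joint(True, False, True, True))   # 0.5 * 0.9 * 0.8 * 0.9 = 0.324
```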
Figure A.5: Examples of directed graphs: (a) directed cyclic graph, (b) directed acyclic graph (DAG).
### Structural Causal Models
Consider the set **X** of variables associated with the vertices of a DAG. When each of them appears on the left-hand side (i.e., the dependent variable) of an equation of the type:
\[X_{k}=f_{k}(\mathbb{X}_{pa(k)},U_{k}),\quad k=1,\ldots,m \tag{A.2}\]
that represents an autonomous mechanism, then the model is called a _structural causal model_ (SCM) (Pearl, 2009; Scholkopf et al., 2021). In this equation, \(f_{k}\) represents a deterministic function depending on \(X_{k}\)'s parents in the graph (i.e., \(\mathbb{X}_{pa(k)}\)), and on \(U_{k}\), which represents the exogenous variables (i.e., errors or noises due to omitted factors). These noises are assumed to be jointly independent, and hence ensure that each structural equation can represent a general conditional distribution \(p(X_{k}|\mathbb{X}_{pa(k)})\). Recursively applying Eq. A.2, when the distributions of \(U=\{U_{1},\ldots,U_{m}\}\) are specified, allows the computation of the entailed observational joint distribution \(p(X_{1},X_{2},\ldots,X_{m})\), which, in turn, can be canonically factorized as in Eq. A.1. The advantages of using the SCM language include modeling unobserved variables (i.e., latent variables and confounders), easily formalizing interventions, and computing CF. Interventions and CF are defined through a mathematical concept called the _do_-operator, which simulates physical interventions by modifying a subset of structural equations (e.g., replacing them with a constant), while keeping the rest of the model unchanged. Specifically, to compute the probability of CF, Pearl proposes a three-step procedure (a small numerical sketch follows the list below). Given a known SCM \(M\) over the set **X** of variables, let \(x_{factual}=[X_{1}=x_{1},X_{2}=x_{2},\ldots,X_{m}=x_{m}]\) be the evidence. To compute the probability of a counterfactual instance \(x_{counterfactual}\), one needs to:
1. _abduction_: infer the values of exogenous variables in \(U\) for \(x_{factual}\), i.e., calculate \(P(U|x_{factual})\);
2. _action_: intervene on \(X=x_{factual}\) by replacing (some of) the equations by the equations \(X=x_{counterfactual}\), where \(x_{counterfactual}=[X_{1}=x^{\prime}_{1},X_{2}=x^{\prime}_{2},\ldots,X_{m}=x^{ \prime}_{m}]\), and thus obtain a new SCM \(M^{\prime}\);
3. _prediction_: use \(M^{\prime}\) to compute the counterfactual probability \(P(x_{counterfactual}|x_{factual})\).
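The following sketch runs the three steps (abduction, action, prediction) on a toy linear SCM; the structural equations, coefficients, and values are invented for illustration and are not part of the text.

```python
# Toy SCM:  X = U_x,   Y = 2*X + U_y   (deterministic mechanisms plus exogenous noise).
def scm_forward(u_x, u_y, do_x=None):
    x = u_x if do_x is None else do_x       # "action": replace X's equation by X := do_x
    y = 2 * x + u_y
    return x, y

# Factual evidence.
x_factual, y_factual = 1.0, 2.5

# 1. Abduction: invert the mechanisms to recover the noise terms consistent with the evidence.
u_x = x_factual
u_y = y_factual - 2 * x_factual             # U_y = 0.5

# 2. Action + 3. Prediction: intervene do(X = 3) and recompute with the same recovered noises.
x_cf, y_cf = scm_forward(u_x, u_y, do_x=3.0)
print(x_cf, y_cf)                            # 3.0, 6.5  ("what Y would have been had X been 3")
```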
## Appendix B Study selection process
Although we did not apply any temporal constraint to the search, we adopted some exclusion criteria in the process. We excluded works that were not written in English, articles from electronic preprint archives (e.g., ArXiv23), book chapters, and theses. In addition, we excluded too-short papers and/or papers of poor quality that hindered our ability to extract data meaningfully. We also deemed off-topic those papers that considered causality in the common and everyday sense of the term, not based on theoretical definitions. Indeed, they frequently present few occurrences of the causal domain terms, which were often either poorly contextualized or only present in the abstract/keywords of the article.
Figure A.6: Example of a fully specified BN which models the probability of observing wet grass. In this (simplified) real-world scenario, grass can be wet either by turning on a sprinkler or by rainfall, and both can be influenced by the presence of clouds in the sky.
Regarding information sources, we selected Scopus, IEEE, WoS, and ACM because they cover a comprehensive range of AI works and provide powerful interfaces for retrieving the required data with limited restrictions. Conversely, we excluded Google Scholar24, SpringerLink25, and Nature26 since they do not allow formulating the query string with the same level of detail as the selected databases do; we also excluded PubMed27, which provides this capability but whose coverage is restricted solely to the medical field.
Footnote 24: [https://scholar.google.com/](https://scholar.google.com/)
Footnote 25: [https://link.springer.com/](https://link.springer.com/)
Footnote 26: [https://www.nature.com/siteindex](https://www.nature.com/siteindex)
Footnote 27: [https://pubmed.ncbi.nlm.nih.gov/](https://pubmed.ncbi.nlm.nih.gov/)
As for the search strategy on the specified databases, the use of the wildcard made word-matching easier. For instance, **causal\(*\)** matched terms like _causal_ and _causally_, while **expla\(*\)** matched terms such as _explanation(s)_, _explainable_, _explainability_, _explaining_, and _explained_.
On July 14, 2022, we utilized the research query on the four databases for the first time. We collected the retrieved publications and started analyzing them. Then, on September 5, 2022, we repeated the search in the same settings. This allowed us to refine our cohort of papers with new works that have been published in the meanwhile, therefore enriching our analyses. In general, although we utilized the same research query across the four databases (Sec. 4), the actual query string was edited according to the specific syntax of each of them. In this regard, those strings are shown in Tab. B.3.
Fig. B.7 shows the process of identification, screening, eligibility, and inclusion of articles in our work.
From the search, we obtained the following number of records from the four databases: \(99\) (Scopus), \(17\) (IEEE), \(62\) (WoS), and \(44\) (ACM). As a result, we collected a total of \(222\) publications. Upon extraction of query results from the databases, we operated the identification phase. For the retrieved records, we extracted the BibTeX files and uploaded them into a popular reference manager application by Elsevier, namely Mendeley28, desktop version 1.19.8. We then utilized its _Check for Duplicates_ feature to perform duplicate removal. Then, we removed one thesis and two book chapters, according to the defined exclusion criteria. After these steps, the joint
Figure B.7: Flowchart of the study collection process, from identification, through screening, to eligibility and inclusion.
output was \(107\) publications.
During the screening phase, we independently examined the resulting works by title, abstract, and keywords to ensure that the query had retrieved relevant results. Whenever both authors deemed a paper irrelevant, it was discarded from the cohort; two publications were discarded in this way. Publications on whose inclusion the authors agreed, together with those on which they disagreed, passed to the next phase.
Next, in the eligibility phase, we first checked for the availability of full-text manuscripts for the records in the cohort. We excluded two studies as we could not access their full text. We then jointly analyzed the available full-text publications to remove papers that were clearly out of scope, together with poor-quality or too-short papers. As a result, we identified \(11\) poor-quality or too-short papers and \(41\) out-of-scope works. Lastly, once we reached a common decision for each of the publications, we collected the final cohort of studies to be included in the review.
\begin{table}
\begin{tabular}{l l} \hline Database & Query string \\ \hline
Scopus & TITLE-ABS-KEY(causal*) AND TITLE-ABS-KEY(expla*) AND TITLE-ABS-KEY(xai OR "explainable artificial intelligence" OR "explainable ai") AND TITLE-ABS-KEY("machine learning" OR ai OR "artificial intelligence" OR "deep learning") \\ \hline
Web of Science & (TI=causal* OR AB=causal* OR AK=causal*) AND (TI=expla* OR AB=expla* OR AK=expla*) AND (TI=(xai OR "explainable artificial intelligence" OR "explainable ai") OR AB=(xai OR "explainable artificial intelligence" OR "explainable ai") OR AK=(xai OR "explainable artificial intelligence" OR "explainable ai")) AND (TI=("machine learning" OR ai OR "artificial intelligence" OR "deep learning") OR AB=("machine learning" OR ai OR "artificial intelligence" OR "deep learning") OR AK=("machine learning" OR ai OR "artificial intelligence" OR "deep learning")) \\ \hline
IEEE Xplore & ("Document Title":causal* OR "Abstract":causal* OR "Author Keywords":causal*) AND ("Document Title":expla* OR "Abstract":expla* OR "Author Keywords":expla*) AND ("Document Title":xai OR "Document Title":"explainable artificial intelligence" OR "Document Title":"explainable ai" OR "Abstract":xai OR "Abstract":"explainable artificial intelligence" OR "Abstract":"explainable ai" OR "Author Keywords":xai OR "Author Keywords":"explainable artificial intelligence" OR "Author Keywords":"explainable ai") AND ("Document Title":"machine learning" OR "Document Title":ai OR "Document Title":"artificial intelligence" OR "Document Title":"deep learning" OR "Abstract":"machine learning" OR "Abstract":ai OR "Abstract":"artificial intelligence" OR "Abstract":"deep learning" OR "Author Keywords":"machine learning" OR "Author Keywords":ai OR "Author Keywords":"artificial intelligence" OR "Author Keywords":"deep learning") \\ \hline
ACM & (Title:causal* OR Abstract:causal* OR Keyword:causal*) AND (Title:expla* OR Abstract:expla* OR Keyword:expla*) AND (Title:xai OR Title:"explainable artificial intelligence" OR Title:"explainable ai" OR Abstract:xai OR Abstract:"explainable artificial intelligence" OR Abstract:"explainable ai" OR Keyword:xai OR Keyword:"explainable artificial intelligence" OR Keyword:"explainable ai") AND (Title:"machine learning" OR Title:ai OR Title:"artificial intelligence" OR Title:"deep learning" OR Abstract:"machine learning" OR Abstract:ai OR Abstract:"artificial intelligence" OR Abstract:"deep learning" OR Keyword:"machine learning" OR Keyword:ai OR Keyword:"artificial intelligence" OR Keyword:"deep learning") \\ \hline
\end{tabular}
\end{table}
Table B.3: Query strings used for each database. AB, ABS: abstract; AK, KEY: keywords; TI: title. |
2309.11196 | When to Trust AI: Advances and Challenges for Certification of Neural
Networks | Artificial intelligence (AI) has been advancing at a fast pace and it is now
poised for deployment in a wide range of applications, such as autonomous
systems, medical diagnosis and natural language processing. Early adoption of
AI technology for real-world applications has not been without problems,
particularly for neural networks, which may be unstable and susceptible to
adversarial examples. In the longer term, appropriate safety assurance
techniques need to be developed to reduce potential harm due to avoidable
system failures and ensure trustworthiness. Focusing on certification and
explainability, this paper provides an overview of techniques that have been
developed to ensure safety of AI decisions and discusses future challenges. | Marta Kwiatkowska, Xiyue Zhang | 2023-09-20T10:31:09Z | http://arxiv.org/abs/2309.11196v1 | # When to Trust AI: Advances and Challenges for Certification of Neural Networks
###### Abstract
Artificial intelligence (AI) has been advancing at a fast pace and it is now poised for deployment in a wide range of applications, such as autonomous systems, medical diagnosis and natural language processing. Early adoption of AI technology for real-world applications has not been without problems, particularly for neural networks, which may be unstable and susceptible to adversarial examples. In the longer term, appropriate safety assurance techniques need to be developed to reduce potential harm due to avoidable system failures and ensure trustworthiness. Focusing on certification and explainability, this paper provides an overview of techniques that have been developed to ensure safety of AI decisions and discusses future challenges.
## I Introduction
Artificial intelligence (AI) has advanced significantly in recent years, largely due to the step improvement enabled by deep learning in data-rich tasks such as computer vision or natural language processing. AI technologies are being widely deployed and enthusiastically embraced by the public, as is evident from the take up of ChatGPT and Tesla. However, deep learning lacks robustness, and neural networks (NNs), in particular, are unstable with respect to so called _adversarial perturbations_, often imperceptible modifications to inputs that can drastically change the network's decision. Many such examples have been reported in the literature and the media. Figure 1 (left) shows a dashboard camera image from [1], for which a change of a single pixel to green changes the classification of the image from red traffic light to green, which is potentially unsafe if there is no fallback safety measure; while this is arguably an artificial example, some modern cars have been observed to mis-read traffic signs, including the physical attack in Figure 1 (middle), where the digit 3 has been modified. Traffic sign recognition is a complex problem to specify and solve, see Figure 1 (right), which shows a real traffic sign in Alaska. As with any maturing technology, it is natural to ask if AI is ready for wide deployment, and what steps - scientific, methodological, regulatory, or societal - can be taken to achieve its trustworthiness and reduce potential for harm through rushed roll-out. This is particularly important given the fast-paced development of AI technologies and the natural propensity of humans to overtrust automation.
For AI to be trusted, particularly in high-stakes situations, where avoidable failure or wrong decision can lead to harm or high cost being incurred, it is essential to provide _provable guarantees_ on the critical decisions taken autonomously by the system. Traditionally, for software systems this has been achieved with _formal verification_ techniques, which aim to formally prove whether the system satisfies a given specification, and if not provide a diagnostic counter-example. Founded on logic, automated verification, also known as model checking, achieves this goal by means of executing a verification algorithm on a suitably encoded model of the system. Software verification has become an established methodology and a variety of tools of industrial relevance are employed in application domains such as distributed computation, security protocols or hardware. Beginning with [2, 3], over the past few years a number of formal verification techniques have been adapted to neural networks, which are fully data-driven and significantly differ from the state-based transition system models of conventional software, and have given rise to practical, algorithmic techniques that provide provable guarantees on neural network decisions [4].
This paper aims to provide an overview of existing techniques that can be used to increase trust in AI systems and outline future scientific challenges, while at the same time raising awareness of potential risks with early adoption. It is taken for granted that safety assurance of AI systems is complex and needs to involve appropriately regulated processes and assignment of accountability. The topics discussed in this paper are by no means exhaustive, but offer a representative selection of techniques and tools that can be used within such safety assurance processes, and can be adapted, extended or built upon to increase robustness and trustworthiness of AI systems. The paper will focus on highlighting the following
Fig. 1: Challenges of safe traffic sign recognition. Single-pixel adversarial attack from [1] (left), physical attack (middle) and a real traffic sign (right).
two aspects:
* **Certification**: focusing on individual decisions (possibly critical to the integrity of the system) that are made by neural networks, we provide an overview of the main methodological approaches and techniques that have been developed to obtain provable guarantees on the correctness of the decision, which can thus be used for certification. The sources of computational complexity of neural network verification will be discussed, as well as limitations of existing methods and ways to address them.
* **Explainability**: neural networks are 'black boxes' that are trained from data using obscure optimization processes and objectives, and it is argued that users of AI systems will benefit from the ability to obtain explanations for the decisions. We summarise the main approaches to producing explanations and discuss that they may lack robustness and how this issue can be addressed.
The overview includes high-level descriptions of the main algorithms, illustrated by worked examples that explain their behaviour to the interested reader. This is followed by a selection of case studies of robustness analysis and/or certification drawn from a variety of application domains, with the aim of highlighting the strengths and weaknesses of the approaches. Finally, future challenges and suggestions for fruitful directions to guide developments in this actively studied and important area will be outlined.
The paper is organised as follows. Section II introduces the main concepts, focusing on neural networks in the supervised learning setting. Section III provides an overview of the main (forward and backward) analysis approaches, with a description of the working for a selection of algorithms illustrated by worked examples. Section IV includes a few excerpts from a selection of verification and certification experiments, aimed at highlighting the uses of the main methods, and Section V outlines future challenges. Finally, Section VI concludes the paper.
## II Safety, Robustness and Explainability
In the context of safety-critical systems, safety assurance techniques aim to prevent, or minimise the probability of, a hazard occurring, and appropriate safety measures are invoked in case of failures. In this paper, we focus on critical decisions made by neural networks, which we informally refer to as safe if they satisfy a given property, which can be shown or disproved by formal verification. Before discussing formal verification techniques, we begin with background introduction to the main concepts of deterministic neural networks, their (local) robustness and explanations.
### _Neural Networks_
We consider neural networks in the supervised learning setting. A neural network is a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) mapping from the input space to the output space, which is typically trained based on a dataset \(\mathcal{D}\) of pairs \((x,y)\) of input \(x\) and ground truth label \(y\). A neural network consisting of \(L+1\) layers (including the input layer) can be characterized by a set of matrices \(\{W^{(i)}\}_{i=1}^{L}\) and bias vectors \(\{b^{(i)}\}_{i=1}^{L}\) for linear (affine) transformations, followed by pointwise activation functions, such as \(ReLU\), \(Sigmoid\), and \(Tanh\), for nonlinear transformations. We use \(\hat{z}^{(i)}\) and \(z^{(i)}\) to denote the pre-activated and activated vectors of the \(i\)-th layer, respectively. The layer-by-layer forward computation of neural networks can be described as follows:
* _Linear transformation._ The linear transformation generates a pre-activated vector \(\hat{z}^{(i)}=W^{(i)}\cdot z^{(i-1)}+b^{(i)}\) (\(i\in[1,L]\)) from the output of the previous layer, and \(z^{(0)}=x\) denotes the input vector.
* _Pointwise nonlinear transformation._ The pointwise nonlinear transformation generates the activation vector \(z^{(i)}=\sigma(\hat{z}^{(i)})\) (\(i\in[1,L]\)). In practice, \(softmax\) is usually employed as the activation function for the output layer in classification tasks, which provides the normalised relative probabilities of classifying the input into each label.
Given an input \(x\in\mathbb{R}^{n}\), the output of \(f\) on \(x\) is defined by \(f(x)=f^{(L)}\circ\cdots\circ f^{(1)}(x)\), where \(f^{(i)}\) denotes the mapping function of the \(i\)-th layer, which is the composition of linear and pointwise nonlinear transformations.
**Example 1**.: _Figure 2 shows a simple feed-forward (and fully connected (FC)) neural network with four layers and ReLU as the activation function. \(x_{1}\), \(x_{2}\) represent two input neurons. \(z_{1}\), \(z_{2}\) and \(z_{3}\), \(z_{4}\) represent the activated neurons of the two hidden layers. \(y_{1}\), \(y_{2}\) are two output neurons. The forward computation from the input layer to the output layer is as follows (ReLU is denoted as \(\sigma\))._
\[z_{1} =\sigma(x_{1}+x_{2}),\quad z_{2}=\sigma(x_{1}-x_{2}) \tag{1}\] \[z_{3} =\sigma(z_{1}+3z_{2}),\quad z_{4}=\sigma(-z_{1}+2z_{2})\] (2) \[y_{1} =z_{3},\qquad y_{2}=-2z_{3}-z_{4} \tag{3}\]
_We will use this neural network as a running example to illustrate different problem formulations and methods to address them._
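For concreteness, the forward computation of this running example can be written directly in NumPy; the sketch below is a plain re-implementation of Equations 1-3 and the same weight matrices are reused in later illustrative snippets.

```python
import numpy as np

W1, b1 = np.array([[1., 1.], [1., -1.]]), np.zeros(2)     # z1 = x1 + x2,   z2 = x1 - x2
W2, b2 = np.array([[1., 3.], [-1., 2.]]), np.zeros(2)     # z3 = z1 + 3 z2, z4 = -z1 + 2 z2
W3, b3 = np.array([[1., 0.], [-2., -1.]]), np.zeros(2)    # y1 = z3,        y2 = -2 z3 - z4

def f(x):
    z = np.maximum(W1 @ x + b1, 0)    # ReLU hidden layer 1
    z = np.maximum(W2 @ z + b2, 0)    # ReLU hidden layer 2
    return W3 @ z + b3                # linear output layer

print(f(np.array([0.5, -0.5])))       # outputs (y1, y2); the decision is argmax over them
```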
### _Robustness_
Robustness focuses on neural networks' resilience to adversarial attacks, noisy input data, etc., at test time, known as evasion attacks [6, 7, 8]. Attacks at training time are known as poisoning attacks [9], which have been omitted from this overview.
_Adversarial robustness_[7] of neural networks formalizes the desirable property that a well-trained model makes consistent predictions when its input data point is subjected to
Fig. 2: A feed-forward neural network.
small adversarial perturbations. _Local (adversarial) robustness_ pertains to a given input point \(x\) with ground truth label \(y\), and is usually defined in terms of invariance of the network's decision within a small neighbourhood \(\mathbb{B}_{p}(x,\epsilon)\) of \(x\), for a class of perturbations bounded by \(\epsilon\) with respect to the \(\ell_{p}\) norm.
**Definition 1**.: _Given a (deterministic) neural network \(f\), a labelled input data point \((x,y)\), and a perturbation bound \(\epsilon\), the local robustness property of \(f\) on \(x\) is defined as_
\[\forall x^{\prime}\in\mathbb{B}_{p}(x,\epsilon).\operatorname*{arg\,max}_{i=1,\cdots,m}f_{i}(x^{\prime})=y,\]
_where \(\mathbb{B}_{p}(x,\epsilon)\) denotes the adversarial \(\ell_{p}\)-ball of radius \(\epsilon\) around input \(x\)._
Should there exist a point \(x^{\prime}\) in the neighbourhood whose class is different than \(y\), it is referred to as an _adversarial example_.
A related concept is that of a _maximal safe radius (MSR)_[10], denoted \(MSR(x)\), which is the minimum distance from \(x\in\mathbb{R}^{n}\) to the decision boundary, and is defined as the largest \(\epsilon>0\) such that \(\forall x^{\prime}\in\mathbb{B}_{p}(x,\epsilon).\operatorname*{arg\,max}_{i=1,\cdots,m}f_{i}(x^{\prime})=\operatorname*{arg\,max}_{i=1,\cdots,m}f_{i}(x)\). Computing the value of MSR, say \(\gamma\), provides a _guarantee_ that the decision is robust (safe) for perturbations up to \(\gamma\). On the other hand, finding an adversarial example at distance \(\gamma^{\prime}\) is witness to the failure of robustness.
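A cheap, incomplete way to obtain such a witness is to search for an adversarial example directly; the sketch below applies naive random search to the toy network of Example 1 (re-declared inline). Failure to find a counterexample is not a proof of robustness, whereas any perturbation found would witness a violation and upper-bound the MSR.

```python
import numpy as np

W1 = np.array([[1., 1.], [1., -1.]]); W2 = np.array([[1., 3.], [-1., 2.]]); W3 = np.array([[1., 0.], [-2., -1.]])
def f(x):  # toy network of Example 1
    return W3 @ np.maximum(W2 @ np.maximum(W1 @ x, 0), 0)

def find_adversarial(model, x, label, eps, trials=10000, seed=0):
    """Random-search falsifier over the l_inf ball of radius eps (no guarantee if it fails)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x_adv = x + rng.uniform(-eps, eps, size=x.shape)
        if np.argmax(model(x_adv)) != label:
            return x_adv          # adversarial example: upper-bounds the maximal safe radius
    return None

x0 = np.array([0.5, -0.5])
print(find_adversarial(f, x0, int(np.argmax(f(x0))), eps=0.5))   # None: no violation found here
```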
Global robustness [11] concerns the stability of predictions over the whole input space and is omitted.
### _Explainability_
Explainability [12, 13] aims to understand and interpret why a given neural network makes certain predictions. The term explainability is often used interchangeably with interpretability in the literature, though interpretability usually refers to explaining how the model works. In this overview, we focus on local (pointwise) explainability for an individual _model decision_, which is categorised into _feature attribution_ methods, which heuristically estimate feature attribution scores for model predictions and include _gradient-based_[14, 15] and _perturbation-based_ techniques [16, 17], and _abduction-based_ methods [18, 19], which identify the features that imply the decision and can thus provide (safety) guarantees. Attribution scores can also be used for feature importance ranking [20] to provide an overall understanding of the importance of different input attributes on the model decisions.
#### Iii-C1 Gradient-based methods
Gradient-based methods aim to estimate feature attribution scores for model predictions. Among these, a prominent method is the integrated gradients (IG) [15], which measures the attribution score of each input feature to the model's prediction by integrating the gradients of the model's output with respect to the input features along the path from a baseline input to the actual input.
**Definition 2**.: _Given a neural network \(f\), an input \(x\) and a baseline input \(x^{\prime}\), the integrated gradients for each input feature \(i\in[1,\cdots,n]\) are defined as the weighted (by input feature difference) integral of the gradients over the straight line path between \(x\) and \(x^{\prime}\):_
\[\text{IG}_{i}(x)=(x_{i}-x^{\prime}_{i})\times\int_{\alpha=0}^{1}\frac{\partial f (x^{\prime}+\alpha\times(x-x^{\prime}))}{\partial x_{i}}d\alpha \tag{4}\]
Figure 3 from [5] presents an illustrative example of IG explanations, showing the explanation for each class of a correctly classified handwritten-digit "8" from the MNIST dataset. In this example, positive contributions are highlighted in red, while negative contributions are indicated by blue.
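Equation 4 is usually evaluated with a Riemann-sum approximation along the straight-line path; the sketch below does this for the toy network of Example 1 (re-declared inline) using finite-difference gradients as a didactic stand-in for the automatic differentiation used in practice, with a zero baseline and the first output as the target.

```python
import numpy as np

W1 = np.array([[1., 1.], [1., -1.]]); W2 = np.array([[1., 3.], [-1., 2.]]); W3 = np.array([[1., 0.], [-2., -1.]])
def f(x):  # toy network of Example 1
    return W3 @ np.maximum(W2 @ np.maximum(W1 @ x, 0), 0)

def integrated_gradients(model, x, baseline, target, steps=64, h=1e-4):
    """Riemann-sum approximation of IG_i(x) for the chosen output (cf. Eq. 4)."""
    grads = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)     # point on the straight-line path
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = h
            grads[i] += (model(point + e)[target] - model(point - e)[target]) / (2 * h)
    return (x - baseline) * grads / steps

print(integrated_gradients(f, np.array([0.5, -0.5]), baseline=np.zeros(2), target=0))
```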
#### Iii-C2 Perturbation-based methods
LIME [16] and its successor Anchors [17] are representatives of explainability methods that deploy a perturbation-based strategy to generate local explanations for model predictions. LIME assumes local linearity in a small area around an input instance and generates a set of synthetic data by perturbing the original input. Anchors [17] explains the model predictions by identifying a set of decision rules that "anchors" the prediction. Compared with LIME, Anchors generates more explicit decision rules and derives local explanations by consulting \(x\)'s perturbation neighbourhood in different ways. In particular, Anchors evaluates the coverage fraction of the perturbed data samples sharing the same class as \(x\), matching the decision rules.
#### Iii-C3 Robust explanations
The explanation techniques mentioned above use different heuristics to derive local explanations, demonstrating effective generality beyond the given input but lacking robustness to adversarial perturbations. The robustness notion for explanation is important to ensure the stability of the explanation in the sense that the explanation is logically sufficient to imply the prediction. Intuitively, the computed explanation for a perturbed input should remain the same as the original input.
To this end, [18, 19] introduce a principled approach to derive explanations with formal guarantees by exploiting abduction reasoning. This ensures the robustness of the explanation by requiring its invariance w.r.t. any perturbation of the
Fig. 3: The IG explanation for each of the classes of the MNIST dataset, where red indicates a positive contribution and blue a negative. Figure taken from [5].
remaining features that are left out. The explanation method of [19] focuses on _optimal robust explanations (OREs)_, to provide both robustness guarantees and optimality w.r.t. a cost function. Optimality provides the flexibility to control the desired properties of an explanation. For instance, the cost function could be defined as the length of the explanation to derive minimal but sufficient explanations.
## III Certification for Neural Networks
In this section, we present an overview of recent advances for certification of neural networks, with a focus on formal verification. Given a neural network \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), we consider the formal verification problem [4], defined for a property specified as a pair \((\phi_{\mathsf{pre}},\phi_{\mathsf{post}})\) of precondition and postcondition, by requiring that \(\forall x\in\mathbb{R}^{n}.\,x\models\phi_{\mathsf{pre}}\implies f(x) \models\phi_{\mathsf{post}}\), that is, for all inputs satisfying the precondition the corresponding (optimal softmax) decision must satisfy the postcondition. Typically, \(\phi_{\mathsf{pre}}\subseteq\mathbb{R}^{n}\) and \(\phi_{\mathsf{post}}\subseteq\mathbb{R}^{m}\), but can be respectively induced from subsets of input features or sets of labels. Formal verification then aims to establish algorithmically whether this property holds, thus resulting in a _provable guarantee_. Otherwise, the property may be falsified, in which case a witness is provided, or inconclusive. Sometimes, we may wish to compute the proportion of inputs that satisfy the postcondition, known as _quantitative verification_[21].
Various formal verification methods have been proposed to provide provable guarantees for neural networks. We classify existing verification methods into forward and backward analysis, depending on whether they start from the input or output space.
* _Forward analysis:_ Forward analysis methods start from the precondition \(X=\{x\in\mathbb{R}^{n}\mid x\models\phi_{\mathsf{pre}}\}\) defined on the input space, and check whether the outputs (corresponding to the input region) satisfy the postconditions \(\phi_{\mathsf{post}}\). For example, robustness verification approaches [22, 23, 24, 25] start from the perturbation neighbourhood of a given input, e.g., an \(l_{\infty}\) ball around an input point \(x\), and compute bounds on the outputs to check whether the predicted labels over the adversarial region are preserved.
* _Backward analysis:_ Backward analysis methods start from the postcondition \(Y=\{y\in\mathbb{R}^{m}\mid y\models\phi_{\mathsf{post}}\}\) and aim to find the set of inputs that lead to such outputs. For example, preimage generation (inverse abstraction) approaches [26, 27, 28, 29] start from the output constraints, e.g., a polytope constraining the probability of the target label is greater than the other labels, and derive the input set that provably leads to this particular decision.
We remark that, similarly to formal verification for conventional software, certification for machine learning models is computationally expensive, and it is therefore recommended for use in safety- or security-critical settings. In less critical situations, diagnostic methods [30], which approximate model decisions to analyse their predictions, can be employed to investigate both model- and data-related issues.
### _Forward Analysis Methods_
We categorize the forward analysis methods into two groups: _sound but incomplete_ and _complete_ methods. Soundness and completeness are essential properties of verification algorithms, which are defined as follows.
* _Soundness:_ A verification algorithm is sound if the algorithm returns True and the verified property holds.
* _Completeness:_ A verification algorithm is complete if (i) the algorithm never returns unknown; and (ii) if the algorithm returns False, the property is violated.
#### Iii-A1 Incomplete methods
Incomplete verification methods leverage approximation techniques, such as search [1, 10], convex relaxation [31] and abstract interpretation [32], respectively to compute lower/upper bounds on MSR or the non-convex optimization problem. A safety property is verified when the reachable outputs satisfy the postcondition; otherwise, no conclusion can be drawn. At the same time, due to the relaxation introduced by the approximation techniques, incomplete methods have better scalability than complete ones.
_Game-based search._ Knowledge of the maximum safe radius (MSR) can serve as a guarantee on the maximum magnitude of the allowed adversarial perturbations. Unfortunately, MSR computation is intractable, and instead approximate algorithms have been developed for images in [10], and extended to videos in [33], that compute lower and upper bounds on MSR with provable guarantees, i.e., bounded error. The method relies on the network satisfying the Lipschitz condition and can be configured with a variety of feature extraction methods, for example SIFT. Given an over-approximation of the Lipschitz constant, the computation is reduced to a finite optimization over a discretisation of the input region \(X\) corresponding to the precondition \(\phi_{\mathsf{pre}}\). The resulting finite optimization is solved in anytime fashion through a two-player game, where player 1 selects features and player 2 perturbs the image representation of the feature, and the objective is set to minimise the distance to an adversarial example. Under the assumptions, the game can be unfolded into a finite tree, with Monte Carlo Tree Search (MCTS) used to approximate the MSR upper bound and Admissible A* the MSR lower bound.
_Bound propagation._ A common technique for incomplete verification is applying convex relaxation to bound nonlinear constraints in neural networks. This way, the original non-convex optimization problem is transformed into a linear programming problem. With the relaxed linear constraints, the global lower and upper bounds can be computed more
Fig. 4: Illustration of the convex relaxation for inactive (left), active (middle) and unstable (right) ReLU neurons.
efficiently for the associated (relaxed) linear program. Representative methods that adopt efficient bound propagation include convex outer adversarial polytope [31], CROWN [34] and its generalization [25, 35]. Figure 4 illustrates convex relaxation using linear bounding functions to bind ReLU neurons. Note that relaxation is only introduced for unstable neurons, while the ReLU constraints for inactive and active ones are exact. For unstable neurons, the lower and upper bounding function for the \(j\)-th neuron of the \(i\)-th layer \(a_{j}^{(i)}(x)\) (activated value) with regard to \(h_{j}^{(i)}(x)\) (before activation) are:
\[\alpha_{j}^{(i)}h_{j}^{(i)}(x)\leq a_{j}^{(i)}(x)\leq-\frac{u_{j}^{(i)}l_{j}^ {(i)}}{u_{j}^{(i)}-l_{j}^{(i)}}+\frac{u_{j}^{(i)}}{u_{j}^{(i)}-l_{j}^{(i)}}h_{ j}^{(i)}(x) \tag{5}\]
where a flexible lower bound function with parameter \(\alpha_{j}^{(i)}\) as in [35] is used, which leads to a valid lower bound for any parameter value within \([0,1]\).
By propagating the linear (symbolic) upper and lower bounds layer by layer, we can obtain the linear bounding functions \(f^{L}\), \(f^{U}\) for the entire neural network \(f\), and it holds that \(\forall x\in X\). \(f^{L}(x)\leq f(x)\leq f^{U}(x)\). The non-convex verification problem is thus transformed into a linear program with the objective linear in the decision variables. The certified upper and lower bounds can be computed by taking the maximum, \(\max_{x\in\mathbb{B}_{p}(x,\epsilon)}f^{U}(x)\), and the minimum, \(\min_{x\in\mathbb{B}_{p}(x,\epsilon)}f^{L}(x)\), which have _closed-form_ solutions for linear objectives (\(f^{U}\), \(f^{L}\)) and convex norm constraints \(\mathbb{B}_{p}(x,\epsilon)\).
**Example 2**.: _Consider the neural network illustrated in Example 1. The verification problem we consider is given by the pre-condition \(\phi_{\text{pre}}=\{x\in\mathbb{R}^{2}|x\in[-1,1]\times[-1,1]\}\) and the post-condition \(\phi_{\text{post}}=\{y=f(x)\in\mathbb{R}^{2}\mid y_{1}\geq y_{2}\}\), and we want to prove that \(\forall x\). \(x\models\phi_{\text{pre}}\implies f(x)\models\phi_{\text{post}}\)._
_Figure 5 shows the overall bound propagation procedure for this verification problem, where the interval \([\cdot,\cdot]\) represents the concrete value range computed for each neuron. \(z_{i}^{U}\), \(z_{i}^{L}\) represent the linear upper and lower bounding functions for nonlinear neurons, which are computed according to Equation 5 based on the concrete value intervals. Starting from the input layer, we can first compute the concrete bounds (\([-2,2]\)) for \(z_{1}\) and \(z_{2}\) (before activation). The bounding functions \((z_{1}^{L}\), \(z_{1}^{U})\), \((z_{2}^{L}\), \(z_{2}^{U})\) are then computed according to Equation 5, where \(\alpha=0\) is taken as the lower bounding function coefficient. The linear bounding functions can directly propagate to the next layer via the linear matrix transformation. Then, by taking the minimum value of the lower bounding function and the maximum of the upper one, concrete value ranges (\([0,7]\) and \([-2,4]\)) are computed for \(z_{3}\) and \(z_{4}\), based on which symbolic functions (\(z_{3}^{L}\), \(z_{3}^{U}\)), (\(z_{4}^{L}\), \(z_{4}^{U}\)) can be derived and further propagated to the output layer. In the end, we compute the global lower and upper bounds for \(y_{1}\) and \(y_{2}\), which are \([0,7]\) and \([-17.4,0]\), respectively. From the certified bounds on the output layer, it holds that \(\min(y_{1})\geq\max(y_{2})\) for any input \((x_{1},x_{2})\in[-1,1]\times[-1,1]\). Therefore, the bound propagation method certifies that the neural network is robust in the input domain with respect to the ground-truth label \(y_{1}\)._
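The simplest member of this family is interval bound propagation, which propagates concrete lower/upper bounds only and is therefore looser than the linear relaxation of Equation 5; the sketch below applies it to the network of Example 1 (weights re-declared inline). It still certifies the (non-strict) property, albeit with wider output bounds ([0, 8] and [-20, 0]) than the [0, 7] and [-17.4, 0] obtained above.

```python
import numpy as np

W1, b1 = np.array([[1., 1.], [1., -1.]]), np.zeros(2)
W2, b2 = np.array([[1., 3.], [-1., 2.]]), np.zeros(2)
W3, b3 = np.array([[1., 0.], [-2., -1.]]), np.zeros(2)

def interval_affine(l, u, W, b):
    """Sound interval propagation through the affine map W x + b."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

l, u = np.array([-1., -1.]), np.array([1., 1.])
l, u = interval_affine(l, u, W1, b1); l, u = np.maximum(l, 0), np.maximum(u, 0)   # ReLU layer 1
l, u = interval_affine(l, u, W2, b2); l, u = np.maximum(l, 0), np.maximum(u, 0)   # ReLU layer 2
y_l, y_u = interval_affine(l, u, W3, b3)
print(y_l, y_u)                  # y1 in [0, 8], y2 in [-20, 0]
print(y_l[0] >= y_u[1])          # True: lower bound of y1 >= upper bound of y2 certifies y1 >= y2
```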
_Abstract interpretation._ Abstract interpretation [32, 36] is a classic framework that can provide sound and computable finite approximations for infinite sets of behaviours. To provide sound analysis of neural networks, several works [22, 37, 38, 24] have exploited this technique to reason about safety properties. These methods leverage numerical abstract domains to overapproximate the inputs and compute an overapproximation of the outputs layer by layer. To this end, an abstract domain is selected to characterize the reachable output set for each layer as an abstract element. The choice of abstract domain is essential to balance the analysis precision and scalability. Commonly used abstract domains for neural network verification [39] include _Interval_, _Zonotope_, and _Polytope_, of which the general formulations are summarized in the following (increasing in precision):
Interval: \[\{x\in\mathbb{R}^{n}|l_{i}\leq x_{i}\leq u_{i}\}\] Zonotope: \[\{x\in\mathbb{R}^{n}|x_{i}=c_{i0}+\sum_{j=1}^{m}c_{ij}\cdot \epsilon_{j},\epsilon_{j}\in[-1,1]\}\] Polytope: \[\{x\in\mathbb{R}^{n}|x_{i}=c_{i0}+\sum_{j=1}^{m}c_{ij}\cdot\epsilon_{j},F (\epsilon_{1},\cdots,\epsilon_{m})\}\]
where \(\epsilon_{j}\) (\(j=1,\cdots,m\)) denote \(m\) generator variables. The generator variables are bounded within the interval \([-1,1]\) for zonotopes and constrained by \(F\) for polytopes, where \(F\) takes in the form of a convex polytope \(\mathbf{cx}\leq\mathbf{d}\).
With the abstract domain capturing the reachable outputs of each layer, abstract transformers are defined to compute the effect of different layers on propagating the abstract element. Affine transformers are usually supported by the underlying abstract domain, such as Zonotope and Polytope, to abstract the linear functions. For nonlinear functions, case splitting and unifying is proposed in [22] by defining the _meet_ and _join_ operators to propagate zonotope abstraction through piecewise-linear layers. Convex approximations are adopted in [24] for abstract transformers of nonlinear functions where the approximation can be captured with the proposed polyhedra abstraction. At the end of the analysis, the abstract element of the output layer is an over-approximation of all
Fig. 5: Verification via bound propagation.
possible concrete outputs corresponding to the input set. Then we can directly verify the over-approximation of the outputs against the postcondition \(\phi_{\mathsf{post}}\), i.e., check whether the over-approximation is fully contained within \(\phi_{\mathsf{post}}\). One drawback of this method is that the over-approximation may be quite loose.
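As a small illustration of the zonotope domain, the sketch below propagates the input box \([-1,1]^{2}\) (written as a centre plus a generator matrix) through the first affine layer of Example 1 exactly, and then concretizes the result to interval bounds; a ReLU abstract transformer, as in [22], would be needed to continue through the nonlinear layers and is omitted here for brevity.

```python
import numpy as np

W1, b1 = np.array([[1., 1.], [1., -1.]]), np.zeros(2)   # first affine layer of Example 1

def zonotope_affine(c, G, W, b):
    """Exact affine transformer for the zonotope {c + G @ eps : eps in [-1, 1]^m}."""
    return W @ c + b, W @ G

def zonotope_to_interval(c, G):
    r = np.abs(G).sum(axis=1)    # radius contributed by the generator terms
    return c - r, c + r

c, G = np.zeros(2), np.eye(2)    # the input box [-1, 1]^2 as a zonotope
c, G = zonotope_affine(c, G, W1, b1)
print(zonotope_to_interval(c, G))   # pre-activation bounds [-2, 2] for both hidden neurons
```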
#### Iii-B2 Complete methods
Early complete verification approaches for neural networks [40, 3] encode the neural network into a set of constraints exactly and then check the satisfaction of the property with constraint solvers, e.g, SMT (Satisfiability Modulo Theory) or MILP (Mixed Integer Linear Programming) solvers. Since such constraint-solving methods encode the neural network in an exact way, they are able to ensure both soundness and completeness in providing certification guarantees. One limitation is that these methods suffer from exponential complexity in the worst case. To address the computational intractability, Branch and Bound techniques are adopted and customized for neural network verification, where efficient incomplete methods can be exploited to speed up the bound computation.
_SMT solver._ Reluplex [3] is proposed as a customized SMT solver for neural network verification. The core idea is to extend the simplex algorithm, a standard algorithm to solve linear programming problems, with additional predicates to encode (piecewise linear) ReLU functions and transition rules (_Pivot_ and _Update_) to handle ReLU violations. The extended Reluplex algorithm allows variables that encode ReLU nodes to temporarily violate the ReLU constraints. Then, as the iteration proceeds, the solver picks variables that violate a ReLU constraint and modifies the assignment to fix the violation using _Pivot_ and _Update_ rules. When the attempts to fix a ReLU constraint using _Update_ rules exceed a threshold, a ReLU splitting mechanism is applied to derive two sub-problems. Reluplex is then invoked recursively on these two sub-problems. Compared with the eager splitting on all ReLU neurons, Reluplex proposes a splitting-on-demand strategy to reduce unnecessary splitting and limit splits to ReLU constraints that are more likely to cause violation problems. Due to the exact encoding nature, Reluplex suffers from exponential complexity in the worst case and thus cannot scale to large neural networks.
_MILP._ MILP-based verification methods [41, 42, 43] encode a neural network with piecewise-linear functions as a set of mixed integer linear constraints. To encode the nonlinearities, they introduce an indicator decision variable \(\delta\) to characterize the two statuses of unstable ReLU neurons. An unstable ReLU neuron \(z=\max(\hat{z},0)\) with concrete bounds \((l,u)\) can be encoded exactly using the following constraints:
\[z\geq 0,\quad z\leq u\cdot\delta,\] \[z\geq\hat{z},\quad z\leq\hat{z}-l\cdot(1-\delta),\] \[\delta\in\{0,1\}\]
Note that the MILP constraints require the pre-computation of finite bounds for the nonlinear neurons, i.e., \((l,u)\). It is known that the tightness of lower and upper bounds in the indicator constraints is crucial to the resolution of the MILP problem [44, 42], and consequently, the verification efficiency. MIPVerify [43] thus proposes a progressive bound tightening approach to improve upon existing MILP-based verifiers. The algorithm starts with coarse bounds computed using efficient bound computation procedures such as _Interval Arithmetic_. Bound refinement is performed only when the MILP problem can be further tightened. In such a case, more precise but less efficient bound computation procedures, e.g., _Linear Programming_ (LP), are adopted to derive tighter bounds. This progressive bounding procedure can also be extended to other bound computation methods, such as dual optimization, to achieve a trade-off between tightness and computational complexity.
**Example 3**.: _In this example, we encode the neural network, shown in Example 1, into the exact MILP formulation. The verification problem is the same as shown in Example 2, i.e., to determine whether \(\phi_{\mathsf{post}}=\{y\in\mathbb{R}^{2}\mid y_{1}\geq y_{2}\}\) holds for all inputs in the input domain \([-1,1]\times[-1,1]\). We encode the output property by specifying its negation, i.e., \(\phi^{\prime}_{\mathsf{post}}=\neg\phi_{\mathsf{post}}\). If there exists an instance where \(\phi^{\prime}_{\mathsf{post}}\) does hold, then a witness to \(\phi^{\prime}_{\mathsf{post}}\) is the counter-example for \(\phi_{\mathsf{post}}\). If \(\phi^{\prime}_{\mathsf{post}}\) is unsatisfiable, then the property \(\phi_{\mathsf{post}}\) is proved._
_Assume we have computed the concrete value of lower and upper bounds of \(z_{i}\) employing efficient bound propagation techniques. Then the neural network and the verification problem can be formulated as follows:_
\[x_{1}\geq-1,\,x_{1}\leq 1,\,x_{2}\geq-1,x_{2}\leq 1\,(\phi_{ \mathsf{pre}}) \tag{6}\] \[\hat{z}_{1}=x_{1}+x_{2},\quad\hat{z}_{2}=x_{1}-x_{2},\quad\delta _{1},\delta_{2}\in\{0,1\}\] (7) \[z_{1}\geq 0,\,z_{1}\leq 2\delta_{1},\,z_{1}\geq\hat{z}_{1},\,z_{1} \leq\hat{z}_{1}+2(1-\delta_{1}),\] (8) \[z_{2}\geq 0,\,z_{2}\leq 2\delta_{2},\,z_{2}\geq\hat{z}_{2},\,z_{2} \leq\hat{z}_{2}+2(1-\delta_{2}),\] (9) \[\hat{z}_{3}=z_{1}+3z_{2},\quad\hat{z}_{4}=-z_{1}+2z_{2},\quad\delta _{4}\in\{0,1\}\] (10) \[z_{3}=\hat{z}_{3},\,\text{(stable neuron)}\] (11) \[z_{4}\geq 0,\,z_{4}\leq 4\delta_{4},\,z_{4}\geq\hat{z}_{4},\,z_{4} \leq\hat{z}_{4}+2(1-\delta_{4}),\] (12) \[y_{1}=z_{3},\qquad y_{2}=-2z_{3}-z_{4}\] (13) \[y_{1}<y_{2}\,(\neg\phi_{\mathsf{post}}) \tag{14}\]
_The lower and upper bounds \(l_{i}\) and \(u_{i}\) (\(i\in\{1,2,3,4\}\)) are derived as shown in Example 2. The binary variables \(\delta_{i}\) (\(i\in\{1,2,4\}\)) are introduced to indicate the status of unstable ReLUs and it holds that \(\delta_{i}=0\Leftrightarrow z_{i}=0\) and \(\delta_{i}=1\Leftrightarrow z_{i}=\hat{z}_{i}\). Checking the feasibility of the above model using MILP solvers (e.g., Gurobi) will return infeasible, thus proving the original property._
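The encoding of Example 3 can be handed to any MILP solver; the sketch below expresses it with the open-source PuLP modelling package and its bundled CBC solver (an assumption made for illustration; the text mentions Gurobi, which could be used instead), approximating the strict inequality of the negated property with a small margin.

```python
from pulp import LpProblem, LpVariable, LpStatus

# Feasibility problem for the negated property y1 < y2 (cf. Example 3).
prob = LpProblem("negated_property")
x1 = LpVariable("x1", -1, 1); x2 = LpVariable("x2", -1, 1)
z1 = LpVariable("z1", 0); z2 = LpVariable("z2", 0); z4 = LpVariable("z4", 0)
d1 = LpVariable("d1", cat="Binary"); d2 = LpVariable("d2", cat="Binary"); d4 = LpVariable("d4", cat="Binary")

zh1, zh2 = x1 + x2, x1 - x2
prob += z1 <= 2 * d1; prob += z1 >= zh1; prob += z1 <= zh1 + 2 * (1 - d1)   # ReLU for z1
prob += z2 <= 2 * d2; prob += z2 >= zh2; prob += z2 <= zh2 + 2 * (1 - d2)   # ReLU for z2
zh4 = -z1 + 2 * z2
z3 = z1 + 3 * z2                                                            # stable (active) neuron
prob += z4 <= 4 * d4; prob += z4 >= zh4; prob += z4 <= zh4 + 2 * (1 - d4)   # ReLU for z4
y1, y2 = z3, -2 * z3 - z4
prob += y1 <= y2 - 1e-6                                                     # negated property y1 < y2

prob.solve()
print(LpStatus[prob.status])   # "Infeasible": the original property y1 >= y2 is verified
```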
_Branch and Bound._ To improve the scalability of verification algorithms to larger neural networks, a branch and bound framework (BaB) [45] has been proposed. The BaB framework mainly consists of two components: a branching method that splits the original verification problem into multiple subproblems and a bounding method to compute the upper and lower bounds of the subproblems. This modularized design provides a unifying formulation paradigm for different verifiers, with the main difference lying in the splitting
function and the bounding method. For example, the verifier _ReluVal_[46] performs splitting on the input domain according to sensitivity analysis, e.g., input-output gradient information, and computes bounds using symbolic interval propagation. The aforementioned SMT-based verifier _Reluplex_[3] performs splitting on ReLU neurons guided by the violation frequency of the ReLU constraints and computes the bounds on the relaxed problems by dropping some constraints on the nonlinearities (which yields an over-approximation of the constraint optimization problems).
To further improve neural network verification, the BaB method introduces two new branching strategies: BaBSB for branching on input domains and BaBSR for branching on ReLU neurons. Both branching methods adopt a similar heuristic to decide which dimension or ReLU neuron to split on. BaBSB computes a rough estimate of the improvement on the bounds obtained with regard to every input dimension, so that the split decision can be made efficiently. On the other hand, BaBSR estimates the bound improvement with regard to each unfixed ReLU neuron by computing the ReLU scores. The bounding methods resort to LP solvers to tighten the intermediate bounds on the subdomains or use more computationally efficient methods such as Interval Arithmetic.
### _Backward Analysis Methods_
Backward analysis methods for neural networks, also known as preimage generation or inverse abstraction, aim at computing the input set that will lead the neural network to a target set, e.g., a safe or unsafe region. They complement the forward analysis methods, which may result in over-approximated bounds worsening as the computation progresses through the layers of the network. In the following, we categorize the representative approaches broadly into two groups: _exact_ and _approximate_ methods.
#### Iv-B1 Exact methods
Exact backward analysis methods reason about the preimage of a target output set by encoding the neural network behaviours in an exact manner. These methods are able to compute the exact symbolic representation of the preimage for different output properties. One limitation suffered by these methods is that they can only process neural networks with piecewise-linear activation functions (e.g., \(ReLU\)), as they aim at an exhaustive decomposition of the non-convex function (the neural network) into a set of linear functions. The preimage (input set) for a target output set with regard to a neural network \(f\) is characterized as a union of polytopes, where the mapping functions are completely linear on each subregion.
_Exact preimage._ The exact preimage generation method [26] complements the forward analysis methods to reason about the inputs that lead to target outputs. The algorithm computes the exact preimage by relying on two elementary properties: (1) preimage of the composite functions is the reversed composition of preimages for each layer, i.e., \((f^{(L)}\circ\cdots\circ f^{(1)})^{-1}=(f^{(1)})^{-1}\circ\cdots\circ(f^{(L)})^{-1}\), and (2) preimage of a union set can be built up from the preimages of each subset in the union, i.e., \(f^{-1}(\cup_{j}S_{j})=\cup_{j}f^{-1}(S_{j})\). This method assumes that the output set, e.g., a safe region, can be formulated as a polytope (intersection of half-planes) \(\{y\in\mathbb{R}^{m}|Ay-b\leq 0\}\). It then propagates the polytope backwards through the layers.
For linear layers, the preimage is computed by applying the linear operations corresponding to the layer. Suppose we have a linear mapping in the form of \(y=Wz+a\), then the preimage of the output polytope under this linear operation can be formulated as \(\{z\in\mathbb{R}^{n_{L-1}}|AWz+(Aa-b)\leq 0\}\). For nonlinear layers, the algorithm restricts the backward propagation to a subset where the activation pattern of the ReLU neurons is fixed. Let \(s(z)\) denote the activation status vector of the nonlinear neurons where \(s(z)_{j}=1\) if \(z_{j}\geq 0\) and \(s(z)_{j}=0\) otherwise. A diagonal matrix \(diag(s(z))\) is introduced to restrict to a fixed activation pattern, on which only linear computation is required to compute the preimage subset. The exact preimage can then be computed by taking the union of each partition (preimage property (2)).
\[ReLU^{-1}(\{y\in\mathbb{R}^{m}\mid Ay-b\leq 0\})=\bigcup_{s\in\{0,1\}^{n_{i}}}\{z\in\mathbb{R}^{n_{i}}\mid A\,diag(s)z-b\leq 0,\;-diag(s)z\leq 0,\;diag(1-s)z\leq 0\}\]
**Example 4**.: _In this example, we consider the same verification problem as in Example 2 and 3, but from the backward perspective. Preimage analysis aims to investigate whether the input region \([-1,1]\times[-1,1]\), which is expected to result in decision \(y_{1}\), fails the safety check. We first formulate the target output region as a polytope. Since we only have two labels, the output constraint is, therefore, a single half-plane encoded as \(\{y\in\mathbb{R}^{2}|y_{1}-y_{2}\geq 0\}\). We then proceed to compute the preimage of the target polytope under the linear mapping (from the \(2^{nd}\) hidden layer to the output layer), of
Fig. 6: Preimage polytopes by exact method.
_which the result is \(\{(z_{3},z_{4})\in\mathbb{R}^{2}\mid 3z_{3}+z_{4}\geq 0\}\). Next, preimage computation for ReLU starts with partitioning the neuron vector space \(\mathbb{R}^{2}\) into \(2^{2}\) sets where, for each subset, the status of nonlinear neurons is fixed and preimage computation proceeds similarly to the linear mapping. The partition leads to four result polytopes._
_Figure 6 shows the result of four preimage polytopes derived in the two-dimensional space \((z_{1},z_{2})\). As an example, the preimage polytope derived corresponding to the partition where both neurons are active is (upper left of Figure 6):_
\[\{z^{(1)}\in\mathbb{R}^{2}\ :A^{(1)}z^{(1)}\geq 0\}\quad\text{where}\] \[A^{(1)}=\begin{bmatrix}2&11\\ 1&3\\ -1&2\end{bmatrix},\quad z^{(1)}=\begin{pmatrix}z_{1}\\ z_{2}\end{pmatrix}\]
_The other three polytopes are derived in the same way. The four preimage polytopes are then partitioned further into 16 polytopes to characterize the exact preimage of the input layer. Note that the combination of the four polytopes actually covers the hidden vector space \([-2,2]\times[-2,2]\), and the resulting preimage polytopes on the input layer cover the region \([-1,1]\times[-1,1]\), which certifies that the correct decision is taken for the entire region under investigation._
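The matrix \(A^{(1)}\) above can be reproduced mechanically: for a fixed activation pattern the network is linear, so the output half-space is pulled back by plain matrix products, while the pattern itself contributes sign constraints on the pre-activations. The sketch below does this in NumPy for the all-active pattern of the second hidden layer (a minimal illustration of the exact method, with the relevant weights of Example 1 re-declared inline).

```python
import numpy as np

W2 = np.array([[1., 3.], [-1., 2.]])      # second hidden layer of Example 1
W3 = np.array([[1., 0.], [-2., -1.]])     # output layer of Example 1

C = np.array([[1., -1.]])                 # output half-space  C @ y >= 0, i.e. y1 - y2 >= 0
s = np.array([1., 1.])                    # activation pattern: both second-layer ReLUs active
D = np.diag(s)

# With the pattern fixed, y = W3 @ D @ W2 @ z on this region; the polytope over z = (z1, z2)
# combines the pulled-back output constraint with the pattern's pre-activation constraints.
A_out = C @ W3 @ D @ W2                   # (C W3 D W2) z >= 0
A_on = np.diag(s) @ W2                    # active neurons: pre-activation >= 0
print(np.vstack([A_out, A_on]))           # rows [2, 11], [1, 3], [-1, 2], matching A^(1)
```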
_SyReNN._ SyReNN is proposed in [47] to compute the symbolic representation of a neural network so as to understand and analyze its behaviours. It targets _low-dimensional_ input subspaces and computes their exact symbolic partitioning, on which the mapping function is completely linear. This methodological design is also referred to as neural network decomposition. We classify it as a backward analysis method, as this method provides a symbolic representation in the input space. SyReNN focuses on neural networks with piecewise-linear activation functions. This restriction enables a precise characterisation of the input space \(X\) as a finite set of polytopes \(\{X_{1},\cdots,X_{n}\}\). Within each input polytope \(X_{i}\), the neural network is equivalent to a linear function. By means of such a symbolic representation, safety verification is reduced to checking whether the vertices of every bounded convex polytope \(X_{i}\) satisfy the output property.
To compute the symbolic decomposition on the input domain, this algorithm starts with the trivial partition \(X\) and derives the linear partitions layer by layer. Given the partition hyperplanes of the nonlinear layer \(i\), e.g., \(z_{1}=0,z_{2}=0,\cdots,z_{n_{i}}=0\) with \(n_{i}\) ReLUs, and the symbolic representation \(f_{i-1}\) (a set of polytopes) computed until layer \(i-1\), \(\hat{f}_{i}\) is computed by recursively partitioning the current polytopes based on the newly-added hyperplanes. For example, given a polytope \(Z_{i-1}\), if an orthant boundary (e.g., hyperplane \(z_{i}=0\)) is hit when traversing the boundary of \(Z_{i-1}\), then \(Z_{i-1}\) is further partitioned into \(Z_{i-1,1}\) and \(Z_{i-1,2}\), which lie on the opposite sides of the hyperplane. This procedure terminates when all resulting polytopes lie within a completely linear region of the neural network \(f\).
#### Iii-B2 Approximate methods
Exact methods for preimage analysis suffer from exponential complexity in the worst case. Similarly to the development of incomplete verifiers, preimage approximation techniques begin to emerge by leveraging different approximation (relaxation) techniques. They compute a symbolic approximation of the preimages to bypass the intractability of computing exact preimage representations. Computational efficiency and scalability can be greatly improved with the sacrifice of precision.
_Symbolic interpolation._ Symbolic interpolation [48] has been used for program verification and SMT solving. To compute provable preimage approximations, [27] leverages interpolants, especially those with simple structures, and computes preimages from the output space through hidden layers to the input space. The generated approximations can then be applied to reason about the properties of the neural network itself. For example, in the case that a desired property (a target output set) \(Y\) should be satisfied when starting from a certain input set \(X\), an under-approximation \(\underline{X}\) of the preimage for \(Y\) can be computed. Then the property can be verified by checking whether \(X\to\underline{X}\) holds, i.e., whether \(X\) is contained in the under-approximated preimage.
[48] proposes an algorithm to compute the preimage approximation by iterating backwards through the layers. It encodes the neural network as constraints in the theory of _quantifier-free linear rational arithmetic_ (QFLRA) and requires the output set to be encoded as a Boolean combination of atoms in the form of half-spaces. Suppose we now focus on deriving preimage over-approximations of the target output set \(Y\). The algorithm starts by computing the (overapproximated) set of inputs to the last layer, denoted as \(p_{L}^{f,Y}\), which leads to the output set \(Y\), i.e., \(f^{(L)}(p_{L}^{f,Y})\models Y\). The algorithm then iteratively computes preimages of the other layers that satisfy \(p_{i}^{f,Y}=\{z\mid f^{(i)}(z)\models p_{i+1}^{f,Y}\}\). This procedure leverages sampling techniques to construct a set of points mapped to the complement of \(Y\), which are used to tighten the over-approximations. The algorithm relies on Craig's Interpolation theorem to guarantee the existence of an (over- and under-) approximation. It also leverages the bound propagation framework to compute a bounded domain on each layer, which speeds up the interpolation condition checking.
**Inverse bounding.** [28] points out two important applications based on preimage analysis: safety verification for dynamical systems and out-of-distribution input detection. Motivated by these use cases, an inverse bound propagation method is proposed to compute the over-approximation of the preimage. Bound propagation has been widely employed to build efficient verifiers in the forward direction (certified output bound computation). Compared with forward analysis, it is challenging to adopt bound propagation methods directly to compute tight intermediate bounds, and thus difficult to compute tight over-approximations. This is because, for the inverse problem, the constraints on the input are quite loose and even unbounded in some control applications. Simply applying the bound propagation procedure will not lead to useful intermediate bounds, which further impacts the tightness of the symbolic relaxation on nonlinear neurons.
Given this, an inverse propagation algorithm is proposed in [28] to compute a convex over-approximation of the preimage
represented by a set of cutting planes. It first transforms the preimage over-approximation problem into a constrained optimization problem over the preimage and further relaxes it to Lagrangian dual optimization. To tighten the preimage and intermediate bounds, they introduce a dual variable with respect to the output constraints and tighten these bounds iteratively, leveraging a standard gradient ascent algorithm.
**Preimage approximation.** Motivated by the practical needs of global robustness analysis [49, 11, 21] and quantitative verification [50, 51], an anytime algorithm is proposed in [29] to compute provable preimage approximations. The generated preimage is further applied to verify quantitative properties of neural networks, defined by the relative proportion of the approximated preimage volume against the input domain under analysis, formally stated as follows.
**Definition 3**.: _Given a neural network \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), a measurable input set with non-zero measure (volume) \(X\subseteq\mathbb{R}^{n}\), a measurable output set \(Y\subseteq\mathbb{R}^{m}\), and a rational proportion \(p\in[0,1]\), the neural network satisfies the quantitative property \((X,Y,p)\) if \(\frac{\text{vol}(f^{-1}_{X}(Y))}{\text{vol}(X)}\geq p\)._
This approach targets safety properties that can be represented as polytopes and characterizes preimage under-approximation using a disjoint union of polytopes. To avoid the intractability of the exact preimage generation method, convex relaxation is used to derive sound under-approximations. However, one challenge is that the generated preimage under-approximation can be quite conservative when reasoning about properties in large input spaces with relaxation errors accumulated through each layer. To refine the preimage abstraction, a global branching method is introduced to derive tighter approximations on the input subregions. This procedure proposes a (sub-)domain search strategy prioritizing partitioning on most uncovered subregions and a greedy splitting rule leveraging GPU parallelization to achieve better per-iteration improvement. To further reduce the relaxation errors, this method formulates the approximation problem as an optimization problem on the preimage polytope volume. Then it proposes a differentiable relaxation to optimize bounding parameters using projected gradient descent.
**Example 5**.: _In this example, we demonstrate how to construct a provable preimage (under-)approximation for the target output region, and apply it to quantitative analysis of the verification problem shown in previous examples. Consider the quantitative property with input set \(\phi_{\text{pre}}=\{x\in\mathbb{R}^{2}\mid x\in[-1,1]^{2}\}\), output set \(\phi_{\text{post}}=\{y\in\mathbb{R}^{2}\mid y_{1}-y_{2}\geq 0\}\), and quantitative proportion \(p=0.9\). We apply the preimage approximation algorithm to verify this property. Figure 7 presents the computed preimage before (left) and after one-iteration refinement (right). Note that the partition is performed w.r.t. input \(x_{1}\), which results in two polytopes for the subregions. We compute the exact volume ratio of the refined under-approximation against the input set. The quantitative proportion reached with the refinement is 94.3%, which verifies the quantitative property._
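For intuition, the quantitative proportion of Definition 3 can be estimated (though not certified) by uniform sampling; `net` below is a hypothetical two-output network standing in for the one used in the running examples.

```python
import numpy as np

def estimate_proportion(net, n_samples=100_000, seed=0):
    """Monte Carlo estimate of vol(f^-1_X(Y)) / vol(X) for the sets of Example 5:
    X = [-1, 1]^2 and Y = {y : y_1 - y_2 >= 0}."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n_samples, 2))   # uniform samples from X
    y = net(x)                                        # assumed shape (n_samples, 2)
    return float(np.mean(y[:, 0] - y[:, 1] >= 0.0))

# The property (X, Y, 0.9) is plausible if the estimate is at least 0.9;
# a guarantee still requires the polytope under-approximation itself.
```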
## IV Application Examples
In this section we provide a selection of experimental results and lessons learnt from applying formal verification and certification approaches described in the previous section to neural network models drawn from a range of classification problems. These include image and video recognition, automated decisions in finance and text classification. In addition to adversarial robustness of the models, we demonstrate certification of individual fairness of automated decisions and discuss robust explanations.
### _MSR-based Certification for Images and Videos_
The game-based method [10] has been applied to analyse and certify the robustness of image classification models to adversarial perturbations with respect to the maximal safe radius, working with a range of feature extraction methods and distance metrics. Figure 8 shows a typical outcome of such analysis, with converging lower and upper MSR bounds for an image of a traffic sign for \(l_{2}\) distance and features extracted from the latent representation computed by a convolutional neural network (CNN) model. It can be seen that the image is certified safe for adversarial perturbations of up to 1.463 in \(l_{2}\) distance, which is some distance away from the best upper bound at approx. 3, but can be improved with more iterations since the method is anytime.
Fig. 8: Convergence of maximum safe radius computed using the game-based method for a traffic sign image from the GTSRB dataset originally classified as “keep right”. Left: The convergence trends of the upper bound obtained with Monte Carlo Tree Search and the lower bound with Admissible A*. Right: unsafe images (top two rows) and certified safe images (bottom two rows). Figure taken from [10].
Fig. 7: Preimage approximation.
An extension of the game-based method was developed in [33] to provide MSR-based certification for videos, and specifically for neural network models consisting of a CNN to perform feature extraction and a recurrent neural network (RNN) to process video frames. Adversarial perturbations were defined with respect to optical flow, and the algorithmic techniques involve tensor-based computation. Examples of safe and unsafe perturbations are shown in Figure 9, and convergence trends for lower and upper bounds similar to those in Figure 8 can be observed.
### _Robustness of Language Models_
As an example of application of convex relaxation tools (variants of CROWN [34]), we mention the study of [52], which aims to assess the robustness of Natural Language Processing tasks (sentiment analysis and text classification) to word substitution. It was reported that standard fully connected (FC) and CNN models are very brittle to such perturbations, which may make their certification unworkable. [53] critiqued the appropriateness of the classical concept of adversarial robustness defined in terms of word substitution in the context of NLP models. It was observed in an empirical study that models trained to be robust in the classical sense, for example, trained using interval bound propagation (IBP), lack robustness to syntax/semantic manipulations. It was then argued in [53] that a _semantic_ notion of robustness that better captures linguistic phenomena such as shallow negation and sarcasm is needed for language models, where a framework based on templates was developed for evaluation of semantic robustness.
### _Robust Explanations for Language Models_
Explainability of language models was studied in [19], with a focus on robust optimal explanations that imply the model prediction. Figure 10 shows examples of high-quality robust optimal explanations (using the minimum length of explanation as the cost function). In contrast, heuristic explanations such as integrated gradients or Anchors may lack robustness, but it is possible to repair non-robust Anchors explanations by minimally extending them; see Figure 11.
### _Fairness Certification Using MILP_
[54] developed methods for certification of individual fairness of automated decisions, defined, given a neural network and a similarity metric learnt from data, as requiring that the output difference between any pair of \(\epsilon\)-similar individuals is bounded by a maximum decision tolerance \(\delta\geq 0\). Working with a range of similarity metrics, including Mahalanobis distance, a MILP-based method was developed not only to compute certified bounds on individual fairness, but also to train certifiably fair models. The computed certified bounds \(\delta_{*}\) are plotted in Figure 12 for the Adult and the Crime benchmarks. Each heat map depicts the variation of \(\delta_{*}\) as a function of \(\epsilon\) and the NN architecture. It can be observed that increasing \(\epsilon\) correlates with an increase in the values for \(\delta_{*}\), as higher values of \(\epsilon\) allow for greater feature changes.
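The certification itself is a MILP computation, but the property being certified is easy to probe empirically. The sketch below samples \(\epsilon\)-similar pairs under a Mahalanobis metric and records the largest observed output gap; it is a hypothetical lower-bound check (the function names and the scalar-output assumption are ours), not the certified bound \(\delta_{*}\).

```python
import numpy as np

def empirical_fairness_gap(net, X, S_inv, eps, n_pairs=10_000, seed=0):
    """Largest observed |net(x) - net(x')| over sampled pairs whose Mahalanobis
    distance (x - x')^T S_inv (x - x') equals eps^2; `net` is assumed to
    return a scalar decision score."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(n_pairs):
        x = X[rng.integers(len(X))]
        d = rng.normal(size=x.shape)
        d *= eps / np.sqrt(d @ S_inv @ d)      # rescale to Mahalanobis norm eps
        worst = max(worst, abs(float(net(x + d)) - float(net(x))))
    return worst   # exceeding the tolerance delta witnesses a violation
```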
## V Future Challenges
Formal verification and certification of neural network models has made steady progress in recent years, with several tools released to the community and an established tool competition [4]. Nevertheless, considerable scientific and methodological progress is needed before these tools are adopted by developers. Below we outline a number of research challenges.
**Beyond \(\ell_{p}\)-norm robustness.** The vast majority of robustness evaluation frameworks consider bounded \(\ell_{p}\)-norm perturbations. While these suffice as proxies for minor visual image perturbations, real-world tasks rely on similarity measures, for example cosine similarity for word embeddings or Mahalanobis distance for images. It is desirable to define measures and certification algorithms for semantic robustness, which considers such similarity measures as first-class citizens, and works with perturbations that reflect visual or geometric aspects characteristic of the application, such as object movement or lighting conditions. More generally, robustness evaluation frameworks for more complex properties induced by the use cases will be needed.
**Beyond supervised robustness.** Existing robustness formulations focus on the supervised learning setting. However, collecting and labelling large datasets that are necessary to ensure the high robustness performance needed in safety-critical applications is costly and may not be feasible for use cases such as autonomous driving. Instead, it is desirable to formulate robustness measures and evaluation frameworks directly in some appropriate semi-supervised, or even unsupervised, setting, where the definition of robustness needs to focus on the quality of the learned representations rather than
Fig. 9: Shown in top row are sampled frames of a HammerThrow video and the corresponding optical flows are in the 2nd row. Unsafe perturbations of flows are in 3rd row and safe in 4th. Figure taken from [33].
classification (prediction) because of the lack of labels. This may involve working with similarity measures such as Mahalanobis distance and will be challenging both theoretically and computationally to achieve provable robustness guarantees.
**Scalability in network width and depth.** Despite much progress, the scalability of robustness certification and evaluation frameworks remains limited to low-dimensional models. Applying certification to realistic use cases (such as object detection) will necessitate significant improvements with respect to input dimensionality and network depth, as well as the types of activation functions that can be handled.
**Efficiency and precision trade-off.** Robustness certification and evaluation involve a variety of methods, including exact, approximate and statistical ones. While exact methods offer completeness, trading off exact precision for approximate bounding results in more efficient anytime methods, and completeness can be recovered by combining fast approximate methods such as convex relaxation with branch-and-bound computation. Statistical methods provide estimates of robustness that may be unsound but are fast and, in many cases, sufficient for the application being considered.
**Compositionality and modularity of AI systems.** Certification tools that have been developed to date are monolithic, which matches the monolithic structure of the vast majority of neural network models. Yet, similarly to safety-critical systems, it is anticipated that better structuring of models and tools is likely to improve their reliability and maintainability. Therefore, modularity, compositionality and, in particular, assume-guarantee compositional frameworks, are desirable future directions.
**Calibrating uncertainty.** It is recognised that deterministic neural networks can be overconfident in their decisions, and instead a variant known as Bayesian neural networks (BNNs), which admits a distribution over the weights and provides outputs in the form of the posterior distribution, is preferred, as it allows for a principled means to return an uncertainty measure alongside the network output. BNN certification methodologies are much more complex than for deterministic NNs, and still in early stages of development, including uncertainty quantification [55], computing lower bounds on safety probability [56] and certifiable adversarial robustness [57]. Unfortunately, the methods do not scale beyond small networks and standard Bayesian inference tends to underestimate uncertainty.
**Robust learning.** A drawback of certification as presented in this paper is that it pertains to trained models, and if the model fails certification, it is not clear how it can be repaired, and expensive retraining may be needed. A natural question then arises as to whether one can learn a model that is guaranteed to be robust. Building on the positive and negative theoretical results in the case of robust learning against evasion attacks [58, 59, 60, 61], it would be interesting to generalise these results to neural network models and to develop implementable frameworks that can provide provable guarantees on robustness.
## VI Conclusion
We have provided a brief overview of formal verification approaches that can be employed to certify neural network models at test time, to train certifiably robust or fair models, and to provide meaningful explanations for network predictions. The methods can be categorised into forward and backward analysis, and involve techniques such as search, bound propagation, constraint solving and abstract interpretation. Both forward and backward analysis have the potential to support more complex verification properties, which have been little explored to date. Empirical results obtained on a range of standard benchmarks show that neural network models are often brittle to adversarial perturbations, but verification approaches can be used to strengthen their robustness and compute certification guarantees, thus improving trustworthiness of AI decisions.
Fig. 11: Examples of Anchors explanations (in blue) along with the minimal extension required to make them robust (in red). Figure taken from [19].
Fig. 10: Optimal robust explanations (highlighted in blue) for IMDB, SST and Twitter datasets (all the texts are correctly classified). Figure taken from [19].
## Acknowledgments
This project received funding from the ERC under the European Union's Horizon 2020 research and innovation programme (FUN2MODEL, grant agreement No. 834115) and ELSA: European Lighthouse on Secure and Safe AI project (grant agreement No. 101070617 under UK guarantee).
|
2307.16433 | Detecting Out-of-distribution Objects Using Neuron Activation Patterns | Object detection is essential to many perception algorithms used in modern
robotics applications. Unfortunately, the existing models share a tendency to
assign high confidence scores for out-of-distribution (OOD) samples. Although
OOD detection has been extensively studied in recent years by the computer
vision (CV) community, most proposed solutions apply only to the image
recognition task. Real-world applications such as perception in autonomous
vehicles struggle with far more complex challenges than classification. In our
work, we focus on the prevalent field of object detection, introducing Neuron
Activation PaTteRns for out-of-distribution samples detection in Object
detectioN (NAPTRON). Performed experiments show that our approach outperforms
state-of-the-art methods, without the need to affect in-distribution (ID)
performance. By evaluating the methods in two distinct OOD scenarios and three
types of object detectors we have created the largest open-source benchmark for
OOD object detection. | Bartłomiej Olber, Krystian Radlak, Krystian Chachuła, Jakub Łyskawa, Piotr Frątczak | 2023-07-31T06:41:26Z | http://arxiv.org/abs/2307.16433v1 | # Detecting Out-of-distribution Objects
###### Abstract
Object detection is essential to many perception algorithms used in modern robotics applications. Unfortunately, the existing models share a tendency to assign high confidence scores for out-of-distribution (OOD) samples. Although OOD detection has been extensively studied in recent years by the computer vision (CV) community, most proposed solutions apply only to the image recognition task. Real-world applications such as perception in autonomous vehicles struggle with far more complex challenges than classification. In our work, we focus on the prevalent field of object detection, introducing **N**euron **A**ctivation **P**a**T**te**R**ns for out-of-distribution samples detection in **O**bject detectio**N** (NAPTRON). Performed experiments show that our approach outperforms state-of-the-art methods, without the need to affect in-distribution (ID) performance. By evaluating the methods in two distinct OOD scenarios and three types of object detectors we have created the largest open-source benchmark for OOD object detection.
## 1 Introduction
OOD detection, as described in a thorough study by Yang et al. [31], is a problem that arises when machine learning models are applied to data derived from a distribution different from the one they were trained on. This results in poor model performance, since the model has not acquired the knowledge that would enable correct predictions. This issue is not limited to classification tasks because object detectors may also encounter OOD samples in two situations: when the inference image contains unknown object classes or when the image scenery is significantly different from the training examples. Practitioners tackle the latter by covering the entire range of scenarios. For example, the most popular autonomous driving datasets include images of environmental, weather, and geographic diversity [1, 32, 29]. On the other hand, identifying an unknown object may be possible thanks to algorithms relating to two highly overlapping research fields - open-set (OS) detection [28, 6] and OOD detection [15, 16, 7]. Both OS and OOD methods design a way to quantify the model's uncertainty regarding encountered data so that high uncertainty scores are assigned to unknown objects while low scores are assigned to those well-represented in the training data.
In contrast to the image classification problem, we observe a severe shortage of OOD detection methods for object detectors. Detectors process images in a much more complex way than classifiers, and the evaluation process and posterior analysis of predicted uncertainty scores are hampered in a similar way. This issue has become a substantial obstacle for researchers seeking to devise novel OOD algorithms for object detection. Finally, almost every published OS or OOD algorithm assumes Faster R-CNN as the default architecture, making the proposed solutions architecture-specific and infeasible to apply to any other model. We see the need for a universal, simple, and yet efficacious OOD framework, which we address in this work.
In this work, we present NAPTRON, a neuron activation pattern (NAP) OOD method adapted to object detection. NAPs have been shown to be a highly efficient technique for OOD detection in image recognition problems [25]. Our algorithm leverages object detectors' internal feature representation and an understanding of the training distribution to estimate the uncertainty of predicted bounding boxes. ReLU-activated layers, being the foundation of most network architectures [14, 26, 19, 2], are a natural source of binary patterns, because they set every neuron (or convolution unit) to either a positive (on) or a zeroed (off) state. The NAPs of ReLU networks display a very convenient property for OOD detection; namely, ReLU-activated networks generate far fewer distinct NAPs than they are theoretically capable of generating [12, 13]. This finding fuels the intuition that if one memorized all known patterns, which are not very numerous, then encountering an unseen pattern during inference would be a reliable indicator of OOD data.
The main contributions of this paper are:
* We present a theoretically inspired NAPTRON that uses binary NAPs extracted from hidden layers of object detectors to tell ID from OOD predictions. This method is both computationally efficient and practically effective for OOD detection, making it simple to incorporate into existing object detection architectures (code: [https://github.com/safedm-group/nsptron](https://github.com/safedm-group/nsptron)).
* We perform comprehensive experiments involving two datasets and three network architectures, which prove that the proposed method outperforms state-of-the-art OOD detectors.
* We introduce a novel OOD detection evaluation protocol that analyzes scores of OOD bounding boxes and allows for a more objective comparison of the methods.
## 2 Related Work
### Out-of-Distribution Detection
Hendrycks et al. [15] proposed a baseline OOD sample detection method. It only involves setting a threshold on the winning class softmax probability (maximum softmax probability, MSP). In object detection, it is already necessary to set a softmax probability threshold for non-maximum suppression (NMS) to filter redundant background predictions. Therefore, setting another one to separate OOD objects may not be particularly useful in practice. Nevertheless, it still provides a decent baseline.
Helmholtz **Energy** [21] was initially utilized successfully for OOD detection in image classification. It is a very easy-to-use and universal way to estimate semantic uncertainty, and it does not require any detector customization or loss handcrafting. Internally, object detectors perform classification of proposals, resulting in a classification vector for which one can compute the energy score.
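The energy score has a closed form over the per-proposal classification logits. A minimal sketch follows; the temperature `T` and the sign convention (higher value meaning more OOD-like) are our assumptions.

```python
import torch

def energy_uncertainty(logits, T=1.0):
    """Helmholtz energy of classification vectors of shape [num_boxes, num_classes];
    flatter logits give higher energy, i.e. higher OOD uncertainty."""
    return -T * torch.logsumexp(logits / T, dim=-1)
```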
**Virtual outlier synthesis** (VOS) [7] is the first work focusing on OOD identification in the object detection task. VOS allows OOD detection by synthesizing virtual outliers in feature (latent) space, thereby regularizing the model's decision boundary during training. VOS samples the virtual outliers from the low-likelihood region in the feature space and uses them as input for an unknown-aware training objective. The contrastive loss function shapes the uncertainty (Helmholtz energy) space between known data and synthesized outlier data. VOS is not architecture-agnostic because the outlier synthesis process occurs in feature space bound to the fully-connected layers of Faster R-CNN ROI head. The authors of VOS proposed an evaluation scheme that requires an OOD dataset i.e. a set of images that do not contain ID categories. Any output bounding box generated for these images is considered an "OOD object", while any bounding box generated for images from the ID test dataset is deemed an "ID object". We find this approach is over-simplified because model predictions are not evaluated, so any "OOD object" or "ID object" can actually be a background prediction.
### Open-Set Detection
The **Gaussian mixture models** (GMM) [23] approach also introduces a change in the default loss function, focusing on the classification loss. The authors add an anchor loss term to facilitate learning a structured logit space. Next, they fit class-specific GMMs to the logit space using a validation dataset. For any test sample, uncertainty is estimated from the log-likelihood of belonging to any one of the known GMMs.
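A minimal sketch of the scoring stage: one GMM per known class is fitted to validation logit vectors, and the negative best log-likelihood is used as uncertainty. The single-component default and the negation convention are our assumptions, and the anchor-loss training is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(logits, labels, n_components=1):
    """Fit one GMM per known class on the logit vectors of its validation samples."""
    return {c: GaussianMixture(n_components).fit(logits[labels == c])
            for c in np.unique(labels)}

def gmm_uncertainty(gmms, logit):
    """Negative of the best log-likelihood under any known-class GMM."""
    return -max(g.score_samples(logit[None, :])[0] for g in gmms.values())
```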
The authors of **OpenDet**[11] follow the intuition that known-class objects tend to be clustered to form high-density regions in the latent space, while unknown objects are distributed in low-density regions. Consequently, they propose identifying unknown objects by separating high- and low-density regions in the latent space using a contrastive loss. On top of this, they provide another loss function component responsible for learning predictive uncertainty directly as a softmax probability without the logit of ground-truth class. The authors of OpenDet provide the implementation for both Faster R-CNN and RetinaNet. However, the method does not apply to anchor-free architectures because IOU-based sampling of proposals for contrastive learning is not feasible.
The **OWOD** [17] approach was developed before OpenDet. Both ideas share the contrastive clustering in latent space. The authors of OWOD also propose an unknown-aware RPN head and Helmholtz energy-based unknown identification. They model the distributions of the known and unknown energy values with a set of shifted Weibull distributions. Similarly to GMM, they fit the distributions using a validation dataset. Due to the RPN requirement, Faster R-CNN is the only compatible architecture.
### Other uncertainty estimates in Object Detection
Previous work by Choi et al. [5] introduced **Gaussian** YOLOv3. The authors redesign YOLO's loss function to model localization uncertainty directly. Each coordinate of a bounding box is represented by a pair of Gaussian parameters (mean and variance) instead of just one value. There is no ground truth for the variance of coordinates, but the Gaussian negative log-likelihood loss conveniently requires ground truth for the mean only, which allows one to learn the variance without providing ground truth for it. We managed to extend this approach to the Faster R-CNN and RetinaNet architectures. In FCOS, however, a "localization centerness" branch is designed to perform a very similar role to Gaussian localization uncertainty; trying to combine the two led to poor results.
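The loss that makes this possible is the per-coordinate Gaussian negative log-likelihood; a sketch assuming the head predicts a mean and a log-variance for each of the four coordinates (the constant term is dropped):

```python
import torch

def gaussian_box_nll(pred_mean, pred_logvar, target):
    """Gaussian NLL for box regression; all tensors have shape [num_boxes, 4].
    Only the mean has ground truth, yet minimising the NLL also trains the variance."""
    return (0.5 * (pred_logvar + (target - pred_mean) ** 2 / pred_logvar.exp())).mean()
```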
The popular **Monte-Carlo Dropout** (MCD) [9] method has already been adapted to the object detection task [22, 24]. The most straightforward adaptation technique was proposed in ref. [24], in which the authors suggested averaging both classification and regression vectors over the number of Monte-Carlo forward passes for each proposal separately. The variance of these vectors for each proposal is the final uncertainty measure.
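A sketch of that adaptation for a detection head that contains dropout layers; the helper that re-enables only the dropout modules (so batch-norm statistics stay frozen) and the reduction of the per-class variances are our choices, not necessarily those of [24].

```python
import torch

def enable_dropout(model):
    # keep dropout stochastic at test time without touching other modules
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

def mc_dropout_scores(head, features, n_passes=10):
    """Mean prediction and per-proposal variance over stochastic forward passes."""
    head.eval()
    enable_dropout(head)
    with torch.no_grad():
        out = torch.stack([head(features) for _ in range(n_passes)])  # [n_passes, P, C]
    return out.mean(dim=0), out.var(dim=0).mean(dim=-1)
```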
The Faster R-CNN-based **Object Localization Network** (OLN) [18] learns to detect objects by focusing on localization instead of foreground-background classification. The authors argued that focusing on learning "objectness cues", at the price of obtaining class-agnostic outputs, is the best way to achieve cross-dataset generalization. In practice, their method requires replacing the default classification losses in both Faster R-CNN stages with a pair of centerness and IOU losses.
As an alternative baseline focused on objectness for Faster R-CNN, we attempt to establish a threshold for the RPN objectness score of a bounding box proposal. This score is later converted into the final model output. Although the RPN objectness score is not used in the ROI head, we aim to determine its potential usefulness for the OOD detection tasks.
## 3 Problem Setup
Let us denote the set of \(C\) ID classes as \(K_{ID}=\{1,\dots,C\}\) and consider a given object detection dataset \(D=\{X,\Upsilon\}\), where \(X\) and \(\Upsilon\) denote the input images and labels. The set of labeled images consists of \(N\) samples, \(X=\{x_{1},\dots,x_{N}\}\), along with the labels associated with the sets of objects included in each of the \(N\) images, \(\Upsilon=\{Y_{1},\dots,Y_{N}\}\). Next, for each image, we have \(Y_{i}=\{y_{1},\dots,y_{K}\}\) representing a set of labels for \(K\) object instances. Each instance \(y=[c,\ l]\) consists of a class label \(c\in K_{ID}\) and a location \(l=[x,\ y,\ w,\ h]\), where \(x,y,w\), and \(h\) denote the object's bounding box center coordinates and size.
An OOD dataset is defined identically, except for the fact that it is semantically disparate with the ID dataset, meaning there are no common classes between ID and OOD datasets - \(K_{OOD}=\{C+1,\ \dots\ \}\).
Let us also introduce the test dataset \(D_{test}\) consisting of ID and OOD objects and an object detector \(\theta:x\mapsto Y\) trained on the train split (\(D_{train}\)) of \(D\).
The OOD detection task is performed for \(\theta\)-predicted ID and OOD samples \(\hat{\Upsilon}_{\theta}=\{\hat{\Upsilon}_{ID},\ \hat{\Upsilon}_{OOD}\}\), where ID samples are true positive predictions \(\hat{\Upsilon}_{ID}:=\{\hat{y}_{i}\in K_{ID}\times\mathbb{R}^{4}:\exists y_{t}\in D_{test},\ IOU(\hat{y}_{i},\ y_{t})>\lambda,\ \hat{c}=c_{t}\ \}\). The notation states that true positive predictions are all pairs of class labels and bounding box locations that sufficiently overlap with a ground truth test bounding box of the same class. Analogously, OOD samples are those predictions that sufficiently overlap with a ground truth bounding box of an unknown class, \(\hat{\Upsilon}_{OOD}:=\{\hat{y}_{o}\in K_{ID}\times\mathbb{R}^{4}:\exists y_{t}\in D_{test},\;IOU(\hat{y}_{o},\,y_{t})>\lambda,\;c_{t}\in K_{OOD}\;\}\). The IOU threshold \(\lambda\) is typically set to 0.5. Now, the goal of an OOD detector is to perform binary classification of \(\hat{\Upsilon}_{\theta}\), which relies on an uncertainty estimator \(\Phi_{\theta}:\hat{y}\mapsto\phi\) that is intended to assign high scores to OOD samples and low ones to ID samples. The classification is then performed by comparing a predefined uncertainty threshold with the uncertainty score associated with each sample.
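In code, the matching rule above amounts to an IOU test against the ground truth. The sketch below uses corner-format boxes (the definitions use centre format) and ignores the corner case of a box overlapping several ground-truth objects, so it is only an illustration of the labelling convention.

```python
def iou(a, b):
    """IOU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def label_prediction(pred_box, pred_cls, gts, known_classes, lam=0.5):
    """Return 'ID', 'OOD' or None (background) for one prediction."""
    for gt_box, gt_cls in gts:
        if iou(pred_box, gt_box) > lam:
            if gt_cls not in known_classes:
                return "OOD"
            if gt_cls == pred_cls:
                return "ID"
    return None
```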
## 4 Our approach
### Neuron Activation Patterns
NAPs have been previously used for uncertainty estimation in image classifiers [4, 25]. Image processing neural networks (NNs) use multiple layers that transform input samples sequentially into the desired form. Nonlinear activation functions present in each hidden layer of a NN significantly contribute to the network's ability to approximate complex, multidimensional relationships. Of all activation functions, ReLU is nowadays the most common choice for CV problems. All ReLU-activated layers produce matrices of either positive or zeroed values, which correspond respectively to active and inactive layers' units. An activation pattern is obtained by assigning a _true_ to positive units and _false_ to zeros. Thus, for a given network layer, NAP is a binary interpretation of which neurons or convolution units of the layer were activated during the processing of an image.
### Uncertainty estimation
Algorithm 1 describes how NAPTRON estimates the uncertainty of predicted bounding boxes. First, for every training image, we perform an object detection inference, simultaneously extracting binary NAPs corresponding to output bounding boxes. The patterns are extracted from a pre-selected layer. Then, for every true positive prediction, we store the extracted pattern in a memory structure. Each object class has a dedicated memory instance. For a given test sample we perform the inference and NAP extraction again. Next, for each inferred bounding box, we find the Hamming distance between its NAP and the nearest pattern out of those stored in the memory structure corresponding to the predicted label. The Hamming distance to the nearest known NAP is the NAPTRON uncertainty estimate.
The authors of [25] provided an efficient implementation of finding the minimal Hamming distance to known NAPs, which makes real-time uncertainty estimation possible. In this implementation, the Hamming distances between the test NAP and all known NAPs are computed concurrently. The minimal distance is the output of the uncertainty estimation algorithm.
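Stripped of the bit-packing used for efficiency in that implementation, the test-time scoring reduces to a few NumPy operations over the stored patterns of the predicted class; the array names below are illustrative.

```python
import numpy as np

def naptron_uncertainty(test_nap, class_memory):
    """Minimal Hamming distance between a boolean test NAP of length d and the
    stored NAPs of the predicted class, a boolean array of shape [n_known, d]."""
    return int((class_memory != test_nap).sum(axis=1).min())
```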
### Pattern extraction for bounding boxes
In every state-of-the-art object detector, one can distinguish a detection backbone part (e.g. ResNet50) that extracts features of an input image and a detection head that performs classification and regression of proposal bounding boxes (priors) based on the extracted features. Typically, the classification and regression are computed in two separate subnetworks that work in parallel. The _extractNAPS_ function extracts binary NAPs of the \(l\)-th ReLU-activated layer of the classification branch in the detection head of object detector \(\theta\). For Faster R-CNN, the ROI-head consists of fully connected (FC) layers. The operation is straightforward because there is no ambiguity in the link of hidden layers' activations to the final output. Assuming \(P\) proposals generated for a single image by the RPN-head, each layer processes \([P,In]\)-shaped feature maps into \([P,Out]\)-shaped feature maps, where \(In\) is the number of neurons of the previous layer and \(Out\) is the number of neurons of the current layer. Thus, for each layer and each proposed bounding box, we can easily distinguish an activation pattern of length \(Out\).
Nevertheless, detection heads of single-stage architectures such as RetinaNet and FCOS use convolutional layers instead of FC layers. Each hidden layer takes as input \([W,H,C]\)-shaped feature maps and processes them without changing their dimensions. The final layer changes the number of channels of the matrix from \(C\) to \(K*A\), where \(W\) and \(H\) are the width and height of processed feature maps, respectively, \(K\) is the number of classes, and \(A\) is the number of priors with centers in each of \(W*H\) locations of the feature maps. In our approach, all the \(A\) possible bounding boxes predicted in the \((w,h)\) location are associated with a \(C\)-long activation vector located at \((w,h)\) of the chosen hidden layer's output. In other words, any \((w,h)\)-centered output bounding box is attributed to the activation values located in the same coordinates of the hidden feature maps.
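The two extraction cases can be summarised in a few lines; the sketch below assumes the post-ReLU activations are already available as arrays and leaves out the optional binarisation threshold \(p\).

```python
import numpy as np

def extract_naps_fc(hidden):
    """ROI head of Faster R-CNN: `hidden` has shape [P, Out]; each proposal's
    NAP is the on/off pattern of its own row."""
    return hidden > 0

def extract_naps_conv(feature_map, centers):
    """Single-stage heads (RetinaNet, FCOS): `feature_map` has shape [W, H, C];
    every box centred at (w, h) shares the C-long pattern at that location."""
    return np.stack([feature_map[w, h] > 0 for (w, h) in centers])
```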
```
Data: \(\theta\), \(D_{train}\), \(x_{test}\), layer index \(l\)
Result: \(\phi_{test}\)
for \(c\) in \(K_{ID}\) do            // initialize a data structure for each class
    \(M_{c}\leftarrow\emptyset\)
end for
for (\(x\), \(y\)) in \(D_{train}\) do
    \(\hat{Y}\leftarrow\theta(x)\);
    \(NAPS_{x}\gets extractNAPS(\theta,x,l)\);
    for (\(\hat{y}\), \(NAP\)) in (\(\hat{Y}\), \(NAPS_{x}\)) do
        if \(\hat{y}\) is a true positive prediction then
            // Store NAP in the data structure associated with predicted class \(\hat{c}\)
            \(M_{\hat{c}}\gets M_{\hat{c}}\,||\,NAP\);
        end if
    end for
end for
// test phase
\(\hat{Y}\leftarrow\theta(x_{test})\);
\(NAPS_{test}\gets extractNAPS(\theta,x_{test},l)\);
for (\(\hat{y}\), \(NAP\)) in (\(\hat{Y}\), \(NAPS_{test}\)) do
    \(NAP_{nn}\gets NearestNeighbour(M_{\hat{c}},NAP)\);
    // Uncertainty of \(\hat{y}\)
    \(\phi\gets HammingDistance(NAP,NAP_{nn})\);
end for
```
**Algorithm 1** NAPTRON uncertainty estimation
## 5 Experiments
We propose an evaluation protocol to examine the effectiveness of the proposed NAPTRON detector in various aspects. These include the ability to identify OOD objects, to distinguish known from unknown samples, and to gradually acquire knowledge about new categories when labels become available for certain unknown objects.
**Datasets.** The experiments were conducted in two domain shift scenarios. In the first scenario, the detector was trained on the train split
of the PASCAL VOC [8] dataset consisting of 20 classes. Evaluation protocol for known and unknown objects was performed on validation split of the COCO [20] dataset consisting of 80 classes, including the 20 known ones. The validation split includes 20631 known objects and 15704 unknown objects. In the second scenario, the detector was trained on images from the train split of the BDD100k [32] dataset that consists of only 4 known classes (i.e., "pedestrian", " bicycle", "car" and "traffic sign"). Evaluation protocol was performed on the full validation split of the BDD100k dataset consisting of all 10 classes, including the 4 known ones. The validation split includes 152025 known objects and 33920 unknown objects.
**Architectures.** For our experiments, we chose three well-established object detection architectures: Faster R-CNN (two-stage, anchor-based) [27], RetinaNet (single-stage, anchor-based) [19], and FCOS (single-stage, anchor-free) [30]. The selected architectures represent different properties that enable our findings to be universal across many state-of-the-art object detection architecture types. All models were trained with default configuration parameters provided in the MMDetection [3] framework.
### Parameter sensitivity analysis
The NAPTRON algorithm has a couple of parameters that may affect OOD detection performance quality. We examined in detail the impact of those parameters, i.e.:
* layer index,
* distance reduction,
* binarization percentile threshold \(p\),
* train samples softmax probability threshold \(s\),
* NMS softmax score threshold.
**Layer index.** Choosing an optimal layer to extract binary NAPs from is very difficult when detecting OOD samples in the image classification task [25] since modern classifiers consist of a large number of layers. Detection heads of standard architectures - such as Faster R-CNN, RetinaNet, and FCOS - have only a few layers (i.e., 2-4), so choosing the correct one should be much easier. We also try extracting patterns from the \([W,H,C]\)-shaped feature maps originating from the feature extracting backbone to the detection head - denoted Layer 0 in Tables 1, 2, and 3. For RetinaNet and FCOS, extracting NAPs from Layer 0 requires identical operations as extracting from other layers, as described in Section 4.3. However, for Faster R-CNN, we had to flatten the feature maps to match the dimensionality of the FC layers. To do so, we computed the \(C\)-long vector of the means of every channel and binarized it by zeroing \(p\) percent of the lowest values in the vector.
**Distance reduction.** The authors of [25] estimate uncertainty by finding the minimal Hamming distance between the test NAP and all known NAPs of the predicted class; we check whether computing the average distance to the known NAPs yields improved results.
**Binarization percentile threshold \(p\).** Zeroing a certain percentage of units, which have the lowest magnitude across an activation pattern, was effective in the image classification setup. Our technique of extracting NAPs from detection heads makes the binarization step optional, so the default binarization threshold value equals \(0.0\).
**Training samples IOU threshold.** To construct the database of known patterns, one needs to perform object detection on the training images and choose only those activation patterns that correspond to the true positive predictions. Typically, a prediction is deemed correct if it overlaps any object with IOU greater than 0.5. We want to check if setting a higher IOU threshold (e.g., 0.9), and consequently choosing only the patterns that correspond very accurately to an object might lead to more accurate OOD detection results.
**Training samples softmax probability threshold \(s\).** The reason for examining the effect of this parameter is the same as for the IOU threshold explained above. The default softmax threshold value equals \(0.0\), meaning no true positive (TP) sample is discarded.
**NMS softmax score threshold.** Every OOD detector is sensitive to this parameter. The impact of the NMS threshold on all OOD methods' performance is described and studied in Section 5.3.
**Results.** The experiments were performed on the first evaluation scenario (PASCAL-VOC \(\rightarrow\) COCO); AUROC is used as the performance metric. Note that the values of the first columns of each Table 1, 2, and 3 are identical - all were generated for the default setup of the parameters (IOU \(\geq 0.5\), \(s\geq 0.0\), \(p=0.0\)).
is chosen. The choice of the layer is the most important parameter to tune. We recommend extracting patterns from the penultimate layer of the detection head.
### OOD detection
In the next experiment, we compared the performance of the proposed algorithm with the state-of-the-art OOD object detectors:
* confidence score from Faster R-CNN (Standard)
* objectness score from Region Proposal Network in Faster R-CNN (RPN)
* Energy [21]
* Virtual Outlier Synthesis (VOS) [7]
* Gaussian Mixture Models (GMM) [23]
* OpenDet [11]
* Open World Object Detection (OWOD) [17]
* Gaussian YOLOv3 (Gaussian) [5]
* Monte-Carlo Dropout (MCD) [24]
* Object Localization Network (OLN) [18]
All methods were compared using the officially published code of each algorithm with recommended configuration parameters, except the MCD method, which we implemented ourselves. It is important to notice that some of the methods were only designed for certain DNN architectures (e.g., Faster R-CNN); and we applied the methods only for the dedicated models.
**Metrics.** We measure OOD detection performance using two metrics: (1) the false positive rate at the true positive rate of 95% - FPR@95TPR and (2) the area under the receiver operating characteristic curve (AUROC). The metrics are computed separately for each class and then averaged.
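Both metrics can be computed per class with standard tooling; the snippet below treats OOD boxes as the positive class and the uncertainty score as the decision statistic, a convention the text leaves implicit.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def ood_metrics(unc_id, unc_ood):
    """AUROC and FPR@95TPR for one class from the uncertainty scores of its
    ID (true positive) and OOD predictions."""
    y = np.concatenate([np.zeros(len(unc_id)), np.ones(len(unc_ood))])
    s = np.concatenate([unc_id, unc_ood])
    fpr, tpr, _ = roc_curve(y, s)
    idx = min(np.searchsorted(tpr, 0.95), len(fpr) - 1)   # first point with TPR >= 95%
    return roc_auc_score(y, s), float(fpr[idx])
```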
**Performance comparison.** Table 4 shows the performance of the proposed algorithm and other methods. As can be observed, NAPTRON achieves the best FPR@95TPR for every pair of evaluation scenario and detector architecture considered in this work, and the best AUROC for 3 out of 6 evaluation cases. No other method yields such consistently good results. VOS was designed specifically for the Faster R-CNN architecture and detects OOD samples roughly just as well as NAPTRON. However, these methods could not be compared using other architectures since there is no straightforward way to apply VOS to anything but Faster R-CNN. We suspect that even if it could be applied to the single-stage architectures, it would perform similarly to Energy because both methods rely on the energy score.
Additionally, the results showed that the open-set methods (OpenDet, GMM, and OWOD) are not the most effective OOD detectors. They fail to outperform the baseline detector (Standard) consistently. GMM manages to do so in 3 or 4 (depending on the metric) out of 6 cases, OpenDet in 1 out of 4, and OWOD in 1 or 2 out of 2.
The rest of the methods perform very poorly in our challenging experimental setup. Gaussian, OLN, RPN, and MCD are unable to beat the baseline detector (Standard).
**NMS sensitivity.** For the above experiments, we set a low NMS threshold (0.01) so that all the possible predicted objects are accounted for when computing the metrics. However, since varying NMS softmax confidence threshold makes a significant difference in regular object detection performance, we investigate the impact of the NMS threshold on OOD detection performance. Selected NMS sensitivity plots of all considered OOD methods are presented in Fig. 1, 2 and 3. We included 4 out of 12 plots generated for each architecture, dataset scenario, and metric combination.
All the figures confirm that the quality of all the OOD methods is sensitive to underlying object detectors' NMS threshold variations. However, the ranking of OOD detectors does not change much for different NMS values since most of the methods' curves share a common trend. Typically, performance metrics reach their upper limit around 0.5 and decline sharply when the softmax probability threshold is between 0.8 and 0.99. These sharp shifts of metrics' levels for high thresholds occur because object detectors hardly ever predict OOD objects with such a high probability, so OOD metrics are being computed for scarce data. This issue does not matter much in practice because, in real-world applications of object detectors, the NMS threshold is usually set somewhere between 0.5 and 0.8 - thus filtering most of the FPs and letting through most of the TPs.
### Object detection
Many of the state-of-the-art OOD detection methods require altering default components of models to obtain improved uncertainty
Figure 1: Faster R-CNN-based OOD detectors NMS sensitivity evaluated on COCO. Metric - FPR@95TPR.
Figure 3: Faster R-CNN-based OOD detectors NMS sensitivity evaluated on BDD100k. Metric - AUROC.
Figure 2: FCOS-based OOD detectors NMS sensitivity evaluated on COCO. Metric - AUROC.
scores. It is expected that these alterations should not degrade regular object detection quality. Therefore, we performed additional experiments to evaluate the influence of the customization of standard object detectors, i.e., how modification of the original architecture affects the performance of the object detector.
Object detector performance heavily depends on a predefined, application-dependent confidence score threshold. The authors of [10] observed that decreasing the NMS confidence score threshold and consequently massively increasing the number of false positive detections increases mAP. This phenomenon undermines the validity of using mAP as the primary quality metric. Therefore, as a way to gauge the impact of the OOD methods on object detection quality, we conduct a visual collation of TPR vs FP curves (see Figures 4 and 5). All the plots were horizontally limited to 100 000 FPs.
Drawing conclusions about performance merely by looking at the curves is difficult, so we compute the area under each curve (AUC), limiting the FP number to \(2N\), where \(N\) is the number of all known ground truth objects in the test dataset. Next, we compare each method's AUC with the standard detector's AUC - Table 5 shows the results. Positive \(\Delta\)AUC values signify a higher (better) curve, while negative \(\Delta\)AUC indicates inferior performance.
MC-Dropout detectors is, on average, lower than the baseline, but the differences are smaller than for other methods.
### Visual evaluation
Ideally, an OOD detector provides an uncertainty score that is independent of, yet complementary to, the object detector's softmax score. By design, the softmax score is solely responsible for distinguishing background from foreground objects, while the uncertainty score separates known objects from unknown ones.
The relationship between the two scores is nontrivial, and simple statistical coefficients, such as the Pearson correlation coefficient, do not explain it sufficiently. Therefore, we provide visual 2D characteristics for the best-performing OOD detectors. A perfect characteristic would depict a cloud of blue points (OOD objects) vertically separable from a cloud of green points (TPs), and a cloud of red points (FPs) horizontally separable from the TPs. Figure 6 shows the NAPTRON uncertainty and softmax score relationship. We can observe that the blue OOD triangles are placed, in general, higher than the green TPs, especially for the most certain samples on the right side of the plot.
Overconfident OOD and FPs are the users' nightmare but our approach enables us to filter at least some of them. Examples of the samples identified as OOD by the proposed NAPTRON are in Figures 7 and 8. For all the examined algorithms, both scores are imperfect. In many cases, FPs have a high softmax score, whereas TPs have a lower score. Analogously, a low uncertainty score may be attributed to an OOD object. We observe that every type of uncertainty score is correlated with the softmax confidence score of an underlying object detector. This outcome is intuitive because we expect the TPs to have low uncertainty and high softmax probability. Thankfully, object detectors tend to assign lower probabilities to OOD objects, and OOD detectors assign higher uncertainty to FPs even though they are not explicitly meant to do so. These circumstances provide an opportunity to use softmax probability and one or more types of uncertainty scores combined in a manner that would boost both regular object detection and OOD detection performance. Finding an optimal way to combine multiple scores would require another set of experiments; this is beyond the scope of this work. |
2306.17404 | QuAVF: Quality-aware Audio-Visual Fusion for Ego4D Talking to Me
Challenge | This technical report describes our QuAVF@NTU-NVIDIA submission to the Ego4D
Talking to Me (TTM) Challenge 2023. Based on the observation from the TTM task
and the provided dataset, we propose to use two separate models to process the
input videos and audio. By doing so, we can utilize all the labeled training
data, including those without bounding box labels. Furthermore, we leverage the
face quality score from a facial landmark prediction model for filtering noisy
face input data. The face quality score is also employed in our proposed
quality-aware fusion for integrating the results from two branches. With the
simple architecture design, our model achieves 67.4% mean average precision
(mAP) on the test set, which ranks first on the leaderboard and outperforms the
baseline method by a large margin. Code is available at:
https://github.com/hsi-che-lin/Ego4D-QuAVF-TTM-CVPR23 | Hsi-Che Lin, Chien-Yi Wang, Min-Hung Chen, Szu-Wei Fu, Yu-Chiang Frank Wang | 2023-06-30T05:14:45Z | http://arxiv.org/abs/2306.17404v1 | # QuAVF: Quality-aware Audio-Visual Fusion for Ego4D Talking to Me Challenge
###### Abstract
This technical report describes our QuAVF@NTU-NVIDIA submission to the Ego4D Talking to Me (TTM) Challenge 2023. Based on the observation from the TTM task and the provided dataset, we propose to use two separate models to process the input videos and audio. By doing so, we can utilize all the labeled training data, including those without bounding box labels. Furthermore, we leverage the face quality score from a facial landmark prediction model for filtering noisy face input data. The face quality score is also employed in our proposed quality-aware fusion for integrating the results from two branches. With the simple architecture design, our model achieves \(67.4\%\) mean average precision (mAP) on the test set, which ranks **first** on the leaderboard and outperforms the baseline method by a large margin. Code is available at: [https://github.com/hsic-the-lin/Ego4D-QuAVF-TTM-CVPR23](https://github.com/hsic-the-lin/Ego4D-QuAVF-TTM-CVPR23)
## 1 Introduction
Ego4D [2] is a large-scale dataset introduced by Meta AI, specifically designed for the purpose of egocentric video understanding. Within the dataset, the Talking to Me (TTM) challenge focuses on the identification of social interactions in egocentric videos. Specifically, given a video and audio segment containing tracked faces of interest, the objective is to determine whether the person in each frame is talking to the camera wearer. This task holds significant importance for studying social interactions, and the understanding of egocentric social dynamics serves as a crucial element in various applications, including virtual assistants and social robots.
Drawing inspiration from the winning approach by Xue et al. [6], we attempt to fuse features from both modalities at an earlier stage. Therefore, in our initial approach, referred to as the Audio-Vision joint model (AV-joint), as shown in Figure 2, we incorporate a fusion of vision and audio features immediately after the backbone network, prior to aggregating temporal information. The AV-joint model is trained by jointly optimizing the vision and audio branches. However, despite employing a significantly larger backbone architecture (_ResNet-50_[3] and _Whisper_[4]), the AV-joint model does not yield substantial performance improvements over the baseline model. Although our initial trial did not yield satisfactory results, a thorough analysis of the limited improvement led to several key observations that motivated our final approach. Firstly, as described in the original Ego4D paper [2], the determination of the TTM label is based on vocal activity, irrespective of whether the person is visible in the scene. Consequently, a significant portion of the training data lacks the corresponding bounding box label. (about 0.7M frames out of 1.7M frames with TTM label do not have bounding box label)
In our initial approach, we addressed the absence of bounding box labels by using zero padding. However, this approach can have adverse effects on the optimization process of the vision branch, as it may be trained on a large number of non-realistic images. Additionally, since the visual and audio branches are trained jointly, the quality of the visual inputs can potentially impact the audio branch, particularly when fused at an early stage. The quality of the data is influenced by various factors, such as the methods employed to handle data without bounding box labels (e.g., zero padding), limitations of the hardware used to record egocentric videos, and potential inaccuracies in bounding box annotations; hence, improving data quality is not a straightforward task. One simple approach would be to discard data without bounding box labels, but this would significantly reduce the available data and waste audio activity annotations. To address these challenges, we explore disentangling the two modalities.
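Once the two branches are separate, the quality-aware fusion mentioned in the abstract can be pictured as a per-frame weighting of their outputs by the face quality score. The form below (linear blending with a quality cut-off `q_min`) is only one plausible instantiation for illustration, not necessarily the exact scheme used in the submission.

```python
def quality_aware_fusion(audio_score, visual_score, face_quality, q_min=0.5):
    """Blend per-frame branch scores: poor faces fall back to the audio branch,
    good faces let the visual branch contribute in proportion to their quality."""
    w = face_quality if face_quality >= q_min else 0.0
    return (1.0 - w) * audio_score + w * visual_score
```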
In our subsequent experiments, we discovered that using only the audio input resulted in superior performance compared to our initial AV-joint model (as shown in Table 1). This finding further reinforces our assumption that the quality of the visual data can impede the optimization process of the audio branch. As a result, in our final approaches, we employ separate models to process the audio and image modalities. For the audio branch, we leverage the powerful encoder from _Whisper_ [4], a robust speech recognition model, as we observed that the semantic information |
2309.11828 | Heegaard splittings and the tight Giroux Correspondence | This paper presents a new proof of the Giroux Correspondence for tight
contact $3$-manifolds using techniques from Heegaard splittings and convex
surface theory. We introduce tight Heegaard splittings, which generalise the
Heegaard splittings naturally induced by an open book decomposition of a
contact manifold. Via a process called refinement, any tight Heegaard splitting
determines an open book, up to positive open book stabilisation. This allows us
to translate moves relating distinct tight Heegaard splittings into moves
relating their associated open books. We use these tools to show that every
Heegaard splitting of a contact 3-manifold may be stabilised to a splitting
associated to a supporting open book decomposition. Finally, we prove the tight
Giroux Correspondence, showing that any pair of open book decompositions
compatible with isotopic contact structures become isotopic after a sequence of
positive open book stabilisations. | Joan Licata, Vera Vértesi | 2023-09-21T07:00:34Z | http://arxiv.org/abs/2309.11828v2 | # Heegaard splittings and the tight Giroux correspondence
###### Abstract.
This paper presents a new proof of the Giroux Correspondence for tight contact \(3\)-manifolds using techniques from Heegaard splittings and convex surface theory. We introduce _tight Heegaard splittings_, which generalise the Heegaard splittings naturally induced by an open book decomposition of a contact manifold. Via a process called _refinement_, any tight Heegaard splitting determines an open book, up to positive open book stabilisation. This allows us to translate moves relating distinct tight Heegaard splittings into moves relating their associated open books. We use these tools to show that every Heegaard splitting of a contact \(3\)-manifold may be stabilised to a splitting associated to a supporting open book decomposition. Finally, we prove the tight Giroux Correspondence, showing that any pair of open book decompositions compatible with isotopic contact structures become isotopic after a sequence of positive open book stabilisations.
## 1. Introduction
In this paper we prove the \(3\)-dimensional Giroux Correspondence for tight contact manifolds, showing that positive open book stabilisation suffices to relate any pair of open book decompositions supporting isotopic tight contact structures. Our methods lie firmly in the realm of contact topology, encoding equivalence classes of contact structures through combinatorial, rather than geometric, data. We hope the use of convex surface theory will make this proof accessible to a broad topological audience.
First studied as a purely topological object, an open book decomposition realises a \(3\)-manifold as a link with a fibered complement [1]. Stallings proposed surgery operations to relate distinct open books on a fixed manifold, and Harer proved the sufficiency of these moves [1, 2]. Later, Thurston-Winkelnkemper showed that an open book decomposition determines an equivalence class of contact structures on the underlying manifold, and Giroux famously extended this result, characterising the open books associated to a fixed contact structure on the \(3\)-manifold as those related by a single one of Stallings's moves, _positive stabilisation_[13, 14]. Recently, Breen-Honda-Huang have independently proven an analogous characterisation for contact manifolds in all odd dimensions [1].
Our approach begins with another classical decomposition, a Heegaard splitting of a \(3\)-manifold. Any open book decomposition determines a Heegaard splitting, but not all Heegaard splittings can be realised through this process. For example, every Heegaard splitting induced by an open book has Hempel distance less than or equal to \(2\), while there exist Heegaard splittings with arbitrarily large distance. The splittings induced by open books are called _contact Heegaard splittings_, and we note that any contact Heegaard splitting uniquely determines an isotopy class of open book decompositions.
Given an arbitrary Heegaard splitting of a contact manifold, we show how to stabilise it to produce a contact Heegaard splitting. This provides a new and accessible proof of the following result:
**Theorem 1**.: _Any contact 3-manifold \((M,\xi)\) admits a compatible open book decomposition \((B,\pi)\)._
We define a more general notion of a Heegaard splitting compatible with a contact structure:
**Definition 1.1**.: A Heegaard splitting \((\Sigma,U,V)\) of \((M,\xi)\) is _tight_ if \(\Sigma\) is convex and \(\xi\) restricts tightly to each handlebody \(U,V\).
Although not every tight Heegaard splitting is a contact splitting, we can nevertheless construct an open book from any tight splitting. We introduce a process called _refinement_ which involves stabilising a tight Heegaard splitting to a contact splitting. Refinement involves many choices - a Heegaard diagram for the splitting, sets of convex compressing discs inducing the diagram, properly embedded arcs on said discs - and different choices will produce different contact splittings.
Nevertheless, the associated open books are closely related:
**Theorem 1.2**.: _Suppose \((\Sigma,U,V)\) is a tight Heegaard splitting and let \((B,\pi)\) and \((B^{\prime},\pi^{\prime})\) be open book decompositions associated to refinements of \((\Sigma,U,V)\). Then \((B,\pi)\) and \((B^{\prime},\pi^{\prime})\) admit a common positive open book stabilisation._
When the original contact manifold is tight, any convex Heegaard surface defines a tight splitting, and hence determines a class of open books;
this answers a question of Rubinstein [14]. A contact manifold which is overtwisted may nevertheless admit a tight Heegaard splitting, and in this case we again recover a class of open book decompositions.
We also identify a move between tight Heegaard splittings (_positive Heegaard stabilisation_) with the property that the refinements of splittings with a common positive Heegaard stabilisation yield open books with a common positive open book stabilisation. This observation suggests that tight Heegaard splittings are worth studying in their own right, as opposed to merely as a route to open books. We have included some questions for further investigation in the next section.
In this paper, however, the primary motivation for studying tight Heegaard splittings is a proof of the following theorem:
**Theorem 2**.: _[Tight Giroux Correspondence] Two open book decompositions \((B,\pi)\) and \((B^{\prime},\pi^{\prime})\) of \(M^{3}\) support isotopic tight contact structures if and only if they are related by a sequence of positive stabilisations and destabilisations._
The proof begins by considering the pair of contact Heegaard splittings associated to the pair of open books for \((M,\xi)\). The Reidemeister-Singer Theorem asserts that any pair of Heegaard splittings for \(M\) will become isotopic after sufficiently many Heegaard splitting stabilisations of each. We may choose these stabilisations to correspond to stabilisations of the associated open books, leading us to study a pair of open book decompositions that induce isotopic Heegaard splittings. The isotopy discretisation argument of Colin (Theorem 2.11) decomposes smooth isotopies of convex splitting surfaces as a sequence of convex isotopies and bypass attachments. This produces a sequence of convex Heegaard surfaces related to each other by bypass attachments, and by the work above, we may associate an open book decomposition supporting \(\xi\) to each of these. Finally, we show that these open books are related by positive open book stabilisation.
### Reading guide and open questions
Section 2 provides background for the paper, stating well known technical results in the generality that will be useful later on. It also introduces notation for the upcoming sections, but a reader familiar with convex surface theory is advised to restrict their attention to the discussion of bypasses in Section 2.3. Section 3 discusses open books and Heegaard decompositions in the context of contact geometry. The first part may also be viewed as a background section, but one that establishes the perspective used in the main results that follow. Section 3.2 introduces positive stabilisation for Heegaard splittings and presents some essential technical results. Section 4 presents a short proof that every contact manifold admits a supporting open book; this result is an application of ideas from the previous section, but is not used later on. The technical heart of the paper lies in Section 5, where we define the refinement of a tight Heegaard splitting and show that the positive stabilisation class of the
associated open book is well defined. With these tools in hand, the proof of the tight Giroux Correspondence is short, and the final section discusses the potential for and obstacles to extending this approach to overtwisted manifolds.
Throughout the paper, we require all Heegaard surfaces to be convex, but we consider a variety of compatibility conditions between the contact structure and the handlebodies. The strictest relationship is seen in the contact Heegaard splittings (Section 3.1) directly induced by a supporting open book. At the other extreme, Section 4 constructs an open book from an arbitrary Heegaard splitting, at a cost of greatly increasing the genus of the splitting via contact \(1\)-handle addition. Section 5 considers a middle ground, stabilising tight Heegaard splittings (Definition 3.5) to yield contact splittings. Rubinstein notes that in the Heegaard genus \(2\) case, "open book decompositions are nearly always more complicated than minimal Heegaard splittings" [12]. Although this is a topological observation rather than a contact one, it is consistent with the fact that we use stabilisation to transform an arbitrary Heegaard splitting to a contact splitting. However, stabilisation destroys other information carried by the splitting; for example, a Heegaard splitting of distance at least \(3\) implies that the underlying manifold is hyperbolic.
Below, we include some additional questions exploring the relationship between contact structures and Heegaard splittings.
**Question 1.3**.: _Is there a bound on the distance of a tight Heegaard splitting?_
**Question 1.4**.: _Given a convex Heegaard surface in \((M,\xi)\), what is the minimal number of stabilisations required to make the splitting contact?_
As indicated above, we prove the Giroux Correspondence only for tight contact manifolds. Nevertheless, many of the notions developed in the paper apply equally well to tight splittings of overtwisted manifolds, leading to other natural questions:
**Question 1.5**.: _For an overtwisted contact structure \(\xi\) on \(M\), what is the minimal genus of a tight Heegaard splitting?_
**Question 1.6**.: _For an overtwisted contact manifold, can the Heegaard genus and the minimal genus of a tight splitting be arbitrarily far apart?_
### Acknowledgment
This material is based in part upon work supported by the National Science Foundation under Grant No. DMS-1929284 while the authors were in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Braids program. The first author received support from the Australian National University's Outside Studies Program and the second author was supported by the FWF grant "Cut and Paste Methods in Low Dimensional Topology" P 34318. The second author would also like to thank the Erdős Center for the friendly and calm environment it provided.
## 2. Background
This section provides technical background that will be relied upon in the rest of the paper. We assume familiarity with contact structures, Legendrian knots, and the Thurston-Bennequin number; readers unfamiliar with these are directed to [1]. We have subsections on several topics: 2.1 convex surfaces, 2.2 contact handles, and 2.3 bypasses. The organisation is intended to be as transparent as possible for readers who wish to move to Section 3 or Section 4 and refer back as needed.
### Convex surfaces
Here we give a brief introduction to the essentials of convex surface theory; for a more thorough treatment, the reader is referred to [11]. Given \(\xi=\ker\alpha\) a contact structure on \(M\), a surface \(\Sigma\) embedded in \(M\) is _convex_ if there is a _contact vector field_ \(X\) (i.e., a vector field whose flow preserves \(\xi\)) transverse to \(\Sigma\). If \(\partial\Sigma\neq\emptyset\), we require \(\partial\Sigma\) to be Legendrian. Using the transverse direction given by \(X\), one can build a neighbourhood \(\nu(\Sigma)\cong\Sigma\times I\) on which \(\xi|_{\nu(\Sigma)}\) is an \(I\)-invariant contact structure. (When \(\Sigma\subset\partial M\), the transverse vector field yields an \(I\)-invariant half-neighbourhood.) The existence of such a (half-)neighbourhood is an alternative criterion for the convexity of \(\Sigma\).
The locus of points where \(\alpha(X)=0\) is a 1-dimensional submanifold \(\Gamma\) called the _dividing curve_. The dividing curve separates \(\Sigma\) into two submanifolds \(\Sigma_{\pm}:=\{x:\pm\alpha(X)>0\}\). Any embedded surface can be made convex via a \(C^{\infty}\)-isotopy, and an isotopy that keeps \(\Sigma\) convex is called a _convex isotopy_. For a given convex surface, the choice of transverse convex vector field is not unique, but different choices will yield dividing sets that differ only by isotopy on \(\Sigma\).
The dividing curve on a convex surface can be used to compute the relative twisting of the contact planes along Legendrian curves. If \(C\) is Legendrian on the convex surface \(\Sigma\), then
\[tw_{C}(\xi,T\Sigma)=-\frac{1}{2}|\Gamma\cap C|,\]
where \(tw_{C}(\xi,T\Sigma)\) denotes the relative twisting of the plane fields \(\xi|_{C}\) and \(T\Sigma|_{C}\) along \(C\).
Let \(L\) be a boundary component of \(\Sigma\) and take a standard neighbourhood \((\nu(L),\xi|_{\nu}(L))\) contactomorphic to
\[\left(S^{1}\times D^{2},\xi=\ker(\sin(n\vartheta)\,dx+\cos(n\vartheta)\,dy) \right),\]
where \(\vartheta\) is the coordinate parameterising \(L\) and \(x,y\) are coordinates on \(D^{2}\). The boundary component \(L\) is _standard_ if \(\Sigma\) restricts to \(\{y=0,x\geq 0\}\) in this model.
As shown by Kanda [10], the surface can be put into standard position by a \(C^{0}\)-isotopy supported in \(\nu(L)\), and then made convex by a \(C^{\infty}\)-small isotopy, if and only if \(tw_{L}(\xi,T\Sigma)\leq 0\).
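As a quick sanity check (this verification is ours and does not appear in the paper), one can confirm that the local model above is indeed contact: with \(\alpha=\sin(n\vartheta)\,dx+\cos(n\vartheta)\,dy\) on \(S^{1}\times D^{2}\),
\[d\alpha=n\cos(n\vartheta)\,d\vartheta\wedge dx-n\sin(n\vartheta)\,d\vartheta\wedge dy,\qquad\alpha\wedge d\alpha=n\bigl(\sin^{2}(n\vartheta)+\cos^{2}(n\vartheta)\bigr)\,d\vartheta\wedge dx\wedge dy=n\,d\vartheta\wedge dx\wedge dy\neq 0,\]
where \(n\geq 1\) is needed for the contact condition. Along \(L=S^{1}\times\{0\}\) the contact planes are spanned by \(\partial_{\vartheta}\) and \(\cos(n\vartheta)\partial_{x}-\sin(n\vartheta)\partial_{y}\), so with the usual orientation conventions they make \(n\) negative full twists relative to the framing induced by \(\Sigma=\{y=0,x\geq 0\}\); that is, \(tw_{L}(\xi,T\Sigma)=-n\) in this model.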
We can similarly measure relative twisting of Legendrian arcs with endpoints on \(\Gamma\): Let \(\Sigma\) be a convex surface and \(C\) a Legendrian arc with endpoints on \(\Gamma\). Then
\[tw_{C}(\xi,T\Sigma)=-\frac{1}{2}|\Gamma\cap C|,\]
where the endpoints of \(C\) on \(\Gamma\) are each counted with multiplicity \(\frac{1}{2}\).
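For instance (a small worked example using the convention just stated, included only for orientation): an arc \(c\) with both endpoints on \(\Gamma\) whose interior meets \(\Gamma\) exactly once (the situation of the admissible arcs appearing in Section 2.3) has
\[tw_{c}(\xi,T\Sigma)=-\frac{1}{2}\Bigl(\tfrac{1}{2}+\tfrac{1}{2}+1\Bigr)=-1.\]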
Consider two convex surfaces \(\Sigma\) and \(\Sigma^{\prime}\) that intersect each other transversely along a closed Legendrian curve. The surfaces \(\Sigma\) and \(\Sigma^{\prime}\) intersect _standardly_ along a component \(L\) of \(\Sigma\cap\Sigma^{\prime}\) if, in the standard neighbourhood \(\nu(L)\), we have
\[\Sigma\cap\nu(L)=\{y=0\}\qquad\text{ and }\qquad\Sigma^{\prime}\cap\nu(L)=\{x=0\}.\]
If \(L\) is a boundary component of \(\Sigma\) and \(\Sigma^{\prime}\), then we require
\[\Sigma\cap\nu(L)=\{y=0,x\geq 0\}\text{ and }\Sigma^{\prime}\cap\nu(L)=\{x=0,y \geq 0\}.\]
Again, Kanda [10] showed that if \(tw_{L}(\xi,T\Sigma)=tw_{L}(\xi,T\Sigma^{\prime})\leq 0\), then standard intersection can be achieved - in a slightly smaller neighbourhood - by a \(C^{0}\)-isotopy supported in \(\nu(L)\) that keeps \(L\) fixed and both \(\Sigma\) and \(\Sigma^{\prime}\) convex. Once \(\Sigma\) and \(\Sigma^{\prime}\) intersect standardly, the intersection points \(\Gamma_{\Sigma}\cap L\) and \(\Gamma_{\Sigma^{\prime}}\cap L\) alternate along \(L\).
The union of convex surfaces with Legendrian boundary gives a _piecewise convex surface_. More precisely, this is a surface \(\Sigma=\cup\Sigma_{i}\) where each \(\Sigma_{i}\) is convex with Legendrian boundary and such that the following hold:
1. for distinct \(i,j,k\), \(\Sigma_{i}\cap\Sigma_{j}\cap\Sigma_{k}=\emptyset\);
2. \(\Sigma_{i}\cap\Sigma_{j}=\partial\Sigma_{i}\cap\partial\Sigma_{j}\) is Legendrian, and at each component of a double intersection, the surfaces \(\Sigma_{i}\) and \(\Sigma_{j}\) intersect standardly.
We can smooth a piecewise convex surface along any component of the double intersections.
**Lemma 2.1** (Edge rounding).: _Let \(\Sigma\) and \(\Sigma^{\prime}\) be convex surfaces with a standard intersection along some Legendrian curve \(L\) which is a boundary component of \(\Sigma\) and \(\Sigma^{\prime}\). Then we may form a smooth convex surface \(\Sigma^{\prime\prime}\) by replacing \(\Sigma\) and \(\Sigma^{\prime}\) in \(\nu(L)\) so that \(\Gamma_{\Sigma^{\prime\prime}}\) restricts to \(\Gamma_{\Sigma}\) and \(\Gamma_{\Sigma^{\prime}}\) away from \(\nu(L)\) and connects the components of \(\Gamma_{\Sigma}\) and \(\Gamma_{\Sigma^{\prime}}\) in a direction opposite to the orientation of \(L\), as on Figure 1._
Figure 1. Left: Two convex surfaces meeting along a Legendrian curve. Right: After smoothing, the new convex surface.
Note that in the local model \(\nu(L)\), the surfaces \(\{y=\varepsilon,x\geq\varepsilon\}\) and \(\{x=\varepsilon,y\geq\varepsilon\}\) are convex and intersect standardly at \(\{x=y=\varepsilon\}\). This observation allows us to construct a tubular neighbourhood \(\nu(\Sigma)\) for piecewise convex surfaces which is foliated by copies of the piecewise convex surface. A _convex isotopy_ of a piecewise convex surface is an isotopy that keeps it piecewise convex at all times. A piecewise convex surface is _closed_ if each boundary component of the \(\Sigma_{i}\) is in a double intersection. In particular, closed convex surfaces are closed piecewise convex.
One can also introduce corners along Legendrian simple closed curves on a convex surface \(\Sigma\). Consider two \(C^{0}\) convex isotopies in a standard neighbourhood \(\nu(L)\) of \(L\) so that the images \(\Sigma^{\prime}\) and \(\Sigma^{\prime\prime}\) in \(\nu(L)\) meet transversely along \(L\). Form the new cornered surface by cutting \(\Sigma^{\prime}\) and \(\Sigma^{\prime\prime}\) along \(L\) and then gluing a component of one to a component of the other inside \(\nu(L)\). There are two choices, depending on the preferred type of corner, and edge rounding either of the resulting cornered surfaces returns a smooth surface convexly isotopic to the original \(\Sigma\).
It is often convenient to allow deformations that preserve the convex isotopy class of the boundary of a contact 3-manifold but are insensitive to changes in the characteristic foliation associated to a specific choice of contact form.
**Definition 2.2**.: Two contact structures \((M,\xi)\) and \((M^{\prime},\xi^{\prime})\) with piecewise convex boundary are _weakly contactomorphic_ if there is a contact embedding \(\iota\colon M\hookrightarrow M^{\prime}\) such that \(\iota(\partial M)\) is convex isotopic to \(\partial M^{\prime}\).
**Definition 2.3**.: Two embedded codimension-0 submanifolds \(N_{0},N_{1}\subset(M,\xi)\) with piecewise convex boundary are _weakly (contact) isotopic_ if there is an isotopy \(N_{s}\) between them such that \(\partial N_{s}\) is piecewise convex throughout.
Again, this notion allows us to disregard the specific characteristic foliation on the boundary of \(N_{i}\) and concentrate only on \(\Gamma_{\partial N_{i}}\).
**Theorem 2.4** (Giroux's Criterion).: _A convex surface \(\Sigma\) in some \((M,\xi)\) has a tight neighbourhood if and only if_
1. \(\Sigma\) _is a sphere and_ \(\Gamma_{\Sigma}\) _is connected; or_
2. \(\Sigma\) _is not a sphere and_ \(\Gamma_{\Sigma}\) _has no closed contractible components._
**Proposition 2.5**.: _A piecewise convex surface \(\Sigma\) in some \((M,\xi)\) has a tight neighbourhood if and only if, after rounding corners, it has a tight neighbourhood._
Suppose \((M,\xi)\) has a piecewise convex boundary, and let \(L\) be a Legendrian corner between the convex pieces \(\Sigma\) and \(\Sigma^{\prime}\). We say \(L\) is _acute_ if in the local model near \(L\), where \(\Sigma=\{y=0,x\geq 0\}\) and \(\Sigma^{\prime}=\{x=0,y\geq 0\}\), \(M=\{x,y\geq 0\}\). On the other hand, if \(M=\{y\leq 0\text{ or }x\leq 0\}\) then \(L\) is called _reflex_. For an acute corner, one can simply round \(\Sigma\) and \(\Sigma^{\prime}\) at \(L\) inside \(M\) and declare the submanifold with boundary \(\Sigma^{\prime\prime}\) to be the _rounded_ \(M\). But if \(L\) is a reflex corner, we first take a parallel copy of \(\Sigma\) inside \(M\), round so that \(\Sigma^{\prime\prime}\) does not touch \(\partial M\), and declare the manifold bounded by \(\Sigma^{\prime\prime}\) to be the rounded \(M\). This operation is independent of the choices made, up to weak isotopy.
**Theorem 2.6**.: _[_10_]_ _Let \(B\) be a topological \(3\)-ball with piecewise convex boundary. Up to weak contactomorphism on \(B\) after rounding, there is a unique tight contact structure inducing a connected dividing set on the rounded ball._
In fact, there exists a large topological class of curves on a convex surface which have Legendrian representatives. Quoting [11], an embedded graph \(C\) on a convex surface \(S\) is _non-isolating_ if \(C\) is transverse to \(\Gamma\); the univalent vertices of \(C\) (and no others) lie on \(\Gamma\); and every component of \(S\setminus(\Gamma_{S}\cup C)\) has a boundary component which intersects \(\Gamma\).
**Theorem 2.7** (Legendrian Realisation Principle).: _Given any non-isolating graph \(C\) on a convex surface \(S\), there exists a convex isotopy \(\phi_{s},s\in[0,1]\) such that the following hold:_
1. \(\phi_{0}=\text{id}\) _and_ \(\phi_{s}|_{\Gamma_{S}}=\text{id}\)_;_
2. \(\phi_{1}(\Gamma_{S})=\Gamma_{\phi_{1}(S)}\)_; and_
3. \(\phi_{1}(C)\) _is Legendrian._
**Proposition 2.8** (Partial gluing).: _Let \((M^{R},\xi^{R})\) and \((M^{L},\xi^{L})\) be contact 3-manifolds with piecewise convex boundary, and suppose that \(\varphi\colon\Sigma^{R}\to\Sigma^{L}\) is a diffeomorphism that identifies a pair of convex components of \(\partial M^{L}\) and \(\partial M^{R}\) and carries \(\Gamma_{\Sigma^{R}}\) to \(\Gamma_{\Sigma^{L}}\). Then up to weak contactomorphism, there is a unique contact structure \(\xi:=\xi^{L}\cup\xi^{R}\) on \(M=M^{R}\cup_{\varphi}M^{L}\) with a piecewise convex boundary that restricts (again, up to weak contactomorphism) to \(M^{R}\) and \(M^{L}\) as \(\xi^{R}\) and \(\xi^{L}\)._
### Contact handles
The basic building blocks for contact manifolds are contact handles. These were first introduced by Giroux [12], but in this paper we find it convenient to use a reformulation by Ozbagci [14] phrased in the language of convex surfaces. Since every \(3\)-dimensional \(k\)-handle is topologically a ball, contact handles are simple to describe up to weak contactomorphism using Theorem 2.6: there is a unique tight contact structure on \(D^{3}\) with smooth convex boundary and connected dividing set. This is the model for a _contact 0-handle_ \((h^{0},\zeta^{0})\) and a _contact 3-handle_ \((h^{3},\zeta^{3})\). Similarly, there is a unique tight contact structure on \(D^{1}\times D^{2}\) with dividing curve \(\Gamma\) as on Figure 2; this is the model for both a _contact 1-handle_ \((h^{1},\zeta^{1})\) and a _contact 2-handle_ \((h^{2},\zeta^{2})\).
As usual, contact 0-handles are attached to the empty set, but the attaching data is important for higher-index contact handles. Given a contact cobordism \((W,\xi)\) with convex boundary \(\partial_{+}W\cup-\partial_{-}W\), let \(\varphi^{1}\colon\partial D^{1}\times D^{2}\hookrightarrow\partial_{+}W\) be a diffeomorphism such that each \(D^{2}\) component of the image of \(\varphi\)
is intersected in an arc by the dividing curve \(\Gamma_{\partial_{+}W}\). Choose a representative of \(\xi\) so that \(L=\varphi^{1}(\partial D^{1}\times\partial D^{2})\) is Legendrian. Introduce a corner along \(L\); \(\varphi^{1}\) is still a map into the now-piecewise convex surface still denoted \(\partial_{+}W\). Using Proposition 2.8, glue \(h^{1}\) onto \(W\) to obtain a new contact manifold \((W\cup h^{1},\xi\cup\zeta^{1})\). By construction, this manifold already has smooth boundary. This is a _contact 1-handle attachment_.
To attach a _contact 2-handle_, start with a diffeomorphism \(\varphi^{2}\colon\partial D^{2}\times D^{1}\hookrightarrow\partial_{+}W\) with the property that the two arcs of \(\varphi^{2}(\Gamma_{\partial D^{2}\times D^{1}})\) align with the two arcs of \(\Gamma_{\partial_{+}W}\cap\varphi^{2}(\partial D^{2}\times D^{1})\). Then Legendrian realise the two curves \(\varphi^{2}(\partial D^{2}\times\partial D^{1})\), break \(\partial_{+}W\) along them to obtain a cobordism with piecewise convex boundary, and then glue \(h^{2}\) using \(\varphi^{2}\).
Contact 3-handles are easier to attach, as one need not break the boundary before the attachment.
When \(W\) is embedded in some contact 3-manifold \((M,\xi)\), one may attach 1-handles to \(W\) inside \((M,\xi)\) along any Legendrian arc \(l\) properly embedded in \(M\setminus W\) with boundary on \(\Gamma_{\partial_{+}W}\subset\partial W\). In this case, the attachment is the (smoothing of) \(\big{(}W\cup\nu(l),\xi|_{W\cup\nu(l)}\big{)}\), where \(\nu(l)\) is a standard neighbourhood of \(l\). Up to weak isotopy, this construction depends only on the Legendrian isotopy class of \(l\) relative to its endpoints and not on the choice of standard neighbourhood or the particular representative of the Legendrian isotopy class.
Attaching a contact 1-handle preserves tightness:
**Lemma 2.9**.: _[_10_]_ _Let \(W\) be a contact manifold with a convex boundary and let \(W^{\prime}=W\cup\nu(l)\) be the manifold formed by attaching a contact \(1\)-handle. If \(W\) is tight, then so is \(W^{\prime}\)._
Similarly, given a properly embedded convex disc in \(M\setminus W\), one may add a standard or \(I\)-invariant neighbourhood of the disc to \(W\) as a 2-handle attachment. In order to ensure this is a contact \(2\)-handle attachment, the disc must have a tight neighbourhood and a Legendrian boundary on \(\partial_{+}W\) with Thurston-Bennequin number \(-1\).
Figure 2. Left: a contact \(1\)- or \(2\)-handle. Right: Adding a corner along the closed Legendrian \(L\) allows us to smoothly attach a contact \(1\)-handle.
As usual, attaching handles to a cobordism changes the boundary by surgery; in this case, contact handle attachment changes \(\partial_{+}W\) by convex surgery, so that the new boundary has a dividing set distinguished up to isotopy by the handle attachment.
### Bypasses
An isotopy of the convex surface \(\Sigma\) in a contact manifold either preserves the dividing set of \(\Sigma\) up to isotopy or changes it by a sequence of _bypasses_, each of which corresponds to pushing \(\Sigma\) across a particular contact three-ball. This ball may be characterised in a variety of ways; most familiar is viewing the ball as a neighbourhood of a half an overtwisted disc, but we will use an equivalent definition that is more convenient for our purposes.
Let \((\Sigma,\Gamma)\) be a convex surface in \((M,\xi)\). An arc \(c\subset\Sigma\) is _admissible_ if it is transverse to \(\Gamma\), \(\partial c\in\Gamma\), and the interior of \(c\) intersects \(\Gamma\) once. By a \(C^{\infty}\)-small convex isotopy of \(\Sigma\) the admissible arc \(c\) can be made Legendrian on \(\Sigma\); this condition is subsequently assumed without mention.
A _bypass disc_\(D\) is a convex half-disc with Legendrian and piecewise-smooth boundary \(\partial D=c\cup l\), where
1. \(D\) intersects \(\Sigma\) transversely exactly at \(c\);
2. \(D\) has a tight neighbourhood;
3. \(tw_{l}(\xi,TD)=0\).
Since \(c\) is admissible, \(tw_{c}(\xi,TD)=-1\). The fact that \(D\) has a tight neighbourhood allows one to draw a dividing curve on \(D\) as in Figure 3.
Suppose that \(\Sigma\) is oriented so that \(D\) is on its positive side. We will define a cobordism \(W\) that encloses \(\Sigma\cup D\). The operation which replaces the original \(\Sigma=\partial_{-}W\) by \(\partial_{+}W\) is called _attaching a bypass from the front_. If \(D\) is on the negative side of \(\Sigma\), replacing \(\Sigma=\partial_{-}W\) by \(\partial_{+}W\) is called _attaching a bypass from the back_.
Figure 3. One may isotope \(\Sigma\) across the bypass half-disc \(D\) cobounded by \(c\cup l\) by successively attaching a contact \(1\)–handle along \(l\) and then a contact \(2\)–handle along \(D\).
We now construct the cobordism \(W\) that encloses \(\Sigma\cup D\) for bypass attachment from the front. Begin with a standard neighbourhood \(\nu(\Sigma)\), where \(\Sigma=\partial_{-}\nu(\Sigma)\). We also require that \(D\setminus\nu(\Sigma)\) has the same properties as the original \(D\), and we will not distinguish them in the following discussion. Attach the \(1\)-handle \(\nu(l)\) to \(\nu(\Sigma)\) and call the resulting contact cobordism \(W^{\prime}\). A further \(C^{\infty}\)-small convex isotopy of \(\partial_{+}W^{\prime}\) ensures that \(D\cap\partial_{+}W^{\prime}\) is a Legendrian knot. Necessarily, \(D\cap\partial_{+}W^{\prime}\) has Thurston-Bennequin number \(-1\), so a standard neighbourhood \(\nu(D^{\prime})\) of \(D^{\prime}=D\setminus W^{\prime}\) is a \(2\)-handle that we attach to \(W^{\prime}\) to obtain \(W\).
The \(1\)-handle \(\nu(l)\) and the \(2\)-handle \(\nu(D^{\prime})\) are in smoothly cancelling position, and the \(3\)-manifold \(W\) remains smoothly isotopic to \(\Sigma\times I\). We call this cobordism a _bypass slice_. Up to weak isotopy, the contact structure \(W\) depends only on \((\Sigma,\Gamma)\) and the isotopy class of \(c\) through admissible arcs. The dividing curve on \(\partial_{+}W\) differs from that of \(\partial_{-}W\) as shown in Figure 3. As usual, when \(\Sigma=\partial U\) for a submanifold \(U\), we omit \(\nu(\Sigma)\) from our construction and attach the handles directly to \(\partial U\).
Turning a bypass slice upside down exchanges the indices of the \(1\)- and \(2\)-handles, which remain in cancelling position. An upside down bypass slice is thus also a bypass slice, but for attaching the half-disc along the admissible arc \(c^{\mathrm{op}}\), which is the visible part of the belt sphere of the \(1\)-handle on \(\partial_{+}W\); see Figure 3. It follows that attaching a bypass from the back along \(c^{op}\) is an inverse operation to attaching a bypass from the front along \(c\).
Bypass slices are basic building blocks of contact structures on \((\Sigma\times I)\).
**Theorem 2.10**.: _[_10_]_ _Any contact structure \(\xi\) on \(\Sigma\times I\) with convex boundary can be decomposed as a concatenation of bypass slices._
Isotopies are also built up from bypasses in the following sense:
**Theorem 2.11**.: _[_10_]_ _[Colin's Isotopy Discretisation] Let \(\Sigma\) and \(\Sigma^{\prime}\) be convex surfaces in \((M,\xi)\) that are smoothly isotopic. Then there is a sequence of embedded convex surfaces \(\Sigma=\Sigma_{0},\Sigma_{1},\ldots,\Sigma_{k}=\Sigma^{\prime}\) such that for each \(0\leq i<k\), the surface \(\Sigma_{i+1}\) is obtained from \(\Sigma_{i}\) via a bypass attachment from the front or from the back._
## 3. Decompositions of contact manifolds
This section discusses contact Heegaard splittings and open book stabilisation. The statements proven in Section 3.1 are largely a matter of perspective and will be familiar to experts, but Section 3.2 has several technical results that we will rely heavily upon later.
### Heegaard splittings and open books
**Definition 3.1**.: Let \(M\) be a closed orientable \(3\)-manifold. The pair \((B,\pi)\) is an (embedded) _open book_ if \(B\) is an oriented link; \(\pi:M\setminus B\to S^{1}\) is a fibration; and for all \(t\in S^{1}\), \(\overline{\pi^{-1}(t)}\) is a Seifert surface for \(B\).
An open book decomposition \((B,\pi)\) for \((M,\xi)\) defines a Heegaard decomposition \(\mathcal{H}(B,\pi)\) for \(M\) with handlebodies
\[U =\overline{\pi^{-1}[0,1/2]},\] \[V =\overline{\pi^{-1}[1/2,1]}.\]
That the manifolds \(U\) and \(V\) are indeed handlebodies follows from the fact that the fibre over each point in \(S^{1}\) is an open surface, so \(U\) and \(V\) are closures of products of a surface with an interval. Choosing a smooth identification of \(\overline{M\setminus\pi^{-1}(1)}\) with \(S\times I\) for a model fibre \(S\) allows us to easily identify a set of compressing discs for each handlebody. Writing \(S_{t}\) for the _page_\(S\times\{t\}\), let \(\varphi_{s}^{t}\colon S_{s}\to S_{t}\) be the parallel transport fixing \(B\) and carrying \(S_{s}\) to \(S_{t}\). A set of properly embedded arcs \(\{a_{1},\dots,a_{k}\}\) on \(S=S_{0}\) is an _arc system_ if \(S\setminus\cup\{a_{1},\dots,a_{k}\}\) is a (cornered) disc. For any arc system, the discs
\[A_{i} =\bigcup_{t\in[0,1/2]}\varphi_{0}^{t}(a_{i})\] \[B_{i} =\bigcup_{t\in[1/2,1]}\varphi_{0}^{t}(a_{i})\]
are compressing discs for \(U\) and \(V\). Denote the union of \(A_{i}\) discs by \(\mathcal{A}\) and the union of \(B_{i}\) discs by \(\mathcal{B}\). Here and elsewhere, we will orient the Heegaard surface \(\Sigma=\overline{\pi^{-1}(\frac{1}{2})\cup-\pi^{-1}(0)}\) as the boundary of \(U\).
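As a consistency check (an Euler characteristic count of our own, not taken from the paper), suppose the page \(S\) has genus \(g\) and \(b\geq 1\) binding components, so \(\chi(S)=2-2g-b\). Cutting a surface along a properly embedded arc raises its Euler characteristic by one, so an arc system consists of
\[k=1-\chi(S)=2g+b-1\]
arcs, while \(\Sigma\) is the double of the page, so \(\chi(\Sigma)=2\chi(S)\) and
\[g(\Sigma)=\frac{2-\chi(\Sigma)}{2}=1-\chi(S)=2g+b-1=k.\]
Thus \(\mathcal{A}\) (and likewise \(\mathcal{B}\)) contains exactly \(g(\Sigma)\) discs, as a disc system for a genus-\(g(\Sigma)\) handlebody should.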
Torisu [10] showed that when \((B,\pi)\) supports \(\xi\), \(\Sigma\) is convex and \(\xi\) restricts tightly to each handlebody in the associated contact Heegaard splitting. In fact, this restriction characterises supporting open books:
**Proposition 3.2** ([10] Theorem 1.1).: _Fix an open book decomposition \((B,\pi)\) for \(M\). Then \(\xi\) is supported by \((B,\pi)\) if and only if \(\mathcal{H}(B,\pi)\) has the following properties:_
1. \(\Sigma\) _is convex with dividing curve_ \(\Gamma=B\)_;_
2. \(\xi|_{U}\) _and_ \(\xi|_{V}\) _are both tight._
_Moreover \(\xi\) is uniquely determined up to isotopy by this property._
A Heegaard splitting which is \(\mathcal{H}(B,\pi)\) for some open book is called a _contact Heegaard splitting_, and we show next how to identify contact splittings. Observe that in \(\mathcal{H}(B,\pi)\), the boundary of each disc in \(\mathcal{A}\cup\mathcal{B}\) intersects the binding \(B\) in 2 points. We call a compressing disc whose boundary essentially intersects \(\Gamma\) in two points a _product disc_, and by analogy with arcs on a surface, a set of compressing discs which cuts a handlebody into a ball is called a _disc system_. In a contact Heegaard splitting, each handlebody has a system of product discs, and as Torisu shows, this property characterises contact splittings:
**Proposition 3.3**.: _[_14_]_ _A Heegaard decomposition \((\Sigma,U,V)\) of \((M,\xi)\) is a contact Heegaard splitting if and only if_
1. \(\Sigma\) _is convex with dividing curve_ \(\Gamma\)_;_
2. \(\xi|_{U}\) _and_ \(\xi|_{V}\) _are both tight;_
3. _there exist systems of product discs_ \(\mathcal{A}\) _for_ \(U\) _and_ \(\mathcal{B}\) _for_ \(V\)_._
Cutting a handlebody along a disc system reduces it to a ball with paired discs on the boundary; after cutting along a system of product discs, there is a unique way to connect arcs of \(\Gamma\) whose endpoints lie on the same disc. This matches the result of smoothing in the case that the product discs are convex with Legendrian boundary, so we again call the resulting curve \(\Gamma\). In fact, this new \(\Gamma\), which now lies on a sphere, is connected: cutting along an arc system on a page of an open book yields a surface with connected boundary, so cutting \(U\) and \(V\) along \(\mathcal{A}\) and \(\mathcal{B}\) yields a connected \(\Gamma\).
The construction of an open book from a contact Heegaard splitting is compatible with a \(1\)-parameter family of Heegaard splittings:
**Proposition 3.4**.: _Let \((M,\xi)\) be a contact structure and let \((B,\pi)\) and \((B^{\prime},\pi^{\prime})\) be open books supporting \(\xi\). Then \(\mathcal{H}(B,\pi)\) and \(\mathcal{H}(B^{\prime},\pi^{\prime})\) are isotopic via an isotopy keeping \(\Sigma\) convex if and only if \((B,\pi)\) and \((B^{\prime},\pi^{\prime})\) are isotopic._
In light of this, we will view open books and Heegaard splittings as objects defined only up to convex isotopy.
A key aim of this paper is to broaden the class of Heegaard splittings that may be effectively used to study contact structures. To this end, we introduce the following definition:
**Definition 3.5**.: A Heegaard splitting \((\Sigma,U,V)\) of \((M,\xi)\) is _tight_ if \(\Sigma\) is convex and \(\xi|_{U}\) and \(\xi|_{V}\) are both tight.
Every contact Heegaard splitting is tight, and Proposition 3.3 states that a tight splitting is contact if it admits a system of product discs.
### Stabilising open book decompositions
Stabilisation is an operation performed on open book decompositions of a \(3\)-manifold. First identified by Stallings in the topological context, stabilisation comes in two versions that are distinguished as positive and negative [10]. Positive stabilisation preserves the supported contact structure, up to isotopy, and is the only version considered in this paper. In this section we establish the equivalence of several perspectives on positive open book stabilisation, focusing on the question of recognising when a change to a Heegaard splitting is in fact a positive stabilisation of the underlying open book.
The literature offers several equivalent ways to define positive stabilisation, and it will be useful to move between these as needed:
1. Replace the abstract open book \((S,h)\) by the abstract open book \((S^{+},h^{+})\), where \(S^{+}\) is a plumbing of \(S\) with an annulus and \(h^{+}\) is the composition of \(h\) (extended via the identity to the new \(1\)-handle) and a positive Dehn twist around the core of the new annulus.
Although this is generally the easiest presentation to explain quickly, the reliance on abstract open books is a poor fit for our methods.
2. Let \((H^{+},\pi^{+})\) denote the open book for \(S^{3}\) with binding a positive Hopf link \(H^{+}\) in the unit sphere in \(\mathbb{C}^{2}\) and \(\pi^{+}\) the fibration over \(S^{1}\) defined by \((z_{1},z_{2})\mapsto\frac{z_{1}z_{2}}{|z_{1}z_{2}|}\). Inside \((H^{+},\pi^{+})\), choose a \(3\)-ball neighbourhood of a cocore arc of \((\pi^{+})^{-1}(0)\). Given an arbitrary \((B,\pi)\), a positive stabilisation is formed by taking the connect sum \((B,\pi)\#(H^{+},\pi^{+})\) using the designated \(3\)-ball in \((H^{+},\pi^{+})\) and a 3-ball in \((B,\pi)\) that is also a neighbourhood of some arc on \(S_{0}=\pi^{-1}(0)\) in \(M\). In this case we can arrange that the open book data match along the gluing \(S^{2}\). The open book \((B^{+},\pi^{+})\) obtained thus is a _(positive) stabilisation_ of \((B,\pi)\).
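To make the Hopf model concrete (a standard computation included here only for convenience; it is not part of the paper's argument), write \(S^{3}=\{|z_{1}|^{2}+|z_{2}|^{2}=1\}\subset\mathbb{C}^{2}\) and \(z_{j}=r_{j}e^{i\varphi_{j}}\). The binding is
\[H^{+}=\{z_{1}z_{2}=0\}\cap S^{3}=\bigl(\{z_{1}=0\}\cup\{z_{2}=0\}\bigr)\cap S^{3},\]
a union of two Hopf fibres, and the page over \(e^{it}\in S^{1}\) is
\[\overline{(\pi^{+})^{-1}(e^{it})}=\{\varphi_{1}+\varphi_{2}\equiv t\ (\mathrm{mod}\ 2\pi)\},\]
parametrised by \((r_{1},\varphi_{1})\) with \(r_{2}=\sqrt{1-r_{1}^{2}}\). Each page is therefore an annulus with one boundary component on each binding circle, and its monodromy is a single positive Dehn twist about the core, matching the abstract description in (1).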
Now let \(\mathcal{H}(B,\pi)\) be the Heegaard splitting associated to an open book decomposition \((B,\pi)\) for \((M,\xi)\). It is straightforward to check that stabilising \((B,\pi)\) to \((B^{+},\pi^{+})\) induces a Heegaard splitting stabilisation of \(\mathcal{H}(B,\pi)\). Below, we introduce criteria to detect when a stabilisation of a contact Heegaard splitting in fact corresponds to a positive stabilisation of the underlying open book.
**Definition 3.6**.: Suppose that \((\Sigma,U,V)\) is a tight Heegaard splitting and let \(D\subset V\) be a convex half-disc with Legendrian and piecewise-smooth boundary \(\partial D=c\cup l\). Suppose that \(c\subset\Sigma\) and that \(l\) is properly embedded in \(V\) with \(\partial l=-\partial c\subset\Gamma_{\Sigma}\) and suppose further that
\[tw_{l}(\xi,TD)=tw_{c}(\xi,TD)=-\frac{1}{2}.\]
This data determines a new Heegaard splitting \((\Sigma^{\prime},U^{\prime},V^{\prime})\) where \(U^{\prime}\) is the smoothing of \(U\cup\nu(l)\) and \(V^{\prime}=V\setminus\nu(l)\). The splitting \((\Sigma^{\prime},U^{\prime},V^{\prime})\) is a _positive stabilisation_ of \(\mathcal{H}\).
A local model for \(D\) defining a positive stabilisation is shown in Figure 4.
The fact that \(\xi\) is tight on \(V\), and thus near \(D\), implies that the dividing curve on \(D\) is just an arc connecting \(c\) and \(l\). Thus \(l\) is Legendrian isotopic to \(c\).
At first glance, it may seem unfortunate to choose a term already in use, but the next lemma redeems this decision:
**Lemma 3.7**.: _If \(\mathcal{H}^{\prime}\) is a positive stabilisation of a contact splitting \(\mathcal{H}\), then \(\mathcal{H}^{\prime}\) is also a contact Heegaard decomposition: \(\mathcal{H}^{\prime}=\mathcal{H}(B^{\prime},\pi^{\prime})\). Furthermore \((B^{\prime},\pi^{\prime})\) is a positive stabilisation of \((B,\pi)\)._
Proof.: We first verify that the positive stabilisation is again a contact Heegaard splitting. Both the cocore disc of the new handle and \(D\cap V\) are product discs for their respective handlebodies. Cutting along these discs first returns the original manifold, so Condition 3 in Proposition 3.3 is satisfied. Furthermore, recall that attaching a contact \(1\)-handle preserves tightness, so Condition 2 is also satisfied. The surface \(\Sigma^{\prime}\) is convex by construction, so \(\mathcal{H}^{\prime}\) is a contact Heegaard splitting.
The existence of the half-disc \(D\) will show us how to decompose \((B^{\prime},\pi^{\prime})\) as a connect sum \((B,\pi)\#(H^{+},\pi^{+})\). Let \((Y,S^{2})\) be a standard neighbourhood of \(D\) in \((B^{\prime},\pi^{\prime})\). This ball is well defined up to weak isotopy based on the local model described in the hypotheses. It is immediate that \(Y\) is weakly contactomorphic to the complement of the distinguished ball in \((H^{+},\pi^{+})\) described above, but we will also show that it inherits the same decomposition into pieces of binding and pages.
To see this, we suppose that the original open book was the trivial decomposition of \((S^{3},\xi_{std})\) into two balls and verify that \((B^{\prime},\pi^{\prime})=(H^{+},\pi^{+})\). Since \(Y\) depends only on the local model, however, this will prove the general statement.
First, observe that \(Y\cap\Sigma^{\prime}\) is a punctured torus. Under the assumption that we began with a trivial decomposition, \(\Sigma^{\prime}\) is a torus with \(|\Gamma_{\Sigma^{\prime}}|=2\), as desired. Thus the stabilised Heegaard splitting has annular pages, and we will use the twisting hypothesis to detect the monodromy. The contact Heegaard splitting is completely determined by the intersection pattern of \(\Gamma_{\Sigma^{\prime}}\) and the boundary of the two product discs; we verify that the original twisting hypothesis implies that this matches the intersection pattern seen in \(\mathcal{H}(H^{+},\pi^{+})\). Specifically, examine the triangles labeled "\(+\)" in Figure 5. On the positively oriented Heegaard surface, the sides are cyclically ordered \(\partial B,\partial A,\Gamma\); this distinguishes \((H^{+},\pi^{+})\) from the negative stabilisation, which preserves the topological manifold but not the supported contact structure.
Figure 4. Local model for a half-disc defining a positive stabilisation
We note that open book decompositions of a fixed manifold are partitioned into positive stabilisation classes. In particular, the property that two open books admit a common positive stabilisation is an equivalence relation. This is easily seen by noting that the connect sum stabilising an open book is taken by removing an arbitrarily small \(3\)-ball neighbourhood of an arc on a page. Consider two sequences of positive stabilisations of a fixed open book as sequences of arcs on ordered pages. Then one may construct a common positive stabilisation using all the arcs on some set of ordered pages that restricts to the correct order on each of the two subsequences.
We conclude this section with an important result about bypass attachment which will be essential in the later sections:
**Proposition 3.8**.: _Suppose \(\mathcal{H}=(\Sigma,U,V)\) is a contact Heegaard decomposition of \((M,\xi)\) and let \((\Sigma^{\prime},U^{\prime},V^{\prime})\) be a new contact splitting where \(\Sigma^{\prime}\) is obtained from \(\Sigma\) by a bypass attachment along the bypass half-disc \(D\) in \(V\). If \(D\) is disjoint from a system of product discs \((\mathcal{A},\mathcal{B})\) for \(\mathcal{H}\), then \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\) admit a common positive stabilisation._
_An analogous statement holds if the bypass half-disc is in \(U\)._
Proof of Proposition 3.8.: As before, label the Legendrian arcs of \(\partial D\) as \(c\subset\Sigma\) and \(l\subset V\). Let \(\mathcal{H}^{\prime\prime}\) be the contact Heegaard splitting built by attaching a contact \(1\)-handle along \(l\). We will show that \(\mathcal{H}^{\prime\prime}\) is a positive stabilisation of \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\).
We begin by examining the bypass attachment arc \(c\subset\Sigma\). Since \(c\) crosses \(\Gamma\), it does not satisfy the hypotheses of Lemma 3.7 that ensure positive stabilisation but we claim that the hypothesis \(D\cap\mathcal{B}=\emptyset\) implies the existence of an alternative disc \(D^{\prime}\) with boundary \(c^{\prime}\cup l\) which satisfies the conditions of Lemma 3.7. Such a \(D^{\prime}\) demonstrates that \(\mathcal{H}^{\prime\prime}\) is a positive stabilisation of \(\mathcal{H}\).
Figure 5. Left: Identify \(\overline{\pi^{-1}([0,\frac{1}{2}])}\) with \(A^{2}\times[0,\frac{1}{2}]\) to see the image of an arc system pushed through the two handlebodies. Centre: A convex Heegaard diagram for \(\mathcal{H}(H^{+},\pi^{+})\), with the connect sum sphere indicated. Right: \(\Sigma^{+}\).
So in the following we will concentrate on the existence of \(D^{\prime}\). The contact handlebody \(V\) is obtained from a ball \(B_{V}\) via contact \(1\)-handle attachments with co-cores \(B_{i}\), as indicated on the right hand side of Figure 6. Since \(D\) lies in the complement of the discs \(\mathcal{B}\), attaching a neighbourhood of \(D\) to \(U\) is a trivial bypass in \(B_{V}\); again, see Figure 6. The alternative disc \(D^{\prime}\) can be found in \(B_{V}\subset V\) as shown on the right-hand side of Figure 6. Explicitly, construct \(D^{\prime}\) from \(D\) by sliding \(D\) across the co-cores of the \(1\)-handles that meet the indicated subarc \(\gamma\) of \(\Gamma\).
We confirm that \(D^{\prime}\) defines a positive stabilisation. Since \(D\) is a bypass half-disc, the twisting of \(\xi\) along \(l\) relative to \(D\) is \(0\). The half-discs coincide near the endpoint where \(c=c^{\prime}\), but the orientation of \(D^{\prime}\) is opposite that of \(D\) at the other shared endpoint. This implies that the relative twisting of \(\xi\) and \(D^{\prime}\) is non-zero, but it cannot exceed \(-\frac{1}{2}\) because the interiors of \(D\) and \(D^{\prime}\) may be isotoped to be disjoint, by construction. Thus \(tw_{l}(\xi,D^{\prime})=-\frac{1}{2}\).
The other boundary component of \(D^{\prime}\) is the curve \(c^{\prime}\) properly embedded in \(\Sigma_{-}\). Since \(\Sigma\) is convex and \(c^{\prime}\cap\Gamma_{\Sigma}=\partial c^{\prime}\), it follows that \(tw_{c^{\prime}}(\xi,D^{\prime})=-\frac{1}{2}\) as well. This establishes that \(\mathcal{H}^{\prime\prime}\) is a positive stabilisation of \(\mathcal{H}\).
It remains to show that \(\mathcal{H}^{\prime\prime}\) is a positive stabilisation of \(\mathcal{H}^{\prime}\), as well. To see this, we exploit the duality of bypass attachment from the front and back: \(U\) can be obtained from \(U^{\prime}\) by attaching a bypass \(D^{\text{op}}\) along \(c^{\text{op}}\). As \(D\) was disjoint from \(\mathcal{A}\cup\mathcal{B}\), the opposite bypass disc \(D^{\text{op}}\) is also disjoint from \(\mathcal{A}\cup\mathcal{B}\). Attaching the \(1\)-handle corresponding to the bypass \(D^{\text{op}}\) once again produces \(\mathcal{H}^{\prime\prime}\), and a parallel argument establishes that this is a positive stabilisation.
## 4. Existence
Here we use the language of contact handle decompositions to present a proof for the existence of an open book compatible with any contact structure:
**Theorem 4.1**.: _Let \((\Sigma,U,V)\) be a Heegaard splitting of a contact \(3\)-manifold where \(\Sigma\) is convex. Then \((\Sigma,U,V)\) may be positively stabilised to a contact Heegaard splitting._
Figure 6. Left: Here, \(V\) is shown with indicative product discs. Observe that after cutting along all the product discs and smoothing, \(\Gamma\) is connected, so the bypass is trivial in the resulting tight ball. Right: After isotoping \(c_{+}\) through the product discs, the new \(c^{\prime}\) is disjoint from \(\Gamma\) and cobounds a new half-disc with \(l\).
**Corollary 4.2** (c.f. Theorem 1).: _Any contact \(3\)-manifold \((M,\xi)\) admits a supporting open book decomposition._
Proof.: Take any (smooth) Heegaard decomposition \((\Sigma,U,V)\) of \(M\) and choose bouquets of circles \(K_{U}\cong\vee_{i=1}^{g}S_{i}^{1}\) inside \(U\) and \(K_{V}\cong\vee_{j=1}^{g}S_{j}^{1}\) inside \(V\) such that each handlebody deformation retracts onto its respective bouquet. Legendrian realise each of \(K_{U}\) and \(K_{V}\). Retaining the same label, take standard Legendrian neighbourhoods \(\nu(K_{U})\) and \(\nu(K_{V})\). Each of these is a tight handlebody, but their union does not exhaust \((M,\xi)\).
The closure of the complement \((M\setminus\nu(K_{U}\cup K_{V}),\xi|_{M\setminus\nu(K_{U}\cup K_{V})})\) is a contact manifold diffeomorphic to \(\Sigma\times I\) with convex boundary. By Theorem 2.10, it decomposes into, say, \(k\) bypass slices. Each bypass slice in turn decomposes into a contact \(1\)-handle \((h_{i}^{1},\zeta_{i}^{1})\) and a contact \(2\)-handle \((h_{i}^{2},\zeta_{i}^{2})\). Thus \((\Sigma\times I,\xi)\) is weakly contact isotopic to the smoothed
\[(\nu(\Sigma),\xi(\Gamma_{0}))\bigcup_{i=1}^{k}\left((h_{i}^{1},\zeta_{i}^{1}) \cup(h_{i}^{2},\zeta_{i}^{2})\right),\]
where \((\nu(\Sigma),\xi(\Gamma_{0}))\) is an \(I\)-invariant half-neighbourhood of \(\Sigma_{0}=\partial\nu(K_{U})\). Moreover, for dimension reasons one can assume that the attaching region of any \(1\)-handle \(h_{j}^{1}\) or \(2\)-handle \(h_{j}^{2}\) is disjoint from \(\cup_{i<j}(\partial h_{i}^{2},\zeta_{i}^{2})\). This means that the bypass attaching arc \(c_{j}\) is also disjoint from \(\cup_{i<j}(\partial h_{i}^{2},\zeta_{i}^{2})\).
Consider
\[U^{\prime}=\nu(K_{U})\cup\bigcup_{i=1}^{k}h_{i}^{1}\]
and set \(V^{\prime}=M\setminus U^{\prime}\). We claim that \(M=U^{\prime}\cup V^{\prime}\) is a contact Heegaard decomposition of \((M,\xi)\). Indeed, \(U^{\prime}\) is tight and decomposes along product discs to a ball with a connected dividing set, as both these properties persist under \(1\)-handle addition. To make a similar claim for \(V^{\prime}\), we turn the picture upside down. Given
\[V^{\prime}=\nu(K_{V})\cup\bigcup_{i=1}^{k}h_{i}^{2},\]
we view the \(h_{i}^{2}\) as \(1\)-handles attached to \(\partial\nu(K_{V})\) in the opposite order. Thus \(V^{\prime}\) is also tight and admits a system of product discs, as desired.
Theorem 1 now follows from the equivalence of contact Heegaard decompositions and open books.
## 5. Tight Heegaard splittings
In this section we show that even non-contact Heegaard splittings of a tight contact manifold can be used to produce an open book. A tight Heegaard splitting (Definition 3.5), together with certain systems of compressing discs, allows us to construct a new contact Heegaard splitting, and thus, an open book. This process is called _refinement_. We show that the positive stabilisation class of the resulting open book depends only on the original tight Heegaard splitting of \((M,\xi)\) (Theorem 5.11).
### Refinement: Special Case
We first consider the special case where \(\Sigma\) is decorated with Legendrian attaching curves for a pair of disc systems for the two handlebodies. More precisely, let \(\mathcal{H}=(\Sigma,U,V)\) be a tight Heegaard splitting of \((M,\xi)\). Let \(\mathcal{A}=\{A_{1},\ldots,A_{g}\}\) be a disc system for \(U\) such that for each \(i\), \(\partial A_{i}\) is Legendrian, \(A_{i}\) is convex, and the intersection \(A_{i}\cap\Sigma\) is standard. Let \(\mathcal{B}=\{B_{1},\ldots,B_{g}\}\) be a disc system for \(V\) with the same properties. We further assume that the multicurves \(\partial\mathcal{A}\), \(\partial\mathcal{B}\), and \(\Gamma\) are in general position on \(\Sigma\). We call the pair \((\mathcal{A},\mathcal{B})\) a _convex compressing disc system_ for \(\mathcal{H}\). Note that in an arbitrary Heegaard diagram on a convex surface, it need not be possible to simultaneously Legendrian realise \(\partial\mathcal{A}\cup\partial\mathcal{B}\); this more general situation is addressed in Section 5.2.
Given a tight Heegaard splitting with a convex compressing disc system \((\mathcal{A},\mathcal{B})\), we will show how to construct a contact Heegaard splitting \(\widetilde{\mathcal{H}}(\mathcal{A},\mathcal{B})=(\widetilde{\Sigma}, \widetilde{U},\widetilde{V})\) for \((M,\xi)\), and thus, an open book for \((M,\xi)\). The contact Heegaard splitting \(\widetilde{\mathcal{H}}(\mathcal{A},\mathcal{B})\) is called the _contact refinement_ of \(\mathcal{H}\) via \((\mathcal{A},\mathcal{B})\).
Roughly speaking, we construct the refinement corresponding to \((\mathcal{A},\mathcal{B})\) by tunnelling along a spine of \(A_{i}\setminus\Gamma_{A_{i}}\) and \(B_{j}\setminus\Gamma_{B_{j}}\)\({}^{1}\) and transferring each of the excavated \(1\)-handles to the opposite handlebody. This has the effect of breaking the \(A_{i}\) and \(B_{j}\) into collections of product discs in a new contact Heegaard splitting. Let us now describe this more precisely.
Footnote 1: For example, on the characteristic foliation one can take the union of the (usual) graphs formed by the positive and negative singularities, separately.
Suppose that \((\mathcal{A},\mathcal{B})\) is a convex compressing disc system for the splitting. Note that \(\Gamma_{\Sigma}\) cannot be empty; as \(\partial A_{i}\) and \(\partial B_{j}\) are Legendrian, both \(\Gamma_{A_{i}}\) and \(\Gamma_{B_{i}}\) are also non-empty.
**Lemma 5.1**.: _Each component of \(\Gamma_{A_{i}}\) intersects \(\partial A_{i}\) and each component of \(\Gamma_{B_{i}}\) intersects \(\partial B_{i}\). Each component of \(\Gamma_{\Sigma}\) intersects both \(\partial\mathcal{A}\) and \(\partial\mathcal{B}\). Each \(\partial A_{i}\) and each \(\partial B_{i}\) intersects \(\Gamma_{\Sigma}\)._
Proof.: If any component \(\gamma\) of \(\Gamma\) were disjoint from \(\partial A\cup\partial B\), then \(\gamma\) would persist after cutting along \(\mathcal{A}\cup\mathcal{B}\) and smoothing. This would yield a disconnected dividing set on a tight \(3\)-ball, contradicting Theorem 2.4.
For the final claim, observe that any component \(A_{i}\) or \(B_{j}\) disjoint from \(\Gamma_{\Sigma}\) is necessarily an overtwisted disc.
On each disc \(A_{i}\), points of \(\Gamma_{A_{i}}\) and \(\Gamma_{\Sigma}\) alternate along \(\partial A_{i}\). Let \(X_{i}^{\mathcal{A}}\) be a collection of intervals in \(A_{i}\setminus\Gamma_{A_{i}}\) with disjoint interiors and endpoints on \(\partial A_{i}\cap\Gamma_{\Sigma}\) such that \(X_{i}^{\mathcal{A}}\) cuts \(A_{i}\) into subdiscs each containing a single component of \(\Gamma_{A_{i}}\). Using the Legendrian Realisation Principle, perform a \(C^{\infty}\)-small convex isotopy of \(A_{i}\) relative to \(\partial A_{i}\) to ensure that the arcs \(X_{i}^{\mathcal{A}}\) are Legendrian on \(A_{i}\). Let \(\nu(\mathbf{X}^{\mathcal{A}})\) be a standard contact neighbourhood of the union of these intervals and define \(\overline{V}\) to be the smoothing of \(V\cup\nu(\mathbf{X}^{\mathcal{A}})\). Let \(\overline{U}\) be \(M\setminus\overline{V}\).
Taking the union of \(V\) and \(\nu(\mathbf{X}^{\mathcal{A}})\) requires standard intersections. To achieve this, one must first isotope \(\Sigma\) to make the intersections \(\partial\nu(\mathbf{X}^{\mathcal{A}})\cap\Sigma\) Legendrian. Then \(\nu(\mathbf{X}^{\mathcal{A}})\) must initially extend vertically to ensure that the intersection is standard. With these conditions in place, \(V\cup\nu(\mathbf{X}^{\mathcal{A}})\) can be smoothed to have a convex boundary. Since this is both possible to do and painstaking to describe, we may suppress such details in the following.
Continuing, let \(\mathbf{X}^{\mathcal{B}}\) be an analogous collection of arcs cutting \(\mathcal{B}\) into subdiscs each containing one component of \(\Gamma_{\mathcal{B}}\). Repeat this process to produce new handlebodies \(\widetilde{U}=\overline{U}\cup\nu(\mathbf{X}^{\mathcal{B}})\) and \(\widetilde{V}=M\setminus\widetilde{U}\). The resulting \(\widetilde{\mathcal{H}}=\widetilde{\mathcal{H}}(\mathcal{A},\mathcal{B},\mathbf{X})\) is the _refinement_ of \((\mathcal{H},\mathcal{A},\mathcal{B},\mathbf{X}=\mathbf{X}^{\mathcal{A}}\cup\mathbf{X}^{\mathcal{B}})\).
Refinement produces contact Heegaard splittings:
Figure 7. Left: A compressing disc \(A_{i}\) with its dividing set \(\Gamma_{A_{i}}\) and orange arcs \(\mathbf{X}\) separating the components of \(\Gamma_{A_{i}}\). Right: When \(A_{i}\) is part of a convex compressing disc system, remove a standard neighbourhood of the \(\mathbf{X}\) arcs to refine the splitting. The compressing discs for the new splitting are the remaining shaded regions. As shown, we may also remove neighbourhoods of points of \(\Gamma_{\Sigma}\cap A_{i}\) in bigons cut out by \(\Gamma_{A_{i}}\); however, this changes \(\Sigma\) only by a convex isotopy.
**Proposition 5.2**.: _The refinement \(\widetilde{\mathcal{H}}(\mathcal{A},\mathcal{B},\mathbf{X})\) is a contact Heegaard splitting for \((M,\xi)\)._
Proof.: The Heegaard surface \(\widetilde{\Sigma}\) is convex by construction, and we will show that the new handlebodies \(\widetilde{U}\) and \(\widetilde{V}\) are each tight and admit a system of convex product discs.
Recall that \(\overline{V}\) denotes the intermediate handlebody \(V\cup\nu(\mathbf{X}^{\mathcal{A}})\) and \(\overline{U}:=M\setminus\overline{V}\). By hypothesis, smoothing \(U\setminus\mathcal{A}\) yields a tight ball, and the handlebody \(\overline{U}\) is obtained from this ball via contact \(1\)-handle attachments whose co-cores are the product discs \(\mathcal{A}\setminus\nu(\mathbf{X}^{\mathcal{A}})\). The final handlebody \(\widetilde{U}\) is obtained by further contact \(1\)-handle attachments of \(\nu(\mathbf{X}^{\mathcal{B}})\); each of these again admits a co-core product disc intersecting \(\Gamma_{\widetilde{\Sigma}}\) twice. It follows that cutting along the new product discs again yields a tight ball, and hence one with a connected dividing set. Since \(\widetilde{U}\) is reconstructed from a tight ball by contact \(1\)-handle attachment, it follows that \(\widetilde{U}\) is tight. An identical argument applies to \(\widetilde{V}\).
In fact, the choice of arcs \(\mathbf{X}=\mathbf{X}^{\mathcal{A}}\cup\mathbf{X}^{\mathcal{B}}\) does not affect the contact splitting.
**Lemma 5.3**.: _Up to convex isotopy of \(\widetilde{\Sigma}\), the refinement \((\mathcal{H},\mathcal{A},\mathcal{B},\mathbf{X})\) is independent of the choice of arcs \(\mathbf{X}\)._
Proof of Lemma 5.3.: On each disc, any two sets of \(\mathbf{X}\) arcs are related by a sequence of arc slides. We show that each arc slide preserves \(\Gamma_{\widetilde{\Sigma}}\) up to isotopy.
We first examine a standard neighbourhood of the union of two \(\mathbf{X}\) arcs in \(\mathcal{A}\) with a shared endpoint. Since each arc \(X_{i}\) is disjoint from \(\Gamma_{\mathcal{A}}\), \(tw_{X_{i}}(\xi,T\mathcal{A})=0\), and each \(X_{i}\) cuts off a disc with a single component of \(\Gamma_{\mathcal{A}}\); thus the dividing set on a standard neighbourhood \(\nu(X_{i})\) of \(X_{i}\) is isotopic relative to its boundary to \(\nu(X_{i})\cap\mathcal{A}\). After smoothing, curves of \(\Gamma_{\nu(\mathcal{A})}\) connect to those of \(\Gamma_{\Sigma}\) following Lemma 2.1. Figure 8 shows the local model for the smoothed \(\widetilde{\Sigma}\) given two possible \(\mathbf{X}\) configurations related by a single arc slide. It is easy to verify that their dividing sets are isotopic, so the refinements agree.
Since the refinement is a contact Heegaard splitting, there is a well defined open book associated to the original splitting and its system of convex compressing discs. Before investigating how the open book depends on the ancillary data beyond \(\mathcal{H}\), we turn our attention to the case of topological Heegaard splittings where the boundaries of the compressing discs are not Legendrian.
### Refinement: General Case
In the previous section, we began with a tight splitting and a pair of disc systems whose boundaries were simultaneously Legendrian on \(\Sigma\). We now relax this condition and consider convex Heegaard surfaces where the curves \(\partial\mathcal{A}\) are Legendrian realisable and, separately, the curves \(\partial\mathcal{B}\) are Legendrian realisable. We want to define the refinement in this case as well. In order to do so, we consider a parallel copy \(\Sigma^{\prime}\) of \(\Sigma\), chosen so that \(\partial\mathcal{A}\) is Legendrian on \(\Sigma\) and \(\partial\mathcal{B}\) is Legendrian on \(\Sigma^{\prime}\).
**Definition 5.4**.: A _triple decomposition_\(\mathcal{H}^{\prime}=(N,U,V^{\prime})\)_underlying_ the Heegaard splitting \(\mathcal{H}=(\Sigma,U,V)\) is a decomposition of \(M\) into pieces \(U\), \(N\) and \(V^{\prime}\), where \((N,\xi|_{N})\) is weakly isotopic to an \(I\)-invariant half-neighbourhood of \(\Sigma\). The isotopy is relative to \(\Sigma=\Sigma\times\{0\}\) and a neighbourhood of \(\Gamma_{\Sigma}\times I\). Set \(\Sigma^{\prime}=\partial_{+}N=-\partial V^{\prime}\).
**Definition 5.5**.: Suppose that \(\mathcal{H}\) is a tight Heegaard splitting with convex splitting surface \(\Sigma\). A _convex compressing disc system_\((\mathcal{A},\mathcal{B})\) for \(\mathcal{H}\) is a set of convex compressing discs \(\mathcal{A}=\{A_{1},\dots,A_{g}\}\), each with Legendrian boundary on \(\Sigma\), and similarly, a set of convex compressing discs \(\mathcal{B}=\{B_{1},\dots,B_{g}\}\) with Legendrian boundary on \(\Sigma^{\prime}\) for some underlying triple decomposition \(\mathcal{H}^{\prime}\).
In fact, convex compressing disc systems are common. Given a convex Heegaard splitting, any set of smooth compressing discs for the two handlebodies gives rise to a system of convex compressing discs. Let \(\mathcal{A}^{s}=\{A_{1}^{s},\dots,A_{g}^{s}\}\) and \(\mathcal{B}^{s}=\{B_{1}^{s},\dots,B_{g}^{s}\}\) be systems of smooth compressing discs for \(U\) and \(V\), respectively, and assume that \(\partial\mathcal{A}^{s}\) and \(\partial\mathcal{B}^{s}\) are both in general position with \(\Gamma\). Since any system of compressing discs is non-separating, \(\partial\mathcal{A}^{s}\) is non-isolating on \(\Sigma\) and may be Legendrian realised; this requires a convex isotopy of the original Heegaard surface, but we again call the resulting surface \(\Sigma\). As a next step, isotope \(\mathcal{A}^{s}\) relative to its boundary to be convex and call it \(\mathcal{A}\). Take a half neighbourhood \(\nu(\Sigma)\) of \(\Sigma\) and
Figure 8. Inset: two configurations for \(\mathbf{X}\) related by an arc slide. Main figures: the resulting Heegaard surfaces are convexly isotopic.
consider the curves \(\partial\mathcal{B}^{s}\) on \(\Sigma\times\{1\}\). Slightly abusing notation, we still denote \(\mathcal{B}^{s}\cap(V\setminus\nu(\Sigma))\) by \(\mathcal{B}^{s}\). Legendrian realise \(\partial\mathcal{B}^{s}\) by isotoping \(\Sigma\times\{1\}\) and call the result \(\Sigma^{\prime}\). This isotopy can be achieved in an arbitrarily small neighbourhood, so that \(\Sigma^{\prime}\cap\Sigma=\emptyset\), and we may assume that a neighbourhood of \(\Gamma\) is fixed throughout the process. As a last step, isotope \(\mathcal{B}^{s}\) to be convex and call it \(\mathcal{B}\).
We use the isotopy between \(N\) and \(\nu(\Sigma)\) to identify a copy of \(\partial\mathcal{B}\) on \(\Sigma\times\{1\}\), defining a "projection" of \(\partial\mathcal{B}\) onto \(\Sigma=\Sigma\times\{0\}\) that need not be Legendrian. During this operation, the intersections of \(\Gamma\) and \(\partial\mathcal{B}\) remain fixed, but we are free to alter the intersections of \(\partial\mathcal{A}\) and \(\partial\mathcal{B}\) arbitrarily without changing anything in the upcoming construction. For simplicity, we always assume that the triple \((\partial\mathcal{A},\partial\mathcal{B},\Gamma)\) is in general position.
To recover the special case introduced first, take \(N\) to be an \(I\)-invariant half neighbourhood of \(\Sigma\) instead of merely being weakly isotopic to such a neighbourhood.
#### 5.2.1. Refinement
The refinement in the general case is similar to the special case above, with modifications only to account for the role of \(N\).
As above, we choose \(\mathbf{X}^{\mathcal{A}}\) and \(\mathbf{X}^{\mathcal{B}}\) and Legendrian realise them on \(\mathcal{A}\) and \(\mathcal{B}\), respectively. Fix the Legendrian arcs of \(\mathbf{X}^{\mathcal{A}}\), but extend \(\mathbf{X}^{\mathcal{B}}\) through \(N\) by the curves \((\Gamma\cap\partial\mathcal{B})\times I\). Since a neighbourhood of \(\Gamma\times I\) is fixed by the isotopy bringing \(N\) to \(\Sigma\times I\), it follows that the arcs \(\partial\mathcal{B}\times I\) are automatically Legendrian; we may ensure that the extended arcs are smooth by choosing \(\mathbf{X}^{\mathcal{B}}\) to be "straight" near \(\Sigma^{\prime}\). Denote the extended arcs by \(\mathbf{X}^{\mathcal{B}}\) so that the refining process proceeds verbatim: for the intermediate splitting, \(\overline{V}\) is the smoothed \(V^{\prime}\cup\nu(\mathbf{X}^{\mathcal{A}})\) and \(\overline{U}=M\setminus\overline{V}\). Finally, \(\widetilde{U}=\overline{U}\cup\nu(\mathbf{X}^{\mathcal{B}})\) and \(\widetilde{V}=M\setminus\widetilde{U}\). The obtained Heegaard splitting \(\widetilde{\mathcal{H}}=\widetilde{\mathcal{H}}(\mathcal{A},\mathcal{B}, \mathbf{X})\) is the _refinement_ of \((\mathcal{H},\mathcal{A},\mathcal{B},\mathbf{X}=\mathbf{X}^{\mathcal{A}}\cup \mathbf{X}^{\mathcal{B}})\).
**Proposition 5.6**.: _The refinement \(\widetilde{\mathcal{H}}=\widetilde{\mathcal{H}}(\mathcal{A},\mathcal{B}, \mathbf{X})\) is a contact Heegaard splitting of \((M,\xi)\) and is independent of the choice of auxiliary arcs \(\mathbf{X}\) used in the construction._
Proof.: The proof of Proposition 5.2 applies directly to show that \(U\) and \(V^{\prime}\) are tight handlebodies each cut into a ball by a system of product discs. To see that the criteria of Proposition 3.3 are met, we set \(\Sigma=\partial U\) to be the convex splitting surface. Extending \(V^{\prime}\) across \(N\) to \(\Sigma\) naturally extends the existing product discs by a collar neighbourhood in \(N\), but they remain product discs. To show that this extended \(V^{\prime}\) is tight, it suffices to note that \(N\) is weakly isotopic to a product neighbourhood of a convex surface.
The proof of Lemma 5.3 applies verbatim to the general setting.
As often in convex surface theory, we see that the smooth object is sufficiently determined by combinatorial input. In this case, observe that the
open book constructed via refinement depends only on the combinatorics of the dividing sets on the two-complexes \(\Sigma\cup\mathcal{A}\) and \(\Sigma^{\prime}\cup\mathcal{B}\). Of note, the intersections between \(\partial A\) and \(\partial B\) are immaterial. We regard maintaining two surfaces \(\Sigma\) and \(\Sigma^{\prime}\) as a technicality, rather than an essential feature, and when it is unlikely to cause confusion, we may simply write \(\Sigma\) both for \(\Sigma\) and \(\Sigma^{\prime}\) and \(V\) instead of \(V^{\prime}\).
**Lemma 5.7**.: _Suppose that \((\mathcal{A},\mathcal{B})\) is a convex compressing disc system for the tight Heegaard splitting \(\mathcal{H}\). If \(\mathcal{H}^{\prime}\) is a positive stabilisation along a disc \(D\) in the complement of \((\mathcal{A},\mathcal{B})\), then the refinements of \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\) differ by a positive stabilisation, and hence, the associated open books differ by a positive stabilisation._
Proof.: This lemma follows from observing that, because \(D\) and \(\mathcal{A}\cup\mathcal{B}\) are mutually disjoint, one may perform the associated Heegaard splitting stabilisations in any order.
**Corollary 5.8**.: _Suppose \(\mathcal{H}=(\Sigma,U,V)\) and \(\mathcal{H}^{\prime}=(\Sigma^{\prime},U^{\prime},V^{\prime})\) are tight Heegaard splittings related by single bypass attachment to the front or back of \(\Sigma\). If \((\mathcal{A},\mathcal{B})\) is a convex compressing disc system disjoint from the bypass half-disc \(D\), then the refinements of \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\) admit a common positive stabilisation._
**Example 5.9**.: Tight splittings are strictly more general than contact splittings. For example, in a contact splitting, \(\Gamma_{\Sigma}\) divides \(\Sigma\) into two connected components, while this example presents a tight splitting where \(\Sigma\setminus\Gamma_{\Sigma}\) has four components.
Figure 9 shows a convex Heegaard diagram for \(S^{3}\) together with the dividing set on \(\Sigma\). In addition to the line segments, there are curved green arcs indicating which points are connected by the dividing sets on the meridional discs. After cutting either solid torus along the indicated meridian and smoothing the resulting ball, the dividing set is a connected curve. By Giroux's Criterion (Theorem 2.4), the sphere has a tight neighbourhood and can be filled by a tight ball.
**Remark 5.10**.: There is an alternative approach to the construction of a triple described above. Given a tight Heegaard splitting \((\Sigma,U,V)\) for \((M,\xi)\), suppose now that \(\mathcal{A}\) and \(\mathcal{B}\) are disc systems for the handlebodies satisfying
Figure 9. A Heegaard splitting for \(S^{3}\) where \(\Gamma\) cuts \(\Sigma\) into four components. The curved green arcs indicate arcs of \(\Gamma_{A}\) and \(\Gamma_{B}\).
only the requirement that \(\partial A,\partial B,\) and \(\Gamma_{\Sigma}\) are in general position on \(\Sigma\). If the graph \(\partial A\cup\partial B\) is non-isolating, then an application of the Legendrian Realisation Principle ensures a convex isotopy of \(\Sigma\) that renders the graph Legendrian. After a further isotopy of the discs relative to their boundaries, \((\mathcal{A},\mathcal{B})\) may be assumed a convex compressing disc system. In the case that the original \(\mathcal{A}\cup\mathcal{B}\) is isolating, we claim that it is always possible to perform finger moves on \(\partial A\cup\partial B\) to produce a non-isolating graph \(\partial\mathcal{A}^{\prime}\cup\partial\mathcal{B}^{\prime}\). One must then show that, up to positive stabilisation, the open book constructed via refinement is independent of the choice of finger moves made at this initial step. Although this is possible, the flavour of argument is rather different from the rest of the paper, so we have chosen to restrict our discussion to the approach above.
### Invariance of the refinement
Above, we established that refinement promotes a tight Heegaard splitting to a contact Heegaard splitting. A priori, the latter depends on the choice of convex compressing system, but in this section we show that differing choices preserve the positive stabilisation class of the resulting open book.
**Theorem 5.11**.: _Let \((\mathcal{A},\mathcal{B})\) and \((\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) be two convex compressing disc systems for the tight Heegaard splitting \(\mathcal{H}\) of \((M,\xi)\). Then the refinements \(\mathcal{H}(\mathcal{A},\mathcal{B})\) and \(\mathcal{H}(\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) admit a common positive stabilisation._
In order to prove Theorem 5.11, we will introduce some moves relating distinct convex compressing disc systems. Each move discretely changes the combinatorics of \((\partial\mathcal{A},\partial\mathcal{B},\Gamma_{\Sigma})\) and the dividing curves on \(\mathcal{A}\) and \(\mathcal{B}\). After introducing each move, we will show that the refinements associated to the two systems are related by a sequence of positive stabilisations. Finally, we conclude in Proposition 5.16 that the moves considered here suffice to relate any pair of convex compressing disc systems for a fixed tight splitting.
Throughout, let \(\mathcal{H}=(\Sigma,U,V)\) be a tight Heegaard splitting for \((M,\xi)\) and let \((N,U,V^{\prime})\) be an underlying triple decomposition. The first two moves occur in an \(I\)-invariant neighbourhood of \(\Sigma\) (or \(\Sigma^{\prime}\)).
Figure 10. Local models for moves on convex compressing disc systems.
_[T]: Triple point move._ Let \((\mathcal{A},\mathcal{B})\) be a convex compressing disc system and let \(Ann\subset\Sigma\) be an annulus with Legendrian boundary \(\partial A_{1}\cup-\alpha\). Suppose that \((\partial B\cup\Gamma)\cap Ann\) consists of arcs connecting \(\partial A_{1}\) to \(\alpha\). These arcs are required to be parallel, with the exception of a single component of \(\partial B\cap Ann\) and a single component of \(\Gamma\cap Ann\) which cross once. Define \(A_{1}^{\prime}\) to be a convex surface properly embedded in \(U\) that is convex isotopic, relative to \(\alpha\), to the smoothing of \(A_{1}\cup Ann\). We require also that \(\mathcal{A}\cap A_{1}^{\prime}=\emptyset\). Set \(\mathcal{A}^{\prime}=\{A_{1}^{\prime},A_{2},\ldots,A_{g}\}\). Then \((\mathcal{A}^{\prime},\mathcal{B})\) is a new convex compressing disc system and we say that \((\mathcal{A},\mathcal{B})\) and \((\mathcal{A}^{\prime},\mathcal{B})\) are related to each other by a _triple point move._ See Figure 10.
The analogous move for \(\mathcal{B}\) is also called a triple point move.
**Proposition 5.12**.: _[Triple point move] If \((\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) is obtained from \((\mathcal{A},\mathcal{B})\) by a single triple point move, then the refinements \(\mathcal{H}(\mathcal{A},\mathcal{B})\) and \(\mathcal{H}(\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) have a common positive stabilisation._
Proof.: Figure 11 shows a local model for the triple point move. Here, we assume that the entire move takes place within an \(I\)-invariant neighbourhood of \(\Sigma\). By assuming the isotopy is sufficiently small, we may ensure that the actual crossing remains away from the dividing sets on the compression discs, as shown in the figure. In the cases where one or both of the discs in the local model is a product disc, the associated refined Heegaard diagrams will be isotopic and there is nothing to check. We therefore turn our attention to the case when neither \(A\) nor \(B\) is a product disc; the relevant intervals \(X_{i}\) are shown in orange in Figure 11. In the refined splittings, a neighbourhood of each of these is added as a one-handle to the opposite handlebody. The top picture in Figure 12 shows the refined Heegaard surfaces.
Figure 11. Local model for a triple point move. The right-hand picture shows the original disc together with \(Ann\) before smoothing, while the left-hand picture shows the smoothed disc.
In order to show that the two refinements have a common stabilisation, we identify a pair of discs, shaded in red in the second row, which satisfy the hypotheses of Definition 3.6. We attach \(1\)-handles to \(V\) along the bold arcs in each picture and smooth the resulting Heegaard surfaces. It follows from Lemma 3.7 that the open books associated to the stabilised Heegaard diagrams are positive stabilisations of the originals. On the other hand, one may verify by inspection that the dividing sets on the two pictures in the bottom row are isotopic, as desired. The dividing curve is determined by the fact that \(tw_{l}(\xi,TD)=-\frac{1}{2}\): as seen in Figure 12, the dividing curve \(\Gamma_{\partial\nu(l)}\) before smoothing makes one half clockwise turn less than \(D\cap\partial\nu(l)\).
The proof in the case that \(\partial B\) crosses a point of \(\partial A\cap\Gamma_{\Sigma}\) is similar.
Figure 12. Inset: Heegaard surfaces associated to a triple point move, before smoothing the refining tunnels. Top: add contact \(1\)-handles to \(V\) along the bold curves. Bottom: after smoothing, the resulting stabilised convex Heegaard surfaces are isotopic. The orange curves in the final figure indicate the intersection of the red discs with the added tunnels, in order to make the relative twisting easier to see.
_[F]: Finger move._ Let \((\mathcal{A},\mathcal{B})\) be a convex compressing disc system and let \(Ann\subset\Sigma\) be an annulus with Legendrian boundary \(\partial A_{1}\cup-\alpha\). Suppose that \(\Gamma\cap Ann\) consists of parallel arcs from \(A_{1}\) to \(\alpha\), together with an additional boundary parallel arc anchored at \(\alpha\). We further assume that \(Ann\) is disjoint from \(\partial\mathcal{B}\cap\Gamma\) and that \(\partial\mathcal{B}\cap Ann\) consists only of parallel arcs between \(\partial A_{1}\) and \(\alpha\). Define \(A^{\prime}_{1}\) to be a convex surface properly embedded in \(U\) that is convex isotopic, relative to \(\alpha\), to the smoothing of \(A_{1}\cup Ann\). We require also that \(\mathcal{A}\cap A^{\prime}_{1}=\emptyset\). Set \(\mathcal{A}^{\prime}=\{A^{\prime}_{1},A_{2},\ldots,A_{g}\}\). Then \((\mathcal{A}^{\prime},\mathcal{B})\) is a new convex compressing disc system and we say that \((\mathcal{A},\mathcal{B})\) and \((\mathcal{A}^{\prime},\mathcal{B})\) are related to each other by a _finger move._ See Figure 10.
We use the term _finger move_ also to denote the reverse of this move, or the analogous moves of \(\mathcal{B}\).
**Proposition 5.13**.: _[Finger move] If \((\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) is obtained from \((\mathcal{A},\mathcal{B})\) by a single finger move, then the refinements \(\mathcal{H}(\mathcal{A},\mathcal{B})\) and \(\mathcal{H}(\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) have a common positive stabilisation._
Proof.: Consider Figures 13 and 14. The first picture in Figure 13 shows the original disc \(A_{1}\) and the portion of the finger move annulus which contains the new intersection with \(\Gamma_{\Sigma}\). To construct \(A^{\prime}_{1}\), smooth the piecewise convex surface shown in the central picture to get the new dividing set shown on the right. Since \(\Gamma_{\mathcal{A}^{\prime}}\) has an additional component, constructing the refinement requires tunneling along an additional arc, shown in orange in
Figure 13. Left: \(A_{1}\) (red) and \(Ann\) (grey). Centre: a piecewise convex surface with Legendrian boundary. Right: A convex smoothing of the centre surface, showing the new dividing set on \(A^{\prime}_{1}\).
Figure 14. Observe that this arc cobounds a subdisc of \(A^{\prime}\) with the orange arc on \(\Sigma\) running parallel to the finger; these arcs are disjoint from \(\Gamma\) except at their endpoints, so the twisting of \(\xi\) relative to \(TD\) is \(-\frac{1}{2}\) in each case. The hypotheses of Definition 3.6 are satisfied, so the open book associated to the new refinement is a positive stabilisation of the open book associated to the original refinement, as desired.
_[I]: Interior bypass._ Let \((\mathcal{A},\mathcal{B})\) be a convex compressing disc system. Suppose that there is another convex compressing disc \(A^{\prime}_{1}\) for \(U\) with Legendrian boundary that is obtained from \(A_{1}\) by a bypass attachment along a bypass half-disc \(D\) disjoint from \(\mathcal{A}\setminus A_{1}\). Then for \(\mathcal{A}^{\prime}=\{A^{\prime}_{1},A_{2},\ldots,A_{g}\}\), we say that the convex compressing disc systems \((\mathcal{A},\mathcal{B})\) and \((\mathcal{A}^{\prime},\mathcal{B})\) are related to each other by an _interior bypass move._ See Figure 10.
**Proposition 5.14**.: _[Interior bypass] If \((\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) is obtained from \((\mathcal{A},\mathcal{B})\) by a single interior bypass move, then the refinements \(\mathcal{H}(\mathcal{A},\mathcal{B})\) and \(\mathcal{H}(\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) have a common positive stabilisation._
Proof.: Isotoping the interior of a convex compressing disc \(A\) across a bypass half-disc changes the dividing set \(\Gamma_{A}\) as shown in the inset in Figure 15. Since we have previously shown that the refinement of \((\Sigma,\mathcal{A},\mathcal{B})\) is independent of the choice of arcs \(\mathbf{X}\), we may choose \(\mathbf{X}\) to agree with the orange curves shown in the inset. Note, too, that these may be assumed to be local
pictures and we place no restrictions on the other curves of \(\Gamma_{\mathcal{A}}\) or \(\mathbf{X}\); if the bypass arc \(c\) intersects the same component of \(\Gamma_{\mathcal{A}}\) twice, then the bypass is necessarily trivial and may be disregarded.
Figure 15 shows a local model for the refinement after tunneling along the chosen \(\mathbf{X}\) before and after the isotopy across the bypass. Each picture also shows three bold arcs whose endpoints lie on the dividing set of the new tunnels. Isotoping these arcs to lie on the stabilised Heegaard surface traces out shaded red discs whose boundaries are otherwise disjoint from \(\Gamma\). Since these discs lie in a neighbourhood of the original \(A\), their dividing sets are isotopic to the restriction of the original \(\Gamma_{A}\), as shown. It follows that each of these discs satisfies the hypotheses of Definition 3.6, so Lemma 3.7 implies that stabilising the Heegaard splitting by tunneling along the bold arc stabilises the associated open book decomposition. Performing these stabilisations yields the pair of isotopic Heegaard surfaces shown in the bottom of Figure 15, but the dividing sets are not yet isotopic.
To complete the argument, we turn to Figure 16, where the isotopic Heegaard surfaces are shown with their dividing sets and the bypass attachment arcs that relate them. Note that bypasses along these arcs necessarily exist, by the hypotheses of the interior bypass move. Attaching a pair of bypasses in the left-hand figure renders the dividing set isotopic to that of the right-hand figure. Note that with respect to the orientation of \(\Sigma\) as \(\partial U\), one of these bypasses is attached from the front and the other, from the back. Since the bypass arcs in each picture are disjoint from the indicated system of convex compressing discs for the stabilised refinements, Proposition 3.8 implies that the two splittings admit a common positive stabilisation, as desired.
**Proposition 5.15**.: _[Handle slide] If \((\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) is obtained from \((\mathcal{A},\mathcal{B})\) by a single handle slide, then the refinements \(\mathcal{H}(\mathcal{A},\mathcal{B})\) and \(\mathcal{H}(\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) have a common positive stabilisation._
Proof.: As a first step, we observe that any collection of convex discs with Legendrian boundary on \(\Sigma\) defines a refinement as long as there is a subset
Figure 15. Given two compressing disc systems related by a single bypass, first stabilise their refinements to get a common Heegaard surface.
of the discs which constitute a disc system for each handlebody. The extension is immediate: choose sufficiently many \(\mathbf{X}\) arcs to separate components of \(\Gamma_{\mathcal{A}}\) in every disc and add tunnels along all of these.
We will show that the refinement of \((\{A_{1},A_{1}^{\prime},A_{2},\ldots,A_{g}\},\mathcal{B})\) is a positive stabilisation of each of \((\mathcal{A},\mathcal{B})\) and \((\mathcal{A}^{\prime},\mathcal{B})\). In fact, it suffices to show that \((\{A_{1},A_{1}^{\prime},A_{2},\ldots,A_{g}\},\mathcal{B})\) is a positive stabilisation of \((\mathcal{A},\mathcal{B})\), as the relationship between the curves \(\{A_{1},A_{2},A_{1}^{\prime}\}\) is symmetric up to finger moves which have already been shown to preserve the positive stabilisation class.
Given the pair of pants defining the handle slide move, we can explicitly construct \(A_{1}^{\prime}\) by smoothing a piecewise convex surface built from parallel copies of \(A_{1}\) and \(A_{2}\) and a neighbourhood of \(\gamma\) on \(\Sigma\). Observe in Figure 17 that the dividing set and choice of \(\mathbf{X}\) on \(A_{1}\) and \(A_{2}\) dictate the dividing set and a canonical choice of \(\mathbf{X}\) on \(A_{1}^{\prime}\). The \(\mathbf{X}\) arcs on \(A_{1}^{\prime}\) come in two forms: arcs that are parallel to \(\mathbf{X}\) arcs on \(A_{1}\) or \(A_{2}\) and arcs which cross the band defined by \(\gamma\).
We consider the parallel arcs first. Let \(X_{1}\subset A_{1}\) be an arc which is tunneled along to build the refinement of \((\Sigma,\mathcal{A},\mathcal{B})\). We see in Figure 18 that
Figure 16. The dividing sets on the left and right Heegaard surfaces become isotopic after performing a bypass along each of the indicated purple arcs. Since the bypass attachment arcs lie in the complement of the indicated disc systems for the stabilised refinements, it follows from Proposition 3.8 that the two contact Heegaard splittings admit a common positive stabilisation.
tunneling along a copy of the same arc in a disc locally parallel to \(A_{1}\) is a positive stabilisation.
Finally, we consider the \(\mathbf{X}\) arcs that cross the band defined by \(\gamma\). Figure 19 shows that tunneling along each of these arcs is a positive stabilisation of the refinement of \((\Sigma,\mathcal{A},\mathcal{B})\), as desired.
#### 5.3.1. Sufficiency of disc moves
We conclude this section, and the proof of Theorem 5.11, by showing that the moves introduced above act transitively
Figure 17. Local models for \(A_{1}^{\prime}\).
Figure 18. Left inset: a copy of \(A_{1}\) with an \(X_{1}\) arc shown in orange. Right inset: parallel copies of \(A_{1}\) with parallel copies of \(X_{1}\). Left: After adding a tunnel along \(X_{1}\), the orange shaded disc satisfies the hypotheses of Definition 3.6. The bold arc is the second copy of \(X_{1}\). Right: Adding a tunnel along the bold arc is a positive stabilisation of the original refinement.
on the equivalence classes of convex compressing disc systems for a tight Heegaard splitting.
**Proposition 5.16**.: _Up to convex isotopy, any two convex compressing disc systems \((\mathcal{A},\mathcal{B})\) and \((\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) for a given tight splitting \(\mathcal{H}\) of \((M,\xi)\) are related by a sequence of the elementary disc moves [F], [T], [I] and [H]._
Proof.: If \((\mathcal{A},\mathcal{B})\) and \((\mathcal{A}^{\prime},\mathcal{B}^{\prime})\) are smoothly isotopic, then we first multiply the isotopy by a cut-off function that vanishes away from a neighbourhood \(N^{\prime}\) of \(N\), obtained by extending \(N\) by \(I\)-invariant neighbourhoods of \(\Sigma\) and \(\Sigma^{\prime}\). This isotopy can then be realised by finger moves between \(\partial A\) and \(\partial B\), which have no effect on the disc system, together with the elementary moves [F] and [T]. To be able to do this, at each step one needs to simultaneously Legendrian realise \(\partial\mathcal{A}\cup\partial A_{1}^{\prime}\), but as \(\partial A_{i}\cap\Gamma\neq\emptyset\), this is always possible by a \(C^{\infty}\)-isotopy of \(\Sigma\).
Now fixing the boundary of the compressing discs, Theorem 2.11 decomposes the rest of the isotopy as a composition of bypasses from the front or from the back. This is move [I].
Smoothly, any two systems of compressing discs can be made isotopic by handle slides; by first performing some isotopies and finger moves in \(N^{\prime}\), one can make sure that the handle slide is performed along an arc of \(\Gamma\), as required by the local model for move [H]. This finishes the proof.
Figure 19. Left: Orange \(\mathbf{X}\) arcs on \(A_{1}\) and \(A_{2}\) guide the tunnels shown; following Figure 17, the bold arcs indicate where tunnels should be excavated on \(A_{1}^{\prime}\). Right: The shaded discs show that tunneling along the bold arcs positively stabilises the original splitting.
We have now established everything needed to conclude the invariance of the positive stabilisation class of the open book associated to a tight splitting:
Proof of Theorem 5.11.: By Proposition 5.16, the two convex compressing disc systems are related to each other by elementary disc moves, and Propositions 5.12, 5.13, 5.14, and 5.15 show that each of these elementary disc moves preserves the positive stabilisation class of the open book associated to the refinement.
## 6. Proof of the Giroux Correspondence
We now have all the ingredients to prove the Giroux Correspondence for tight manifolds.
**Theorem 6.1** (c.f. Theorem 2).: _Let \((M,\xi)\) be a tight contact manifold and suppose that \((B,\pi)\) and \((B^{\prime},\pi^{\prime})\) are two open book decompositions supporting \(\xi\). Then \((B,\pi)\) and \((B^{\prime},\pi^{\prime})\) admit a common positive stabilisation._
Proof of Theorem 6.1.: Construct the contact Heegaard splittings \(\mathcal{H}=\mathcal{H}(B,\pi)\) and \(\mathcal{H}^{\prime}=\mathcal{H}(B^{\prime},\pi^{\prime})\) corresponding to \((B,\pi)\) and \((B^{\prime},\pi^{\prime})\), respectively.
Following the Reidemeister-Singer Theorem, \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\) will become isotopic after sufficiently many Heegaard splitting stabilisations. Stabilise \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\) accordingly, ensuring that each stabilisation is positive in the sense of Definition 3.6. This results in a pair of contact Heegaard decompositions for \((M,\xi)\) that are smoothly isotopic. It then suffices to prove the theorem for a pair of open books that induce smoothly isotopic contact Heegaard splittings.
Suppose now that the contact Heegaard splittings \(\mathcal{H}:=\mathcal{H}(B,\pi)=(\Sigma,U,V)\) and \(\mathcal{H}^{\prime}:=\mathcal{H}(B^{\prime},\pi^{\prime})=(\Sigma^{\prime},U^{\prime},V^{\prime})\) are smoothly isotopic. Isotopy discretisation implies that \(\Sigma\) and \(\Sigma^{\prime}\) are related by a sequence of bypasses, so we may enumerate the intermediate convex splitting surfaces \(\Sigma=\Sigma_{0},\Sigma_{1},\ldots,\Sigma_{k}=\Sigma^{\prime}\). We claim that for any consecutive pair of Heegaard surfaces \(\Sigma_{i},\Sigma_{i+1}\) defining tight Heegaard splittings, there exists a convex compressing disc system disjoint from the bypass half-disc. The positive stabilisation class of the splitting is independent of the choice of convex compressing disc system, so it follows from Corollary 5.8 that the refinements of the splittings admit a common positive stabilisation. Since this holds for each pair in the sequence, we conclude that \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\) admit a common positive stabilisation.
Applying Lemma 3.7 yet again, it follows that \((B,\pi)\) and \((B^{\prime},\pi^{\prime})\) admit a common positive open book stabilisation, as desired.
To complete the argument, we need only show that a single bypass attachment may always be performed in the complement of a convex compressing disc system.
Choose a convex compressing disc system \((\mathcal{A},\mathcal{B})\) for the tight splitting \((\Sigma,U,V)\). Perform finger moves on \(\partial\mathcal{A}\) and \(\partial\mathcal{B}\) along the Legendrian attaching arc \(c\) until the boundaries of the compressing discs are all disjoint from \(c\). Now consider intersections between \(\mathcal{B}\) and \(D\). Isotope \(\mathcal{B}\) relative to \(\partial\mathcal{B}\) to transform any simple closed curves of \(\mathcal{B}\cap D\) into arcs, and then perform interior bypasses along these arcs to push \(\mathcal{B}\) off \(D\). The result is a new system of convex compressing discs that is disjoint from \(D\), as desired.
## 7. Overtwisted manifolds
In this final section, we consider the case of overtwisted manifolds.
With the exception of Section 6, we have focused on tight Heegaard splittings rather than tight manifolds, so most of the technical results apply equally well to overtwisted contact manifolds divided into two tight handlebodies. In fact, we may generalise the Heegaard splittings we consider yet further by demanding more of the convex compressing disc systems.
**Definition 7.1**.: A _tightening system_ is a convex compressing disc system for \((\Sigma,U,V)\) with the property that each of \(U\setminus\mathcal{A}\) and \(V\setminus\mathcal{B}\) is a tight ball.
When \((\Sigma,U,V)\) is a tight Heegaard splitting, then every convex compressing disc system is a tightening system. However, the requirement that the handlebodies \(U\) and \(V\) be tight is stronger than needed in order to define the refinement of the splitting.
**Lemma 7.2**.: _Let \((\mathcal{A},\mathcal{B})\) be a tightening system for the splitting \(\mathcal{H}=(\Sigma,U,V)\). Then the refinement of \(\mathcal{H}\) is a contact Heegaard splitting._
The proof of Proposition 5.2 began by cutting \(U\) along \(\mathcal{A}\) and \(V\) along \(\mathcal{B}\) to get a pair of tight balls; since this is the defining property of a tightening system, the argument given there for tight splittings holds as written. This suggests a path towards proving the full Giroux Correspondence: if we can arrange a tightening system at each step, then our sequence of Heegaard splittings will again produce a sequence of open books in a fixed positive stabilisation class. The final step in this program is ensuring that tightening systems may be found in the complement of bypass discs, and here we are only partially successful.
We begin with the good news. A potential obstruction to passing \(\Sigma\) across a bypass half-disc in \(V\) occurs if a given tightening system \(\mathcal{A}\subset U\) intersects the bypass attachment arc on \(\Sigma\). However, a careful analysis of
internal bypasses allows us to replace any such \(\mathcal{A}\) with a different tightening system for \(U\) which remains a tightening system after the bypass attachment.
Unfortunately, this is only half the battle, as there are Heegaard splittings without tightening systems. The crux of the difficulty is the well known fact that overtwisted discs come in families; although it is trivial to choose a disc system that intersects a fixed overtwisted disc in an essential way, it is not always possible to kill all the overtwisted discs by intersection.
**Example 7.3** (Heegaard splitting that does not admit a tightening system).: Consider any genus-one convex Heegaard surface where the dividing set on the solid torus \(U\) is a pair of parallel meridians. It follows that any compressing disc has only inessential intersections with \(\Gamma\). If \(\Gamma\cap\partial\mathcal{A}=\emptyset\), then any Legendrian realisation of \(\partial\mathcal{A}\) is a \(\mathit{tb}=0\) unknot, and hence bounds an overtwisted disc. If \(|\Gamma\cap\partial\mathcal{A}|>2\), the proof of Proposition 5.13 shows that reducing the number of intersections via a finger move preserves the isotopy class of the dividing set on the smoothed ball \(U\setminus\mathcal{A}\). We may thus assume \(|\Gamma\cap\partial\mathcal{A}|=2\). In this case, cutting along \(\mathcal{A}\) and smoothing yields a ball with a disconnected dividing set, so the ball remains overtwisted. This shows that there is no tightening system for such a splitting.
The final example shows that tightening systems do sometimes exist.
**Example 7.4** (Overtwisted handlebody with a tightening system).: Consider a solid torus \(U\) with a convex meridional disc \(A\) with Legendrian boundary. The disc \(A\) is a tightening system for the solid torus if, after cutting along \(A\) and smoothing, the dividing set on the ball is connected. The left-hand side of Figure 20 shows a connected dividing set on the boundary of a ball, where the surface is decomposed as an annulus together with two red discs indicating where the cut along \(A\) occurred. The figure also shows two copies of an arc on \(A\) indicating where a bypass half-disc meets \(A\) in the solid torus. Note first that the bypass from the back along the dashed purple arc is trivial, so it necessarily exists in \(U\setminus A\). Passing \(A\) across the bypass attaches a half-disc to the orange curve from the front and to the purple arc from the back. Denote the new meridional disc by \(A^{\prime}\). The right-hand figure shows the dividing set on the ball formed by cutting the solid torus along \(A^{\prime}\); since the dividing set is disconnected, the ball is overtwisted, showing that the original contact structure on \(U\) was overtwisted.
|
2309.14788 | Small-Space Algorithms for the Online Language Distance Problem for
Palindromes and Squares | We study the online variant of the language distance problem for two
classical formal languages, the language of palindromes and the language of
squares, and for the two most fundamental distances, the Hamming distance and
the edit (Levenshtein) distance. In this problem, defined for a fixed formal
language $L$, we are given a string $T$ of length $n$, and the task is to
compute the minimal distance to $L$ from every prefix of $T$. We focus on the
low-distance regime, where one must compute only the distances smaller than a
given threshold $k$. In this work, our contribution is twofold:
- First, we show streaming algorithms, which access the input string $T$ only
through a single left-to-right scan. Both for palindromes and squares, our
algorithms use $O(k \cdot\mathrm{poly}~\log n)$ space and time per character in
the Hamming-distance case and $O(k^2 \cdot\mathrm{poly}~\log n)$ space and time
per character in the edit-distance case. These algorithms are randomised by
necessity, and they err with probability inverse-polynomial in $n$.
- Second, we show deterministic read-only online algorithms, which are also
provided with read-only random access to the already processed characters of
$T$. Both for palindromes and squares, our algorithms use $O(k
\cdot\mathrm{poly}~\log n)$ space and time per character in the
Hamming-distance case and $O(k^4 \cdot\mathrm{poly}~\log n)$ space and
amortised time per character in the edit-distance case. | Gabriel Bathie, Tomasz Kociumaka, Tatiana Starikovskaya | 2023-09-26T09:36:24Z | http://arxiv.org/abs/2309.14788v2 | # Small-Space Algorithms for the Online Language Distance Problem for Palindromes and Squares
###### Abstract
We study the online variant of the language distance problem for two classical formal languages, the language of palindromes and the language of squares, and for the two most fundamental distances, the Hamming distance and the edit (Levenshtein) distance. In this problem, defined for a fixed formal language \(L\), we are given a string \(T\) of length \(n\), and the task is to compute the minimal distance to \(L\) from _every_ prefix of \(T\). We focus on the low-distance regime, where one must compute only the distances smaller than a given threshold \(k\). In this work, our contribution is twofold:
1. First, we show _streaming_ algorithms, which access the input string \(T\) only through a single left-to-right scan. Both for palindromes and squares, our algorithms use \(O(k\operatorname{polylog}n)\) space and time per character in the Hamming-distance case and \(O(k^{2}\operatorname{polylog}n)\) space and time per character in the edit-distance case. These algorithms are randomised by necessity, and they err with probability inverse-polynomial in \(n\).
2. Second, we show _deterministic read-only_ online algorithms, which are also provided with read-only random access to the already processed characters of \(T\). Both for palindromes and squares, our algorithms use \(O(k\operatorname{polylog}n)\) space and time per character in the Hamming-distance case and \(O(k^{4}\operatorname{polylog}n)\) space and amortised time per character in the edit-distance case.
Approximate pattern matching, streaming algorithms, palindromes, squares
do not need to be computed. We consider the edit distance (defined as the minimum number of character insertions, deletions, and substitutions needed to transform one string into the other) and, as a preliminary step, the Hamming distance (allowing for substitutions only). We study the problem for two classical languages: the language PAL of all palindromes, where a palindrome is a string that is equal to its reversed copy, and the language SQ of all squares, where a square is the concatenation of two copies of a string. These two languages are very similar yet very different in nature: PAL is not regular but is context-free, whereas SQ is not even context-free. Formally, the problems we consider are defined as follows:
\(k\)-LHD-PAL (resp. \(k\)-LHD-SQ)
**Input:** A string \(T\) of length \(n\) and a positive integer \(k\).
**Output:** For each \(1\leq i\leq n\), report \(\min\{k+1,hd_{i}\}\), where \(hd_{i}\) is the minimum Hamming distance between \(T[1..i]\) and a string in PAL (resp. in SQ).
\(k\)-LED-PAL (resp. \(k\)-LED-SQ)
**Input:** A string \(T\) of length \(n\) and a positive integer \(k\).
**Output:** For each \(1\leq i\leq n\), report \(\min\{k+1,ed_{i}\}\), where \(ed_{i}\) is the minimum edit distance between \(T[1..i]\) and a string in PAL (resp. in SQ).
Amir and Porat [3] showed that there is a randomised streaming algorithm that solves the \(k\)-LHD-PAL problem in \(\tilde{O}(k)\) space and \(\tilde{O}(k^{2})\) time per input character.1 We continue their line of research and show streaming algorithms for all four problems that use \(\operatorname{poly}(k,\log n)\) time per character and \(\operatorname{poly}(k,\log n)\) space. While streaming algorithms are extremely efficient (in particular, the space complexities above account for _all_ the space used by the algorithms, including the space needed to store information about the input), it is important to note that they are randomised in nature, which means that they may produce incorrect results with a certain probability (inverse polynomial in the input size \(n\)). Motivated by this, we also study the problems in the read-only model, where random access to the input is allowed (and not accounted for in the space usage). In this model, we show _deterministic_ algorithms for the four problems that use \(\operatorname{poly}(k,\log n)\) time per character and \(\operatorname{poly}(k,\log n)\) space; see Table 1 for a summary. As a side result of independent interest, we develop the first \(\operatorname{poly}(k,\log n)\) space read-only algorithms for computing \(k\)-mismatch and \(k\)-edit occurrences of a pattern in a text.
Footnote 1: Hereafter, \(\tilde{O}(\cdot)\) hides factors polynomial in \(\log n\).
Due to the lack of space, descriptions of the algorithms for the Language edit distance problems (\(k\)-LED-PAL and \(k\)-LED-SQ) are omitted from this version of the paper, but can be found in the full one.
| Problem | Model | Time per character | Space complexity | Reference |
| --- | --- | --- | --- | --- |
| \(k\)-LHD-PAL | Streaming | \(O(k\log^{3}n)\) | \(O(k\log n)\) | Thm 3.2 |
| \(k\)-LHD-SQ | Streaming | \(\tilde{O}(k)\) | \(O(k\log^{2}n)\) | Thm 3.3 |
| \(k\)-LHD-PAL/SQ | Read-only | \(O(k\log n)\) | \(O(k\log n)\) | Thms 4.8 and 4.10 |
| \(k\)-LED-PAL/SQ | Streaming | \(\tilde{O}(k^{2})\) | \(\tilde{O}(k^{2})\) | Thms 5.1 and 5.2 |
| \(k\)-LED-PAL/SQ | Read-only | \(\tilde{O}(k^{4})\) (amortised) | \(\tilde{O}(k^{4})\) | Thms 5.3 and 5.4 |

Table 1: Summary of the complexities of the algorithms introduced in this work.
### Related work.
**Offline model.** In the classical _offline_ model, the problem of finding _all maximal substrings_ that are within Hamming distance \(k\) from PAL can be solved in \(O(nk)\) time as a simple application of the kangaroo jumps technique [18]. For the edit distance, Porto and Barbosa [32] showed an \(O(nk^{2})\) solution. For the SQ language, the best known solutions take \(O(nk\log k+\text{output})\) time for the Hamming distance [23] and \(O(nk\log^{2}k+\text{output})\) for the edit distance [26, 39, 40].
**Online model.** The problems \(k\)-LHD-PAL and \(k\)-LED-PAL can be viewed as a generalization of the classical online palindrome recognition problem (see [17] and references therein).
**Streaming algorithms for PAL and SQ.** Berebrink et al. [6] followed by Gawrychowski et al. [20] studied the question of computing the length of a maximal substring of a stream that belongs to PAL. Merkurev and Shur [29] considered a similar question for the SQ language.
## 2 Preliminaries
We assume to be given an alphabet \(\Sigma\), the elements of which, called _characters_, can be stored in a single machine word of the RAM model. For an integer \(n\geq 0\), we denote the set of all length-\(n\) strings by \(\Sigma^{n}\), and we set \(\Sigma^{\leq n}=\bigcup_{m=0}^{n}\Sigma^{m}\) as well as \(\Sigma^{*}=\bigcup_{n=0}^{\infty}\Sigma^{n}\). The empty string is denoted by \(\varepsilon\).
For two strings \(S,T\in\Sigma^{*}\), we use \(ST\) or \(S\cdot T\) indifferently to denote their concatenation. For an integer \(m\geq 0\), the string obtained by concatenating \(S\) to itself \(m\) times is denoted by \(S^{m}\); note that \(S^{0}=\varepsilon\). A string \(S\) is a _square_ if there exists a string \(T\) such that \(S=T^{2}\).
For a string \(T\in\Sigma^{n}\) and an index \(i\in[1..n]\),2 the \(i\)th character of \(T\) is denoted by \(T[i]\). We use \(|T|=n\) to denote the length of \(T\). For indices \(1\leq i,j\leq n\), \(T[i..j]\) denotes the substring \(T[i]T[i+1]\cdots T[j]\) of \(T\) if \(i\leq j\) and the empty string otherwise. When \(i=1\) or \(j=n\), we omit these indices, i.e., we write \(T[..j]=T[1..j]\) and \(T[i..]=T[i..n]\). We extend the above notation with \(T[i..j]=T[i..j-1]\) and \(T(i..j]=T[i+1..j]\). We say that a string \(P\) is a _prefix_ of \(T\) if there exists \(j\in[1..n]\) such that \(P=T[..j]\), and a _suffix_ of \(T\) if there exists \(i\in[1..n]\) such that \(P=T[i..]\). We use \(T^{R}\) to denote the reverse of \(T\), that is \(T^{R}=T[n]T[n-1]\cdots T[1]\). A string \(T\) is a _palindrome_ if \(T^{R}=T\).
Footnote 2: For integers \(i,j\in\mathbb{Z}\), denote \([i..j]=\{k\in\mathbb{Z}:i\leq k\leq j\}\), \([i..j)=\{k\in\mathbb{Z}:i\leq k<j\}\), and \((i..j]=\{k\in\mathbb{Z}:i<k\leq j\}\).
We define the _forward cyclic rotation_ \(\mathsf{rot}(T)=T[2]\cdots T[n]T[1]\). In general, a cyclic rotation \(\mathsf{rot}^{s}(T)\) with shift \(s\in\mathbb{Z}\) is obtained by iterating \(\mathsf{rot}\) or the inverse operation \(\mathsf{rot}^{-1}\). A non-empty string \(T\in\Sigma^{n}\) is _primitive_ if it is distinct from its non-trivial rotations, i.e., if \(T=\mathsf{rot}^{s}(T)\) holds only when \(n\) divides \(s\).
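As a quick illustration of these two definitions, here is a minimal Python sketch (the helper names are ours, chosen only for this example):

```python
def rot(T: str, s: int = 1) -> str:
    """Cyclic rotation rot^s(T); s = 1 is the forward rotation T[2]...T[n]T[1]."""
    if not T:
        return T
    s %= len(T)                      # negative s gives the inverse rotation rot^{-1}
    return T[s:] + T[:s]

def is_primitive(T: str) -> bool:
    """A non-empty string is primitive iff it differs from all of its non-trivial rotations."""
    return len(T) > 0 and all(rot(T, s) != T for s in range(1, len(T)))

assert rot("abcd") == "bcda"
assert not is_primitive("abab") and is_primitive("aab")
```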
Given two strings \(U,V\) and two indices \(i\in[1..|U|]\) and \(j\in[1..|V|]\), the _longest common prefix_ (LCP) of \(U[i..]\) and \(V[j..]\), denoted \(\mathsf{LCP}(U[i..],V[j..])\), is the length of the longest string that is a prefix of both \(U[i..]\) and \(V[j..]\).
Given two non-empty strings \(U,Q\) and an operator \(F\) defined over pairs of strings, we use the notation \(F(U,Q^{\infty})\) for the application of \(F\) to \(U\) and the prefix of \(Q^{\infty}=QQ\cdots\) that has the same length as \(U\), i.e., \(F(U,Q^{\infty})=F(U,Q^{m}[..|U|])\), where \(m\) is any integer such that \(|Q^{m}|\geq|U|\). We define \(F(Q^{\infty},U)\) symmetrically.
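The two conventions above can be spelled out as follows (illustrative helpers with names of our choosing, not part of the paper's algorithms):

```python
def lcp(U: str, V: str) -> int:
    """Length of the longest common prefix of U and V."""
    m = 0
    for a, b in zip(U, V):
        if a != b:
            break
        m += 1
    return m

def lcp_with_power(U: str, Q: str) -> int:
    """LCP(U, Q^infinity): compare U against the length-|U| prefix of QQQ... (Q non-empty)."""
    reps = -(-len(U) // len(Q))      # ceil(|U| / |Q|) copies of Q suffice
    return lcp(U, (Q * reps)[:len(U)])

assert lcp("abcab", "abd") == 2
assert lcp_with_power("ababb", "ab") == 4
```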
### Hamming distance, palindromes, and squares
The Hamming distance between two strings \(S,T\) (denoted \(\mathsf{hd}(S,T)\)) is defined to be equal to infinity if \(S\) and \(T\) have different lengths, and otherwise to the number of positions where the two strings differ (mismatches). We define the _mismatch information_ between two length-\(n\) strings \(S\) and \(T\), \(\mathsf{MI}(S,T)\), as the set \(\{(i,S[i],T[i]):i\in[1..n]\text{ and }S[i]\neq T[i]\}\). For two strings \(P,T\), a position \(i\in[\left\lvert P\right\rvert..\left\lvert T\right\rvert]\) of \(T\) is a \(k\)_-mismatch occurrence_ of \(P\) in \(T\) if \(\mathsf{hd}(T(i-\left\lvert P\right\rvert..i],P)\leq k\). For an integer \(k\), we denote \(\mathsf{hd}_{\leq k}(X,Y)=\mathsf{hd}(X,Y)\) if \(\mathsf{hd}(X,Y)\leq k\) and \(\infty\) otherwise.
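A direct transcription of these definitions into Python (illustrative only; these helpers are not used by the algorithms below):

```python
import math

def hd(S: str, T: str):
    """Hamming distance: the number of mismatches, or infinity if the lengths differ."""
    if len(S) != len(T):
        return math.inf
    return sum(a != b for a, b in zip(S, T))

def mismatch_info(S: str, T: str):
    """MI(S, T) for equal-length strings: 1-indexed mismatch positions with both characters."""
    return {(i + 1, a, b) for i, (a, b) in enumerate(zip(S, T)) if a != b}

def is_k_mismatch_occurrence(P: str, T: str, i: int, k: int) -> bool:
    """Position i of T is a k-mismatch occurrence of P if hd(T(i-|P|..i], P) <= k."""
    return len(P) <= i <= len(T) and hd(T[i - len(P):i], P) <= k

assert hd("abca", "abba") == 1
assert mismatch_info("abca", "abba") == {(3, "c", "b")}
assert is_k_mismatch_occurrence("ab", "abcacc", 5, 1)    # T(3..5] = "ac"
```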
Due to the self-similarity of palindromes and squares, the Hamming distance from a string \(U\) to PAL and SQ can be measured in terms of the self-similarity of \(U\).
**Property 2.1**.: _Each string \(U\in\Sigma^{m}\) satisfies \(\mathsf{hd}(U,\mathsf{PAL})=\mathsf{hd}(U[\,..\lfloor m/2\rfloor],U(\lceil m/2\rceil..]^{R})=\frac{1}{2}\mathsf{hd}(U,U^{R})\)._
Proof.: Denote \(U_{1}=U[\,..\lfloor m/2\rfloor]\) and \(U_{2}=U(\lceil m/2\rceil..]\). For the second equality, we have \(\mathsf{hd}(U,U^{R})=\mathsf{hd}(U_{1},U_{2}^{R})+\mathsf{hd}(U_{2},U_{1}^{R})=2\cdot\mathsf{hd}(U_{1},U_{2}^{R})\).
The first equality is equivalent to \(\mathsf{hd}(U_{1},U_{2}^{R})=\mathsf{hd}(U,\mathsf{PAL})\). As the Hamming distance between \(U\) and the palindrome \(U_{2}^{R}U_{2}\) (or \(U_{2}^{R}aU_{2}\) if \(m\) is odd) is \(\mathsf{hd}(U_{1},U_{2}^{R})\), we have \(\mathsf{hd}(U_{1},U_{2}^{R})\geq\mathsf{hd}(U,\mathsf{PAL})\).
Conversely, let \(V\) be a palindrome such that \(\mathsf{hd}(U,V)=\mathsf{hd}(U,\mathsf{PAL})\). We decompose similarly \(V\) into \(V_{1}V_{1}^{R}\) (or \(V_{1}bV_{1}^{R}\) for odd \(m\)) and obtain \(\mathsf{hd}(U,V)\geq\mathsf{hd}(U_{1},V_{1})+\mathsf{hd}(U_{2},V_{1}^{R})\). Using the fact that \(\mathsf{hd}(U_{2},V_{1}^{R})=\mathsf{hd}(U_{2}^{R},V_{1})\) and applying the triangle inequality, we get \(\mathsf{hd}(U_{1},U_{2}^{R})\leq\mathsf{hd}(U,\mathsf{PAL})\).
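The identity of Property 2.1 can be sanity-checked exhaustively on small binary strings; the brute-force minimisation over PAL below is exponential and only meant for tiny inputs.

```python
from itertools import product

def hd_eq(S, T):
    return sum(a != b for a, b in zip(S, T))

def dist_to_pal_bruteforce(U, alphabet="ab"):
    """Minimum of hd(U, V) over all palindromes V with |V| = |U| over the given alphabet."""
    return min(hd_eq(U, V) for V in map("".join, product(alphabet, repeat=len(U))) if V == V[::-1])

def dist_to_pal_formula(U):
    """hd(U, PAL) via Property 2.1: first half against the reversed second half."""
    m = len(U)
    return hd_eq(U[:m // 2], U[m - m // 2:][::-1])

for m in range(1, 7):
    for U in map("".join, product("ab", repeat=m)):
        assert dist_to_pal_bruteforce(U) == dist_to_pal_formula(U)
```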
**Property 2.2**.: _Each string \(U\in\Sigma^{m}\) satisfies \(\mathsf{hd}(U,\mathrm{SQ})=\mathsf{hd}(U[..m/2],U(m/2..])\) if \(m\) is even and \(\mathsf{hd}(U,\mathrm{SQ})=\infty\) if \(m\) is odd._
Proof.: Every square has even length; hence, if \(m\) is odd, the distance between \(U\) and SQ is infinite. In what follows, we assume that \(m=2i\) for some \(i\in\mathbb{N}\). Let \(U_{1}=U[\,..\,i]\) and \(U_{2}=U(i..]\). By modifying the copy of \(U_{1}\) in \(U\) into \(U_{2}\), we obtain a square \(U_{2}U_{2}\); hence, \(\mathsf{hd}(U,\mathrm{SQ})\leq\mathsf{hd}(U_{1},U_{2})\).
For the converse inequality, let \(V^{2}\) be a square such that \(\mathsf{hd}(U,\mathrm{SQ})=\mathsf{hd}(U,V^{2})\). We have \(\left\lvert V\right\rvert=\left\lvert U_{1}\right\rvert=\left\lvert U_{2}\right\rvert\); hence, \(\mathsf{hd}(U,V^{2})=\mathsf{hd}(U_{1},V)+\mathsf{hd}(V,U_{2})\). Applying the triangle inequality, we obtain \(\mathsf{hd}(U,\mathrm{SQ})=\mathsf{hd}(U,V^{2})\geq\mathsf{hd}(U_{1},U_{2})\).
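An analogous exhaustive check confirms Property 2.2 (again brute force over a binary alphabet, for tiny lengths only):

```python
from itertools import product

def hd_eq(S, T):
    return sum(a != b for a, b in zip(S, T))

def dist_to_sq_bruteforce(U, alphabet="ab"):
    """Minimum of hd(U, V^2) over all squares of length |U|; infinite for odd |U|."""
    if len(U) % 2 == 1:
        return float("inf")
    return min(hd_eq(U, V * 2) for V in map("".join, product(alphabet, repeat=len(U) // 2)))

def dist_to_sq_formula(U):
    """hd(U, SQ) via Property 2.2: compare the two halves of an even-length U."""
    if len(U) % 2 == 1:
        return float("inf")
    return hd_eq(U[:len(U) // 2], U[len(U) // 2:])

for m in range(1, 7):
    for U in map("".join, product("ab", repeat=m)):
        assert dist_to_sq_bruteforce(U) == dist_to_sq_formula(U)
```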
### Models of computation
In this work, we focus on two by now classical models of computation: streaming and read-only random access. In the streaming model, we assume that the input string \(T\) arrives as a stream, one character at a time. For each prefix \(T[1..i]\), we must report the distance to PAL or SQ as soon as we receive \(T[i]\). We account for all the space used, including the space needed to store any information about \(T\). In contrast, in the read-only model, we do not account for the space occupied by the input string. We assume that \(T\) is read from the left to the right. After having read \(T[1..i]\), we assume to have constant-time read-only random access to the first \(i\) characters of \(T\). Similar to the streaming model, the distance from \(T[1..i]\) to PAL or SQ must be reported as soon as we read \(T[i]\).
## 3 Warm-up: Streaming algorithms for the LHD problems
In this section, we present streaming algorithms for \(k\)-LHD-PAL and \(k\)-LHD-SQ. Our solutions use the Hamming distance sketches introduced by Clifford, Kociumaka, and Porat [12] to solve the streaming \(k\)-mismatch problem.

**Fact 3.1**.: _There exists a function \(\mathsf{sk}_{k}^{\mathsf{hd}}\) (parameterized by a constant \(c>1\), integers \(n\geq k\geq 1\), and a seed of \(O(\log n)\) random bits) that assigns an \(O(k\log n)\)-bit sketch to each string in \(\Sigma^{\leq n}\). Moreover:_
1. There is an \(O(k\log^{2}n)\)-time _encoding_ algorithm that, given \(U\in\Sigma^{\leq k}\), builds \(\mathsf{sk}_{k}^{\mathsf{hd}}(U)\).
2. There is an \(O(k\log n)\)-time algorithm that, given any two among \(\mathsf{sk}_{k}^{\mathsf{hd}}(U),\mathsf{sk}_{k}^{\mathsf{hd}}(V)\), or \(\mathsf{sk}_{k}^{\mathsf{hd}}(UV)\), computes the third one (provided that \(|UV|\leq n\)).
3. There is an \(O(k\log^{3}n)\)-time _decoding_ algorithm that, given \(\mathsf{sk}_{k}^{\mathsf{hd}}(U)\) and \(\mathsf{sk}_{k}^{\mathsf{hd}}(V)\), computes \(\mathsf{MI}(U,V)\) if \(\mathsf{hd}(U,V)\leq k\). The error probability is \(O(n^{-c})\).
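The algorithms in this section use the sketches only through the interface above. Purely for illustration, the toy class below (our own stand-in, not the construction of [12]) mimics that interface by storing the strings explicitly; it reproduces the behaviour of properties 1-3 but offers none of the space, time, or error guarantees.

```python
class ToySketch:
    """Functional stand-in for sk_k^hd: same operations as in Fact 3.1, without its guarantees."""

    def __init__(self, s: str, k: int):
        self.s, self.k = s, k        # a real sketch would keep only O(k log n) bits, not s itself

    @staticmethod
    def concat(sk_u: "ToySketch", sk_v: "ToySketch") -> "ToySketch":
        """Property 2: combine sketches of U and V into a sketch of UV."""
        return ToySketch(sk_u.s + sk_v.s, sk_u.k)

    def decode(self, other: "ToySketch"):
        """Property 3: return MI(U, V) if hd(U, V) <= k, and None otherwise."""
        if len(self.s) != len(other.s):
            return None
        mi = [(i + 1, a, b) for i, (a, b) in enumerate(zip(self.s, other.s)) if a != b]
        return mi if len(mi) <= self.k else None

# Appending one character to a maintained sketch, then decoding against another sketch:
sk = ToySketch("aba", k=2)
sk = ToySketch.concat(sk, ToySketch("c", k=2))   # now a (toy) sketch of "abac"
print(sk.decode(ToySketch("abab", k=2)))         # [(4, 'c', 'b')]
```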
### A streaming algorithm for \(k\)-LHD-PAL
We first show that the sketches described in Fact 3.1 give a simple algorithm improving upon the result of Amir and Porat [3] and achieving a time complexity of \(\tilde{O}(k)\) per letter.
**Theorem 3.2**.: _There is a randomised streaming algorithm that solves the \(k\)-LHD-PAL problem for a string \(T\in\Sigma^{n}\) using \(O(k\log n)\) bits of space and \(O(k\log^{3}n)\) time per character. The algorithm errs with probability inverse-polynomial in \(n\)._
Using Property 2, we can reduce the \(k\)-LHD-PAL problem to that of computing the threshold Hamming distance between the current prefix of the input string and its reverse. The algorithm maintains the sketches \(\mathsf{sk}_{2k}^{\mathsf{hd}}(T[..i])\) and \(\mathsf{sk}_{2k}^{\mathsf{hd}}(T[..i]^{R})\). When it receives \(T[i]\), it constructs \(\mathsf{sk}_{2k}^{\mathsf{hd}}(T[i])\), updates both \(\mathsf{sk}_{2k}^{\mathsf{hd}}(T[..i])\) and \(\mathsf{sk}_{2k}^{\mathsf{hd}}(T[..i]^{R})\), and computes \(d=\mathsf{hd}_{\leq 2k}(T[..i],T[..i]^{R})\) (in \(O(k\log^{3}n)\) total time by Fact 3). Property 2 implies \(\mathsf{hd}_{\leq k}(T[..i],\mathrm{PAL})=d/2\). The error probability of the algorithm follows from the error probability for the decoding algorithm for Hamming distance sketches.
The algorithm uses \(O(k\log n)\) bits, which is nearly optimal: Indeed, by Property 2, if \(U=VW\), with \(|V|=|W|\), then \(\mathsf{hd}(U,U^{R})=2\cdot\mathsf{hd}(V,W^{R})\). Therefore, using a standard reduction from streaming algorithms to one-way communication complexity protocols, we obtain a lower bound of \(\Omega(k)\) bits for the space complexity of streaming algorithms for the \(k\)-LHD-PAL problem from the \(\Omega(k)\) bits lower bound for the communication complexity of the Hamming distance [21].
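The control flow of the algorithm above can be illustrated as follows; in this Python sketch the \(O(k\log n)\)-bit sketches of [12] are replaced by a naive stand-in that stores the prefix and its reverse explicitly, so the snippet reproduces only the reduction of Property 2 and the thresholded reporting, not the space bound.

```python
def hd(u, v):
    return sum(a != b for a, b in zip(u, v))

def stream_lhd_pal(stream, k):
    """For each prefix T[..i], report hd(T[..i], PAL) if it is at most k, else None.

    Naive illustration: the prefix and its reverse are stored explicitly; the real
    algorithm keeps only their O(k log n)-bit sketches and decodes the distance."""
    prefix, rev_prefix = "", ""
    answers = []
    for c in stream:
        prefix = prefix + c          # simulates combining sk(T[..i-1]) with sk(T[i])
        rev_prefix = c + rev_prefix  # simulates combining sk(T[i]) with sk(T[..i-1]^R)
        d = hd(prefix, rev_prefix)   # real algorithm: decode hd_{<=2k} from the sketches
        answers.append(d // 2 if d <= 2 * k else None)  # hd(T[..i], PAL) = d / 2
    return answers

print(stream_lhd_pal("abacaba", k=1))
```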
### A streaming algorithm for \(k\)-Lhd-Sq
In this section, we show the following theorem:
There is a randomised streaming algorithm that solves the \(k\)-LHD-SQ problem for a string \(T\in\Sigma^{n}\) using \(O(k\log^{2}n)\) bits of space and \(\tilde{O}(k)\) time per character. The algorithm errs with probability inverse-polynomial in \(n\).
Property 2 allows us to derive \(\mathsf{hd}_{\leq k}(T[..2i],\mathrm{SQ})\) from the sketches \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[..i])\) and \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[..2i])\): we can combine them to obtain \(\mathsf{sk}_{k}^{\mathsf{hd}}(T(i..2i])\), and a distance computation on \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[..i])\) and \(\mathsf{sk}_{k}^{\mathsf{hd}}(T(i..2i])\) returns \(\mathsf{hd}_{\leq k}(T[..i],T(i..2i])=\mathsf{hd}_{\leq k}(T[..2i],\mathrm{SQ})\).
Naively applying this procedure requires storing the sketch \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[..i])\) until the algorithm has read \(T[..2i]\), that is, storing \(\Theta(n)\) sketches at the same time. To reduce the number of sketches stored, we use a filtering procedure based on the following observation:
**Observation 3.4**.: _If \(\mathsf{hd}(T[\mathinner{\ldotp}.2i],SQ)\leq k\) and \(\ell\in[1\mathinner{\ldotp}.i]\), then \(i+\ell\) is a \(k\)-mismatch occurrence of \(T[\mathinner{\ldotp}.\ell]\), that is, \(\mathsf{hd}(T[\mathinner{\ldotp}.\ell],T(i\mathinner{\ldotp}.i+\ell])\leq k\)._
For example, for \(k=1,\ell=2\) and \(i=3\), the word \(S=abc\mathbf{ac}c\) is a \(1\)-mismatch square (by Property 2.2) and the substring \(P^{\prime}=\mathbf{ac}\) is a \(1\)-mismatch occurrence of the prefix \(P=ab\) of \(S\).
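Observation 3.4 holds because the mismatches between \(T[..\ell]\) and \(T(i..i+\ell]\) form a subset of those between the two halves of \(T[..2i]\); the following brute-force Python check on random binary strings is a sanity test of this statement.

```python
import random

def hd(u, v):
    return sum(a != b for a, b in zip(u, v))

def check_observation(trials=200, n=24, k=2, alphabet="ab"):
    """If hd(T[..2i], SQ) <= k, then for every l <= i the prefix T[..l] has a
    k-mismatch occurrence ending at position i + l, i.e. hd(T[..l], T(i..i+l]) <= k."""
    for _ in range(trials):
        t = "".join(random.choice(alphabet) for _ in range(n))
        for i in range(1, n // 2 + 1):
            if hd(t[:i], t[i:2 * i]) <= k:          # T[..2i] is a k-mismatch square
                for l in range(1, i + 1):
                    assert hd(t[:l], t[i:i + l]) <= k
    return True

print(check_observation())
```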
This observation motivates our filtering procedure: if we choose some prefix \(P=T[\mathinner{\ldotp}.\ell]\) of the string, we only need to store every \(i\leq\ell\) such that \(i+\ell\) is a \(k\)-mismatch occurrence of \(P\). Clifford, Kociumaka and Porat [12] showed a data structure \(\mathcal{S}\) that exploits the structure of such occurrences and stores them using \(O(k\log^{2}n)\) bits of space while allowing reporting the occurrence at position \(i+\ell\) when \(T[i+\ell+\Delta]\) is pushed into \(\mathcal{S}\) - we say that \(\mathcal{S}\) reports the \(k\)-mismatch occurrences of \(P\) in \(T\) with _a constant delay_\(\Delta\)[12]. Our algorithm needs to receive the occurrence at position \(i+\ell\) when \(T[2i]\) is pushed into the stream, i.e. we require \(\mathcal{S}\) to report occurrences with a _non-decreasing_ delay. In Section 3.2.1 we present a modification of the data structure [12] to allow non-decreasing delays, and in Section 3.2.2 we explain how we use it to implement a space-efficient streaming algorithm for \(k\)-LHD-SQ.
#### Reporting \(k\)-mismatch occurrences with nondecreasing delay.
The algorithm of Clifford, Kociumaka, and Porat [12] reports additional information along with the position of the \(k\)-mismatch occurrences: it produces the _stream of \(k\)-mismatch occurrences of \(P\) in \(T\)_, defined as follows.
**Definition 3.5** ([12, Definition 3.2]).: _The stream of \(k\)-mismatch occurrences of a pattern \(P\) in a text \(T\) is a sequence \(S_{P}^{k}\) such that \(S_{P}^{k}[i]=(i,\ \mathsf{MI}(T(i-|P|\mathinner{\ldotp}.i],P),\ \mathsf{sk}_{k}^{\mathsf{hd}}(T[ \mathinner{\ldotp}.i-|P|]))\) if \(\mathsf{hd}(P,T(i-|P|\mathinner{\ldotp}.i])\leq k\) and \(\bot\) otherwise._
As explained next, the algorithm of [12] can report the \(k\)-mismatch occurrences with a prescribed delay.
**Corollary 3.6** (of [12]).: _There is a streaming algorithm that, given a pattern \(P\) followed by a text \(T\in\Sigma^{n}\), reports the \(k\)-mismatch occurrences of \(P\) in \(T\) using \(O(k\log^{2}n)\) bits of space and \(O(\sqrt{k\log^{3}n}+\log^{4}n)\) time per character. The algorithm can report each occurrence \(i\) with no delay (that is, upon receiving \(T[i]\)) or with any prescribed delay \(\Delta=\Theta(|P|)\) (that is, upon receiving \(T[i+\Delta]\)). For each reported occurrence \(i\), the underlying tuple \(S_{P}^{k}[i]\) can be provided on request in \(O(k\log^{2}n)\) time._
Proof.: If no delay is required, we use [12, Theorem 1.2], which reports \(k\)-mismatch occurrences of \(P\) in \(T\) and, upon request, provides the mismatch information \(\mathsf{MI}(T(i-|P|\mathinner{\ldotp}.i],P)\); this algorithm uses \(O(k\log^{2}n)\) bits of space and takes \(O(\sqrt{k\log^{3}n}+\log^{4}n)\) time per character. We also use [12, Fact 4.4] to maintain the sketch \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[\mathinner{\ldotp}.i])\) (reported on request); this algorithm uses \(O(k\log n)\) bits of space and takes \(O(\log^{2}n)\) time per character.
Whenever requested to provide \(S_{P}^{k}[i]\) for some \(k\)-mismatch occurrence \(i\) of \(P\) in \(T\), we retrieve the mismatch information \(\mathsf{MI}(T(i-|P|\mathinner{\ldotp}.i],P)\) (in \(O(k)\) time) and the sketch \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[\mathinner{\ldotp}.i])\) (in \(O(k\log^{2}n)\) time). Combining \(\mathsf{sk}_{k}^{\mathsf{hd}}(P)\) with \(\mathsf{MI}(T(i-|P|\mathinner{\ldotp}.i],P)\), we build \(\mathsf{sk}_{k}^{\mathsf{hd}}(T(i-|P|\mathinner{\ldotp}.i])\) (using [12, Lemma 6.4] in \(O(k\log^{2}n)\) time) and then derive \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[\mathinner{\ldotp}.i-|P|])\) using Fact 3.1 (in \(O(k\log n)\) time). Overall, processing the request takes \(O(k\log^{2}n)\) time and \(O(k\log^{2}n)\) bits of space.
If a delay \(\Delta=\Theta(|P|)\) is required, our approach depends on whether there exists \(p\in[1\mathinner{\ldotp}.k]\) such that \(\mathsf{hd}(P[\mathinner{\ldotp}.|P|-p],P(p.\mathinner{\ldotp}.|P|])\leq 2k\) (such \(p\) is called a \(2k\)-period in [12]). This property is tested using a streaming algorithm of [12, Lemma 4.3], which takes \(O(k\log n)\) bits of
space, \(O(\sqrt{k\log n})\) time per character of \(P\), and requires \(O(k\sqrt{k\log n})\)-time post-processing (performed while reading \(T[\,.\,k]\)). If \(P\) satisfies this condition, then we just use [12, Theorem 4.2], whose statement matches that of Corollary 3.
Otherwise, [12, Observation 4.1] shows that \(P\) has at most one \(k\)-mismatch occurrence among any \(k\) consecutive positions in \(T\). In that case, we use the aforementioned approach to produce the stream \(S_{P}^{k}\) with no delay and the buffer of [12, Proposition 3.3] to delay the stream by \(\Delta\) characters. The buffering algorithm takes \(O(k\log^{2}n)\) bits of space and processes each character \(T[i]\) in \(O(k\log^{2}n+\log^{3}n)\) time (if \(P\) has \(k\)-mismatch occurrences at positions \(i\) or \(i-\Delta\)) or \(O(\sqrt{k\log n}+\log^{3}n)\) time (otherwise). Since the former case holds for at most two out of every \(k\) consecutive positions, we can achieve \(O(\sqrt{k\log^{3}n}+\log^{4}n)\) worst-case time per character by decreasing the delay to \(\Delta-k\) and buffering up to \(k\) characters of \(T\) and up to \(k\) elements of \(S_{P}^{k}\). While the algorithm processes \(T[i+\Delta]\), the latter buffer already contains \(S_{P}^{k}[i]\), but \(O(k)\) time is still needed to output this value (if \(S_{P}^{k}[i]\neq\bot\)).
The algorithm of Corollary 3 has a fixed delay \(\Delta\), i.e., it outputs \(S_{P}^{k}[i]\) upon receiving \(T[i+\Delta]\). Our application requires a variable delay: we need to access \(S_{P}^{k}[i+|P|]\) upon reading \(T[2i]\). We present a black-box construction that extends the data structure of Corollary 3 to support non-decreasing delays \(\Delta_{i}\), \(i\in[1\,.\,d]\). Naively, one could use the algorithm \(\mathcal{A}\) of Corollary 3 with a fixed delay \(\Delta_{1}\) and buffer the input characters so that \(\mathcal{A}\) receives \(T[i+\Delta_{1}]\) only when we actually process \(T[i+\Delta_{i}]\). Unfortunately, this requires storing \(T[i+\Delta_{1}.\,i+\Delta_{i})\), which could take too much space. Thus, we feed \(\mathcal{A}\) with \(T[1\,.\,\Delta_{1}]\) followed by blank characters \(\bot\) (issued at appropriate time steps without the necessity of buffering input characters) so that \(\mathcal{A}\) reports \(k\)-mismatch occurrences \(i\in[1\,.\,\Delta_{1}]\) with prescribed delays. Then, we use another instance of the algorithm of Corollary 3, with a fixed delay \(\Delta_{1+\Delta_{1}}\), to output \(k\)-mismatch occurrences \(i\in(\Delta_{1}.\,.\,\Delta_{1}+\Delta_{1+\Delta_{1}}]\); we continue this way until the whole interval \([1\,.\,d]\) is covered. We formalise this idea in the following lemma.
Let \(\Delta_{1}\leq\Delta_{2}\leq\dots\leq\Delta_{d}\) be a non-decreasing sequence of \(d=O(|P|)\) integers \(\Delta_{i}=\Theta(|P|)\), represented by an oracle that reports each element \(\Delta_{i}\) in constant time.
There is a streaming algorithm that, given a pattern \(P\) followed by a text \(T\), reports the \(k\)-mismatch occurrences of \(P\) in \(T\) using \(O(k\log^{2}n)\) bits of space and \(O(\sqrt{k\log^{3}n}+\log^{4}n)\) time per character. The algorithm reports each occurrence \(i\in[1\,.\,d]\) with delay \(\Delta_{i}\), that is, upon receiving \(T[i+\Delta_{i}]\). For each reported occurrence \(i\in[1\,.\,d]\), the underlying tuple \(S_{P}^{k}[i]\) can be provided on request in \(O(k\log n)\) time.
Proof.: We use multiple instances \(\mathcal{A}_{1},\dots,\mathcal{A}_{t}\) of the algorithm of Corollary 3. We define a sequence \((s_{r})_{r=0}^{t}\) so that \(\mathcal{A}_{r}\) works with a fixed delay \(\Delta_{s_{r-1}}\), it is given \(T[1\,.\,s_{r})\cdot\bot^{\Delta_{s_{r-1}}}\), and it reports \(k\)-mismatch occurrences \(i\in[s_{r-1}\,.\,s_{r})\). Specifically, we set \(s_{0}=1\) and \(s_{r}=s_{r-1}+\Delta_{s_{r-1}}\), with \(t\) chosen as the smallest integer such that \(s_{t}>d\). Note that \(s_{r}-s_{r-1}=\Delta_{s_{r-1}}\geq\Delta_{1}\) implies \(t\leq 1+\frac{d}{\Delta_{1}}=O(1)\).
We assign three different roles to the algorithms \(\mathcal{A}_{1},\dots,\mathcal{A}_{t}\): _passive_, _active_, and _inactive_. While we process \(T[j]\), the algorithm \(\mathcal{A}_{r}\) is passive if \(j<s_{r}\), active if \(j\in[s_{r}\,..\,s_{r+1})\), and inactive if \(j\geq s_{r+1}\). Our invariant is that, once we process \(T[j]\), each passive algorithm \(\mathcal{A}_{r}\) has already received \(T[1\,..\,j]\), the unique active algorithm \(\mathcal{A}_{r}\) has already received \(T[1\,..\,s_{r})\cdot\bot^{1+i-s_{r-1}}\), where \(i\) is the largest integer such that \(i+\Delta_{i}\leq j\), and each inactive algorithm \(\mathcal{A}_{r}\) has already received its entire input, that is, \(T[1\,..\,s_{r})\cdot\bot^{\Delta_{s_{r-1}}}\).
Upon receiving \(T[j]\), we simply forward \(T[j]\) to all passive algorithms. Moreover, if \(j=i+\Delta_{i}\) for some \(i\in[1\,.\,d]\), we feed the active algorithm with \(\bot\) so that it checks whether \(i\) is a \(k\)-mismatch occurrence of \(P\) in \(T\) and, upon request, outputs \(S_{P}^{k}[i]\).
Let us argue that this approach is correct from the perspective of a fixed algorithm \(\mathcal{A}_{r}\). As we process \(T[1..s_{r})\), the algorithm is passive, and it is fed with subsequent characters of \(T\). For \(j=s_{r}-1\), the position \(i=s_{r-1}-1\) is the maximum one such that \(i+\Delta_{i}\leq j\). Consequently, the input \(T[1..s_{r})\) already satisfies the invariant for passive algorithms. For subsequent iterations \(j\in[s_{r}..s_{r+1})\), as \(\mathcal{A}_{r}\) is active, it receives \(\bot\) whenever \(i\) increases, so its input stays equal to \(T[1..s_{r})\cdot\bot^{1+i-s_{r-1}}\). The length of this string is \(s_{r}+i-s_{r-1}=i+\Delta_{s_{r-1}}\), so the algorithm indeed checks whether \(i\) is a \(k\)-mismatch occurrence of \(P\) in \(T\) at each such iteration (recall that its fixed delay is \(\Delta_{s_{r-1}}\)), and it satisfies the invariant for active algorithms. Once we reach \(j=s_{r+1}-1\), we have \(i=s_{r}-1=s_{r-1}+\Delta_{s_{r-1}}-1\), so the input becomes \(T[1..s_{r})\cdot\bot^{\Delta_{s_{r-1}}}\), and it already satisfies the invariant for inactive algorithms. The state of inactive algorithms does not change, so this invariant remains satisfied as \(\mathcal{A}_{r}\) stays inactive indefinitely.
The time and space complexity follow from the fact that \(t=O(1)\).
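The breakpoints \(s_{0},s_{1},\dots,s_{t}\) used in the proof are obtained greedily; the short Python snippet below computes them for a concrete, hypothetical non-decreasing delay sequence with \(\Delta_{i}=\Theta(|P|)\), illustrating that only a constant number of instances is needed.

```python
def instance_breakpoints(deltas):
    """Breakpoints s_0 < s_1 < ... < s_t with s_r = s_{r-1} + Delta_{s_{r-1}},
    covering positions 1..d as in the proof (deltas is 1-based via index shifting)."""
    d = len(deltas)
    s = [1]
    while s[-1] <= d:
        s.append(s[-1] + deltas[s[-1] - 1])
    return s  # positions s[r-1]..s[r]-1 are handled by instance A_r

# Example: |P| = 100 and non-decreasing delays Delta_i = 50 + i // 2 for i = 1..d.
p_len = 100
deltas = [50 + (i + 1) // 2 for i in range(p_len)]
s = instance_breakpoints(deltas)
print(s, "number of instances:", len(s) - 1)
```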
#### Algorithm
We now show how to use the data structure of Lemma 3.2 to implement our filtering procedure using low space.
**Claim 3.8**.: For each \(j\in[1..[\log n]]\), let \(\mathsf{Occ}_{j}\) be the set of \(k\)-mismatch occurrences of \(P_{j}\) in \(T_{j}=T[3\ell_{j}/2..4\ell_{j})\). If \(\mathsf{hd}(T[..2i],\mathrm{SQ})\leq k\) and \(2i\in[3\ell_{j}..6\ell_{j})\), then \(p=i-\ell_{j}/2\in\mathsf{Occ}_{j}\).
Proof.: Since \(\ell_{j}\leq i\), Observation 3.4 implies that \(i+\ell_{j}\) is a \(k\)-mismatch occurrence of \(P_{j}\) in \(T\). Moreover, when \(2i\in[3\ell_{j}..6\ell_{j})\), we have \(3\ell_{j}/2\leq i<3\ell_{j}\); therefore, that \(k\)-mismatch occurrence of \(P_{j}\) is fully contained within \(T_{j}\), and it appears at index \(i+\ell_{j}-3\ell_{j}/2=i-\ell_{j}/2\) in \(T_{j}\).
In what follows, we use \(p\) to denote indices in \(T_{j}\) and \(i\) to denote indices in the original text \(T\). As \(T_{j}=T(3\ell_{j}/2..4\ell_{j})\), the correspondence is given by \(i=p+3\ell_{j}/2\). In other words, we only need to compute \(\mathsf{hd}_{\leq k}(T[..2i],\mathrm{SQ})\) when \(i-\ell_{j}/2\in\mathsf{Occ}_{j}\). As noted in Property 2.2, it suffices to know the sketches \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[..2i])\) and \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[..i])\). We store \(\mathsf{sk}_{k}^{\mathsf{hd}}(P_{j})=\mathsf{sk}_{k}^{\mathsf{hd}}(T[..\ell_{j}])\) and maintain \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[..2i])\) in a rolling manner as we receive the characters of the text.
We use the algorithm of Lemma 3.2, asking for \(k\)-mismatch occurrences of \(P_{j}\) in \(T_{j}\), to report \(\mathsf{sk}_{k}^{\mathsf{hd}}(T_{j}[..p-\ell_{j}])=\mathsf{sk}_{k}^{\mathsf{hd}}(T(3\ell_{j}/2..p+\ell_{j}/2])\) for every \(p\in\mathsf{Occ}_{j}\). We specify the delay sequence as \(\Delta_{p}=p-\ell_{j}/2\) for \(p\in[\ell_{j}..5\ell_{j}/2)\) so that the conditions of Lemma 3.2 are satisfied. (Note that \(\Delta_{p}\) does not need to be defined for \(p<\ell_{j}\), as there cannot be a \(k\)-mismatch occurrence of \(P_{j}\) before position \(\ell_{j}\).) This way, for every \(i\in[3\ell_{j}/2..3\ell_{j})\), we receive \(S_{P_{j}}^{k}[i+\ell_{j}]\) (which corresponds to a potential \(k\)-mismatch occurrence starting at position \(i+1\)) while processing \(T_{j}[p+\Delta_{p}]\) for \(p=i+\ell_{j}-3\ell_{j}/2=i-\ell_{j}/2\). As \(\Delta_{p}=p-\ell_{j}/2\), this corresponds to position \(p^{\prime}=2p-\ell_{j}/2\) in \(T_{j}\), or position \(i^{\prime}=2p+\ell_{j}=2i\) in \(T\), i.e., this happens precisely as we are processing \(T[2i]\). See Figure 1 for an illustration of the above. If \(S_{P_{j}}^{k}[i+\ell_{j}]\) is blank, we move on to the next position. Otherwise, we retrieve the sketch \(\mathsf{sk}_{k}^{\mathsf{hd}}(T_{j}[..p-\ell_{j}])=\mathsf{sk}_{k}^{\mathsf{hd}}(T(3\ell_{j}/2..i])\), combine it with \(s_{j}=\mathsf{sk}_{k}^{\mathsf{hd}}(T[..3\ell_{j}/2])\) (stored separately) and \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[..2i])\) to obtain \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[..i])\) and \(\mathsf{sk}_{k}^{\mathsf{hd}}(T(i..2i])\), and use the latter two sketches to compute \(\mathsf{hd}_{\leq k}(T[..i],T(i..2i])\), which is equal to \(\mathsf{hd}_{\leq k}(T[..2i],\mathrm{SQ})\) by Property 2.2.
We proceed with the complexity analysis of our algorithm. The \(k\)-mismatch pattern matching algorithm of Lemma 3.2 uses \(O(k\log^{2}n)\) bits of space and \(\tilde{O}(k)\) time per character, and we maintain \(O(\log n)\) instances of this algorithm. However, since all the patterns \(P_{j}\) are prefixes of \(T\), the instances can share the pattern processing phase. Moreover, since \(P_{j}\cdot T_{j}\) are prefixes of \(T\) and any position is contained in at most three fragments \(T_{j}\), at most three instances contribute to the time and space complexity at any given moment. Thus, the entire
algorithm uses \(O(k\log^{2}n)\) bits of space and \(\tilde{O}(k)\) time per character, which completes the proof of Theorem 3.3.
Our streaming algorithm for \(k\)-LED-SQ (Theorem 5.2) relies on the streaming algorithm for \(k\)-LHD-SQ. It requires testing \(\mathsf{hd}(T[\,.\,2i],\mathrm{SQ})\leq k\) only for selected positions \(i\), and thus it benefits from the following variant of Theorem 3.3:
There is a randomised streaming algorithm that, given a string \(T\in\Sigma^{n}\), upon receiving \(T[2i]\), can be requested to test whether \(\mathsf{hd}(T[\,.\,2i],SQ)\leq k\) and, if so, report the mismatch information between \(T[\,.\,2i]\) and a closest square. The algorithm uses \(O(k\log^{2}n)\) bits of space and processes each character in \(\tilde{O}(\sqrt{k})\) or \(\tilde{O}(k)\) time, depending on whether the request has been issued at that character.
Proof.: We follow the algorithm behind Theorem 3.3 with minor modifications. First, instead of maintaining \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[\,.\,2i])\) explicitly, we apply [12, Fact 4.4], which uses \(O(k\log n)\) bits of space, takes \(O(\log^{2}n)\) time per character, and reports \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[\,.\,2i])\) on demand in \(O(k\log^{2}n)\) time.
To process a request concerning position \(2i\), we retrieve \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[\,.\,2i])\) and ask the pattern-matching algorithm of Lemma 3 to output \(S_{P_{j}}^{k}[i]\) (normally, the algorithm only reports whether \(i\) is a \(k\)-mismatch occurrence of \(P_{j}\) in \(T_{j}\)). In this case, we build \(\mathsf{sk}_{k}^{\mathsf{hd}}(T[\,.\,i])\) and \(\mathsf{sk}_{k}^{\mathsf{hd}}(T(i\,.\,2i])\) as in the proof of Theorem 3.3. The decoding algorithm not only results in \(\mathsf{hd}_{\leq k}(T[\,.\,i],T(i\,.\,2i])=\mathsf{hd}_{\leq k}(T[\,.\,2i],\mathrm{SQ})\) but, if \(\mathsf{hd}(T[\,.\,2i],\mathrm{SQ})\leq k\), also the underlying mismatch information.
The space complexity of the modified algorithm is still \(O(k\log^{2}n)\) bits. The running time is \(\tilde{O}(\sqrt{k})\) if we do not ask the algorithm to test \(\mathsf{hd}(T[\,.\,2i],\mathrm{SQ})\leq k\) and \(\tilde{O}(k)\) if we do.
## 4 Deterministic read-only algorithms for the LHD problems
In this section, we present deterministic read-only algorithms for \(k\)-LHD-PAL and \(k\)-LHD-SQ. We start by recalling structural results for \(k\)-mismatch occurrences used by the algorithms.
### Structure of \(k\)-mismatch occurrences
[[10]] A string \(U\) is \(d\)-mismatch periodic if there exists a primitive string \(Q\) such that \(|Q|\leq|U|/128d\) and \(\mathsf{hd}(U,Q^{\infty})\leq 2d\). Such a string \(Q\) is called the \(d\)-mismatch period of \(U\).
Figure 1: Illustration of our filtering procedure. Here, \(P^{\prime}\) is a \(k\)-mismatch occurrence of \(P_{j}\) at position \(i+\ell_{j}\) in \(T\) and position \(p=i-\ell_{j}/2\) in \(T_{j}\), reported with delay \(\Delta_{p}=p-\ell_{j}/2\) in \(T_{j}\), hence it arrives at time \(2i\) in \(T\).
The condition \(|Q|\leq|U|/128d\) implies that \(Q\) is equal to some substring of \(U\); hence, given the starting and ending positions of \(Q\) in \(U\) and random access to \(U\), we can simulate random access to \(Q\).
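For a given candidate \(Q\), the defining condition is straightforward to evaluate; the following Python snippet checks it directly (this is only a check of the definition for a supplied \(Q\), not the efficient period-finding procedure of [10]).

```python
def hd_to_power(u: str, q: str) -> int:
    """Hamming distance between U and the length-|U| prefix of Q^infinity."""
    return sum(u[i] != q[i % len(q)] for i in range(len(u)))

def is_d_mismatch_periodic_with(u: str, q: str, d: int) -> bool:
    """Check the definition for a given (primitive) candidate Q:
    |Q| <= |U| / (128 d) and hd(U, Q^infinity) <= 2d."""
    return len(q) * 128 * d <= len(u) and hd_to_power(u, q) <= 2 * d

u = ("ab" * 200)[:384]          # nearly periodic string with period "ab"
u = u[:10] + "x" + u[11:]        # introduce a single mismatch
print(is_d_mismatch_periodic_with(u, "ab", d=1))
```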
[From [22, Claim 7.1]] Let \(U\) and \(V\) be strings such that \(U\) is a prefix of \(V\), and \(|V|\leq 2|U|\). If \(U\) is \(d\)-mismatch periodic with \(d\)-mismatch period \(Q\), then \(V\) either is not \(d\)-mismatch periodic or has \(d\)-mismatch period \(Q\).
Charalampopoulos, Kociumaka, and Wellnitz [10] showed that the set of \(k\)-mismatch occurrences has a very regular structure:
[See [10, Section 3]] Let \(P\) and \(T\) be two strings such that \(|P|\leq|T|\leq 3/2|P|\).
1. If \(P\) is not \(k\)-mismatch periodic, then there are \(O(k)\)\(k\)-mismatch occurrences of \(P\) in \(T\).
2. If \(P\) is \(k\)-mismatch periodic with period \(Q\), then any two \(k\)-mismatch occurrences \(i\leq i^{\prime}\) of \(P\) in \(T\) satisfy \(i\equiv i^{\prime}\pmod{|Q|}\) and \(\mathsf{hd}(T(i-|P|..i^{\prime}],Q^{\infty})\leq 3k\).
They also presented efficient offline algorithms for computing the \(k\)-mismatch period and the \(k\)-mismatch occurrences in the so-called PILLAR model. In this model, one is given a family of strings \(\mathcal{X}\) for preprocessing. The elementary objects are fragments \(X[i..j]\) of strings \(X\in\mathcal{X}\). Given elementary objects \(S,S_{1},S_{2}\), the PILLAR operations are:
1. \(\mathsf{Access}(S,i)\): Assuming \(i\in[1..|S|]\), retrieve \(S[i]\).
2. \(\mathsf{Length}(S)\): Retrieve the length \(|S|\) of \(S\).
3. \(\mathsf{LCP}(S_{1},S_{2})\): Compute the length of the longest common prefix of \(S_{1}\) and \(S_{2}\).
4. \(\mathsf{LCP}^{R}(S_{1},S_{2})\): Compute the length of the longest common suffix of \(S_{1}\) and \(S_{2}\).
5. \(\mathsf{IPM}(S_{1},S_{2})\): Assuming that \(|S_{2}|\leq 2|S_{1}|\), compute the set of the starting positions of occurrences of \(S_{1}\) in \(S_{2}\), which by Fine and Wilf periodicity lemma [15] can be represented as one arithmetic progression.
In the read-only model, operations \(\mathsf{Access}\) and \(\mathsf{Length}\) can be implemented in constant time and \(O(\log m)\) bits. The operations \(\mathsf{LCP}\) and \(\mathsf{LCP}^{R}\) can be implemented naively via character-by-character comparison in \(O(\min\{|S_{1}|,|S_{2}|\})\) total time and \(O(\log m)\) bits. Finally, the \(\mathsf{IPM}\) operation can be implemented in \(O(|S_{1}|+|S_{2}|)\) total time and \(O(\log m)\) bits (see e.g. [34]).
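A possible read-only rendering of these primitives in Python is sketched below; fragments are referenced by their endpoints, \(\mathsf{LCP}\), \(\mathsf{LCP}^{R}\), and \(\mathsf{IPM}\) are implemented by the naive scans mentioned above, and all names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    """Elementary PILLAR object: a fragment X[i..j] of a stored string (1-based, inclusive)."""
    x: str
    i: int
    j: int

    def access(self, p: int) -> str:           # Access(S, p)
        return self.x[self.i + p - 2]

    def length(self) -> int:                   # Length(S)
        return self.j - self.i + 1

def lcp(s1: Fragment, s2: Fragment) -> int:    # LCP: longest common prefix, naive scan
    l = 0
    while l < min(s1.length(), s2.length()) and s1.access(l + 1) == s2.access(l + 1):
        l += 1
    return l

def lcp_r(s1: Fragment, s2: Fragment) -> int:  # LCP^R: longest common suffix, naive scan
    l = 0
    while (l < min(s1.length(), s2.length())
           and s1.access(s1.length() - l) == s2.access(s2.length() - l)):
        l += 1
    return l

def ipm(s1: Fragment, s2: Fragment):           # IPM: occurrences of S1 in S2 (naive scan)
    p1 = "".join(s1.access(p) for p in range(1, s1.length() + 1))
    p2 = "".join(s2.access(p) for p in range(1, s2.length() + 1))
    # Assuming |S2| <= 2|S1|, the output forms one arithmetic progression (Fine and Wilf).
    return [p for p in range(1, len(p2) - len(p1) + 2) if p2[p - 1:p - 1 + len(p1)] == p1]

t = "abaababaaba"
print(lcp(Fragment(t, 1, 5), Fragment(t, 4, 11)))   # compare "abaab" with "ababaaba"
print(ipm(Fragment(t, 1, 5), Fragment(t, 1, 10)))   # occurrences of "abaab" in t[1..10]
```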
As a corollary, we immediately obtain: [From [10, Lemma 4.4]] Given random access to a string \(U\), testing whether it is \(d\)-mismatch periodic, and, if so, computing its \(d\)-mismatch period, can be done using \(O(d|U|)\) time and \(O(d)\) space.
### Read-only algorithm for the pattern matching with \(k\) mismatches
The above implementation of the PILLAR operations further implies an offline algorithm that finds all \(k\)-mismatch occurrences of \(P\) in \(T\) in \(\tilde{O}(k^{2}\cdot|T|)\) time and \(\tilde{O}(k^{2})\) space (see [10, Main Theorem 8]). Nevertheless, we provide a more efficient online algorithm that additionally provides the mismatch information for every \(k\)-mismatch occurrence of \(P\).
There is a deterministic online algorithm that finds all \(k\)-mismatch occurrences of a length-\(m\) pattern \(P\) within a text \(T\) using \(O(k\log m)\) space and \(O(k\log m)\) worst-case time per character. The algorithm outputs the mismatch information along with every reported \(k\)-mismatch occurrence of \(P\).
Consistently with the streaming algorithm of [12], our algorithm uses a family of exponentially-growing prefixes to filter out candidate positions. However, in order to use the structural properties of Fact 4.3 efficiently, we construct a different family \(\mathcal{P}\) to ensure
that we are either working in an approximately periodic region of the text or processing an aperiodic prefix.
We first add to \(\mathcal{P}\) the prefixes \(R_{j}=P[\,.\,.
reads \(T(rb-\ell_{j+1}+\ell_{j}\,..\,(r+1)b]\); since \(\ell_{j+1}\leq\frac{3}{2}\ell_{j}\), at most two subroutines are active at any given time. The implementation of the subroutine depends on whether \(P_{j}\) is \(k\)-mismatch periodic or not.
\(P_{j}\) is not \(k\)-mismatch periodic.In this case, for every received \(k\)-mismatch occurrence \(i^{\prime}\) of \(P_{j}\), the subroutine stores the mismatch information \(\mathsf{MI}(T(i^{\prime}-\ell_{j}\,..\,i^{\prime}],P_{j})\) and, as the algorithm receives subsequent characters \(T[i]\) for \(i\in(i^{\prime}\,..\,i^{\prime}+\ell_{j+1}-\ell_{j}]\), we maintain \(\mathsf{MI}(T(i^{\prime}-\ell_{j}\,..\,i],P[..\,\ell_{j}+i-i^{\prime}])\) as long as there are at most \(k\) mismatches. If this is still the case for \(i=i^{\prime}+\ell_{j+1}-\ell_{j}\), we report a \(k\)-mismatch occurrence of \(P_{j+1}\) and output \(\mathsf{MI}(T(i^{\prime}-\ell_{j}\,..\,i],P[..\,\ell_{j}+i-i^{\prime}])= \mathsf{MI}(T(i-\ell_{j+1}\,..\,i],P_{j+1})\). By Observation 4.7, no \(k\)-mismatch occurrence of \(P_{j+1}\) is missed. Moreover, Fact 4.3 guarantees that the subroutine receives \(O(k)\)\(k\)-mismatch occurrences of \(P_{j}\), and thus it uses \(O(k)\) space and \(O(k)\) time per character.
\(P_{j}\) is \(k\)-mismatch periodic with period \(Q_{j}\). In this case, we wait for the leftmost \(k\)-mismatch occurrence \(p\in(rb-\ell_{j+1}+\ell_{j}\,..\,(r+1)b-\ell_{j+1}+\ell_{j}]\) of \(P_{j}\) and ignore all the subsequent occurrences of \(P_{j}\). We use the received mismatch information \(\mathsf{MI}(T(p-\ell_{j}\,..\,p],P_{j})\) and the preprocessed mismatch information \(\mathsf{MI}(P_{j+1},Q_{j}^{\infty})\) to construct \(\mathsf{MI}(T(p-\ell_{j}\,..\,p],Q_{j}^{\infty})\); by the triangle inequality, the size of this set is guaranteed to be at most \(3k\). As the algorithm receives subsequent characters of \(T[i]\) for \(i\in(p\,..\,(r+1)b]\), we maintain \(\mathsf{MI}(T(p-\ell_{j}\,..\,i],Q_{j}^{\infty})\) as long as the number of mismatches does not exceed \(6k+1\). Whenever \(i\geq p+\ell_{j+1}-\ell_{j}\) and \(i\equiv p+\ell_{j+1}-\ell_{j}\pmod{|Q_{j}|}\), we extract \(\mathsf{MI}(T(i-\ell_{j+1}\,..\,i],Q_{j}^{\infty})\) from \(\mathsf{MI}(T(p-\ell_{j}\,..\,i],Q_{j}^{\infty})\) and use the precomputed mismatch information \(\mathsf{MI}(P_{j+1},Q_{j}^{\infty})\) to construct \(\mathsf{MI}(T(i-\ell_{j+1}\,..\,i],P_{j+1})\). If it is of size at most \(k\), we report \(i\) as a \(k\)-mismatch occurrence of \(P_{j+1}\).
As for the correctness, we argue that we miss no \(k\)-mismatch occurrence \(i\in(rb\,..\,(r+1)b]\) of \(P_{j+1}\) in \(T\). Since \(\mathsf{hd}(T(i-\ell_{j+1}\,..\,i],P_{j+1})\leq k\) and \(\mathsf{hd}(P_{j+1},Q_{j}^{\infty})\leq 2k+1\), we have \(\mathsf{hd}(T(i-\ell_{j+1}\,..\,i],Q_{j}^{\infty})\leq 3k+1\). Moreover, by Observation 4.7, \(i-\ell_{j+1}+\ell_{j}\) is a \(k\)-mismatch occurrence of \(P_{j}\). Fact 4.3 further implies that \(i-\ell_{j+1}+\ell_{j}\equiv p\pmod{|Q_{j}|}\) and \(\mathsf{hd}(T(p-\ell_{j}\,..\,i-\ell_{j+1}],Q_{j}^{\infty})\leq 3k\). Consequently, \(\mathsf{hd}(T(p-\ell_{j}\,..\,i],Q_{j}^{\infty})\leq 6k+1\), and thus we compute \(\mathsf{MI}(T(i-\ell_{j+1}\,..\,i],Q_{j}^{\infty})\) and report \(i\) as a \(k\)-mismatch occurrence of \(P_{j+1}\).
We conclude with the complexity analysis: the working space is \(O(k)\), dominated by the maintained mismatch information. Moreover, whenever we compute \(\mathsf{MI}(T(i-\ell_{j+1}\,..\,i],P_{j})\), the size of this set is, by the triangle inequality, at most \(6k+1+2k+1\leq 8k+2\), and it can be computed in \(O(k)\) time.
Summary.Overall, each subroutine of each level takes \(O(k)\) space and \(O(k)\) time per character. Since there are \(t=O(\log m)\) levels and each level contains at most two active subroutines, the algorithm takes \(O(k\log m)\) space and \(O(k\log m)\) time per text character. Although our pattern preprocessing algorithm is an offline procedure, we can run it while the algorithm reads the first \(m/2\) characters of the text. Then, while the algorithm reads further \(m/2\) characters, it can process two characters at a time to catch up with the input stream. This does not result in any delay on the output because the leftmost \(k\)-mismatch occurrence of \(P\) is at position \(m\) or larger.
### Read-only algorithm for \(k\)-Lhd-Pal
There is a deterministic online algorithm that solves the \(k\)-Lhd-PAL problem for a string of length \(n\) using \(O(k\log n)\) space and \(O(k\log n)\) worst-case time per character.
The algorithm uses a filtering approach to select positions where a prefix close to PAL can end. Define a family \(\mathcal{P}=\{P_{j}=T[\ldots[(3/2)^{j}]]:j\in[1\ldots[\log_{3/2}n]]\}\) of prefixes of the text, and let \(\ell_{j}=|P_{j}|\), setting \(\ell_{0}=0\) for notational convenience.
Consider \(j\in[1\ldots[\log_{3/2}n]]\) and a position \(i\in(2\ell_{j-1}\ldots 2\ell_{j}]\). If \(\mathsf{hd}(T[\ldots i],\mathrm{PAL})\leq k\), then \(i\) is a \(2k\)-mismatch occurrence of \(P_{j}^{R}\) in \(T\). Moreover, \(\mathsf{hd}(T[\ldots i],\mathrm{PAL})=\mathsf{hd}(T(i-i^{\prime}\ldots i],P_{j}[1\ldots i^{\prime}]^{R})\) for \(i^{\prime}=\lfloor i/2\rfloor\).
Proof.: Note that \(i>2\ell_{j-1}\geq\ell_{j}\) implies that \(P_{j}\) is a prefix of \(T[\ldots i]\) and, equivalently, \(P_{j}^{R}\) is a suffix of \(T[\ldots i]^{R}\). Property 2.1 implies \(2\cdot\mathsf{hd}(T[\ldots i],\mathrm{PAL})=\mathsf{hd}(T[\ldots i],T[\ldots i]^{R})\geq\mathsf{hd}(T(i-\ell_{j}\ldots i],P_{j}^{R})\). Thus, if \(\mathsf{hd}(T[\ldots i],\mathrm{PAL})\leq k\), then \(i\) is a \(2k\)-mismatch occurrence of \(P_{j}^{R}\) in \(T\). Since \(T[\ldots i^{\prime}]\) is a prefix of \(P_{j}\), Property 2.1 further implies \(\mathsf{hd}(T[\ldots i],\mathrm{PAL})=\mathsf{hd}(T(i-i^{\prime}\ldots i],T[\ldots i^{\prime}]^{R})=\mathsf{hd}(T(i-i^{\prime}\ldots i],P_{j}[1\ldots i^{\prime}]^{R})\).
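A brute-force sanity check of this claim (and of the equality used above) on random binary strings can be written as follows; the helper recomputes the distances naively, so it only validates the filtering condition, not the pattern-matching machinery.

```python
import random

def hd(u, v):
    return sum(a != b for a, b in zip(u, v))

def dist_to_pal(u):
    half = len(u) // 2
    return hd(u[:half], u[len(u) - half:][::-1])

def check_pal_filter(trials=200, n=30, k=2, alphabet="ab"):
    """Whenever hd(T[..i], PAL) <= k: (a) for every prefix length l with
    floor(i/2) <= l <= i, position i is a 2k-mismatch occurrence of T[..l]^R,
    and (b) hd(T[..i], PAL) = hd(T(i-i'..i], T[..i']^R) for i' = floor(i/2)."""
    for _ in range(trials):
        t = "".join(random.choice(alphabet) for _ in range(n))
        for i in range(2, n + 1):
            d = dist_to_pal(t[:i])
            if d > k:
                continue
            ih = i // 2
            assert d == hd(t[i - ih:i], t[:ih][::-1])              # (b)
            for l in range(ih, i + 1):
                assert hd(t[i - l:i], t[:l][::-1]) <= 2 * k        # (a)
    return True

print(check_pal_filter())
```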
The algorithm constructs the family \(\mathcal{P}\) as it reads the text. For each level \(j\), we implement a subroutine responsible for positions \(i\in(2\ell_{j-1}\ldots 2\ell_{j}]\). First, while reading \(T[\ell_{j}\ldots 2\ell_{j-1})\), we launch the pattern-matching algorithm of Theorem 4.2 in order to compute the \(2k\)-mismatch occurrences of \(P_{j}^{R}\) in \(T_{j}=T[\ldots 2\ell_{j})\) and feed the pattern-matching algorithm with the pattern \(P_{j}^{R}\) and a prefix \(T[\ldots 2\ell_{j-1})\) of \(T_{j}\), ignoring any output produced. The total number of characters provided is \(\ell_{j}+2\ell_{j-1}\leq 7\cdot(2\ell_{j-1}-\ell_{j})\), so we can feed the algorithm with \(O(1)\) characters for every scanned character of \(T\). Then, while reading \(T[2\ell_{j-1}\ldots 2\ell_{j})\), we feed the pattern-matching algorithm with subsequent characters of \(T\). For every reported \(2k\)-mismatch occurrence \(i\) of \(P_{j}^{R}\) in \(T_{j}\), we retrieve the mismatch information \(\mathsf{MI}(T(i-\ell_{j}\ldots i],P_{j}^{R})\) and obtain \(\mathsf{MI}(T(i-i^{\prime}\ldots i],P_{j}[\ldots i^{\prime}]^{R})\) by removing the entries corresponding to the leftmost \(\ell_{j}-i^{\prime}\) positions. We report the size of this set (or \(\infty\) if the size exceeds \(k\)) as \(\mathsf{hd}_{\leq k}(T[\ldots i],\mathrm{PAL})\).
By Claim 4.2, all positions \(i\in(2\ell_{j-1}\ldots 2\ell_{j}]\) such that \(\mathsf{hd}(T[\ldots i],\mathrm{PAL})\leq k\) pass the test and the distance \(\mathsf{hd}(T[\ldots i],\mathrm{PAL})\) is equal to the size of the set \(\mathsf{MI}(T(i-i^{\prime}\ldots i],P_{j}[\ldots i^{\prime}]^{R})\). As for the complexity analysis, observe that, for each level \(j\), the pattern-matching algorithm uses \(O(k\cdot j)\) space and takes \(O(k\cdot j)\) time per character. Since, at any time, there is a constant number of active levels, the main algorithm uses \(O(k\log n)\) space and takes \(O(k\log n)\) time per character.
### Read-only algorithm for \(k\)-Lhd-Sq
There is a deterministic online algorithm that solves the \(k\)-LHD-SQ problem for a string \(T\in\Sigma^{n}\) using \(O(k\log n)\) space and \(O(k\log n)\) worst-case time per character.
Our algorithm is very similar to the pattern-matching algorithm of Theorem 4.2. We use the same sequence \(\mathcal{P}=(P_{j})_{j=1}^{t}\) of prefixes, now defined for \(P=T\). Again, we set \(\ell_{j}=|P_{j}|\) for \(j\in[1\ldots t]\). Instead of Observation 4.2, we use Observation 3.2 to argue that our filtering procedure is correct.
Processing \(\mathcal{P}\).We build \(\mathcal{P}\) in an online fashion so that the prefix \(P_{j}\) is constructed while scanning \(T(\ell_{j}\ldots[3\ell_{j}/2]]\). If \(P_{j}\) is \(k\)-mismatch periodic, then we also identify \(P_{j+1}\) and build \(\mathsf{MI}(P_{j+1},Q_{j}^{\infty})\).
For subsequent indices \(j\in[0\ldots[\log_{3/2}n]]\), we add the prefix \(R_{j}\) to \(\mathcal{P}\) as soon as it has been read. Then, we launch an offline procedure that applies Corollary 4.2 to test whether \(R_{j}\) is \(k\)-mismatch periodic and, if so, retrieves the period \(Q\). If \(R_{j}\) is \(k\)-mismatch periodic, we build \(\mathsf{MI}(R_{j},Q^{\infty})\) and extend \(R_{j}\) while maintaining the mismatch information with the
appropriate prefix of \(Q^{\infty}\). We proceed until we reach length \(|R_{j+1}|\) or \(2k+1\) mismatches, whichever comes first. We add the obtained extension \(R^{\prime}_{j}\) to \(\mathcal{P}\) and store the mismatch information \(\mathsf{MI}(R^{\prime}_{j},Q^{\infty})\). If \(\mathsf{hd}(R^{\prime}_{j},Q^{\infty})\leq 2k\), then \(R^{\prime}_{j}=R_{j+1}\) is \(k\)-mismatch periodic with the same period \(Q\). Otherwise, by Claim 4.2, neither \(R^{\prime}_{j}\) nor \(R_{j+1}\) are \(k\)-mismatch periodic. Processing each \(j\) takes \(O(|R_{j+1}|k)\) time and \(O(k)\) space, and this computation needs to be completed while the algorithm reads \(T(|R_{j}|..|R_{j+1}|]\). This gives \(O(k)\) time per position since \(\lfloor\frac{3}{2}|R_{j}|\rfloor\leq|R_{j+1}|\leq\lceil\frac{3}{2}|R_{j}|\rceil\).
Across all indices \(j\in[0..\lfloor\log_{3/2}n\rfloor]\), the preprocessing algorithm takes \(O(k)\) space and time per character (since no two indices are processed simultaneously).
Computing the distances.For each level \(j\in[1..t]\), we implement a subroutine responsible for even positions \(i\in[2\ell_{j}..2\ell_{j+1})\); this procedure is active as we read \(T[\ell_{j}..2\ell_{j+1})\). As described above, the pattern \(P_{j}\) is identified while the algorithm reads \(T(\ell_{j}..\lceil 3\ell_{j}/2\rceil]\) and, if \(P_{j}\) is \(k\)-mismatch periodic, the period \(Q_{j}\) and the mismatch information \(\mathsf{MI}(P_{j+1},Q_{j}^{\infty})\) are also computed at that time. While reading \(T[\lceil 3\ell_{j}/2\rceil..2\ell_{j})\), we launch the pattern-matching algorithm of Theorem 4.5 to report the \(k\)-mismatch occurrences of \(P_{j}\) in \(T_{j}=T[..\ell_{j}+\ell_{j+1})\) and feed this algorithm with the pattern \(P_{j}\) and the prefix \(T[..\,2\ell_{j})\) of the text \(T_{j}\). The total number of characters provided is \(3\ell_{j}\leq 6\cdot\frac{1}{2}\ell_{j}\), so can feed the pattern-matching algorithm with \(O(1)\) character for every scanned character of \(T\). Then, while reading \(T[2\ell_{j}..\ell_{j}+\ell_{j+1})\), we feed the pattern-matching algorithm subsequent text characters. For every \(i^{\prime}\in[2\ell_{j}..\ell_{j}+\ell_{j+1})\), we learn whether \(i^{\prime}\) is a \(k\)-mismatch occurrence of \(P_{j}\) and, if so, we obtain the mismatch information \(\mathsf{MI}(P_{j},T(i^{\prime}-\ell_{j}..i^{\prime}))\). How we utilise this output depends on whether \(P_{j}\) is \(k\)-mismatch periodic or not: if \(P_{j}\) is not \(k\)-mismatch periodic, then \(T_{j}\) contains \(O(k)\)\(k\)-mismatch occurrences of \(P_{j}\) and storing them explicitly requires little space. When \(P_{j}\) is \(k\)-mismatch periodic, \(T_{j}\) must exhibit similar periodicity, which we can use to avoid storing all occurrences explicitly.
\(P_{j}\) is not \(k\)-mismatch periodic.In this case, for every received \(k\)-mismatch occurrence \(i^{\prime}\) of \(P_{j}\), we store the mismatch information \(\mathsf{MI}(T(i^{\prime}-\ell_{j}..i^{\prime}),P_{j})\) and, as the algorithm receives subsequent characters \(T[i]\) for \(i\in(i^{\prime}..2(i^{\prime}-\ell_{j})]\), we maintain \(\mathsf{MI}(T(i^{\prime}-\ell_{j}..i],T[..\ell_{j}+i-i^{\prime}])\) as long as there are at most \(k\) mismatches. If this is still the case for \(i=2(i^{\prime}-\ell_{j})\), we report that \(T[..i]\) is a \(k\)-mismatch square, with \(\mathsf{hd}(T[..i],\mathrm{SQ})=\mathsf{hd}(T(i^{\prime}-\ell_{j}..i],T[..\ell _{j}+i-i^{\prime}])=\mathsf{hd}(T(i/2..i],T[..i/2])\). By Observation 3.4, no \(k\)-mismatch square \(T[..i]\) is missed. Moreover, Fact 4.3 guarantees that there are \(O(k)\)\(k\)-mismatch occurrences of \(P_{j}\), and thus we use \(O(k)\) space and \(O(k)\) time per character to process all of them.
\(P_{j}\) is \(k\)-mismatch periodic with period \(Q_{j}\).In this case, we wait for the leftmost \(k\)-mismatch occurrence \(p\in[2\ell_{j}..\ell_{j}+\ell_{j+1})\) of \(P_{j}\) and ignore all the subsequent occurrences of \(P_{j}\). We use the received mismatch information \(\mathsf{MI}(T(p-\ell_{j}..p],P_{j})\) and the preprocessed mismatch information \(\mathsf{MI}(P_{j+1},Q_{j}^{\infty})\) to construct \(\mathsf{MI}(T(p-\ell_{j}..p],Q_{j}^{\infty})\); by the triangle inequality, the size of this set is guaranteed to be at most \(3k\). As the algorithm receives subsequent characters of \(T[i]\) for \(i\in(p..2\ell_{j+1})\), we maintain \(\mathsf{MI}(T(p-\ell_{j}..i],Q_{j}^{\infty})\) as long as the number of mismatches does not exceed \(6k+1\). Whenever \(i/2\geq p-\ell_{j}\) and \(i/2\equiv p-\ell_{j}\pmod{|Q_{j}|}\), we extract \(\mathsf{MI}(T(i/2..i],Q_{j}^{\infty})\) from \(\mathsf{MI}(T(p-\ell_{j}..i],Q_{j}^{\infty})\) and use the precomputed mismatch information \(\mathsf{MI}(P_{j+1},Q_{j}^{\infty})\) to construct \(\mathsf{MI}(T[..i/2],Q_{j}^{\infty})\) first, and then derive \(\mathsf{MI}(T[..i/2],T(i/2..i])\). If the latter is of size at most \(k\), we report \(T[..i]\) as a \(k\)-mismatch square.
As for the correctness, we argue that we miss no \(k\)-mismatch square \(T[..i]\) with \(i\in\)
(\(2\ell_{j}..2\ell_{j+1}\)). Since \(\mathsf{hd}(T(i/2..i],T[\,.\,i/2])\leq k\) and \(\mathsf{hd}(P_{j+1},Q_{j}^{\infty})\leq 2k+1\), as a corollary we obtain \(\mathsf{hd}(T(i/2..i],Q_{j}^{\infty})\leq 3k+1\). Moreover, by Observation 3.4, \(i/2+\ell_{j}\) is a \(k\)-mismatch occurrence of \(P_{j}\). Fact 4.3 further implies that \(i/2+\ell_{j}\equiv p\pmod{|Q_{j}|}\) and \(\mathsf{hd}(T(p-\ell_{j}..i/2],Q_{j}^{\infty})\leq 3k\). Consequently, \(\mathsf{hd}(T(p-\ell_{j}..i],Q_{j}^{\infty})\leq 6k+1\), and thus we compute \(\mathsf{MI}(T[\,.\,i/2],T(i/2..i])\) and report \(T[1..i]\) as a \(k\)-mismatch square.
We conclude with the complexity analysis: the working space is \(O(k)\), dominated by the maintained mismatch information. Moreover, whenever we compute \(\mathsf{MI}(T[\,.\,i/2],T(i/2..i])\), the size of this set is, by the triangle inequality, at most \(6k+1+2k+1\leq 8k+2\), and it can be computed in \(O(k)\) time.
Summary.Overall, each level takes \(O(k\log n)\) space and \(O(k\log n)\) time per character, dominated by the pattern-matching algorithm of Theorem 4.5. However, since constantly many levels are processed at any given time, the entire algorithm still uses \(O(k\log n)\) space and \(O(k\log n)\) time per character.
## 5 Language Edit Distance problems
The _edit distance_ between two strings \(U\) and \(V\), denoted by \(\mathsf{ed}(U,V)\), is the minimum number of character insertions, deletions, and substitutions required to transform \(U\) into \(V\). Similar to the Hamming distance, the edit distance from a string \(U\) to PAL and SQ can be expressed in terms of self-similarity of \(U\). This allows us to use similar approaches as for the Language Hamming distance problems, with tools for the Hamming distance replaced with appropriate tools for the edit distance.
By replacing the Hamming distance sketch [12] with the edit distance sketch of Bhattacharya and Koucky [7], we obtain the following: There is a randomised streaming algorithm that solves the \(k\)-LED-PAL problem for a string of length \(n\) using \(\tilde{O}(k^{2})\) bits of space and \(\tilde{O}(k^{2})\) time per character.
Furthermore, the results of Bhattacharya and Koucky [7] show a reduction from the edit distance to the Hamming distance via locally consistent string decompositions, which allows reducing the \(k\)-LED-SQ problem to \(k\)-LHD-SQ, solved via Proposition 3.2: There is a randomised streaming algorithm that solves the \(k\)-LED-SQ problem for a string of length \(n\) using \(\tilde{O}(k^{2})\) bits of space and \(\tilde{O}(k^{2})\) time per character.
Finally, by replacing the online read-only algorithm for finding the \(k\)-mismatch occurrences of a pattern in a text with an online read-only algorithm for finding \(k\)-error occurrences and the structural results for the Hamming distance with the structural results for the edit distance, we obtain algorithms for \(k\)-LED-PAL and \(k\)-LED-SQ: There is a deterministic online read-only algorithm that solves the \(k\)-LED-PAL problem for a string of length \(n\) using \(\tilde{O}(k^{4})\) bits of space and \(\tilde{O}(k^{4})\) time per character.
There is a deterministic online read-only algorithm that solves the \(k\)-LED-SQ problem for a string of length \(n\) using \(\tilde{O}(k^{4})\) bits of space and \(\tilde{O}(k^{4})\) amortised time per character. |
2309.14950 | Multi-Source Domain Adaptation for Object Detection with Prototype-based
Mean-teacher | Adapting visual object detectors to operational target domains is a
challenging task, commonly achieved using unsupervised domain adaptation (UDA)
methods. Recent studies have shown that when the labeled dataset comes from
multiple source domains, treating them as separate domains and performing a
multi-source domain adaptation (MSDA) improves the accuracy and robustness over
blending these source domains and performing a UDA. For adaptation, existing
MSDA methods learn domain-invariant and domain-specific parameters (for each
source domain). However, unlike single-source UDA methods, learning
domain-specific parameters makes them grow significantly in proportion to the
number of source domains. This paper proposes a novel MSDA method called
Prototype-based Mean Teacher (PMT), which uses class prototypes instead of
domain-specific subnets to encode domain-specific information. These prototypes
are learned using a contrastive loss, aligning the same categories across
domains and separating different categories far apart. Given the use of
prototypes, the number of parameters required for our PMT method does not
increase significantly with the number of source domains, thus reducing memory
issues and possible overfitting. Empirical studies indicate that PMT
outperforms state-of-the-art MSDA methods on several challenging object
detection datasets. Our code is available at
https://github.com/imatif17/Prototype-Mean-Teacher. | Atif Belal, Akhil Meethal, Francisco Perdigon Romero, Marco Pedersoli, Eric Granger | 2023-09-26T14:08:03Z | http://arxiv.org/abs/2309.14950v3 | # Multi-Source Domain Adaptation for Object Detection
###### Abstract
Adapting visual object detectors to operational target domains is a challenging task, commonly achieved using unsupervised domain adaptation (UDA) methods. When the labeled dataset is coming from multiple source domains, treating them as separate domains and performing a multi-source domain adaptation (MSDA) improves the accuracy and robustness over mixing these source domains and performing a UDA, as observed by recent studies in MSDA. Existing MSDA methods learn domain invariant and domain-specific parameters (for each source domain) for the adaptation. However, unlike single-source UDA methods, learning domain-specific parameters makes them grow significantly proportional to the number of source domains used. This paper proposes a novel MSDA method called Prototype-based Mean-Teacher (PMT), which uses class prototypes instead of domain-specific subnets to preserve domain-specific information. These prototypes are learned using a contrastive loss, aligning the same categories across domains and separating different categories far apart. Because of the use of prototypes, the parameter size of our method does not increase significantly with the number of source domains, thus reducing memory issues and possible overfitting. Empirical studies show PMT outperforms state-of-the-art MSDA methods on several challenging object detection datasets. Our code is available at [https://github.com/imatif17/Prototype-Mean-Teacher](https://github.com/imatif17/Prototype-Mean-Teacher)
## 1 Introduction
Object detection (OD), one of the fundamental tasks in computer vision, has made significant progress in recent years [26, 40, 2]. However, this progress, typically measured by performance on curated benchmark datasets such as MS-COCO [16] or Pascal VOC [6], may decline significantly with changes in the data distribution between training (source) and testing (target) domain data [33, 39]. This distribution shift can be due to many factors like variations in weather, illumination, geographic locations, size of objects, etc. Since it is too costly to annotate data collected from every operational target domain, researchers have explored a multitude of techniques for unsupervised domain adaptation (UDA), which seeks to find a common representation space between data distributions of a labeled source domain and an unlabeled target domain [3, 15, 24].
Among the state-of-the-art UDA strategies for OD, feature alignment [3, 41] and pseudo-labeling of target data [12, 13] are the most common. Recently, impressive results were obtained by combining both strategies, where the popular mean-teacher [25] framework is used to pseudo-label target domain images, and a GRL [7] integrated domain discriminator is used for alignment of features in an adversarial way [15, 1, 5]. Mainstream UDA methods rely on a single source domain with labeled data during adaptation. However, in a practical setting, several source datasets may be available, and the datasets may also be collected from multiple different domains, e.g., sensors, environment, etc. The simplest way to use UDA with multiple sources of training data is by blending source domain data to form a single labeled source dataset, as shown in Fig. 1(a). However, considering each domain as a separate source allows the OD model to explicitly address domain discrepancies between individual source distributions, and aligning sources during adaptation has been found to provide a higher level of recognition accuracy and robustness [38]. This setting is referred to as multi-source domain adaptation (MSDA).
Two methods have been proposed for MSDA in OD: Divide-and-Merge Spindle Network (DMSN) [35] and Target-Relevant Knowledge Preservation (TRKP) [29]. The overall architecture of both methods can be divided into domain-general and domain-specific model parameters. The initial part of the network learns the domain-invariant representation, while the latter part learns the domain
specific representation. This allows the model to align the features from multiple domains while preserving domain-specific information. As illustrated in Fig. 1(b), DMSN relies on a mean-teacher training framework, with domain-specific student subnets for each source domain. In contrast, as shown in Fig. 1(c), TRKP proposed an adversarial disentanglement method and preserved domain-specific knowledge using a separate detection head for each source domain. The detection results from the teacher's detection heads are combined to generate pseudo-labels for the student, trained on the target domain. To fully exploit the potential of multiple sources, both methods fundamentally advocate for learning domain-specific weights associated with each source domain. However, learning domain-specific weights for each source domain leads to MSDA architectures that are difficult to train for OD since the number of domain-specific parameters increases rapidly with the number of sources. Moreover, it is also observed that the weighted combination of the source domains, which relies on heuristics of domain similarity (the similarity of each source-target domain pair being quantified by a weight value), is not optimal [32]. Additionally, the two methods focused on finding a common representation between domains through adversarial training [35] and adversarial disentanglement [29].
Figure 1: Comparison of the MSDA architectures using the mean-teacher method in the case with 2 source domains. While existing methods need domain-specific parameters (typically a detection head for each source domain) for preserving domain information, our method keeps domain-specific information by using prototypes for each class and domain.
They ignored the class-wise alignment of object categories. Objects can have variations in appearance, scale, orientation, or other visual characteristics, which may be specific to a particular domain. If the objects in the source and target domain exhibit significant differences, it becomes difficult to align the features extracted by the model to effectively detect the objects in the target domain. With the increase in the number of domains in MSDA, this problem becomes more significant. To solve these issues, we propose to learn domain-specific class prototype vectors. In this paper, we argue that MSDA can be simplified without learning domain-specific parameters for each source domain. Instead, learning domain-specific class prototype vectors is sufficient to preserve domain-specific information for each source domain. Additionally, because of the use of prototypes, the parameter size of our method does not increase significantly with the number of source domains, thus reducing memory issues and possible overfitting on the source data.
In this paper, a new MSDA method is introduced for OD that is based on mean-teacher [18] and adversarial training (see Fig. 1d). The new Prototype Mean Teacher (PMT) method relies on a cost-effective architecture that stores class- and domain-specific information using prototypes. To learn domain-invariant features, a multi-class domain discriminator is integrated into the features generated by the OD backbone. To preserve the domain-specific information of all domains, PMT learns class prototypes using the prototype network for all the domains. Additionally, the prototypes are trained using a contrastive loss to perform a class-conditional and domain-specific adaptation of the detector. The resulting architecture can provide higher OD accuracy than state-of-the-art MSDA approaches while being conceptually simpler.
**Our main contributions are summarized as follows:**
(1) A novel prototype-based mean teacher (PMT) method is introduced for MSDA of object detectors. Our approach shows that prototype vectors can elegantly and efficiently store domain-specific information.
(2) By using a multi-domain discriminator, and a contrastive loss on prototypes for each domain and class, the proposed PMT can perform class- and domain-conditional adaptation, improving a detector's performance.
(3) Our experimental results and ablations show that the PMT method provides a solution that scales well to the number of source domains, and can outperform state-of-the-art MSDA methods for OD on benchmark datasets.
## 2 Related Works
Many DL models have been proposed for OD, namely, Faster-RCNN (FRCNN) [23], Fully Convolutional One Stage (FCOS) [26], You Only Look Once (YOLO) [22], Single Shot Detector (SSD) [17]. Although our PMT method is independent of the OD model, this paper focuses on FRCNN as our base detector for SOTA comparison.
**(a) Unsupervised Domain Adaptation.** UDA methods seek to alleviate the problem of domain shift by adapting the detector trained on a labeled source dataset to a target domain using an unlabeled target dataset. Minimizing the domain discrepancy and adversarial learning are the most popular approaches for UDA. In [3], the authors integrated a domain discriminator into Faster-RCNN to learn domain-invariant feature representations. Later, [24] proposed strong and weak alignment loss. [10] proposed a two-step domain alignment using cycle GAN to mitigate the impact of domain shift between the source and target domains. [34] proposed a graph-based prototype alignment, using contrastive loss. [15] utilized a combination of adversarial training and the mean teacher paradigm. All these methods were designed for UDA from a single source domain. When the source data itself is coming from multiple domains, modeling the domain discrepancy improves the accuracy and robustness of the target OD model [29, 35]. In our work, we focus on MSDA methods for OD that explicitly consider domain discrepancies among source datasets.
**(b) Multi-Source Domain Adaptation.** In the MSDA setting, the labeled training data belongs to multiple source domains, and the MSDA method aims to distill the knowledge from these source domains to the target domain. Only two methods have been proposed for this setting called DMSN [35] and TRKP [29]. Fundamentally, both methods preserve domain-specific information by utilizing specific subnets for each source domain. Thus the parameters of these methods increase linearly as the number of source domains increases. In our method, we preserve domain-specific information using prototypes for each domain which results in a constant size for the MSDA model regardless of the number of source domains.
**(c) Prototype-based Learning.** This learning paradigm is used in different forms for open-world OD [11], semi-supervised OD [14], domain-adaptive OD [37], and few-shot OD [28]. In open-world OD, prototypes are used to achieve class separation in the feature space and help unknown class identification [11]. In semi-supervised OD, prototypes are used for class distribution alignment between pseudo-labels and their highly overlapping proposals [14]. In the detection of objects across domains, prototypes are used to align the foreground and background regions between the source and target domains [37]. Few-shot OD used universal prototypes to learn the invariant object characteristics across seen and novel classes [28]. In contrast, we use prototypes in MSDA settings to simplify domain-specific feature learning. Existing MSDA methods learn domain-specific features by using separate networks for each source domain. We show that by using domain-specific prototype vectors instead, we can avoid learning separate networks for each domain.
## 3 Proposed Method
In the MSDA setting, we assume that there are \(N\) source domains \(S_{1},S_{2},\ldots,S_{N}\) and one target domain \(T\). Each source domain can be represented as \(S_{j}=\{(x_{i}^{j},y_{i}^{j})\}_{i=1}^{M_{j}}\), where \(j=1,2,...,N\) indicates the source domain, and where \(M_{j}\) indicates the number of images in the source domain \(j\). Here, \(x_{i}^{j}\) represents the input image \(i\) of domain \(j\), and \(y_{i}^{j}\) represents its corresponding annotations (bounding boxes and object categories). The target domain \(T\) can be represented as \(T=\{x_{i}\}_{i=1}^{M_{T}}\), where \(M_{T}\) indicates the number of images in the target domain. Here, \(x_{i}\) represents the input image \(i\) in the target domain. In this work, we assume that all domains have the same \(K\) object categories.
The detailed architectural diagram of our PMT method is illustrated in Fig. 2. We use the mean teacher framework with unbiased teacher [18] for semi-supervised OD as our base model. Training of our model is performed in two stages. First, a supervised-only training called the burn-in stage is performed to help achieve reliable pseudo-labels on the target domain because the teacher network is initialized using these weights. Second, the model is adapted with both labeled data from the source domains and unlabeled data from the target domain. With the source data, we continue using the supervised training (as in the burn-in stage). Target domain pseudo-labels are obtained from the teacher to train the model on the target domain. To produce domain invariant features, a discriminator is introduced in the network. Additionally, the prototype network is integrated to preserve domain-specific information, and class-conditioned adaptation is performed using a contrastive loss. The rest of this section provides details on the individual components of our proposed PMT.
### Supervised Learning
In our problem setting, the annotations are available for the source domains, so we use them to train the model in a supervised way. For this, we compute the standard supervised detection loss of the object detector:
\[\mathcal{L}_{\mbox{sup}}=\sum_{j=1}^{N}\sum_{i=1}^{M_{j}}\mathcal{L}_{\mbox{ cls}}(x_{i}^{j},y_{i}^{j})+\mathcal{L}_{\mbox{reg}}(x_{i}^{j},y_{i}^{j}) \tag{1}\]
where \(\mathcal{L}_{\mbox{cls}}\) is the cross-entropy loss used for bounding box classification, and \(\mathcal{L}_{\mbox{reg}}\) is the smooth-L1 loss used for the bounding box regression as in FRCNN.
### Learning with Pseudo-labels
Since annotations are not available for the target domain data, we cannot directly proceed with supervised training as with the source domains. We obtain pseudo-labels for images of the target domain using the mean-teacher method [25]. For that, two augmented versions of the target-domain images are created, called weak and strong augmentations. The weak augmentation is simply image rescaling and horizontal flip transformations. The strong augmentation includes color jittering, grayscale, Gaussian blur, and cutout patches, which perform only pixel-level transformations. We followed the scale ranges provided in [18] for the strong augmentation. Then the weak and strong augmented versions are processed by the teacher and student networks, respectively, as shown in Fig. 2. The predictions made by the teacher model are used as pseudo-labels for the training of the student model. To avoid noisy pseudo-labels from the teacher model, confidence thresholding is applied to the teacher network's predictions while computing the pseudo-labels. The unsupervised training loss for the target domain, computed with the pseudo-labels from the teacher model, is:
\[\mathcal{L}_{\mbox{unsup}}=\sum_{i=1}^{M_{T}}\mathcal{L}_{\mbox{cls}}(x_{i}, \widetilde{y}_{i})+\mathcal{L}_{\mbox{reg}}(x_{i},\widetilde{y}_{i}) \tag{2}\]
where \(\widetilde{y}_{i}\) represents the filtered pseudo-labels generated by the teacher model.
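To make the thresholding step concrete, a minimal PyTorch-style sketch of the pseudo-label filtering is given below; the function and tensor names are illustrative and not taken from our released code.

```python
import torch

def filter_pseudo_labels(boxes, scores, labels, tau=0.7):
    """Keep only teacher detections whose confidence exceeds the threshold tau."""
    keep = scores > tau
    # the surviving boxes and labels act as the pseudo-labels used in Eq. 2
    return boxes[keep], labels[keep]

# toy usage: three detections, only the ones above the 0.7 threshold survive
boxes = torch.tensor([[0., 0., 10., 10.], [5., 5., 20., 20.], [1., 1., 4., 4.]])
scores = torch.tensor([0.9, 0.4, 0.8])
labels = torch.tensor([0, 2, 1])
print(filter_pseudo_labels(boxes, scores, labels))  # keeps detections 0 and 2
```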
### Domain-invariant Features with Discriminator
The feature representation of a model consists of two parts: domain-invariant and domain-specific. In a domain adaptation problem, we try to promote the domain-invariant feature space. To achieve this, UDA methods add a binary domain discriminator to the output of the backbone [3, 15]. This can be extended to MSDA by having multiple such discriminators, but this does not consider the domain discrepancy between the source domains and increases the computational cost [19, 38]. Instead of using multiple binary discriminators, we overcome these issues by introducing a multi-class discriminator. This discriminator is connected to the network using a Gradient Reversal Layer (GRL) [7]. It seeks to classify images according to domain. When this classification loss is back-propagated, the GRL layer reverses the gradients, so the backbone network will try to increase the classification loss, challenging the ability of the discriminator to distinguish between domains. Gradually, the domain of the image-level features extracted by the backbone network will become indistinguishable to the discriminator. This simple adversarial game between the discriminator and the backbone allows us to learn domain-invariant features [3]. The discriminator is trained with the cross-entropy loss between \(\hat{y}^{j}\), the one-hot domain label for domain \(j\), and the output of the domain discriminator on image \(x_{i}^{j}\):
\[\mathcal{L}_{\mbox{dis}}=-\sum_{j=1}^{N+1}\sum_{i=1}^{M_{j}}\hat{y}^{j}\log(D(G(F(x_{i}^{j})))) \tag{3}\]
where \(F\) is the CNN feature extractor, \(G\) is the gradient reversal layer, and \(D\) is the multi-domain discriminator. The summation is taken from \(j=1\) to \(N+1\) as we also consider the target domain.
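A minimal PyTorch sketch of the gradient reversal layer and the multi-class domain discriminator is shown below; the layer sizes and names are illustrative assumptions rather than the exact configuration of our network.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the sign of the gradient on the way back."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class MultiDomainDiscriminator(nn.Module):
    """Classifies image-level backbone features into one of the N+1 domains."""
    def __init__(self, in_dim, num_domains):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_domains),
        )

    def forward(self, feat):
        # the GRL sits between the backbone feature and the discriminator (Eq. 3)
        return self.classifier(GradReverse.apply(feat))

# usage sketch: feat = backbone(images).mean(dim=(2, 3))  # pooled image-level features
# loss_dis = nn.functional.cross_entropy(discriminator(feat), domain_ids)
```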
### Domain-specific Features with Prototypes
In our PMT method, prototypes are used to preserve domain-specific information. To obtain the prototypes, we added a prototype network to the output of the ROI pool features. This network maps the output of the ROI pool to a \(d\) dimensional feature space, where class-conditioned feature alignment is performed using a contrastive loss. This aligns representations of the same object categories across different domains and separates representations of different object categories. Fig. 3 illustrates how the prototypes are used for domain-specific feature alignment. We produce a local prototype for each class (e.g., car, truck, bus in Fig. 3) in each domain, and these prototypes preserve the domain-specific information. The prototype for each class across all domains, called the global prototype, is obtained as the mean of the local prototypes of that class. The contrastive loss on global prototypes pushes them apart in the feature space, resulting in better separation of the classes. It also aligns the local prototypes from all domains for each class, reducing confusion due to appearance variation.
Let \(p_{k}^{j}\) denote the local prototype of the \(k\)th class in domain \(j\). The global prototype of class \(k\) is denoted as \(P_{k}\). Let the number of occurrences of class \(k\) from domain \(j\) in a minibatch be \(c_{k}^{j}\). The prototype update value for class \(k\) from domain \(j\) in a minibatch is computed as
\[q_{k}^{j}=\frac{1}{|c_{k}^{j}|}\sum_{r=1}^{c_{k}^{j}}P(R(F(x^{j}),r)) \tag{4}\]
where \(P\) is the prototype network (a 2-layer MLP), \(R\) is the Faster R-CNN RoI pooling, and \(F\) is the CNN feature extractor. Thus, \(R(F(x^{j}),r)\) pools features from the RoI \(r\) corresponding to a ground-truth box for class \(k\) from domain \(j\) in a minibatch. For the target domain without ground-truth annotations, the pseudo-labels obtained from the teacher network are used to compute the update value \(q_{k}^{N+1}\). When
Figure 3: Prototype-based feature alignment with multiple source domains. There are three domains and three classes. Each domain has a prototype for each class. Initially, there is confusion between classes and the intra-class distance to global prototypes from multiple domains is also large. After alignment, class confusion and intra-class distance to global prototypes are reduced.
Figure 2: Architectural diagram of the proposed Prototype Mean Teacher (PMT) for MSDA. A standard mean-teacher training framework is used. The student is trained with backpropagation, while the teacher is an exponential moving average of the student. The student is trained with images from all domains, and feature alignment is performed at both the image and instance levels using a discriminator and a prototype, respectively. During inference, the teacher model is employed.
\(c_{k}^{j}=0\) in a minibatch, no updates are performed for the corresponding prototype. The local prototype vector \(p_{k}^{j}\) for each class \(k\) and domain \(j\) is stored in memory along with a count value \(\rho_{k}^{j}\) that tells how many times \(p_{k}^{j}\) has been updated thus far. Given the current update value \(q_{k}^{j}\), local prototype \(p_{k}^{j}\) is updated as:
\[p_{k}^{j}=\frac{\rho_{k}^{j}p_{k}^{j}+q_{k}^{j}}{\rho_{k}^{j}+1} \tag{5}\]
The count of updates \(\rho_{k}^{j}\) is incremented after this operation. After updating the prototype vectors, the contrastive loss is computed in two parts. The first part aligns the prototypes of the same class from different domains by maximizing their similarity. The second part pushes apart the prototypes of different classes to reduce class confusion. To align the prototypes of the same class from different domains, we need to consider pairwise combinations of the domains: \(C=Comb(N+1,2)\) (including source and target domains). Let \(\mathcal{C}\) be the cosine similarity between two vectors. The alignment of category \(k\) across different domains is achieved by maximizing the following similarity:
\[\frac{1}{CK}\sum_{k=1}^{K}\sum_{j=1}^{N+1}\sum_{l\neq j}^{N+1}\mathcal{C}(p_{ k}^{j},p_{k}^{l}) \tag{6}\]
normalized by the total number of pairwise similarities \(CK\). To push the prototypes of the different classes apart, we minimize their cosine similarity. For this, we first compute the global prototypes from local prototypes of each domain as their weighted mean:
\[P_{k}=\frac{\sum_{j=1}^{N}\rho_{k}^{j}p_{k}^{j}}{\sum_{j=1}^{N}\rho_{k}^{j}} \tag{7}\]
where the weight \(\frac{\rho_{k}^{j}}{\sum_{j=1}^{N}\rho_{k}^{j}}\) assigns importance to the prototype of class \(k\) from domain \(j\) according to its number of occurrences \(\rho_{k}^{j}\). Once \(P_{k}\) for each class is computed, we minimize the following similarity function:
\[\sum_{k=1}^{K}\sum_{l\neq k}^{K}\mathcal{C}(P_{k},P_{l}) \tag{8}\]
Note that prototypes from the target domain are not considered here because of the noise they induce due to the use of pseudo-labels. The final contrastive loss on prototypes is:
\[\begin{split}\mathcal{L}_{\text{\small{prot}}}=\sum_{k=1}^{K} \sum_{l\neq k}^{K}\mathcal{C}(P_{k},P_{l})-\\ \frac{1}{CK}\sum_{k=1}^{K}\sum_{j=1}^{N+1}\sum_{l\neq j}^{N+1} \mathcal{C}(p_{k}^{j},p_{k}^{l})\end{split} \tag{9}\]
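The prototype memory of Eqs. 4-5 and the contrastive loss of Eq. 9 can be summarized in the short sketch below; it is a simplified PyTorch illustration in which tensor shapes and names are assumptions, not our exact implementation.

```python
import torch
import torch.nn.functional as F

class PrototypeBank:
    """Running-mean local prototypes p[j, k] with update counts rho[j, k] (Eqs. 4-5)."""
    def __init__(self, num_domains, num_classes, dim):
        self.p = torch.zeros(num_domains, num_classes, dim)
        self.rho = torch.zeros(num_domains, num_classes)

    def update(self, j, k, proto_feats):
        """proto_feats: (c, d) prototype-network outputs for class-k boxes of domain j."""
        if proto_feats.numel() == 0:
            return                                            # no occurrence in the minibatch
        q = proto_feats.mean(dim=0)                           # Eq. 4
        self.p[j, k] = (self.rho[j, k] * self.p[j, k] + q) / (self.rho[j, k] + 1)  # Eq. 5
        self.rho[j, k] += 1

def prototype_loss(bank, num_source_domains):
    """Contrastive loss of Eq. 9: separate global prototypes, align local prototypes."""
    p, rho = bank.p, bank.rho
    D, K, _ = p.shape
    # alignment term: same class, different domains (target domain included)
    align, pairs = 0.0, 0
    for k in range(K):
        for j in range(D):
            for l in range(D):
                if l != j:
                    align = align + F.cosine_similarity(p[j, k], p[l, k], dim=0)
                    pairs += 1
    align = align / max(pairs, 1)
    # global prototypes from the source domains only (Eq. 7), then separation term (Eq. 8)
    w = rho[:num_source_domains].clamp(min=1e-6)
    P = (w.unsqueeze(-1) * p[:num_source_domains]).sum(0) / w.sum(0).unsqueeze(-1)
    sep = 0.0
    for k in range(K):
        for l in range(K):
            if l != k:
                sep = sep + F.cosine_similarity(P[k], P[l], dim=0)
    return sep - align                                        # Eq. 9, to be minimized
```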
### Training Summary
The FRCNN model is initialized with ImageNet pretrained weights. The weights of the student model are updated using back-propagation, while the weights of the teacher model are obtained as the exponential moving average (EMA) of the student model's weights over time. During training, we first run the burn-in stage, in which only Eq. 1 is used to train the student model. After the burn-in stage, the teacher model is initialized as an exact copy of the student model. From this point on, the weights of the student model are updated using back-propagation, while the weights of the teacher are the EMA of the student model's weights. In the unsupervised MSDA training stage, all the losses discussed above are used jointly to adapt the student model. Therefore, the total loss is:
\[\mathcal{L}=\mathcal{L}_{\text{\small{sup}}}+\alpha\mathcal{L}_{\text{\small{unsup}}}+\beta\mathcal{L}_{\text{\small{dis}}}+\gamma\mathcal{L}_{\text{\small{prot}}} \tag{10}\]
where \(\alpha\), \(\beta\), and \(\gamma\) are hyperparameters to weight each loss.
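The interaction between the total loss of Eq. 10 and the teacher update can be summarized as follows; this is a schematic PyTorch sketch with illustrative names, in which the student optimizer step and data loading are omitted.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.9996):
    """Teacher weights are the exponential moving average of the student weights."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)

def total_loss(l_sup, l_unsup, l_dis, l_prot, alpha=1.0, beta=0.1, gamma=1.2):
    """Eq. 10: supervised, pseudo-label, discriminator, and prototype terms."""
    return l_sup + alpha * l_unsup + beta * l_dis + gamma * l_prot
```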
## 4 Results and Discussion
### Experimental Methodology
**Implementation Details.** Our experiments follow the same procedure as in [29, 35]. FRCNN [23] with a VGG16 backbone pre-trained on ImageNet was used as the detection framework. Similar to [9], ROI-alignment was used, and the shorter side of the input image was reduced to 600 pixels. For our mean-teacher training, the same setting was used as in [18]. A threshold of 0.7 is used for filtering the pseudo-labels. The model is trained in the burn-in setting for 15 epochs and then in the unsupervised DA setting for another 15 epochs. For all the settings, the values of \(\alpha\) and \(\beta\) were set to 1 and 0.1, respectively, while the value of \(\gamma\) was set to 1.2, 0.6, and 0.1 for the experiments in Sections 4.2, 4.3, and 4.4, respectively. The weight smoothing coefficient of the EMA is set to 0.9996. We conducted all the experiments on 4 A100 GPUs, with batch size 4 and a learning rate of 0.2. Our approach was implemented using Pytorch [20] and Detectron2 [30].
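For reference, the hyperparameters reported above can be collected into a single configuration; the key names in this sketch are illustrative only and do not correspond to our released code.

```python
PMT_CONFIG = {
    "backbone": "VGG16 (ImageNet pretrained)",
    "input_short_side": 600,
    "pseudo_label_threshold": 0.7,
    "burn_in_epochs": 15,
    "adaptation_epochs": 15,
    "loss_weights": {"alpha": 1.0, "beta": 0.1,
                     "gamma": {"cross_time": 1.2, "cross_camera": 0.6, "mixed": 0.1}},
    "ema_decay": 0.9996,
    "batch_size": 4,
    "learning_rate": 0.2,
    "num_gpus": 4,
}
```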
**Quantitative comparison.** We compared our approach with the following baselines: (1) Source-Only: FRCNN [23] is trained in a supervised manner on source domain data, and no adaptation is performed. (2) UDA Blending: all the source domain data is combined, and UDA methods [31, 15, 34, 24] are used to adapt FRCNN. (3) MSDA: adaptation of FRCNN from independent source datasets. This includes MSDA methods developed for classification [21, 38] and upgraded for OD, as well as MSDA methods developed for OD [35, 29]. (4) Oracle: Target-Only and All-Combined. In the Target-Only case, we performed supervised training of FRCNN only on labeled target data. In the All-Combined case, supervised training is performed on all source and target data combined. Note that some of the baseline results are taken from [35] and [29].
### Cross Time Adaptation
In this setting, images are collected at different times of the day, so domain shift is caused by changes in illumination. The performance of our method in this setting is evaluated on the BDD100K [36] dataset. The dataset consists of data collected at three different times - Daytime, Dusk/Dawn, and Night Time. In our experiments, Daytime and Night are used as the source domains, and Dusk/Dawn as the target domain. The source domains consist of 64,699 labeled images (36,728 daytime + 27,971 night). In the target domain, we have 5,027 unlabeled images that are used for training. The evaluation is done on 778 validation set images of Dusk/Dawn. The mAP on 10 classes is reported.
The mAP performance of our method is compared against the other approaches in Table 1. Class-wise AP is reported in Table 7 in the Appendix. It can be observed that the UDA methods that blend source datasets are able to increase the performance of the detector compared to the Source-Only baseline. However, this increase in performance is not large, as they disregard the inter-source domain shift. It is also interesting to note that two state-of-the-art MSDA classification methods [21, 38] perform worse than the Source-Only baseline. This degradation in performance shows the fundamental difference between MSDA methods for classification and detection and is a strong motivation for MSDA for OD. Our method improves on the performance of the best-performing MSDA method [29] by 5.5%. It can also be seen that the performance of our PMT method is much better than the Oracle Target-Only case (because of the small size of the target dataset), and it approaches the mAP of the Oracle All-Combined case.
### Cross Camera Adaptation
In this setting, the data are collected using different cameras, and domain shift is caused by changes in the camera's resolution and viewpoint. Here the BDD100K [36], Cityscapes [4], and Kitty [8] datasets are used. For our experiments, we used Cityscapes and Kitty as the source domains, while the Daytime domain of BDD100K was used as the target domain. For both training and evaluation, we only considered the images with car objects. This provides a source domain that consists of 9,515 (2,831 Cityscapes + 6,684 Kitty) labeled images. In the target domain, we have 36,728 unlabeled images used for training. The evaluation is done on 5,258 validation set images of Daytime. The AP is reported only on the car object category in Table 2.
Note that in this experiment there is only one object category (car), so we cannot use both components of our contrastive loss (Eq. 9). In this case, the prototype separation part of our contrastive loss was removed. It can be observed that the variation in performance is similar to Section 4.2. Our PMT outperforms the state-of-the-art methods, but the improvement over TRKP is smaller. We suppose this is due to the removal of the prototype separation part of our contrastive loss. In the ablation studies, we show that every loss component contributes to obtaining the best PMT performance.
### Extension to Mixed Domain Adaptation
In MSDA, the domain shift among source data is not always limited to one factor, so a setting with multiple domain-shift factors is considered for validation. As source domains, we considered the MS COCO [16], Cityscapes [4], and Synscapes [27] datasets, and the Daytime domain of the BDD100K dataset is the target domain. Among the source domains, the domain shift is mixed and there are more classes, making this a challenging scenario.
\begin{table}
\begin{tabular}{c l|c c c} \hline
**Setting** & **Method** & **D** & **N** & **D+N** \\ \hline \hline Source Only & FRCNN [23] & 30.4 & 25.0 & 28.9 \\ \hline \multirow{6}{*}{UDA Blending} & Strong-Weak [24] & 31.4 & 26.9 & 29.9 \\ & Graph Prototype [1] & 31.8 & 27.6 & 30.6 \\ & Cat. Regularization [31] & 31.2 & 28.4 & 30.2 \\ & UMT [5] & 33.8 & 21.6 & 33.5 \\ & Adaptive Teacher [15] & 34.9 & 27.8 & 34.6 \\ \hline \multirow{6}{*}{MSDA} & MDAN [38] & - & - & 27.6 \\ & M\({}^{3}\)SDA [21] & - & - & 26.5 \\ \cline{1-1} & DMSN [35] & - & - & 35.0 \\ \cline{1-1} & TRKP [29] & - & - & 39.8 \\ \cline{1-1} & **PMT(ours)** & - & - & **45.3** \\ \hline \multirow{2}{*}{Oracle} & Target-Only & - & - & 26.6 \\ \cline{1-1} & All-Combined & - & - & 45.6 \\ \hline \end{tabular}
\end{table}
Table 1: Detection AP of PMT compared against the baseline, UDA, MSDA, and oracle methods on BDD100K. Source domains are daytime (D) and night (N) subsets and the target is always Dusk/Dawn of BDD100K.
\begin{table}
\begin{tabular}{c l|c c c} \hline
**Setting** & **Method** & **C** & **K** & **C+K** \\ \hline \hline Source Only & FRCNN [23] & 44.6 & 28.6 & 43.2 \\ \hline \multirow{6}{*}{UDA Blending} & Strong-Weak [24] & 45.5 & 29.6 & 41.9 \\ & Cat. Regularization [31] & 46.5 & 30.8 & 43.6 \\ & UMT [5] & 47.5 & 35.4 & 47.0 \\ & Adaptive Teacher [15] & 49.8 & 40.1 & 48.4 \\ \hline \multirow{6}{*}{MSDA} & MDAN [38] & - & - & 43.2 \\ & M\({}^{3}\)SDA [21] & - & - & 44.1 \\ \cline{1-1} & DMSN [35] & - & - & 49.2 \\ \cline{1-1} & TRKP [29] & - & - & 58.4 \\ \cline{1-1} & **PMT(ours)** & - & - & **58.7** \\ \hline \multirow{2}{*}{Oracle} & Target-Only & - & - & 60.2 \\ \cline{1-1} & All-Combined & - & - & 69.7 \\ \hline \end{tabular}
\end{table}
Table 2: AP for the car class. Our proposed method PMT is compared against the baseline, UDA, MSDA, and oracle methods on the _Daytime_ domain of BDD100K dataset. C and K refer to Cityscapes and Kitty datasets.
Also, the number of source domains is increased from two to three. The source domains have 99,724 labeled images (2,975 Cityscapes + 71,749 MS COCO + 25,000 Synscapes), and the target domain has 36,728 unlabeled images. The evaluation is performed on 5,258 validation set images of Daytime. For training and evaluation, we only employ images containing the 7 object categories common to the datasets. From the results reported in Table 3, we can observe that the Source-Only setting performs better than the UDA Blending setting. This is because of the complex domain shift among the source data. When only Cityscapes and MS COCO are used as source domains, our method outperforms the previous best-performing method by 3.4%. After adding the Synscapes dataset, our method still outperforms the other methods and even surpasses the Oracle Target-Only result. This experiment further highlights the effectiveness of our method. Class-wise AP is reported in Table 8 in the Appendix.
### Ablation Studies
**Impact of the different components.** The cross-time adaptation setting is used in the ablation studies. The two main components of our method are the prototype network and the multi-class discriminator. To clearly disentangle the effect of the terms in the contrastive loss for prototypes, we consider them as two separate parts: the "prototype separation" part and the "prototype alignment" part. The prototype separation loss increases the distance between different object categories across domains, whereas the prototype alignment loss aligns the same object category across various domains. The results in Table 4 show that even without the discriminator and the prototypes, the performance is better than the Source-Only result of Table 1. This is due to the effectiveness of the mean-teacher framework in MSDA. By adding the discriminator to the network, performance is increased by 10.2%. This result is already better than the previous state-of-the-art MSDA methods for OD. Adding each of our prototype loss terms individually further increases the performance of the model. Note that the alignment loss performs better than the separation loss because the separation loss only discriminates between categories, which the vanilla FRCNN already does. Finally, when both prototype loss terms are combined, the performance is further improved.
### Parameter Growth with Number of Domains
One of the important advantages of not using domain-specific weights is the reduction in the number of parameters to learn. While existing MSDA detectors have a significant linear growth in parameters with the number of domains, the parameter growth of our method is negligible as the number of source domains increases. Tab. 5 illustrates this. DMSN [35] makes part of the feature extractor domain-specific as well, thus showing a significantly higher rate of growth compared to other methods. TRKP [29] makes only the detection heads domain-specific, so its growth rate is lower than that of DMSN. Our method has only domain-specific prototypes, which require fewer parameters per domain than the other approaches.
## 5 Conclusion
A commonly used architecture for MSDA is to learn domain-invariant and domain-specific parameters (for each source domain) to effectively adapt DL models from multiple source domains. In this paper, we proposed a method for simplifying MSDA where domain-specific information is retained using class prototypes for each domain. This avoids the need to train a domain-specific subnet for each source domain, simplifying the MSDA architecture significantly. The parameters of our model remain almost constant regardless of the number of source domains used, as only prototype vectors are needed for each domain. Thus, our model size is comparable to UDA methods, which perform adaptation from a single source. Experimental results
\begin{table}
\begin{tabular}{c c c|c} \hline \hline
**Multidomain** & **Prototype** & **Prototype** & \\
**Discriminator** & **Separation** & **Alignment** & **AP\({}_{50}\)** \\ \hline \hline & & & 31.7 \\ ✓ & & & 41.9 \\ ✓ & ✓ & & 43.0 \\ ✓ & & ✓ & 43.4 \\ ✓ & ✓ & ✓ & 44.6 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study on the components of our PMT method.
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline
**Setting** & **Method** & **C** & **C + M** & **C + M + S** \\ \hline \hline Source Only & FRCNN [23] & 23.4 & 29.7 & 30.9 \\ \multirow{2}{*}{UDA Blending} & Unbiased Teach. [18] & - & 18.5 & 25.1 \\ & Adaptive Teach. [15] & - & 22.9 & 29.6 \\ \hline \multirow{2}{*}{MSDA} & TRKP [29] & - & 35.3 & 37.1 \\ & **PMT(ours)** & - & **38.7** & **39.7** \\ \hline \multirow{2}{*}{ Oracle} & Target-Only & - & - & 38.6 \\ & All-Combined & - & 47.1 & 48.2 \\ \hline \end{tabular}
\end{table}
Table 3: mAP on 7 object categories of our PMT compared against baselines on the _Daytime_ domain of BDD100K. C, M, and S refer to Cityscapes, MS COCO, and Synscapes datasets.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{5}{c}{Number of source domains} \\ \cline{2-6} & 1 & 2 & 3 & 4 & 5 \\ \hline \hline DMSN [35] & 45.994 & 75.426 & 104.858 & 134.290 & 163.722 \\ TRKP [29] & 45.994 & 59.942 & 73.890 & 87.838 & 101.786 \\
**PMT (ours)** & 46.586 & 46.587 & 46.588 & 46.589 & 46.590 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Model parameter growth (in millions) as the number of source domains increases. While the parameters of DMSN [35] and TRKP [29] grow quickly with the number of source domains, the parameters of our method PMT remain almost constant because the increase is only due to the prototype vectors.
show that our proposed method can effectively exploit multiple domains and improve on the state-of-the-art for multi-source object detection.
|
2306.00140 | Genuinely nonabelian partial difference sets | Strongly regular graphs (SRGs) provide a fertile area of exploration in
algebraic combinatorics, integrating techniques in graph theory, linear
algebra, group theory, finite fields, finite geometry, and number theory. Of
particular interest are those SRGs with a large automorphism group. If an
automorphism group acts regularly (sharply transitively) on the vertices of the
graph, then we may identify the graph with a subset of the group, a partial
difference set (PDS), which allows us to apply techniques from group theory to
examine the graph. Much of the work over the past four decades has concentrated
on abelian PDSs using the powerful techniques of character theory. However,
little work has been done on nonabelian PDSs. In this paper we point out the
existence of \textit{genuinely nonabelian} PDSs, i.e., PDSs for parameter sets
where a nonabelian group is the only possible regular automorphism group. We
include methods for demonstrating that abelian PDSs are not possible for a
particular set of parameters or for a particular SRG. Four infinite families of
genuinely nonabelian PDSs are described, two of which -- one arising from
triangular graphs and one arising from Krein covers of complete graphs
constructed by Godsil \cite{Godsil_1992} -- are new. We also include a new
nonabelian PDS found by computer search and present some possible future
directions of research. | John Polhill, James Davis, Ken Smith, Eric Swartz | 2023-05-31T19:30:08Z | http://arxiv.org/abs/2306.00140v1 | # Genuinely nonabelian partial difference sets
###### Abstract
Strongly regular graphs (SRGs) provide a fertile area of exploration in algebraic combinatorics, integrating techniques in graph theory, linear algebra, group theory, finite fields, finite geometry, and number theory. Of particular interest are those SRGs with a large automorphism group. If an automorphism group acts regularly (sharply transitively) on the vertices of the graph, then we may identify the graph with a subset of the group, a partial difference set (PDS), which allows us to apply techniques from group theory to examine the graph. Much of the work over the past four decades has concentrated on abelian PDSs using the powerful techniques of character theory. However, little work has been done on nonabelian PDSs. In this paper we point out the existence of _genuinely nonabelian_ PDSs, i.e., PDSs for parameter sets where a nonabelian group is the only possible regular automorphism group. We include methods for demonstrating that abelian PDSs are not possible for a particular set of parameters or for a particular SRG. Four infinite families of genuinely nonabelian PDSs are described, two of which - one arising from triangular graphs and one arising from Krein covers of complete graphs constructed by Godsil [1] - are new. We also include a new nonabelian PDS found by computer search and present some possible future directions of research.
## 1 Motivation and Overview
"Strongly regular graphs stand on the cusp between the random and the highly structured." ([1])
Research into strongly regular graphs and partial difference sets is a rich vein within algebraic combinatorics, involving graph theory, linear algebra, group theory, finite field theory, and algebraic number theory. (For definitions of a strongly regular
graph and of a partial difference set, see Sections 2 and 3, respectively.) Constructions of PDSs correspond to projective two-weight codes [10], and PDSs also correspond to projective sets in projective geometries [14]. For an excellent survey about early results related to PDSs and the connections PDSs have to various combinatorial objects, see [15].
Most of the work on PDSs has focused on abelian groups [15], involving character theory, cyclotomy in finite fields, and computer searches for PDSs in high exponent groups [16]. We observe that many of the families of PDSs occur in \(p\)-groups, groups whose order is a power of a prime \(p\), and the vast majority of \(p\)-groups are nonabelian. Just as one example, it is possible (using the DifSets package [17] in GAP [1]) to find all \((64,28,12)\) difference sets. There are \(330159\) such difference sets giving \(105269\) nonisomorphic symmetric designs. Of these, only \(748\) designs have an abelian group acting sharply transitively on the points (this is the first time this observation has been made in print). This implies that only \(0.7\%\) of \((64,28,12)\) symmetric designs arising from difference sets are constructible via a difference set in an abelian group. The conclusion we draw is that nonabelian groups are likely to contain vastly more constructions of interesting combinatorial objects, and we are only now starting to develop techniques to explore this nonabelian universe.
The purpose of this paper is to summarize the state of current knowledge regarding PDSs in nonabelian groups and how it relates to what is known in the abelian case, present some new results, and provide a number of possible future directions of interest. The paper is organized as follows. Section 2 introduces SRGs, gives a few important examples for the rest of the paper, and includes basic feasibility conditions for a SRG to exist. Section 3 transitions to the main topic of this paper, namely PDSs and more generally automorphism groups of SRGs. Section 4 defines "genuinely nonabelian" PDSs, and we provide a few examples of this concept. The first and last of these (Theorems 4.8 and 4.11, respectively) are new infinite families of genuinely nonabelian PDSs, while the others are known examples that can be shown to have parameters that will not support an abelian PDS. Section 5 sketches an outline of the techniques that have been used in exploring nonabelian PDSs, and we share some questions for further study. Finally, Appendix A contains explicit details of a new genuinely nonabelian PDS with parameters \((512,133,24,38)\).
## 2 Introduction to Strongly Regular Graphs
A graph is _regular_ if there is an integer \(k\) such that the degree of every vertex is \(k\). A regular graph with \(v\) vertices and degree \(k\) is _strongly regular_ with parameters \((v,k,\lambda,\mu)\) if the number of paths of length two between two vertices \(x\) and \(y\) is dependent only on whether or not \(x\) is adjacent to \(y\). If \(x\) is adjacent to \(y\) (\(x\sim y\)), then \(\lambda\) represents the number of paths of length two from \(x\) to \(y\); if \(x\) is _not_ adjacent to \(y\) (\(x\not\sim y\)) then \(\mu\) represents the number of paths of length two from \(x\) to \(y\). Simple
inclusion-exclusion shows that the complement of a \((v,k,\lambda,\mu)\) SRG is a SRG with parameters \((v,v-k-1,v-2k+\mu-2,v-2k+\lambda)\).
Strongly regular graphs were introduced by R. C. Bose in [10]. In that inaugural paper, Bose also introduced the concept of a partial geometry \(\mathrm{pg}(r,k,t)\) and showed that the incidence relation on a partial geometry gave a SRG on the points of the geometry. Since that introduction, numerous examples of SRGs have been discovered [1], often expressed in terms of an underlying geometry over a finite field. In particular, SRGs arise as the point graphs of partial geometries and generalized quadrangles.
The theory of strongly regular graphs is well developed. See, for example, [13, Chapter 21, 261-282], [100, Sections 4.3-4.5, 82-100], [111, Chapter VII.11, 852-868], and [112, Sections 9.4 & 9.5, 118-125], among others.

**Example 2.2**.: The _triangular graph_ \(T_{n}\) has as vertices the \(2\)-element subsets of an \(n\)-element set, with two vertices adjacent if and only if the corresponding subsets intersect.

**Example 2.3**.: Let \(F\) be a finite field with \(4\mu+1\) elements (so \(|F|\equiv 1\pmod{4}\)), and join two field elements by an edge if and only if their difference is a nonzero square in \(F\). The resulting graph is strongly regular with parameters \((4\mu+1,2\mu,\mu-1,\mu).\) Here the group \(G=(F,+)\) acting sharply transitively on the graph is the additive group of the field. This graph, known as the _Paley graph_[12], is isomorphic to its own complement.
**Example 2.4**.: There are two interesting infinite families of graphs that have the designation "Latin Square." Both families have \(m^{2}\) vertices. The (positive) Latin square graphs \(PL_{m}(r)\) may be created from Latin squares and have parameters \(v=m^{2},k=r(m-1),\lambda=r^{2}-3r+m,\mu=r(r-1).\) This family is well known and graphs exist for most parameters. The second infinite family of graphs are the negative Latin square graphs, \(NL_{m}(r);\) these graphs have parameters \(v=m^{2},k=r(m+1),\lambda=r^{2}+3r-m,\mu=r(r+1).\) The existence of \(NL_{m}(r)\) is rarer than in the first parameter set, but graphs do arise in some cases in which a rank 3 group acts on an underlying field of order \(m^{2}\) and may give rise to 2-weight codes. (See [21] for an early exploration of these graphs.)
### The spectrum of a strongly regular graph
The adjacency matrix \(A\) of a graph is a \(v\times v\) matrix whose rows and columns are indexed by the vertices, and the entry in the \((v_{i},v_{j})\) position is 1 if \(v_{i}\) and \(v_{j}\) are adjacent, 0 otherwise. The adjacency matrix \(A\) of a SRG satisfies the equation
\[A^{2}=kI+\lambda A+\mu A^{c},\]
where \(A^{c}=J-I-A\) is the adjacency matrix of the complementary graph. (The matrix \(J\) here is a square matrix of all ones.) Substituting \(J-I-A\) for \(A^{c}\) and collecting like terms, we can rewrite this as
\[A^{2}-(\lambda-\mu)A-(k-\mu)I=\mu J.\]
The \(v\times 1\) column vector \(\vec{1}\) is an eigenvector of \(A\) with eigenvalue \(k\). If the graph \(\Gamma\) is connected then the eigenspace corresponding to eigenvalue \(k\) has dimension 1. The eigenvectors with different eigenvalues will be orthogonal (since \(A\) is symmetric) and so any eigenvector with an eigenvalue distinct from \(k\) will be a root of the quadratic polynomial
\[F(x):=x^{2}-(\lambda-\mu)x-(k-\mu).\]
The discriminant of this quadratic polynomial is
\[\Delta:=(\lambda-\mu)^{2}+4(k-\mu),\]
and so the two roots of this polynomial are
\[\theta_{1},\theta_{2}:=\frac{1}{2}(\lambda-\mu\pm\sqrt{\Delta}).\]
We will later need to think in terms of sum and difference of the nontrivial eigenvalues: \(\theta_{1}+\theta_{2}=\lambda-\mu;\) and \(\theta_{1}-\theta_{2}=\sqrt{\Delta}.\) The difference between the nontrivial eigenvalues, \(\sqrt{\Delta},\) will be an especially important parameter in the study of PDSs.
We summarize this information as a theorem.
**Theorem 2.5**.: _The eigenvalues of the adjacency matrix of a \((v,k,\lambda,\mu)\) SRG are \(k,\theta_{1},\theta_{2}\) for \(F(\theta_{1})=F(\theta_{2})=0\)._
We can apply Theorem 2.5 to the triangular graph found in Example 2.2.
**Example 2.6**.: A triangular graph \(T_{n}\) has parameters
\[\left(\frac{n(n-1)}{2},2(n-2),n-2,4\right),\]
and so \(\Delta=(n-2)^{2},\theta_{1}+\theta_{2}=n-6,\) and \(\theta_{1}-\theta_{2}=n-2.\) This implies \(\theta_{1}=n-4,\theta_{2}=-2.\)
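The computation in Example 2.6 is easy to reproduce numerically; the following short Python sketch (illustrative only) evaluates the eigenvalues of Theorem 2.5 directly from the parameters.

```python
def srg_eigenvalues(v, k, lam, mu):
    """Nontrivial eigenvalues theta_1 >= theta_2 of a (v, k, lam, mu) SRG (Theorem 2.5)."""
    delta = (lam - mu) ** 2 + 4 * (k - mu)
    root = delta ** 0.5
    return (lam - mu + root) / 2, (lam - mu - root) / 2

# Triangular graph T_7 with parameters (21, 10, 5, 4): theta_1 = n - 4 = 3, theta_2 = -2.
print(srg_eigenvalues(21, 10, 5, 4))   # (3.0, -2.0)
```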
### The Multiplicity Condition
The eigenvalues of (the adjacency matrix of) a SRG lead us to two more feasibility conditions. If we change the basis of our vector space \(V\) from the standard basis to a basis of eigenvectors, then we essentially diagonalize the matrix \(A\), with eigenvalues on the diagonal. Let \(m_{i}\) represent the number of times the eigenvalue \(\theta_{i}\) occurs in this diagonal matrix, \(i\in\{1,2\}\); since \(A\) is symmetric, all of the eigenvalues are real and hence we must have
\[1+m_{1}+m_{2}=v. \tag{2.1}\]
The trace of a matrix is the sum of the elements on the diagonal, and the diagonal entries of \(A\) are all zero; hence, \(Tr(A)=0\), implying
\[k+m_{1}\theta_{1}+m_{2}\theta_{2}=0. \tag{2.2}\]
Equations (2.1) and (2.2) place strong conditions on the positive integers \(m_{1}\) and \(m_{2}\) as shown when we apply these equations to the triangular graph.
**Example 2.7**.: These two equations for a triangular graph \(T_{n}\) give \(m_{1}+m_{2}=v-1=(n^{2}-n-2)/2=(n+1)(n-2)/2\) and \(m_{1}(n-4)+m_{2}(-2)=-2(n-2).\) Solving for \(m_{1}\) and \(m_{2}\) we have
\[m_{1}=n-1,\,m_{2}=\frac{n(n-3)}{2}.\]
Suppose that \(\Delta=(\lambda-\mu)^{2}+4(k-\mu)\) is not the square of an integer. Then \(\sqrt{\Delta}\) is not rational and so the trace of the adjacency matrix is
\[k+\frac{1}{2}m_{1}(\lambda-\mu+\sqrt{\Delta})+\frac{1}{2}m_{2}(\lambda-\mu- \sqrt{\Delta})=0.\]
Thus, \((m_{1}-m_{2})\sqrt{\Delta}\) must be rational. This can occur only if \(m_{1}-m_{2}=0.\) This forces \(m_{1}=m_{2}=(v-1)/2\) and our graph is a conference graph (defined to be a graph with the same parameters as a Paley graph, namely \((4\mu+1,2\mu,\mu-1,\mu)\)). This gives our second feasibility condition:
**Lemma 2.8**.: _If \(\Gamma\) is a \((v,k,\lambda,\mu)\) SRG such that \(\Delta=(\lambda-\mu)^{2}+4(k-\mu)\) is not a perfect square then \(\Gamma\) is a conference graph._
This case is fairly restrictive. The four parameters are stated exactly, in terms of the single parameter \(\mu.\) Here the eigenvalues are
\[\theta_{i}=\frac{-1+(-1)^{i}\sqrt{4\mu+1}}{2}=\frac{-1+(-1)^{i}\sqrt{v}}{2},\]
and their multiplicities are the same:
\[m_{1}=m_{2}=\frac{v-1}{2}.\]
A SRG where \(\Delta\) is the square of an integer is called a Type II SRG. The multiplicity of an eigenvalue
\[\theta_{1}=\frac{(\lambda-\mu)+\sqrt{\Delta}}{2}\]
is
\[m_{1}=\frac{1}{2}\left((v-1)-\frac{2k+(v-1)(\lambda-\mu)}{\sqrt{\Delta}} \right).\]
This gives our third feasibility condition.
**Lemma 2.9**.: _If a \((v,k,\lambda,\mu)\) SRG satisfies \(\Delta=(\lambda-\mu)^{2}+4(k-\mu)\) is a perfect square then_
\[m_{1}=\frac{1}{2}\left((v-1)-\frac{2k+(v-1)(\lambda-\mu)}{\sqrt{\Delta}}\right)\]
_is a nonnegative integer._
We will use Lemma 2.9 to rule out certain parameter sets from having an abelian PDS.
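The integrality conditions of Lemmas 2.8 and 2.9 translate into a simple numerical test; the Python sketch below (with illustrative names) returns the multiplicities when they are feasible and reproduces Example 2.7.

```python
def srg_multiplicities(v, k, lam, mu):
    """Feasibility test from Lemmas 2.8 and 2.9: return the eigenvalue
    multiplicities (m1, m2) if they are nonnegative integers, else None."""
    delta = (lam - mu) ** 2 + 4 * (k - mu)
    r = round(delta ** 0.5)
    if r * r != delta:
        # Type I: Lemma 2.8 forces a conference graph (4*mu+1, 2*mu, mu-1, mu)
        if (v, k, lam) == (4 * mu + 1, 2 * mu, mu - 1):
            return (v - 1) // 2, (v - 1) // 2
        return None
    num = 2 * k + (v - 1) * (lam - mu)
    if num % r or ((v - 1) - num // r) % 2:
        return None
    m1 = ((v - 1) - num // r) // 2
    m2 = v - 1 - m1
    return (m1, m2) if m1 >= 0 and m2 >= 0 else None

# Triangular graph T_7 = (21, 10, 5, 4): Example 2.7 gives m1 = n - 1 = 6, m2 = n(n-3)/2 = 14.
print(srg_multiplicities(21, 10, 5, 4))   # (6, 14)
```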
## 3 Automorphisms of SRGs, Cayley Graphs, and PDSs
The history of SRGs is intimately connected with the study of the symmetries of such graphs. A number of the sporadic simple groups (including the Higman-Sims group) were discovered as rank three permutation groups. (A _rank three permutation group_ is a group acting transitively on a set such that the stabilizer of any element has exactly three orbits.) Indeed, the constructions of simple groups led to an analysis of rank three permutation groups by Higman ([11], [12], [13]). Rank three permutation groups give SRGs in a natural way: of the three orbits the stabilizer of a point has, one is the point itself, another corresponds to the neighbors of the point in the corresponding SRG, and the last orbit corresponds to the set of all non-neighbors.
As with any combinatorial object, the symmetries (automorphisms) of SRGs are of interest. The early constructions of SRGs, as geometric objects over a finite field, naturally gave rise to large automorphism groups. If a SRG has an automorphism group acting regularly (that is, sharply transitively) on vertices then we may construct the object as a _Cayley graph_. Let \(G\) be a finite group written multiplicatively and \(S\subseteq G\) a subset with the property that \(1\not\in S\) and \(S=S^{(-1)},\) that is, \(S\) is closed under the property of taking inverses. We may then create a graph \(\operatorname{Cay}(G,S)\) by agreeing that the vertex set is labeled by the set \(G,\) and two vertices \(x,y\in G\) are adjacent (denoted \(x\sim y\)) if and only if \(xy^{-1}\in S.\) The graph \(\operatorname{Cay}(G,S)\) is the Cayley graph with group \(G\) and generating set \(S.\)
The edge-defining property \(xy^{-1}\in S\) can be expressed by saying that \(x\sim y\) if and only if there is an element \(s\in S\) such that \(x=sy\). From this viewpoint, premultiplication by members of \(S\) maps vertices (such as \(y\)) to adjacent vertices (\(sy\)). Thus we can see that the set \(S\) represents the neighborhood of the identity element and the size of the set \(S\) is clearly the degree of the graph. Thus a Cayley graph is regular. However, most Cayley graphs are not SRGs: the Paley graphs (squares in a finite field \(F\) of order congruent to \(1\bmod 4\)) and the Lattice graphs (\(\{(x,0):x\neq 0\}\cup\{(0,x):x\neq 0\}\) in \(\mathbb{Z}_{n}\times\mathbb{Z}_{n}\)) are two Cayley graphs that are SRGs.
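Building the adjacency matrix of \(\operatorname{Cay}(G,S)\) and testing strong regularity is straightforward once the group operations are available; a small Python sketch is given below, where `mul` and `inv` are user-supplied group multiplication and inversion functions and the element encoding is up to the user.

```python
import numpy as np

def cayley_adjacency(group, S, mul, inv):
    """Adjacency matrix of Cay(G, S): x ~ y iff x * y^{-1} lies in S."""
    idx = {g: n for n, g in enumerate(group)}
    A = np.zeros((len(group), len(group)), dtype=int)
    S = set(S)
    for x in group:
        for y in group:
            if x != y and mul(x, inv(y)) in S:
                A[idx[x], idx[y]] = 1
    return A

def is_srg(A, k, lam, mu):
    """Check the matrix identity A^2 = k I + lam A + mu (J - I - A) from Section 2."""
    v = A.shape[0]
    I, J = np.eye(v, dtype=int), np.ones((v, v), dtype=int)
    return np.array_equal(A @ A, k * I + lam * A + mu * (J - I - A))
```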
If \(\operatorname{Cay}(G,S)\) is a Cayley graph, it is common to apply adjectives describing the group to the graph. Thus we will speak of an _abelian_ Cayley graph if \(G\) is abelian, and so on.
Let \(G\) be a finite group of order \(v\) and \(S\) a subset of \(G\). For \(g\in G\), let \(\lambda_{g}\) be the left regular representation. That is, order the elements of \(G\) and construct the \(v\times v\)\((0,1)\)-matrix \(\lambda_{g}\) where the \((i,j)\) entry of \(\lambda_{g}\) is \(1\) if and only if \(g(g_{j})=g_{i}.\) Each matrix \(\lambda_{g}\) is a permutation matrix describing how left-multiplication by \(g\) permutes the elements of \(G\). If we define \(\Lambda(S):=\sum_{s\in S}\lambda_{s}\), then \(\Lambda(S)\) is a \((0,1)\)-matrix with row and column sum equal to \(|S|\).
Suppose we have a graph on \(v\) vertices with adjacency matrix \(A\). Suppose, furthermore, that a group \(G\) acts sharply transitively on the vertex set of the graph. Choose a vertex \(v_{0}\) and let \(N(v_{0})\) be the neighborhood of \(v_{0}\). Define
\[S:=\{g\in G:g(v_{0})\in N(v_{0})\}.\]
Then \(S\) is a subset of \(G\) with the property that \(\Lambda(S)\) is an adjacency matrix for the graph. If the underlying graph is strongly regular then \(\Lambda(S)\), being an adjacency matrix for the underlying graph, has the same spectrum.
We now turn to a different perspective on a Cayley graph that is a SRG, introduced by S. L. Ma in [14]. Let \(G\) be a group and \(X=\sum_{g\in G}a_{g}g\) an element of the group ring \(\mathbb{Z}[G].\) Let \(m\) be an integer. We define \(X^{(m)}:=\sum_{g\in G}a_{g}g^{m}.\) (Note the parentheses around the integer \(m\), so that we distinguish \(X^{(m)}\) from \(X^{m}.\)) We will often follow the convention that a (sub)set \(S\) of a group \(G\) will correspond to the group ring element \(S=\sum_{s\in S}s\); whether \(S\) denotes the set or the group ring element will be clear from context. We note
that the group ring element \(S\) will satisfy \(S^{(-1)}=S\), and that leads to the following definition.
**Definition 3.1**.: A \((v,k,\lambda,\mu)\)_partial difference set_ (PDS) in a group \(G\) is a set \(S\) such that, in the group ring \(\mathbb{Z}[G]\),
\[SG=kG,\ S^{2}=k+\lambda S+\mu(G-S-1).\]
The discussion, above, gives the following result:
**Lemma 3.2**.: _A \((v,k,\lambda,\mu)\) PDS \(S\) is equivalent to a \((v,k,\lambda,\mu)\) Cayley graph \(\operatorname{Cay}(G,S)\)._
Indeed, this shows that a \((v,k,\lambda,\mu)\) PDS \(S\), when considered as a subset of a group \(G\), requires \(|G|=v\), \(|S|=k\), and every nonidentity element \(g\in G\) can be written in either \(\lambda\) or \(\mu\) different ways (depending on whether or not \(g\) is in \(S\)) as \(xy^{-1}\), where \(x,y\in S\). Furthermore, we typically require \(1\notin S\) so that the corresponding SRG does not contain loops. Moreover, we say that a PDS is a _Type II PDS_ if the corresponding SRG is a Type II SRG.
One tool that has been used in the abelian case to construct PDSs is known as _multipliers_. If \(\phi\) is a group automorphism on a group \(G\) and \(S\subset G\) is a \((v,k,\lambda,\mu)\) PDS in \(G\), then a simple computation shows that \(\phi(S)\) is also a \((v,k,\lambda,\mu)\) PDS in \(G\). If the automorphism takes the form \(\phi(g)=g^{m}\) for an integer \(m\), then we call the automorphism a _multiplier_. A powerful result in _abelian_ PDSs is Ma's Multiplier Theorem [12]:
**Lemma 3.3** (Ma's Multiplier Theorem).: _If \(S\) is a Type II PDS in an abelian group \(G\) and if \(m\) is an integer relatively prime to the order of \(G\), then \(S^{(m)}=S.\)_
We will observe that this lemma does not necessarily hold in nonabelian groups: namely, we will have explicit examples in the next section so that \(S\) is a _nonabelian_ PDS and \(S^{(m)}\neq S\).
## 4 Genuinely Nonabelian PDSs
We now come to the key definition in the paper.
**Definition 4.1**.: We will call a PDS _genuinely nonabelian_ if the underlying strongly regular graph \(\operatorname{Cay}(G,S)\) has no abelian automorphism group acting sharply transitively on the vertices. We will call a set of parameters \((v,k,\lambda,\mu)\)_genuinely nonabelian_ if there are nonabelian PDSs with those parameters but there are no abelian PDSs.
The first known example of a genuinely nonabelian combinatorial object was the Hadamard difference set with parameters \((100,45,20)\) found in [13]. By removing the identity, we get a nonabelian PDS with parameters \((100,44,18,20)\). It had
been proven previously that these objects did not exist in abelian groups. Similarly, nonabelian PDSs found in [10] have parameters \((100,22,0,6)\), \((100,36,14,12)\), \((100,45,20,20)\), and \((100,44,18,20)\), and there are no abelian PDSs with these parameters.
In order to show that a set of PDS parameters is genuinely nonabelian, we need to develop techniques for showing that no abelian PDS can exist with those parameters. The first technique requires that we define a character of an abelian group \(G\): the map \(\chi\) from \(G\) to the multiplicative group \(\mathbb{C}^{*}\) of nonzero complex numbers is called a character if \(\chi\) is a homomorphism. The set of all characters of \(G\) is denoted \(G^{*}\). The following is well known.
**Lemma 4.2**.: _If \(G\) is an abelian group, then \(G^{*}\), under the operation of pointwise multiplication, is a group isomorphic to \(G\)._
Just as we can look for subsets of \(G\) that satisfy the conditions of being a PDS, we can look for subsets of \(G^{*}\) that are PDSs. The following result (due to Bridges and Mena [1] and Delsarte [1]) identifies a subset of \(G^{*}\) that will be a PDS.
**Theorem 4.3**.: _Let \(S\) be a PDS in an abelian group \(G\) and fix a nontrivial eigenvalue \(\theta\) of \(S\). Then, \(S^{*}=\{\chi\in G^{*}:\chi(S)=\theta\}\) is a PDS in \(G^{*}\)._
The parameters of the PDS from Theorem 4.3 are a bit messy; we refer the interested reader to [10, Theorem 3.4] for full details. We do note, however, that \(|S^{*}|\) in Theorem 4.3 is the multiplicity of the eigenvalue \(\theta\). We also know from Lemma 4.2 that \(|G^{*}|=|G|\). Thus, we can consider the feasibility conditions from Section 2 to determine whether it is possible to have a PDS with those two parameters. If not, then it will be impossible to construct a PDS \(S\) in the abelian group \(G\). Another related result that can be used to rule out abelian PDSs in some parameters where a nonabelian PDS might still exist is due to Ma [10].
**Theorem 4.4**.: _If \(S\) is a PDS in an abelian group \(G\) and_
\[S^{*}:=\left\{\chi\in G^{*}:\chi(S)=\frac{(\lambda-\mu)+\sqrt{\Delta}}{2} \right\},\]
_then \(\sqrt{\Delta^{*}}=v/\sqrt{\Delta}.\) In particular, if \(S\) is a Type II PDS in an abelian group then \(v/\sqrt{\Delta}\) is an integer._
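Theorem 4.4 gives an easily automated test for ruling out abelian Type II PDSs; the following Python sketch (illustrative only) applies it to two parameter sets that appear later in this section.

```python
def abelian_type_ii_excluded(v, k, lam, mu):
    """Theorem 4.4 test: True when Delta is a perfect square but sqrt(Delta)
    does not divide v, so no abelian (Type II) PDS can have these parameters."""
    delta = (lam - mu) ** 2 + 4 * (k - mu)
    r = round(delta ** 0.5)
    return r * r == delta and v % r != 0

print(abelian_type_ii_excluded(21, 10, 5, 4))   # True: sqrt(Delta) = 5 does not divide 21
print(abelian_type_ii_excluded(27, 10, 1, 5))   # True: sqrt(Delta) = 6 does not divide 27
```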
We now apply these criteria for abelian PDSs to three particular sets of parameters. Later in this section, we will show that there are nonabelian PDSs with these parameters, providing new examples of genuinely nonabelian sets of parameters for PDSs.
**Corollary 4.5**.: _If \(n>4\), then there is no abelian \((\frac{n(n-1)}{2},2(n-2),n-2,4)\) PDS (these are the parameters for \(T_{n}\))._
Proof.: The parameters of \(T_{n}\) have \(v=\frac{n(n-1)}{2}\) and \(\Delta=(n-2)^{2}.\) If \(n>4\) then
\[\frac{v}{\sqrt{\Delta}}=\frac{n(n-1)}{2(n-2)}=\frac{1}{2}n+\frac{1}{2}+\frac{1}{n-2},\]
which is not an integer if \(n>4\).
**Corollary 4.6**.: _Let \(q\) be a prime power and \(r<q+1\) be an integer dividing \(q+1\). A PDS with parameters_
\[\left(q^{3},(q-1)\left(\frac{(q+1)^{2}}{r}-q\right),r\left(\frac{q+1}{r}-1 \right)^{3}+r-3,\left(\frac{q+1}{r}-1\right)\left(\frac{(q+1)^{2}}{r}-q\right)\right)\]
_is genuinely nonabelian._
Proof.: In this case, \(\sqrt{\Delta}=q(q+1)/r\). Since \((q+1)/r>1\), \(\sqrt{\Delta}\) does not divide \(v=q^{3}\). The result follows from Theorem 4.4.
There are known to be SRGs with the parameters listed in Corollary 4.6 by the work of Godsil [1, 5.3 Lemma]. We next highlight one specific instance of Corollary 4.6; as we will see later in this sections, PDSs are known to exist for these parameters.
**Corollary 4.7**.: _Let \(S\) be a PDS with parameters \((q^{3},q^{2}+q-2,q-2,q+2)\). If \(q\) is odd then \(S\) cannot be an abelian PDS._
Proof.: This follows from setting \(r=(q+1)/2\) in Corollary 4.6.
As our first example of a genuinely nonabelian set of parameters, the following construction of a PDS in a nonabelian group occurs in a set of parameters that Corollary 4.5 shows cannot have an abelian PDS. This PDS construction is new.
**Theorem 4.8**.: _Let \(q=p^{r}\) be a prime power congruent to \(3\) mod \(4\). Then we may represent \(T_{q}\) as a Cayley graph in the semidirect product \(C_{p}^{r}\rtimes C_{t}\) where \(t=(q-1)/2\)._
Proof.: We prove this for \(r=1\): the general case is similar. Let \(V\) be the set of pairs (sets of size \(2\)) from \(\mathbb{Z}_{p}\). Define \(\sigma:V\to V\) by \(\sigma(\{a,b\}):=\{a+1,b+1\}\) where addition is done in \(\mathbb{Z}_{p}\). The function \(\sigma\) is a permutation of \(V\) of order \(p\).
Let \(g\) be a primitive root of \(p\) and define \(m:=g^{2}\in\mathbb{Z}_{p}^{*}.\) The element \(m\) generates a (multiplicative) subgroup, the quadratic residues of \(p\), of order \(t\), index \(2\), in \(\mathbb{Z}_{p}^{*}\). Define \(\tau:V\to V\) by \(\tau(\{a,b\}):=\{ma,mb\}\) where multiplication is done in \(\mathbb{Z}_{p}\). The function \(\tau\) is a permutation of \(V\) of order \(t=(p-1)/2\).
We compute \(\tau\sigma\tau^{-1}\):
\[\tau\sigma\tau^{-1}(\{a,b\}) =\tau\sigma(\{m^{-1}a,m^{-1}b\})=\tau(\{m^{-1}a+1,m^{-1}b+1\})\] \[=\{a+m,b+m\}=\sigma^{m}(\{a,b\}).\]
Therefore,
\[\tau\sigma\tau^{-1}=\sigma^{m}\]
Thus \(G:=\langle\sigma,\tau\rangle\cong C_{p}\rtimes_{m}C_{t}\) is a nonabelian group of order \(pt\) and all its elements may be described by \(\sigma^{i}\tau^{j}\) as \(i\) ranges through \(\{0,1,2..,p-1\}\) and \(j\) ranges through \(\{0,1,2..,t-1\}\).
What is the orbit of \(\{0,1\}\) under the group \(G\)? The powers of \(\tau\) map \(\{0,1\}\) to \(\{0,m^{j}\}\) for \(j\in\{1,2,3,...,t-1\}.\) The powers of \(\sigma\) then map these elements to \(\{a,a+m^{j}\}\) for \(a\in\{0,1,2,...,p-1\}.\) The set

\[S_{+}:=\{\{a,a+m^{j}\}:a\in\{0,1,2,...,p-1\},j\in\{1,2,3,...,t-1\}\}\]
is half of the vertices of \(T_{p}\) and is a subset of the orbit of \(\{0,1\}.\) Now \(\sigma^{-1}(\{0,1\})=\{0,-1\},\) and the orbit of \(\{0,-1\}\) includes
\[S_{-}:=\left\{\{a,a-m^{j}\}:a\in\{0,1,2,...,p-1\},j\in\{1,2,3,...,t-1\}\right\}.\]
If \(-1\) is a quadratic residue of \(p\) then \(-1=m^{j}\) for some integer \(j\) and these two sets are the same. But since \(-1\) is _not_ a quadratic residue we have that \(S_{+}\) and \(S_{-}\) are disjoint and \(S_{+}\cup S_{-}=T_{p}\). In this case, \(G=\langle\sigma,\tau\rangle\) is transitive on the vertices of \(T_{p}\). Since \(G\) has the same order as \(T_{p}\) and is transitive on \(T_{p}\) then \(G\) acts regularly on the graph \(T_{p}.\) Thus we have constructed a PDS in \(G.\)
**Remark 4.9**.: We can explicitly write out a PDS when \(G\cong C_{p}\rtimes_{m}C_{\frac{p-1}{2}}\). Set
\[N:=\{\{0,a\}:a=2,...,p-1\}\cup\{\{1,b\}:b=2,...,p-1\},\]
the neighborhood of \(\{0,1\},\) and define
\[S:=\{g\in G:g(\{0,1\})\in N\}.\]
We observe that since \(\{1,2\}\in N\) and \(\{0,-1\}\in N\) then \(\{\sigma,\sigma^{-1}\}\subseteq S.\) Set \(T:=\{\tau^{j}:j\in\{1,2,...,p-1\}\}.\) Since \(\{0,m^{j}\}\in N\) for all \(j\) then \(T\subseteq S.\) Similarly, since \(\{0,-m^{j}\}\in N\) for all \(j\) then \(T\sigma^{-1}\subseteq S.\) The remaining elements of \(S\) can be found by considering elements of the form \(\{1,1+m^{j}\}\) and \(\{1,1-m^{j}\}\) as \(m^{j}\) varies across the quadratic residues of \(p\). As the sets \(\{1+m^{j}:j\in\{1,2,...,p-1\}\}\) and \(\{1-m^{j}:j\in\{1,2,...,p-1\}\}\) partition the nonzero elements of \(\mathbb{Z}_{p}\) and are a subset of \(N\) then \(\sigma T,\sigma T\sigma^{-1}\subseteq S.\) Thus,
\[S=\{\sigma,\sigma^{-1}\}\cup T\cup T\sigma^{-1}\cup\sigma T\cup\sigma T\sigma^ {-1}.\]
(Note that the sets \(T\) and \(\sigma T\sigma^{-1}\) must be disjoint, and this is possible _only_ if \(G\) is nonabelian!) If we wish to write all the elements of \(G\) in the form \(\sigma^{i}\tau^{j}\) then
\[S=\{\sigma,\sigma^{-1}\}\cup T\cup\sigma^{-m}T\cup\sigma T\cup\sigma^{-m+1}T.\]
We further note that the requirement that \(q\) is \(3\bmod 4\) is necessary in Theorem 4.8. For example, suppose \(p=5\). Here \(\tau\sigma\tau^{-1}=\sigma^{4}\) and \(m=4\). Then
\[\sigma=(01\ 12\ 23\ 34\ 04)(02\ 13\ 24\ 03\ 14)\ \text{and}\ \tau=(01\ 04)(02\ 03)(12\ 34)(13\ 24)(14)(23).\]
The orbit of \(01\) is \(\{01,12,23,34,04\}\) and the orbit of \(02\) is \(\{02,13,24,03,14\}\). In this case there is no PDS since these permutations are not transitive on the graph.
We also note that exploration in GAP [1] using the Algebraic Graph Theory package [1] shows that \(T_{n}\) is _not_ a Cayley graph if \(n\in\{5,6,8,9,10,12,13,14,15\}\). (The integer \(15\) required considerable work.) One might suspect that the only time \(T_{n}\) has a regular automorphism group is when \(n\) is a prime power congruent to \(3\mod 4\).
**Example 4.10**.: As an example of Theorem 4.8, we consider \(T_{7}\). The permutation \(\sigma\) in the proof has cycle structure
\[\sigma=(01\ 12\ 23\ 34\ 45\ 56\ 06)(02\ 13\ 24\ 35\ 46\ 05\ 16)(03\ 14\ 25\ 36\ 04\ 15\ 26),\]
and the permutation \(\tau\) has cycle structure
\[\tau=(01\ 02\ 04)(03\ 06\ 05)(12\ 24\ 14)(13\ 26\ 45)(15\ 23\ 46)(16\ 25\ 34)(35\ 36\ 56).\]
In this case, a \((21,10,5,4)\) PDS in \(\langle\sigma,\tau\rangle\cong C_{7}\rtimes_{2}C_{3}\) is
\[S=\{\sigma,\sigma^{6},\ \tau,\tau^{2},\ \sigma\tau,\sigma\tau^{2},\ \tau\sigma^{6},\tau^{2}\sigma^{6},\ \sigma\tau\sigma^{6},\sigma\tau^{2}\sigma^{6}\}\]
\[=\{\sigma,\sigma^{6},\tau,\tau^{2},\sigma\tau,\sigma\tau^{2},\sigma^{5}\tau, \sigma^{3}\tau^{2},\sigma^{6}\tau,\sigma^{4}\tau^{2}\}.\]
We note that this PDS does not satisfy Ma's Multiplier Theorem (Lemma 3.3): while \(2\) is relatively prime to \(|\langle\sigma,\tau\rangle|=21\), \(\sigma\in S\) whereas \(\sigma^{2}\notin S\).
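The PDS of Example 4.10 can be verified directly by counting differences in the group ring sense; the following Python sketch encodes \(\sigma^{i}\tau^{j}\) as the pair \((i,j)\) and checks that every nonidentity element of \(G\) is covered either \(\lambda=5\) or \(\mu=4\) times.

```python
from itertools import product
from collections import Counter

# Elements sigma^i tau^j of the semidirect product C_7 : C_3 with tau sigma tau^{-1} = sigma^2,
# stored as exponent pairs (i, j).
P, T, M = 7, 3, 2

def mul(a, b):
    (i1, j1), (i2, j2) = a, b
    return ((i1 + i2 * pow(M, j1, P)) % P, (j1 + j2) % T)

def inv(a):
    i, j = a
    return ((-i * pow(M, (T - j) % T, P)) % P, (T - j) % T)

def difference_counts(group, S):
    """Count how often each nonidentity element arises as x * y^{-1} with x, y in S."""
    counts = Counter(mul(x, inv(y)) for x, y in product(S, S) if x != y)
    inside = {counts[s] for s in S}
    outside = {counts[g] for g in group if g != (0, 0) and g not in S}
    return inside, outside

G = [(i, j) for i in range(P) for j in range(T)]
# The (21, 10, 5, 4) PDS of this example, written as exponent pairs of sigma^i tau^j.
S = [(1, 0), (6, 0), (0, 1), (0, 2), (1, 1), (1, 2), (5, 1), (3, 2), (6, 1), (4, 2)]
print(difference_counts(G, S))   # ({5}, {4}): lambda = 5 on S, mu = 4 off S
```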
A second example of genuinely nonabelian PDSs has parameters \((q^{3},q^{2}+q-2,q-2,q+2)\). We showed in Corollary 4.7 that no abelian PDSs can have these parameters if \(q\) is odd. Two separate constructions ([1], [2]) provide nonabelian PDSs with these parameters, the first when \(q\) is an odd prime power and the associated group is a Heisenberg group over the field of order \(q\) (that is, the group of unipotent upper-triangular \(3\times 3\) matrices with entries in \(\operatorname{GF}(q)\)), and the second when \(q\) is an odd prime and the associated group is an extraspecial group of exponent \(q^{2}\). The parameter set is hence genuinely nonabelian, and any SRG coming from these PDSs will also be genuinely nonabelian.
The smallest example of an odd \(q\) is the SRG with parameters \((27,10,1,5)\) created by setting \(q=3.\) Consider the nonabelian Heisenberg group
\[G_{1}:=(C_{3}\times C_{3})\rtimes C_{3}:=\langle x,y,z:x^{3}=y^{3}=z^{3}=xzx^{ -1}=yzy^{-1}=1,xyx^{-1}=yz^{-1}\rangle.\]
In this nonabelian group,
\[S_{1}=(x+x^{2}+xy+xy^{2})+(1+y+x^{2}y^{2})z+(1+y^{2}+x^{2}y)z^{2}\]
is a \((27,10,1,5)\) PDS.
Next, consider the nonabelian group
\[G_{2}:=C_{9}\rtimes_{4}C_{3}:=\langle x,y:x^{9}=y^{3}=1,yxy^{-1}=x^{4}\rangle.\]
In this nonabelian group,
\[S_{2}:=(x^{2}+x^{3}+x^{6}+x^{7})+(x^{2}+x^{3}+x^{4})y+(x+x^{2}+x^{6})y^{2}\]
is a \((27,10,1,5)\) PDS. In particular, the PDS \(S_{2}\) in the group \(G_{2}\) fails Ma's Multiplier Theorem (Lemma 3.3): the integer \(2\) is relatively prime to the order of \(G_{2}\) and \(x^{2}\in S_{2}\), but \((x^{2})^{2}=x^{4}\notin S_{2}\).
While the groups \(G_{1}\) and \(G_{2}\) are not isomorphic, the corresponding SRGs \(\operatorname{Cay}(G_{1},S_{1})\) and \(\operatorname{Cay}(G_{2},S_{2})\) are in fact isomorphic: they are each the complement of the Schläfli graph, which was proven to be unique by Seidel [10]. (On the other hand, the graphs arising from extraspecial groups of order \(p^{3}\) with exponent \(p\) and exponent \(p^{2}\) are not isomorphic in general; for example, they are not isomorphic when \(p=5\).)
These PDSs happen to be _pseudo-geometric_: they have the same parameters as a SRG that would be a collinearity graph of a _generalized quadrangle_ (GQ). A GQ of order \((s,t)\) is a point-line incidence geometry such that any two points lie on at most one common line; every line is incident with exactly \(s+1\) points; every point is incident with exactly \(t+1\) lines; and, given a point \(P\) and a line \(\ell\) not incident with \(P\), there is a unique point on \(\ell\) collinear with \(P\). Indeed, a group acting regularly on the set of points of a GQ of order \((s,t)\) would correspond to a \(((s+1)(st+1),s(t+1),s-1,t+1)\) PDS. We refer the interested reader to [12] for more information about GQs.
Groups acting regularly on the set of points of a GQ have produced extremely interesting examples of nonabelian PDSs in recent years. For example, Bamberg and Giudici noted the existence of a \((4617,520,7,65)\) PDS in a group isomorphic to \(C_{513}\rtimes C_{9}\) in [1] that is genuinely nonabelian. Furthermore, Bamberg and Giudici noted that other groups (beyond just Heisenberg groups) can act regularly on the set of points of a Payne-derived GQ of \(\operatorname{W}(q)\) of order \((q-1,q+1)\) for \(q\) odd (which would correspond to a \((q^{3},q^{2}+q-2,q-2,q+2)\) PDS). In fact, Feng and Li [11] classified all groups acting regularly on the Payne-derived GQ of \(\operatorname{W}(q)\), \(q\) odd, showing as a byproduct that such groups can actually have _unbounded nilpotency class_.
We end this section by proving that, for every choice of \(r<q+1\) in Corollary 4.6, there is a genuinely nonabelian PDS with those parameters. As far as we know, other than when \(r=(q+1)/2\), these have generally not previously been recognized as feasible parameters for PDSs.
**Theorem 4.11**.: _Let \(q\) be a prime power and \(r<q+1\) be an integer dividing \(q+1\). There exists a genuinely nonabelian PDS with parameters_
\[\left(q^{3},(q-1)\left(\frac{(q+1)^{2}}{r}-q\right),r\left(\frac{q+1}{r}-1 \right)^{3}+r-3,\left(\frac{q+1}{r}-1\right)\left(\frac{(q+1)^{2}}{r}-q\right) \right).\]
Proof.: As noted above, SRGs with these parameters do exist: for example, by [1, 5.3 Lemma], since \(q\) is a prime power and \(r\) divides \(q+1\), such a graph can be constructed from a GQ of order \((q^{2},q)\) with a particularly nice _ovoid_, that is, a collection of \(q^{3}+1\) points in the GQ that are pairwise noncollinear. We again refer the reader to [10] for information and terminology related to GQs. Start with the classical GQ \(\mathrm{H}(3,q^{2})\) of order \((q^{2},q)\), and find an automorphism \(g\) of order \((q+1)/r\) fixing each point in an ovoid (but fixing no line of the GQ). Take one point \(P_{0}\) in the ovoid, and fix one distinguished line \(\ell_{0}\) incident with \(P_{0}\). The vertices of our SRG will be the \(\langle g\rangle\)-orbits of lines \(\langle g\rangle\ell\) such that \(\ell\) is not incident with \(P_{0}\) but \(\ell_{0}\) is concurrent with some line in \(\langle g\rangle\ell\), and two orbits \(\langle g\rangle\ell_{1}\) and \(\langle g\rangle\ell_{2}\) are adjacent in the SRG if and only if \(\ell_{1}\) is concurrent with some line in \(\langle g\rangle\ell_{2}\). (We note that the roles of points and lines are often reversed in this construction, and so typically the construction is done in the dual GQ \(\mathrm{Q}^{-}(5,q)\) of order \((q,q^{2})\); we have chosen to work instead in \(\mathrm{H}(3,q^{2})\) for the nice representation we have of an ovoid.)
We now explicitly construct the GQ and ovoid. This construction is essentially the same as that of [1, 6.1 Lemma], except we are choosing a different nonsingular Hermitian form so that our group \(G\) containing the PDS has a nice representation. Let \(V=\mathrm{GF}(q^{2})^{4}\), and, given two vectors \(x,y\in V\), define a nonsingular Hermitian form \(b:V\times V\to\mathrm{GF}(q^{2})\) on \(V\) by
\[b(x,y):=x_{1}y_{1}^{q}+x_{2}y_{4}^{q}+x_{3}y_{3}^{q}+x_{4}y_{2}^{q}.\]
The \(1\)-dimensional totally isotropic subspaces of \(V\) - that is, the \(1\)-dimensional subspaces \(\langle x\rangle\) spanned by the (nonzero) vectors \(x\) such that \(b(x,x)=0\) - form the point set of a GQ \(\mathcal{H}\) of order \((q^{2},q)\) in \(\mathrm{PG}(3,q^{2})\). The lines of \(\mathcal{H}\) are the \(2\)-dimensional totally isotropic subspaces - that is, those \(2\)-dimensional subspaces \(U\) such that \(b(x,y)=0\) for all \(x,y\in U\) - with incidence between points and lines defined by subspace containment.
To construct an ovoid, we note that the intersection of \(\mathcal{H}\) with any non-tangent hyperplane is an ovoid; see [1] and [10]. For our choice of Hermitian form, we may choose \(x_{1}=0\) as our non-tangent hyperplane. Thus, our ovoid is
\[\mathcal{O}:=\left\{\langle x\rangle:x_{1}=0,\,x_{2}x_{4}^{q}+x_{3}^{q+1}+x_{ 4}x_{2}^{q}=0\right\}.\]
Moreover, by [1, p. 249], we see in fact that
\[\mathcal{O}=\left\{\langle(0,1,0,0)^{t}\rangle\right\}\cup\left\{\langle(0, \alpha,\beta,1)^{t}\rangle:\alpha+\alpha^{q}+\beta^{q+1}=0,\alpha,\beta\in \mathrm{GF}(q^{2})\right\},\]
and \(|\mathcal{O}|=q^{3}+1\).
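As a quick numerical illustration of this count (not needed for the proof), the Python sketch below enumerates the ovoid for the smallest case \(q=2\). It encodes \(\operatorname{GF}(4)=\{0,1,\omega,\omega^{2}\}\) as \(\{0,1,2,3\}\), with addition given by bitwise XOR and multiplication by a lookup table; \(q=2\) is chosen purely because the field arithmetic is easy to hard-code.

```python
from itertools import product

# GF(4) = {0, 1, w, w^2} encoded as {0, 1, 2, 3}; addition is bitwise XOR.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def frob(a):          # the field automorphism x -> x^q with q = 2
    return MUL[a][a]

q = 2
# Points <(0, alpha, beta, 1)> with alpha + alpha^q + beta^(q+1) = 0,
# plus the extra ovoid point <(0, 1, 0, 0)>.
count = 1
for alpha, beta in product(range(4), repeat=2):
    norm_beta = MUL[beta][frob(beta)]                 # beta^(q+1)
    if (alpha ^ frob(alpha) ^ norm_beta) == 0:
        count += 1
print(count, q**3 + 1)     # both numbers should equal 9
```

Both printed numbers should equal \(q^{3}+1=9\), matching the general count \(|\mathcal{O}|=q^{3}+1\).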
The element \(g\) of order \((q+1)/r\) fixing each point in \(\mathcal{O}\) but fixing no line of \(\mathcal{H}\) is constructed exactly as in [1, 6.1 Lemma]: if \(\gamma\) is an element of \(\mathrm{GF}(q^{2})\) with multiplicative order \((q+1)/r\), then we define
\[g:=\begin{pmatrix}\gamma&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}.\]
Since \(\gamma^{q+1}=1\), left multiplication by \(g\) preserves the Hermitian form \(b\). It is also clear that \(g\) fixes every element of \(\mathcal{O}\), and, for the same reasons as noted in [1, 6.1 Lemma], \(\langle g\rangle\) acts fixed-point freely on the points of \(\mathcal{H}\) not on the hyperplane \(x_{1}=0\), and so \(\langle g\rangle\) acts fixed-point freely on the lines of \(\mathcal{H}\) incident with each point in \(\mathcal{O}\).
We now construct a subgroup \(G\) of automorphisms of \(\mathcal{H}\) of order \(q^{3}\). Let \(\alpha,\beta\in\operatorname{GF}(q^{2})\) be elements satisfying \(\alpha+\alpha^{q}+\beta^{q+1}=0\); there are exactly \(q^{3}\) such pairs \((\alpha,\beta)\). Define
\[t_{\alpha,\beta}:=\begin{pmatrix}1&0&0&0\\ 0&1&-\beta^{q}&\alpha\\ 0&0&1&\beta\\ 0&0&0&1\end{pmatrix}.\]
For the same reasons as noted in [1, p. 249], the set
\[G:=\left\{t_{\alpha,\beta}:\alpha+\alpha^{q}+\beta^{q+1}=0\right\}\]
is a group of order \(q^{3}\) fixing the point \(\langle(0,1,0,0)^{t}\rangle\) of \(\mathcal{O}\) and acting regularly on the remaining \(q^{3}\) elements of \(\mathcal{O}\).
Finally, we claim that the group \(G\) acts regularly on the vertices of our associated SRG. To see this, we choose \(P_{0}=\langle(0,1,0,0)^{t}\rangle\). We note that, for all \(\alpha,\beta\), \(gt_{\alpha,\beta}=t_{\alpha,\beta}g\), so the group \(G\) preserves the \(\langle g\rangle\)-orbits of lines. Since the number \(r\) of \(\langle g\rangle\)-orbits of lines incident with \(P_{0}\) is coprime to \(|G|=q^{3}\), at least one \(\langle g\rangle\)-orbit of lines incident with \(P_{0}\) is fixed by \(G\); we choose one such distinguished \(\langle g\rangle\)-orbit \(\langle g\rangle\ell_{0}\). Since \(G\) fixes \(\langle g\rangle\ell_{0}\) but acts regularly on \(\mathcal{O}\backslash\{P_{0}\}\), \(G\) acts regularly on the \(\langle g\rangle\)-orbits of lines \(\ell\) such that \(\ell\) is not incident with \(P_{0}\) but \(\ell_{0}\) is concurrent with some line in \(\langle g\rangle\ell\); in other words, \(G\) acts regularly on the vertices of the associated SRG. The group \(G\) is clearly nonabelian from its definition, but we may also conclude that \(G\) is nonabelian from Corollary 4.6. This completes the proof.
**Remark 4.12**.: For each fixed prime power \(q=p^{d}\), each PDS constructed in Theorem 4.11 occurs in the _same_ group \(G\) of order \(q^{3}\); indeed, the construction in Theorem 4.11 shows that \(G\) is isomorphic to a Sylow \(p\)-subgroup of the projective special unitary group \(\operatorname{PSU}(3,q)\). Moreover, examination of the elements \(t_{\alpha,\beta}\) shows that \(G\) has exponent \(4\) when \(q\) is even and exponent \(p\) when \(q\) is odd. For example, although it is already apparent from the matrix structure of the elements \(t_{\alpha,\beta}\), when \(q=p\) is an odd prime, this shows that \(G\) must be a Heisenberg group of order \(p^{3}\).
As we will discuss further in the next section, the results in this section suggest that the examination of known combinatorial structures for sharply transitive subgroups may be a fruitful line of inquiry.
## 5 Nonabelian Techniques and Future Directions
The world of PDSs in nonabelian groups is relatively unexplored territory, especially when compared to the abelian case. The main issue at the moment is that powerful
techniques in the abelian setting - such as Ma's Multiplier Theorem (Lemma 3.3) - simply do not hold when the group is nonabelian. With respect to the Multiplier Theorem, there is a partial generalization to the nonabelian case due to Ma [14]:
**Lemma 5.1**.: _Let \(S\) be a Type II PDS in a group \(G\), and let \(m\) be an integer that is relatively prime to \(|G/G^{\prime}|\), where \(G^{\prime}\) is the derived subgroup of \(G\). If \(\overline{S}\) denotes the image of \(S\) under the natural homomorphism \(G\to G/G^{\prime}\), then \(\overline{S}^{(m)}=\overline{S}\)._
Unfortunately, this result is not always useful in practice: for example, we consider again the group \(G=\langle\sigma,\tau\rangle\cong C_{7}\rtimes_{2}C_{3}\) and PDS \(S\) arising from Theorem 4.8. As we saw in Section 4, \(m=2\) is not a multiplier for \(S\), but Lemma 5.1 tells us that \(m=2\) will be a multiplier for \(\overline{S}\) in \(G/G^{\prime}\). However, in this case, \(G^{\prime}=\langle\sigma\rangle\) and \(G/G^{\prime}\cong C_{3}\), and so \(\overline{S}=\overline{S}^{(2)}=\overline{S}^{(-1)}\), which we already know will be true since \(S=S^{(-1)}\) for a PDS.
### A useful class function
There are a few known results that are extremely useful in the nonabelian setting, and these results have largely been inspired by the study of groups acting regularly on the point set of a generalized quadrangle. The first result is a generalization of Benson's Lemma [1, Lemma 4.3] to SRGs. Benson's Lemma utilizes a technique attributed to Graham Higman, which calculates the value of a character of the automorphism group of an association scheme on an eigenspace; see [1, pp. 89-91]. De Winter, Kamischke, and Wang [13] proved a generalization for SRGs, which we state here specifically for PDSs; recall that \(\sqrt{\Delta}=\theta_{1}-\theta_{2}\).
**Lemma 5.2**.: _Let \(S\) be a Type II PDS with parameters \((v,k,\lambda,\mu)\) with eigenvalues \(\theta_{2}<\theta_{1}<k\) in a group \(G\), and let \(x\) be a nontrivial element of the group \(G\). If \(d_{1}(x)\) denotes the number of vertices of the corresponding SRG \(\Gamma\) that \(x\) sends to adjacent vertices in \(\Gamma\), then_
\[k-\theta_{2}\equiv\mu-\theta_{2}(\theta_{1}+1)\equiv d_{1}(x)\pmod{\sqrt{ \Delta}}.\]
Yoshiara [15] used Benson's Lemma along with clever counting and group theoretical arguments to prove that no GQ of order \((t^{2},t)\), where \(t\geqslant 2\), has a group of automorphisms acting regularly on its point set. Inspired by these results, Swartz and Tauscheck [10] proved corresponding results that apply to PDSs. We state an equivalent result here based on the notation of this paper; in what follows, \(\operatorname{Cl}(x)\) denotes the conjugacy class of the group element \(x\in G\) and \(C_{G}(x)\) denotes the centralizer of \(x\) in \(G\).
**Lemma 5.3**.: _Let \(S\) be a Type II PDS in a group \(G\), let \(x\) be a nontrivial element of \(G\), and define_
\[\Phi(x):=|\operatorname{Cl}(x)\cap S||C_{G}(x)|.\]
_Then,_
\[\Phi(x)\equiv\mu-\theta_{2}(\theta_{1}+1)\pmod{\sqrt{\Delta}}.\]
_In particular, if \(\sqrt{\Delta}\) does not divide \(\mu-\theta_{2}(\theta_{1}+1)\), then \(\operatorname{Cl}(x)\cap S\neq\varnothing\)._
Note that \(\Phi(x)\) is invariant on each conjugacy class of \(G\) and is hence a class function. In particular, _every_ nonidentity central element of a group is its own conjugacy class. Since the complement of a SRG is also a SRG, we can apply the criterion of Lemma 5.3 to both a PDS and its complement in \(G\backslash\{1\}\) to prove the following:
**Corollary 5.4**.:
1. _If_ \(\sqrt{\Delta}\) _divides neither_ \(\mu-\theta_{2}(\theta_{1}+1)\) _nor_ \(v-2k+\lambda-\theta_{2}(\theta_{1}+1)\)_, then a group with a nontrivial center cannot contain a_ \((v,k,\lambda,\mu)\) _PDS._
2. _If_ \(\sqrt{\Delta}\) _does not divide one of_ \(\mu-\theta_{2}(\theta_{1}+1)\) _or_ \(v-2k+\lambda-\theta_{2}(\theta_{1}+1)\)_, then a_ \((v,k,\lambda,\mu)\) _PDS is genuinely nonabelian._
Proof of (ii).: We include a proof of (ii), since it is not stated explicitly in [10]. We assume without loss of generality that \(\sqrt{\Delta}\) does not divide \(\mu-\theta_{2}(\theta_{1}+1)\). By Lemma 5.3, for any nonidentity element \(x\) in a group \(G\) of order \(v\), \(\operatorname{Cl}(x)\cap S\neq\varnothing\). In particular, \(G\) cannot be abelian, since this implies every nonidentity element is contained in the PDS, and so any such PDS is necessarily genuinely nonabelian.
We include two examples illustrating the utility of these ideas.
**Example 5.5**.: Consider a \((p^{3},p^{2}+p-2,p-2,p+2)\) PDS, where \(p\) is an odd prime. In this case, \(\theta_{1}=p-2\), \(\theta_{2}=-p-2\), \(\sqrt{\Delta}=2p\), \(\mu-\theta_{2}(\theta_{1}+1)=p(p+2)\), and \(v-2k+\lambda-\theta_{2}(\theta_{1}+1)=p^{2}(p-1)\). Thus, \(\sqrt{\Delta}\) divides \(v-2k+\lambda-\theta_{2}(\theta_{1}+1)\) but not \(\mu-\theta_{2}(\theta_{1}+1)\), and hence any PDS is genuinely nonabelian by Corollary 5.4.
Let \(G\) be an extraspecial group of order \(p^{3}\) with exponent \(p^{2}\) isomorphic to \(C_{p^{2}}\rtimes C_{p}\). The centralizer of any noncentral element has order \(p^{2}\), and hence there are
\[\frac{p^{3}-p}{p}+(p-1)=p^{2}+p-2\]
nonidentity conjugacy classes in \(G\). Since every nonidentity conjugacy class must meet a \((p^{3},p^{2}+p-2,p-2,p+2)\) PDS nontrivially by Lemma 5.3, this means, if such a group contains a \((p^{3},p^{2}+p-2,p-2,p+2)\) PDS, then every nonidentity conjugacy class would necessarily meet the PDS in exactly one element. This is exactly what happens in the PDS constructed in [11]; see, in particular, [11, Lemma 5].
**Example 5.6**.: For a triangular graph \(T_{n}\) with parameters \((n(n-1)/2,2(n-2),n-2,4)\), \(\theta_{1}=n-4,\theta_{2}=-2\) and \(\sqrt{\Delta}=n-2.\) Thus,
\[\mu-\theta_{2}(\theta_{1}+1)=4+2(n-3)=2n-2\equiv 2\pmod{\sqrt{\Delta}},\]
so by Lemma 5.3 every nonidentity conjugacy class meets the PDS nontrivially.
The conjugacy classes of \(C_{7}\rtimes C_{3}\) are \(\{1\}\), \(\operatorname{Cl}(\sigma)\), \(\operatorname{Cl}(\sigma^{3})\), \(\operatorname{Cl}(\tau)\), and \(\operatorname{Cl}(\tau^{2})\) of sizes \(1\), \(3\), \(3\), \(7\), and \(7\), respectively. So, for \(x\neq 1\), \(\Phi(x)\) has values
\[7|\operatorname{Cl}(\sigma)\cap S|,\ 7|\operatorname{Cl}(\sigma^{3})\cap S|,\ 3| \operatorname{Cl}(\tau)\cap S|,\ 3|\operatorname{Cl}(\tau^{2})\cap S|.\]
Since \(\sqrt{\Delta}=5\), each of these must be congruent to \(2\) modulo \(5\) while adding up to \(k=10\). This forces
\[|\operatorname{Cl}(\sigma)\cap S|=|\operatorname{Cl}(\sigma^{3})\cap S|=1,\ | \operatorname{Cl}(\tau)\cap S|=|\operatorname{Cl}(\tau^{2})\cap S|=4.\]
With this information, it is easy to write out the PDS.
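The bookkeeping in this example is easy to automate. The sketch below (Python, standard library only) recomputes \(\Phi(x)\) for one representative of each nonidentity conjugacy class of \(C_{7}\rtimes_{2}C_{3}\), using the same encoding \(\sigma^{i}\tau^{j}\leftrightarrow(i,j)\) as in the earlier sketch and the PDS of Example 4.10; the relation \(\tau\sigma\tau^{-1}=\sigma^{2}\) is assumed.

```python
# G = C_7 x|_2 C_3 as pairs (i, j) <-> sigma^i tau^j, with tau sigma tau^{-1} = sigma^2.
def mul(a, b):
    (i1, j1), (i2, j2) = a, b
    return ((i1 + i2 * pow(2, j1, 7)) % 7, (j1 + j2) % 3)

G = [(i, j) for i in range(7) for j in range(3)]
e = (0, 0)
inv = {g: next(h for h in G if mul(g, h) == e) for g in G}

# The (21,10,5,4) PDS of Example 4.10.
S = {(1, 0), (6, 0), (0, 1), (0, 2), (1, 1), (1, 2), (5, 1), (3, 2), (6, 1), (4, 2)}

sqrt_delta = 5            # theta_1 - theta_2 = 3 - (-2) for T_7
for x in [(1, 0), (3, 0), (0, 1), (0, 2)]:      # sigma, sigma^3, tau, tau^2
    cl = {mul(mul(g, x), inv[g]) for g in G}                  # conjugacy class Cl(x)
    cent = sum(1 for g in G if mul(g, x) == mul(x, g))        # |C_G(x)|
    phi = len(cl & S) * cent
    print(x, "|Cl & S| =", len(cl & S), " Phi =", phi, " Phi mod 5 =", phi % sqrt_delta)
```

Each printed value of \(\Phi(x)\bmod 5\) should be \(2\), in agreement with Lemma 5.3, and the intersection sizes should match the ones forced above.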
The constraints imposed by the function \(\Phi\) appear to be extremely powerful in certain contexts. For example, Ott [14] recently used similar ideas to prove that any group \(G\) containing a \(((s+1)(s^{2}+1),s^{2}+s+1,s+1)\) reversible difference set containing the identity must have even order, settling a thirty-year-old conjecture of Ghinelli [15]; this equivalently shows that any group \(G\) containing a \(((s+1)(s^{2}+1),s^{2}+s,s-1,s+1)\) PDS must have even order.
**Question 5.7**.: Can further results about the function \(\Phi\) be used to prove the nonexistence of other PDSs in nonabelian groups?
Obviously, character theory has been an extremely powerful tool in the analysis of PDSs in abelian groups. Moreover, the function \(\Phi\), as noted above, is a class function. We may replace the family of characters of an abelian group with the set of irreducible representations of a general group. This approach was used successfully to find a \((100,44,18,20)\) genuinely nonabelian PDS; see [16]. Unfortunately, analysis of the irreducible representations of degree greater than one requires a study of matrices over the complex numbers, a much more complicated theory.
**Question 5.8**.: How can we use the irreducible representations of a nonabelian group \(G\) to study the PDSs contained in \(G\)?
### New PDSs from known combinatorial objects
As demonstrated by the tremendous success of Feng and Li finding new examples of PDSs by studying the full automorphism groups of known generalized quadrangles [13] - as well as the results of Theorems 4.8 and 4.11 - it stands to reason that the study of other combinatorial objects with large automorphism groups could prove fruitful. Moreover, Jorgensen and Klin constructed 15 distinct PDSs (with parameters \((100,22,0,6)\), \((100,36,14,12)\), \((100,45,20,20)\), \((100,44,18,20)\)) in four distinct nonabelian groups of order \(100\) by proceeding exactly in this manner [10]. Indeed, following along this line of reasoning, Feng, He, and Chen constructed nonabelian 2-groups with exponent \(4\), \(8\), and \(16\) and of nilpotency class \(2\), \(3\), \(4\), and \(6\) for the Davis-Xiang graphs and RT2 graphs [12] (which are Latin Square type;
see Example 2.4), and they remark that their "results suggest that a good way to construct nonabelian groups that contain nontrivial partial difference sets and amorphic nonabelian Cayley association schemes is to consider the regular subgroups of the known strongly regular graphs and known amorphic association schemes."
We will further show the utility of this approach by constructing examples of \((512,133,24,38)\) PDSs. (Such a PDS arises from Theorem 4.11 with \(q=8\) and \(r=3\).) In this case, \(\theta_{1}=5\), \(\theta_{2}=-19\), \(\sqrt{\Delta}=24\), and \(\mu-\theta_{2}(\theta_{1}+1)=152\), so, by Lemma 5.3, every nontrivial conjugacy class must meet such a PDS, which we already know will be genuinely nonabelian by Corollary 4.6.
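These numbers follow from the standard eigenvalue formulas for a strongly regular graph; a short Python check of the quoted values (a sketch, using the usual quadratic for the restricted eigenvalues) is given below.

```python
import math

def srg_eigen(v, k, lam, mu):
    # Restricted eigenvalues of an SRG: roots of x^2 - (lam - mu) x - (k - mu) = 0.
    disc = (lam - mu) ** 2 + 4 * (k - mu)
    s = math.isqrt(disc)
    assert s * s == disc, "restricted eigenvalues are not integers"
    theta1 = ((lam - mu) + s) // 2
    theta2 = ((lam - mu) - s) // 2
    assert k * (k - lam - 1) == (v - k - 1) * mu      # basic SRG counting identity
    return theta1, theta2, s

v, k, lam, mu = 512, 133, 24, 38
t1, t2, sqrt_delta = srg_eigen(v, k, lam, mu)
print(t1, t2, sqrt_delta, mu - t2 * (t1 + 1))   # expect 5, -19, 24, 152
```

The printed values should be \(5\), \(-19\), \(24\), and \(152\), as quoted above.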
Proceeding as in the proof of Theorem 4.11, we may construct a SRG with these parameters by starting with a \(\mathrm{GQ}\)\(\mathrm{H}(3,64)\) of order \((64,8)\), and, indeed, this graph can be constructed using the packages FinInG [1] and GRAPE [19] in GAP [1]. The full automorphism group of the graph has order \(193536\), and a Sylow \(2\)-subgroup of the full automorphism group has order \(1024\). While Theorem 4.11 guarantees the existence of one group of order \(512\) acting regularly on the vertices of this graph, an examination of all maximal subgroups of a Sylow \(2\)-subgroup yields the following:
**Theorem 5.9**.: _There exist two nonisomorphic nonabelian groups of order \(512\) that contain \((512,133,24,38)\) PDSs._
Proof.: One group is the one constructed in Theorem 4.11, and the other group and corresponding PDS are listed explicitly in Appendix A. They are nonisomorphic since the group constructed in Theorem 4.11 has exponent \(4\) (see Remark 4.12), while the group listed in Appendix A (identified as SmallGroup(512,4508) in GAP [1]) has exponent \(8\).
In light of Theorem 5.9, the authors believe that a full examination - in the spirit of [13] - of the groups acting regularly on the vertices of the Krein covers of complete graphs constructed by Godsil [1] would be quite interesting. With this in mind, we end the paper with one final question.
**Question 5.10**.: Which known SRGs have a nonabelian group of automorphisms acting regularly on vertices? In particular, are there other infinite families of such SRGs and PDSs?
|
2309.11810 | Extragalactic Test of General Relativity from Strong Gravitational
Lensing by using Artificial Neural Networks | This study aims to test the validity of general relativity (GR) on kiloparsec
scales by employing a newly compiled galaxy-scale strong gravitational lensing
(SGL) sample. We utilize the distance sum rule within the
Friedmann-Lema\^{\i}tre-Robertson-Walker metric to obtain cosmology-independent
constraints on both the parameterized post-Newtonian parameter $\gamma_{\rm
PPN}$ and the spatial curvature $\Omega_{k}$, which overcomes the circularity
problem induced by the presumption of a cosmological model grounded in GR. To
calibrate the distances in the SGL systems, we introduce a novel nonparametric
approach, Artificial Neural Network (ANN), to reconstruct a smooth
distance--redshift relation from the Pantheon+ sample of type Ia supernovae.
Our results show that $\gamma_{\rm PPN}=1.16_{-0.12}^{+0.15}$ and
$\Omega_k=0.89_{-1.00}^{+1.97}$, indicating a spatially flat universe with the
conservation of GR (i.e., $\Omega_k=0$ and $\gamma_{\rm PPN}=1$) is basically
supported within $1\sigma$ confidence level. Assuming a zero spatial curvature,
we find $\gamma_{\rm PPN}=1.09_{-0.10}^{+0.11}$, representing an agreement with
the prediction of 1 from GR to a 9.6\% precision. If we instead assume GR holds
(i.e., $\gamma_{\rm PPN}=1$), the curvature parameter constraint can be further
improved to be $\Omega_k=0.11_{-0.47}^{+0.78}$. These resulting constraints
demonstrate the effectiveness of our method in testing GR on galactic scales by
combining observations of strong lensing and the distance--redshift relation
reconstructed by ANN. | Jing-Yu Ran, Jun-Jie Wei | 2023-09-21T06:28:39Z | http://arxiv.org/abs/2309.11810v2 | Extragalactic Test of General Relativity from Strong Gravitational Lensing by using Artificial Neural Networks
###### Abstract
This study aims to test the validity of general relativity (GR) on kiloparsec scales by employing a newly compiled galaxy-scale strong gravitational lensing (SGL) sample. We utilize the distance sum rule within the Friedmann-Lemaître-Robertson-Walker metric to obtain cosmology-independent constraints on both the parameterized post-Newtonian parameter \(\gamma_{\rm PPN}\) and the spatial curvature \(\Omega_{\rm k}\), which overcomes the circularity problem induced by the presumption of a cosmological model grounded in GR. To calibrate the distances in the SGL systems, we introduce a novel nonparametric approach, Artificial Neural Network (ANN), to reconstruct a smooth distance-redshift relation from the Pantheon+ sample of type Ia supernovae. Our results show that \(\gamma_{\rm PPN}=1.16^{+0.15}_{-0.12}\) and \(\Omega_{\rm k}=0.89^{+1.97}_{-1.00}\), indicating a spatially flat universe with the conservation of GR (i.e., \(\Omega_{\rm k}=0\) and \(\gamma_{\rm PPN}=1\)) is basically supported within \(1\sigma\) confidence level. Assuming a zero spatial curvature, we find \(\gamma_{\rm PPN}=1.09^{+0.11}_{-0.10}\), representing an agreement with the prediction of 1 from GR to a 9.6% precision. If we instead assume GR holds (i.e., \(\gamma_{\rm PPN}=1\)), the curvature parameter constraint can be further improved to be \(\Omega_{\rm k}=0.11^{+0.78}_{-0.47}\). These resulting constraints demonstrate the effectiveness of our method in testing GR on galactic scales by combining observations of strong lensing and the distance-redshift relation reconstructed by ANN.
## I Introduction
As an important cornerstone of modern physics, Einstein's theory of general relativity (GR) has withstood very strict tests (e.g., [1; 2; 3; 4]). But testing GR at a much higher precision is still a vital task, because any possible violation of GR would have profound effects on our understanding of fundamental physics. Within the parameterized post-Newtonian (PPN) formalism, GR predicts that the PPN parameter \(\gamma_{\rm PPN}\), which describes the amount of space curvature produced per unit rest mass, should be exactly 1 [5]. Measuring \(\gamma_{\rm PPN}\) therefore serves as a test of the validity of GR on large scales. That is, any deviation from \(\gamma_{\rm PPN}=1\) implies a possible violation of GR.
On solar system scales, the GR prediction for \(\gamma_{\rm PPN}\) has been confirmed with high accuracy. By measuring the round-trip travel time of radar signals passing near the Sun, the Cassini spacecraft yielded \(\gamma_{\rm PPN}=1+(2.1\pm 2.3)\times 10^{-5}\)[6]. However, the extragalactic tests of GR are still insufficient and much less precise. On galactic scales, strong gravitational lensing (SGL), combined with stellar kinematics in the lensing galaxy, provides an effective way to test the validity of GR by constraining the PPN parameter \(\gamma_{\rm PPN}\). The pioneering work by Ref. [7] first utilized this approach and reported a result of \(\gamma_{\rm PPN}=0.98\pm 0.07\) based on observations of 15 elliptical lensing galaxies from the Sloan Lens ACS Survey. Since then, numerous studies have been conducted to test GR using different SGL samples [8; 9; 10; 11; 12; 13; 14]. In this paper, we further explore the validity of GR by employing a newly compiled SGL sample [15], which consists of 161 galaxy-scale strong lensing systems. This larger SGL sample allows us to perform a more comprehensive analysis and obtain further insights into the behavior of gravity on galactic scales.
In practice, in order to constrain the PPN parameter \(\gamma_{\rm PPN}\) using SGL systems, one has to know a ratio of three angular diameter distances (i.e., the distances from the observer to the lens, \(D_{l}\), the observer to the source, \(D_{s}\), and the lens to the source, \(D_{ls}\)). In most previous works, the required distance ratio is calculated within the context of the standard \(\Lambda\)CDM cosmological model. However, \(\Lambda\)CDM itself is established on the framework of GR, which leads to a circularity problem in testing GR [13; 14]. To circumvent this problem, we will introduce the distance sum rule (DSR) in the Friedmann-Lemaître-Robertson-Walker (FLRW) metric. The two distances \(D_{l}\) and \(D_{s}\) can be directly determined from observations of type Ia supernovae (SNe Ia), but not the distance \(D_{ls}\). The DSR enables us to convert \(D_{ls}\) into a relationship with \(D_{l}\), \(D_{s}\), and the spatial curvature \(\Omega_{k}\). Based on the DSR in the FLRW metric, cosmology-independent constraints on both \(\gamma_{\rm PPN}\) and \(\Omega_{k}\) can thus be obtained by combining observations of strong lensing and SNe Ia [10; 14].
Very recently, by employing the Gaussian Process (GP) method, Liu et al. [13] reconstructed a smooth distance-redshift relation directly from SN Ia observations to calibrate the distances in the SGL sample. GP allows for the reconstruction of a function from a dataset without assuming a specific model or parameterization, and it has been widely used in cosmological research [16; 17; 18; 19; 20; 21]. In the GP analysis, the errors in the observational data are assumed to follow a Gaussian distribution [22]. However, the actual observations might not follow Gaussian distributions. This may thus be a strong assumption for reconstructing a function from observational data. Moreover, due to the sparsity and scatter of data points at high redshifts, the GP reconstructed function from SN Ia data exhibits strange oscillations with large uncertainties. To address these concerns and ensure the reliability of the reconstructed function, we employ the Artificial Neural Network
(ANN) method, which is a machine learning technique and has been proven to be a "universal approximator" that can reconstruct a great variety of functions [23; 24]. Thanks to this powerful property of neural networks, methods based on ANNs have been widely used in regression and estimation tasks. In this work, we will reconstruct the distance-redshift relation from SN Ia data using the ANN method, utilizing a code developed in Ref. [25].
This paper is organized as follows: in Section II, we introduce the methodology and observations used for testing GR on galactic scales. Cosmology-independent constraints on \(\gamma_{\rm PPN}\) and \(\Omega_{k}\) are shown in Section III. In Section IV, we make a summary and end with some discussions.
## II Methodology and data
In the weak-field limit, the metric of space-time can be characterized as
\[{\rm d}s^{2}=c^{2}{\rm d}t^{2}\left(1-\frac{2GM}{c^{2}r}\right)-{\rm d}r^{2} \left(1+\frac{2\gamma_{\rm PPN}GM}{c^{2}r}\right)-r^{2}{\rm d}\Omega^{2}\, \tag{1}\]
where \(\gamma_{\rm PPN}\) is the PPN parameter, \(M\) is the mass of the central object, and \(\Omega\) is the angle in the invariant orbital plane. In the framework of GR, \(\gamma_{\rm PPN}\) is equal to unity.
### Gravitational Lensing Theory
The main idea of testing the validity of GR via SGL systems is that the mass enclosed within the Einstein radius derived separately from the gravitational theory and the dynamical theory should be equivalent, i.e., \(M_{\rm E}^{\rm eff}=M_{\rm E}^{\rm dyn}\). From the theory of gravitational lensing [26], the Einstein angle \(\theta_{\rm E}\) reflecting the angular separations between multiple images is related to the gravitational mass \(M_{\rm E}^{\rm eff}\),
\[\theta_{\rm E}=\sqrt{\frac{1+\gamma_{\rm PPN}}{2}}\left(\frac{4GM_{\rm E}^{ \rm eff}}{c^{2}}\frac{D_{ls}}{D_{l}D_{s}}\right)^{1/2}\, \tag{2}\]
where \(D_{l}\), \(D_{s}\), and \(D_{ls}\) are, respectively, the angular diameter distances from the observer to the lens, the observer to the source, and the lens to the source. By introducing the Einstein radius \(R_{\rm E}=D_{l}\theta_{\rm E}\), Equation (2) can be rearranged as
\[\frac{GM_{\rm E}^{\rm eff}}{R_{\rm E}}=\frac{2}{1+\gamma_{\rm PPN}}\frac{c^{2} }{4}\frac{D_{s}}{D_{ls}}\theta_{\rm E}. \tag{3}\]
To estimate the dynamical mass \(M_{\rm E}^{\rm dyn}\) from the spectroscopic measurement of the lens velocity dispersion, one must first set a mass distribution model for the lensing galaxy. Here we use the common mass model with power-law density profiles [27; 15]:
\[\left\{\begin{array}{l}\rho(r)=\rho_{0}\left(\frac{r}{r_{0}}\right)^{-\alpha}\\ \nu(r)=\nu_{0}\left(\frac{r}{r_{0}}\right)^{-\delta}\\ \beta(r)=1-\sigma_{t}^{2}/\sigma_{r}^{2}\,\end{array}\right. \tag{4}\]
where \(r\) is defined as the spherical radial coordinate from the lens centre, \(\rho\left(r\right)\) is the total (including luminous and dark matter) mass density distribution, and \(\nu\left(r\right)\) represents the distribution of luminous density. The parameter \(\beta\left(r\right)\) describes the anisotropy of the stellar velocity dispersion, where \(\sigma_{t}\) and \(\sigma_{r}\) are the velocity dispersions in the tangential and radial directions, respectively. In the literature, \(\beta\) is always assumed to be independent of \(r\) (e.g., [28; 27]). Following previous studies [13; 14; 15; 9; 7; 10], we set a Gaussian prior \(\beta=0.18\pm 0.13\), informed by the constraint from a well-studied sample of elliptical galaxies [29]. That is, \(\beta\) will be marginalized using a Gaussian prior of \(\beta=0.18\pm 0.13\) over the \(2\sigma\) range of \([-0.08,\ 0.44]\). Also, \(\alpha\) and \(\delta\) are the power-law indices of the total mass density profile and the luminosity density profile, respectively. It has been confirmed in previous works [15; 30] that \(\alpha\) is significantly correlated with the lens redshift \(z_{l}\) and the surface mass density of the lensing galaxy. Therefore, we treat the parametrized model of \(\alpha\) as [15]
\[\alpha=\alpha_{0}+\alpha_{z}z_{l}+\alpha_{s}\log_{10}\tilde{\Sigma}\, \tag{5}\]
where \(\alpha_{0}\), \(\alpha_{z}\) and \(\alpha_{s}\) are arbitrary constants. Here \(\tilde{\Sigma}\) stands for the normalized surface mass density, and is expressed as \(\tilde{\Sigma}=\frac{\left(\sigma_{0}/100\ {\rm km\ s}^{-1}\right)^{2}}{R_{\rm eff}/\left(10\ h^{-1}\ {\rm kpc}\right)}\), where \(\sigma_{0}\) is the observed velocity dispersion, \(R_{\rm eff}\) is the lensing galaxy's half-light radius, and \(h=H_{0}/(100\ {\rm km\ s}^{-1}\ {\rm Mpc}^{-1})\) is the reduced Hubble constant.
Following the well-known radial Jeans equation in spherical coordinate [31], the radial velocity dispersion of the luminous matter \(\sigma_{r}\) in early-type lens galaxies takes the form
\[\sigma_{r}^{2}\left(r\right)=\frac{G\int_{r}^{\infty}{\rm d}r^{\prime}r^{\prime 2\beta-2}\nu\left(r^{\prime}\right)M\left(r^{\prime}\right)}{r^{2\beta}\nu\left(r\right)}\, \tag{6}\]
where \(M\left(r\right)\) is the total mass included within a sphere with radius \(r\),
\[M\left(r\right)=\int_{0}^{r}{\rm d}r^{\prime}4\pi r^{\prime 2}\rho\left(r^{ \prime}\right)=4\pi\frac{\rho_{0}}{r_{0}^{-\alpha}}\frac{r^{3-\alpha}}{3- \alpha}. \tag{7}\]
The dynamical mass \(M_{\rm E}^{\rm dyn}\) enclosed within a cylinder of radius equal to the Einstein radius \(R_{\rm E}\) can be written as [15]
\[M_{\rm E}^{\rm dyn}=2\pi^{3/2}\frac{R_{\rm E}^{3-\alpha}}{3-\alpha}\frac{ \Gamma\left(\frac{\alpha-1}{2}\right)}{\Gamma\left(\frac{\alpha}{2}\right)} \frac{\rho_{0}}{r_{0}^{-\alpha}}\, \tag{8}\]
where \(\Gamma(x)\) is Euler's Gamma function. By combining Equations (7) and (8), we get the relation between \(M\left(r\right)\) and \(M_{\rm E}^{\rm dyn}\):
\[M(r)=\frac{2}{\sqrt{\pi}}\frac{1}{\lambda(\alpha)}\left(\frac{r}{R_{\rm E}} \right)^{3-\alpha}M_{\rm E}^{\rm dyn}\, \tag{9}\]
where \(\lambda(\alpha)=\Gamma\left(\frac{\alpha-1}{2}\right)/\Gamma\left(\frac{\alpha} {2}\right)\). By substituting Equations (9) and (4) into Equation (6), we obtain
\[\sigma_{r}^{2}\left(r\right)=\frac{2}{\sqrt{\pi}}\frac{GM_{\rm E}^{\rm dyn}}{R_{ \rm E}}\frac{1}{\xi-2\beta}\frac{1}{\lambda(\alpha)}\left(\frac{r}{R_{\rm E}} \right)^{2-\alpha}\, \tag{10}\]
where \(\xi=\alpha+\delta-2\).
The actual velocity dispersion of the lensing galaxy is the component of luminosity-weighted average along the line of sight and measured over the effective spectroscopic aperture \(R_{\rm A}\), that can be expressed as (see Ref. [15] for more details)
\[\sigma_{0}^{2}\left(\leq R_{\rm A}\right)=\frac{c^{2}}{2\sqrt{\pi}}\frac{2}{1+ \gamma_{\rm PPN}}\frac{D_{s}}{D_{ls}}\theta_{\rm E}F\left(\alpha,\ \delta,\ \beta\right)\left(\frac{R_{\rm A}}{R_{\rm E}}\right)^{2-\alpha}\, \tag{11}\]
where
\[F\left(\alpha,\ \delta,\ \beta\right)=\frac{3-\delta}{\left(\xi-2\beta\right) \left(3-\xi\right)}\frac{\lambda\left(\xi\right)-\beta\lambda\left(\xi+2 \right)}{\lambda\left(\alpha\right)\lambda\left(\delta\right)}. \tag{12}\]
The theoretical value of the velocity dispersion inside the radius \(R_{\rm eff}/2\) can then be calculated by [27]
\[\sigma_{0}^{\rm th}=\sqrt{\frac{c^{2}}{2\sqrt{\pi}}\frac{2}{1+\gamma_{\rm PPN}} \frac{D_{s}}{D_{ls}}\theta_{\rm E}F\left(\alpha,\ \delta,\ \beta\right)\left(\frac{\theta_{\rm eff}}{2\theta_{\rm E}}\right)^{2-\alpha}}\, \tag{13}\]
where \(\theta_{\rm eff}=R_{\rm eff}/D_{l}\) denotes the effective angular radius of the lensing galaxy.
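To make Equations (12) and (13) concrete, the following Python sketch evaluates \(\sigma_{0}^{\rm th}\) for a single lens. The numerical inputs in the usage example (Einstein radius, effective radius, distance ratio, and profile slopes) are made up purely for illustration and are not taken from the SGL sample.

```python
import numpy as np
from scipy.special import gamma

C = 299792.458            # speed of light in km/s

def lam(x):
    # lambda(x) = Gamma((x-1)/2) / Gamma(x/2)
    return gamma((x - 1.0) / 2.0) / gamma(x / 2.0)

def F(alpha, delta, beta):
    # Equation (12)
    xi = alpha + delta - 2.0
    return ((3.0 - delta) / ((xi - 2.0 * beta) * (3.0 - xi))
            * (lam(xi) - beta * lam(xi + 2.0)) / (lam(alpha) * lam(delta)))

def sigma0_th(theta_E, theta_eff, Ds_over_Dls, alpha, delta, beta, gamma_ppn=1.0):
    # Equation (13); angles in radians, result in km/s
    return np.sqrt(C**2 / (2.0 * np.sqrt(np.pi)) * 2.0 / (1.0 + gamma_ppn)
                   * Ds_over_Dls * theta_E * F(alpha, delta, beta)
                   * (theta_eff / (2.0 * theta_E)) ** (2.0 - alpha))

arcsec = np.pi / 180.0 / 3600.0
# Invented inputs: a 1" Einstein radius, a 1.5" half-light radius, D_s/D_ls = 1.6,
# and an isothermal-like lens with alpha = delta = 2 and beta = 0.18.
print(sigma0_th(1.0 * arcsec, 1.5 * arcsec, 1.6, 2.0, 2.0, 0.18))
```

For an isothermal-like lens (\(\alpha=\delta=2\)) this gives a few hundred km s\({}^{-1}\), the expected order of magnitude for massive elliptical lensing galaxies.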
Based on the spectroscopic data, one can measure the luminosity-weighted average of the line-of-sight velocity dispersion \(\sigma_{\rm ap}\) within the circular aperture with the angular radius \(\theta_{\rm ap}\). In practice, \(\sigma_{\rm ap}\) should be normalized to the velocity dispersion within the typical physical aperture with a radius \(R_{\rm eff}/2\),
\[\sigma_{0}^{\rm obs}=\sigma_{\rm ap}\left[\theta_{\rm eff}/(2\theta_{\rm ap}) \right]^{\eta}\, \tag{14}\]
where the value of the correction factor is taken as \(\eta=-0.066\pm 0.035\)[32]. Then, the total uncertainty of \(\sigma_{0}^{\rm obs}\) can be obtained by
\[\left(\Delta\sigma_{0}^{\rm SGL}\right)^{2}=\left(\Delta\sigma_{0}^{\rm stat} \right)^{2}+\left(\Delta\sigma_{0}^{\rm AC}\right)^{2}+\left(\Delta\sigma_{0} ^{\rm sys}\right)^{2}\, \tag{15}\]
where \(\Delta\sigma_{0}^{\rm stat}\) is the statistical error propagated from the measurement error of \(\sigma_{\rm ap}\), and \(\Delta\sigma_{0}^{\rm AC}\) is the aperture-correction-induced error propagated from the uncertainty of \(\eta\). The systematic error due to the extra mass contribution from the outer matters of the lensing galaxy along the line of sight, \(\Delta\sigma_{0}^{\rm sys}\), is taken as an uncertainty of \(\sim 3\%\) to the velocity dispersion [33].
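A minimal sketch of Equations (14) and (15) is given below. The text does not spell out how \(\Delta\sigma_{0}^{\rm AC}\) is obtained, so the sketch assumes standard first-order propagation of the uncertainty of \(\eta\) through the power law; the numbers in the usage example are invented.

```python
import numpy as np

def sigma0_obs(sigma_ap, dsigma_ap, theta_eff, theta_ap,
               eta=-0.066, deta=0.035, sys_frac=0.03):
    ratio = theta_eff / (2.0 * theta_ap)
    s0 = sigma_ap * ratio ** eta                      # Equation (14)
    d_stat = dsigma_ap * ratio ** eta                 # rescaled measurement error
    d_ac = np.abs(s0 * np.log(ratio) * deta)          # assumed propagation of Delta(eta)
    d_sys = sys_frac * s0                             # ~3% systematic term
    return s0, np.sqrt(d_stat**2 + d_ac**2 + d_sys**2)   # Equation (15)

# Invented measurement: 250 +/- 20 km/s in a 1.5" aperture, half-light radius 2".
print(sigma0_obs(250.0, 20.0, 2.0, 1.5))
```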
Once we know the ratio of the angular diameter distances \(D_{s}/D_{ls}\), the constraints on the PPN parameter \(\gamma_{\rm PPN}\) can be derived by comparing the observational and theoretical values of the velocity dispersions (see Equations (13) and (14)). Conventionally the distance ratio \(D_{s}/D_{ls}\) is calculated within the standard \(\Lambda\)CDM cosmological model [9; 10]. However, \(\Lambda\)CDM itself is built on the framework of GR and this leads to a circularity problem [13; 14]. To avoid this problem, we will use a cosmological-model-independent method which is based upon the sum rule of distances in the FLRW metric to constrain \(\gamma_{\rm PPN}\).
### Distance Sum Rule
In a homogeneous and isotropic space, the dimensionless comoving distance \(d\left(z_{l},\ z_{s}\right)\equiv\left(H_{0}/c\right)\left(1+z_{s}\right)D_{ A}\left(z_{l},\ z_{s}\right)\) can be written as
\[d(z_{l},z_{s})=\frac{1}{\sqrt{|\Omega_{k}|}}{\rm sinn}\left(\sqrt{|\Omega_{k}| }\int_{z_{l}}^{z_{s}}\frac{{\rm d}z^{\prime}}{E(z^{\prime})}\right)\, \tag{16}\]
where \(\Omega_{k}\) denotes the spatial curvature density parameter at the present time and \(E(z)=H(z)/H_{0}\) is the dimensionless expansion rate. Also, \({\rm sinn}(x)\) is \({\rm sinh}(x)\) when \(\Omega_{k}>0\), \(x\) when \(\Omega_{k}=0\), and \({\rm sin}(x)\) when \(\Omega_{k}<0\). By applying the notations \(d(z)\equiv d\left(0,\ z\right)\), \(d_{ls}\equiv d\left(z_{l},\ z_{s}\right)\), \(d_{l}\equiv d\left(0,\ z_{l}\right)\), and \(d_{s}\equiv d\left(0,\ z_{s}\right)\), one can derive a sum rule of distances along the null geodesics of the FLRW metric as [34; 35; 36]
\[\frac{d_{ls}}{d_{s}}=\sqrt{1+\Omega_{k}d_{l}^{2}}-\frac{d_{l}}{d_{s}}\sqrt{1+ \Omega_{k}d_{s}^{2}}. \tag{17}\]
This relation provides a cosmology-independent probe to test both the spatial curvature and the FLRW metric. The validity of the FLRW metric can be tested by comparing the derived \(\Omega_{k}\) from the three distances (\(d_{l}\), \(d_{s}\), and \(d_{ls}\)) for any two pairs of (\(z_{l}\), \(z_{s}\)).
With Equation (17), the distance ratio \(D_{s}/D_{ls}\)1 in Equation (13) is only related to the curvature parameter \(\Omega_{k}\) and the dimensionless distances \(d_{l}\) and \(d_{s}\). If independent measurements of \(d_{l}\) and \(d_{s}\) are given, we can put constraints on both \(\gamma_{\rm PPN}\) and \(\Omega_{k}\) from Equations (13) and (17) without assuming any specific cosmological model.
Footnote 1: Note that \(D_{s}/D_{ls}\) is actually equal to the dimensionless distance ratio \(d_{s}/d_{ls}\).
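The DSR can also be checked numerically. The sketch below integrates Equation (16) for a toy expansion history (a non-flat \(\Lambda\)CDM-like model with \(\Omega_{m}=0.3\) and \(\Omega_{k}=0.1\), chosen only to exercise the formula) and confirms that Equation (17) reproduces the directly computed \(d_{ls}/d_{s}\).

```python
import numpy as np
from scipy.integrate import quad

def E_inv(z, Om, Ok):
    # 1/E(z) for a toy non-flat LambdaCDM expansion history (illustration only).
    return 1.0 / np.sqrt(Om * (1 + z)**3 + Ok * (1 + z)**2 + (1.0 - Om - Ok))

def d_comoving(z1, z2, Om=0.3, Ok=0.1):
    # Equation (16) for Omega_k > 0.
    chi = quad(E_inv, z1, z2, args=(Om, Ok))[0]
    return np.sinh(np.sqrt(Ok) * chi) / np.sqrt(Ok)

zl, zs, Ok = 0.5, 2.0, 0.1
dl = d_comoving(0.0, zl, Ok=Ok)
ds = d_comoving(0.0, zs, Ok=Ok)
dls = d_comoving(zl, zs, Ok=Ok)

# Left- and right-hand sides of the distance sum rule, Equation (17):
print(dls / ds)
print(np.sqrt(1 + Ok * dl**2) - (dl / ds) * np.sqrt(1 + Ok * ds**2))
```

The two printed numbers agree to numerical precision, which is simply the hyperbolic-sine addition formula underlying Equation (17).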
### Artificial Neural Network
To calibrate the distances \(d_{l}\) and \(d_{s}\) of the SGL systems (i.e., the distances \(d_{l}\) and \(d_{s}\) on the right side of Equation (17)), we use a new nonparametric approach, ANN, to reconstruct a smooth distance-redshift relation \(d(z)\) from SN Ia observation.
ANNs possess several desirable properties, including high-level abstraction of neural input-output transformation, the ability to generalize from learned instances to new unseen data, adaptability, self-learning, fault tolerance, and nonlinearity [37]. According to the universal approximation theorem [38; 23], ANNs can function as universal function approximations to simulate arbitrary input-output relationships using multilayer feedforward networks with a sufficient number of hidden units. Therefore, we can input the redshift \(z\) into the neural network, with the corresponding comoving distance \(d(z)\) and its associated error \(\sigma_{d(z)}\) as the desired outputs. Once the network has been trained using the Pantheon+ sample, we will obtain an approximate function capable of predicting both \(d(z)\) and its error \(\sigma_{d(z)}\) at any given redshift \(z\).
Ref. [25] has developed a Python code for the reconstruction of functions from observational data employing an ANN. They have substantiated the reliability of these reconstructed functions by estimating cosmological parameters through the
utilization of the reconstructed Hubble parameter \(H(z)\) and the luminosity distance \(D_{L}(z)\), in direct comparison with observational data. In our study, we will employ this code to reconstruct the distance-redshift relation.
The general structure of an ANN consists of an input layer, one or more hidden layers, and an output layer. The basic unit of these layers are referred to as neurons, which serve as both linear transformation units and nonlinear activation functions for the input vector. In accordance with Ref. [25], we employ the Exponential Linear Unit as our chosen activation function, as defined by its form in [39]:
\[f\left(x\right)=\left\{\begin{array}{cc}x&x>0\\ \alpha\left(e^{x}-1\right)&x\leq 0\end{array}\right., \tag{18}\]
where the hyperparameter \(\alpha\) is set to 1.
The network is trained by minimizing a loss function, which quantitatively measures the discrepancy between the ground truth and predicted values. In this analysis, we adopt the mean absolute error (MAE), also known as the L1 loss function, as our choice of loss function. The linear weights and biases within the network are optimized using the back-propagation algorithm. We employ the Adam optimizer [40], a gradient-based optimization technique, to iteratively update the network parameters during training. This choice of optimizer also contributes to faster convergence. After multiple iterations, the network parameters are adjusted to minimize the loss. We consider training to have converged once the loss no longer decreases, and we accordingly fix the number of iterations to \(3\times 10^{5}\). Batch Normalization [41] is a technique designed to stabilize the distribution of inputs within each layer, allowing for higher learning rates and reduced sensitivity to initialization.
To determine the optimal network model, we train the network using 1701 SNe Ia from the Pantheon+ sample (more on this below) and assess the fitting effect through K-fold cross-validation [42]. In K-fold cross-validation, the training set is divided into k smaller sets, with k-1 folds used as training data for model training, and the remaining fold used for validation. This process is repeated k times, with each fold serving as the validation data once. The final performance of the model is determined by averaging the performance across these k iterations. This approach is particularly useful when the number of samples available for learning is insufficient to split into traditional train, validation, and test sets, as is the case in our analysis. Additionally, it helps mitigate issues arising from the randomness in data partitioning. As a general guideline, we have selected \(k=10\) for our cross-validation procedure and have utilized the mean squared error (MSE) as the metric for validating the performance of the model.
Through our experimentation, we have found that the network model with a single hidden layer comprising 4096 neurons and without batch normalization yields the best results. We conduct comparisons with models having varying numbers of hidden layers, and we observe diminished performance as the number of hidden layers increases, accompanied by increased computational resource consumption. Regarding the number of neurons in the hidden layer, we observe negligible impact on the results, as reflected by the final MSE values consistently hovering around 0.0042, regardless of whether the number of neurons was set to 1024, 2048, 4096, or 8192. Importantly, the final MSE value with 4096 neurons was slightly smaller than those of the other three configurations, and as a result, we select this configuration. The validation values with and without batch normalization are 0.0049 and 0.0042, respectively.
Subsequently, we will employ the optimal network model, as described above, to reconstruct our distance-redshift curve.
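We use the publicly released code of Ref. [25] for the actual reconstruction. For orientation only, the snippet below is a generic PyTorch sketch (not the code of Ref. [25]) of the network described above: one hidden layer of 4096 neurons, ELU activation, no batch normalization, L1 (MAE) loss, and the Adam optimizer. The learning rate is an assumption made for illustration.

```python
import torch
import torch.nn as nn

class DistanceNet(nn.Module):
    """One hidden layer of 4096 neurons, ELU activation, no batch normalization."""
    def __init__(self, n_hidden=4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, n_hidden),
            nn.ELU(alpha=1.0),
            nn.Linear(n_hidden, 2),   # outputs: d(z) and its error sigma_d(z)
        )

    def forward(self, z):
        return self.net(z)

model = DistanceNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # learning rate assumed
loss_fn = nn.L1Loss()                                        # MAE loss

def train_step(z_batch, target_batch):
    # z_batch: (N, 1) tensor of redshifts; target_batch: (N, 2) tensor of [d(z), sigma_d(z)].
    optimizer.zero_grad()
    loss = loss_fn(model(z_batch), target_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, `train_step` would be iterated for the \(3\times 10^{5}\) iterations quoted above.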
### Supernova Data
In order to reconstruct the distance function \(d(z)\), we choose the latest combined sample of SNe Ia called Pantheon+ [43], which consists of 1701 light curves of 1550 SNe Ia, covering the redshift range \(0.001<z<2.3\). For each SN Ia, the distance modulus \(\mu\) is related to the luminosity distance \(D_{L}\) by
\[\mu(z)=5\log_{10}\left[\frac{D_{L}(z)}{\text{Mpc}}\right]+25\, \tag{19}\]
and the observed distance modulus is
\[\mu_{\text{obs}}(z)=m_{B}(z)+\kappa\cdot X_{1}-\omega\cdot\mathcal{C}-M_{B}\, \tag{20}\]
where \(m_{B}\) is the rest-frame \(B\) band peak magnitude, \(X_{1}\) and \(\mathcal{C}\), respectively, represent the time stretch of light curve and the SN color at maximum brightness, and \(M_{B}\) is the absolute \(B\)-band magnitude. Through the BEAMS with Bias Corrections method [44], the two nuisance parameters \(\kappa\) and \(\omega\) can be calibrated to zero. Then, the observed distance modulus can be simplified as
\[\mu_{\text{obs}}(z)=m_{\text{corr}}(z)-M_{B}\, \tag{21}\]
where \(m_{\text{corr}}\) is the corrected apparent magnitude. The absolute magnitude \(M_{B}\) is exactly degenerate with the Hubble constant \(H_{0}\). Once the value of \(M_{B}\) or \(H_{0}\) is known, the luminosity distances \(D_{L}(z)\) can be obtained from SNe Ia.
In this work, we adopt \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\) to normalize the SN Ia \(D_{L}(z)\) data as the observational \(d(z)\). That is, \(d(z)=(H_{0}/c)D_{L}(z)/(1+z)\). Note that the choice of \(H_{0}\) has no impact on our results, since the required distance ratio \(D_{s}/D_{ls}\) (see Equation (13)) is completely independent of \(H_{0}\). Having obtained the dataset of \(d(z)\), we adopt ANN to reconstruct the distance function \(d(z)\), and the results are shown in Figure 1. The black line represents the reconstructed function of \(d(z)\), and the shaded region is the corresponding \(1\sigma\) confidence level.
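The normalization described here amounts to the following conversion from a distance modulus to the dimensionless comoving distance (a minimal sketch; the example numbers are invented).

```python
C_KMS = 299792.458      # speed of light in km/s
H0 = 70.0               # km/s/Mpc, used only for the normalization

def dimensionless_d(z, mu):
    # Equation (19) inverted, then d(z) = (H0/c) * D_L(z) / (1 + z).
    D_L = 10.0 ** ((mu - 25.0) / 5.0)     # luminosity distance in Mpc
    return (H0 / C_KMS) * D_L / (1.0 + z)

# Invented example: a supernova at z = 0.5 with distance modulus mu = 42.3.
print(dimensionless_d(0.5, 42.3))
```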
### Strong-lensing Data
Recently, Ref. [15] compiled a galaxy-scale SGL sample including 161 systems with stellar velocity dispersion measurements, which is assembled with strict selection criteria to meet the assumption of spherical symmetry on the lens mass model. The observational information for each SGL system
is listed in the Appendix of Ref. [15], including the lens redshift \(z_{l}\), the source redshift \(z_{s}\), the Einstein angle \(\theta_{\rm E}\), the central velocity dispersion of the lensing galaxy \(\sigma_{\rm ap}\), the spectroscopic aperture angular radius \(\theta_{\rm ap}\), and the half-light angular radius of the lensing galaxy \(\theta_{\rm eff}\).
By fitting the two-dimensional power-law luminosity profile convolved with the instrumental point spread function to the high-resolution Hubble Space Telescope imaging data over a circle of radius \(\theta_{\rm eff}/2\) centered on the lensing galaxies, Ref. [15] measured the slopes of the luminosity density profile \(\delta\) for the 130 lensing galaxies in the full sample. They showed that \(\delta\) should be treated as an observable for each lens in order to get an unbiased estimate of the cosmological parameter \(\Omega_{\rm m}\). Therefore, the SGL sample we adopt here is the truncated sample of 130 SGL systems with \(\delta\) measurements, for which the redshift ranges of lenses and sources are \(0.0624\leq z_{l}\leq 0.7224\) and \(0.1970\leq z_{s}\leq 2.8324\), respectively. In this work, we use the reconstructed distance function \(d(z)\) from Pantheon+ SNe Ia to calibrate the distances \(d_{l}\) and \(d_{s}\) of the SGL systems. However, the SN Ia catalog extends only to \(z=2.3\). As such, we shall employ only a subset of the SGL sample that overlaps with the SN Ia data for the calibration. Thus, only 120 SGL systems with \(z_{s}\leq 2.3\) are available in our analysis.
### The Likelihood Function
By using the Python Markov Chain Monte Carlo module EMCEE [45] to maximize the likelihood function \(\mathcal{L}\), we simultaneously place limits on the PPN parameter \(\gamma_{\rm PPN}\), the curvature parameter \(\Omega_{k}\), and the lens model parameters (\(\alpha_{0}\), \(\alpha_{z}\), and \(\alpha_{s}\)). The likelihood function is defined as
\[\mathcal{L}=\prod_{i=1}^{120}\frac{1}{\sqrt{2\pi}\Delta\sigma_{0,i}^{\rm tot}} \exp\left[-\frac{1}{2}\left(\frac{\sigma_{0,i}^{\rm th}-\sigma_{0,i}^{\rm obs} }{\Delta\sigma_{0,i}^{\rm tot}}\right)^{2}\right]\, \tag{22}\]
where the variance
\[\left(\Delta\sigma_{0}^{\rm tot}\right)^{2}=\left(\Delta\sigma_{0}^{\rm SGL} \right)^{2}+\left(\Delta\sigma_{0}^{\rm SN}\right)^{2} \tag{23}\]
is given in terms of the total uncertainty \(\Delta\sigma_{0}^{\rm SGL}\) derived from the SGL observation (Equation (15)) and the propagated uncertainty \(\Delta\sigma_{0}^{\rm SN}\) derived from the distance calibration by SNe Ia. With Equation (13), the propagated uncertainty \(\Delta\sigma_{0}^{\rm SN}\) can be estimated as
\[\Delta\sigma_{0}^{\rm SN}=\sigma_{0}^{\rm th}\frac{\Delta D_{r}}{2D_{r}}\, \tag{24}\]
where \(D_{r}\) is a convenient notation for the distance ratio in Equation (13), i.e., \(D_{r}\equiv D_{s}/D_{ls}=d_{s}/d_{ls}\), and its uncertainty is \(\Delta D_{r}\). With the reconstructed distance function \(d(z)\), as well as its \(1\sigma\) uncertainty \(\Delta d(z)\), from the SN Ia data, we can calibrate the distances (\(d_{l}\) and \(d_{s}\)) and their corresponding uncertainties (\(\Delta d_{l}\) and \(\Delta d_{s}\)) for each SGL system. Thus, the uncertainty \(\Delta D_{r}\) of the distance ratio can be easily derived from Equation (17), i.e.,
\[\begin{split}(\Delta D_{r})^{2}=& D_{r}^{4}\left( \frac{\Omega_{k}d_{l}}{\sqrt{1+\Omega_{k}d_{l}^{2}}}-\frac{\sqrt{1+\Omega_{k} d_{s}^{2}}}{d_{s}}\right)^{2}(\Delta d_{l})^{2}\\ &+D_{r}^{4}\left(\frac{d_{l}}{d_{s}^{2}\sqrt{1+\Omega_{k}d_{s}^{ 2}}}\right)^{2}(\Delta d_{s})^{2}\.\end{split} \tag{25}\]
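Putting Equations (13), (17), and (22)-(25) together, a sketch of the log-likelihood that one would hand to EMCEE is given below. The per-lens dictionary keys, the treatment of \(\beta\) as a sampled parameter, and the radian units for the angles are assumptions made for illustration; they are not the authors' actual data structures.

```python
import numpy as np
from scipy.special import gamma as Gamma

lam = lambda x: Gamma((x - 1.0) / 2.0) / Gamma(x / 2.0)
C2 = 299792.458 ** 2      # c^2 in (km/s)^2

def log_likelihood(params, lenses):
    # params: gamma_PPN, Omega_k, alpha_0, alpha_z, alpha_s, beta
    g_ppn, Ok, a0, az, a_s, beta = params
    logL = 0.0
    for lens in lenses:       # each lens is a dict of observables (placeholder keys)
        alpha = a0 + az * lens["zl"] + a_s * np.log10(lens["Sigma"])       # Eq. (5)
        xi = alpha + lens["delta"] - 2.0
        F = ((3.0 - lens["delta"]) / ((xi - 2.0 * beta) * (3.0 - xi))
             * (lam(xi) - beta * lam(xi + 2.0))
             / (lam(alpha) * lam(lens["delta"])))                          # Eq. (12)
        dl, ds, ddl, dds = lens["dl"], lens["ds"], lens["ddl"], lens["dds"]
        Dr = 1.0 / (np.sqrt(1.0 + Ok * dl**2)
                    - dl / ds * np.sqrt(1.0 + Ok * ds**2))                 # Eq. (17)
        sig_th = np.sqrt(C2 / (2.0 * np.sqrt(np.pi)) * 2.0 / (1.0 + g_ppn) * Dr
                         * lens["thetaE"] * F
                         * (lens["theta_eff"] / (2.0 * lens["thetaE"])) ** (2.0 - alpha))  # Eq. (13)
        dDr2 = Dr**4 * ((Ok * dl / np.sqrt(1.0 + Ok * dl**2)
                         - np.sqrt(1.0 + Ok * ds**2) / ds) ** 2 * ddl**2
                        + (dl / (ds**2 * np.sqrt(1.0 + Ok * ds**2))) ** 2 * dds**2)        # Eq. (25)
        var = lens["dsigma_sgl"]**2 + (sig_th * np.sqrt(dDr2) / (2.0 * Dr))**2            # Eqs. (23)-(24)
        logL += -0.5 * (sig_th - lens["sigma_obs"])**2 / var - 0.5 * np.log(2.0 * np.pi * var)
    return logL
```

A sampler would then be built as, e.g., `sampler = emcee.EnsembleSampler(nwalkers, ndim, log_likelihood, args=[lenses])`, with the Gaussian prior on \(\beta\) added as a separate log-prior term.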
## III Results
The 1D marginalized probability distributions and 2D plots of the \(1-2\sigma\) confidence regions for the PPN parameter \(\gamma_{\rm PPN}\), the cosmic curvature \(\Omega_{k}\), and the lens model parameters (\(\alpha_{0}\), \(\alpha_{z}\), and \(\alpha_{s}\)), constrained by 120 SGL systems, are presented in Figure 2, and the best-fitting results are listed in Table 1. These contours show that at the \(1\sigma\) confidence level, the inferred parameter values are \(\gamma_{\rm PPN}=1.16^{+0.15}_{-0.12}\), \(\Omega_{k}=0.89^{+1.97}_{-1.00}\), \(\alpha_{0}=1.2^{+0.15}_{-0.15}\), \(\alpha_{z}=-0.37^{+0.22}_{-0.26}\), and \(\alpha_{s}=0.70^{+0.10}_{-0.09}\). We find that the measured \(\gamma_{\rm PPN}\) is consistent with the prediction of \(\gamma_{\rm PPN}=1\) from GR, and its constraint accuracy is about 11.6%. While \(\Omega_{k}\) is weakly constrained, it is still compatible with zero spatial curvature within \(1\sigma\) confidence level. We also find that the inferred \(\alpha_{z}\) and \(\alpha_{s}\) separately deviate from zero at \(\sim 2\sigma\) and \(\sim 8\sigma\) levels, confirming previous finding that the total mass density slope \(\alpha\) strongly depends on both the lens redshift and the surface mass density [15].
We further explore the scenario of adopting a prior of flatness, i.e., \(\Omega_{k}=0\). For this scenario, as shown in Figure 3 and Table 1, the marginalized distribution gives \(\gamma_{\rm PPN}=1.09^{+0.11}_{-0.10}\), representing a precision of 9.6%, in good agreement with the prediction of GR. If instead we adopt a prior of \(\gamma_{\rm PPN}=1\) (i.e., assuming GR holds) and allow \(\Omega_{k}\) to be a free parameter, the resulting constraints on \(\Omega_{k}\) and the lens model parameters are displayed in Figure 4 and Table 1. The marginalized \(\Omega_{k}\) constraint is \(\Omega_{k}=0.12^{+0.78}_{-0.47}\), consistent with a spatially flat universe. The comparison among lines 1-3 of Table 1 suggests that different choices of priors have little effect on the lens model parameters (\(\alpha_{0}\), \(\alpha_{z}\), and \(\alpha_{s}\)).
Figure 1: Reconstruction of the dimensionless comoving distance \(d(z)\) from Pantheon+ SNe Ia using ANN. The shaded area is the \(1\sigma\) confidence level of the reconstruction. The blue dots with error bars represent the observational data.
## IV Conclusion and Discussions
Galaxy-scale SGL systems, combined with stellar velocity dispersion measurements of lensing galaxies, provide a powerful probe to test the validity of GR by constraining the PPN parameter \(\gamma_{\rm PPN}\) on kiloparsec scales. To test GR in this manner, however, it is necessary to know the angular diameter distances between the observer, lens, and source. Conventionally, the required distances are calculated within the standard \(\Lambda\)CDM cosmological model. Such distance calculations would involve a circularity problem in testing GR, since \(\Lambda\)CDM itself is established on the framework of GR. In this paper, in order to address the circularity problem, we have employed the DSR in the FLRW metric to estimate not only \(\gamma_{\rm PPN}\) but also the spatial curvature \(\Omega_{k}\) independently of any specific cosmological model. To calibrate the distances of the SGL systems, we have introduced a new nonparametric approach for reconstructing the distance-redshift relation from the Pantheon+ SN Ia sample using an ANN, which has no assumptions about the observational data and is a completely
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Priors & \(\gamma_{\rm PPN}\) & \(\Omega_{k}\) & \(\alpha_{0}\) & \(\alpha_{z}\) & \(\alpha_{s}\) \\ \hline None & \(1.16^{+0.15}_{-0.12}\) & \(0.89^{+1.97}_{-1.00}\) & \(1.20^{+0.15}_{-0.14}\) & \(-0.37^{+0.22}_{-0.17}\) & \(0.70^{+0.10}_{-0.09}\) \\ \(\Omega_{k}=0\) & \(1.09^{+0.11}_{-0.10}\) & & \(1.22^{+0.14}_{-0.14}\) & \(-0.20^{+0.11}_{-0.11}\) & \(0.67^{+0.09}_{-0.09}\) \\ \(\gamma_{\rm PPN}=1\) & & \(0.12^{+0.78}_{-0.47}\) & \(1.10^{+0.11}_{-0.12}\) & \(-0.20^{+0.15}_{-0.16}\) & \(0.74^{+0.08}_{-0.08}\) \\ \hline \end{tabular}
\end{table}
Table 1: Constraint results for All Parameters with Different Priors
Figure 2: 1D marginalized probability distributions and 2D \(1-2\sigma\) confidence contours for the PPN parameter \(\gamma_{\rm PPN}\), the cosmic curvature \(\Omega_{k}\), and the lens model parameters (\(\alpha_{0}\), \(\alpha_{z}\), and \(\alpha_{s}\)). The dashed lines represent \(\gamma_{\rm PPN}=1\) and \(\Omega_{k}=0\), corresponding to a flat universe with the validity of GR.
data-driven approach.
By combining 120 well-selected SGL systems with the reconstructed distance function from 1701 data points of SNe Ia, we have obtained simultaneous estimates of \(\gamma_{\rm PPN}\) and \(\Omega_{k}\) without any specific assumptions about the contents of the universe or the theory of gravity. Our results show that \(\gamma_{\rm PPN}=1.16^{+0.15}_{-0.12}\) and \(\Omega_{k}=0.89^{+1.97}_{-1.00}\). The measured \(\gamma_{\rm PPN}\) is in good agreement with the prediction of GR with 11.6% accuracy. If we use flatness as a prior (i.e., \(\Omega_{k}=0\)), we infer that \(\gamma_{\rm PPN}=1.09^{+0.11}_{-0.10}\), representing a precision of 9.6%. If we instead assume the conservation of GR (i.e., \(\gamma_{\rm PPN}=1\)) and allow \(\Omega_{k}\) to be a free parameter, we find \(\Omega_{k}=0.12^{+0.78}_{-0.47}\). The measured \(\Omega_{k}\) is consistent with zero spatial curvature, suggesting that there is no significant deviation from a flat universe.
In the literature, based on a sample of 80 SGL systems, Ref. [10] obtained a constraint accuracy of 25% on the PPN parameter \(\gamma_{\rm PPN}\) under the assumption of a flat \(\Lambda\)CDM model with parameters taken from Planck observations. Within the same context of \(\Lambda\)CDM, Ref. [11] concluded that \(\gamma_{\rm PPN}=0.97\pm 0.09\) (representing a precision of 9.3%) by analyzing the nearby lens ESO 325-G004. Through the reanalysis of four time-delay lenses, Ref. [12] obtained simultaneous constraints of \(\gamma_{\rm PPN}\) and the Hubble constant \(H_{0}\) for flat \(\Lambda\)CDM, yielding \(\gamma_{\rm PPN}=0.87^{+0.19}_{-0.17}\) (representing a precision of 21%) and \(H_{0}=73.65^{+1.95}_{-2.26}\) km s\({}^{-1}\) Mpc\({}^{-1}\). Within a flat FLRW metric, Ref. [13] used 120 lenses to achieve a model-independent estimate of \(\gamma_{\rm PPN}=1.065^{+0.064}_{-0.074}\) (representing a precision of 6.5%) by employing the GP method to reconstruct the SN distances. As a further refinement, Ref. [14] removed the flatness assumption and implemented the DSR to obtain model-independent constraints of \(\gamma_{\rm PPN}=1.11^{+0.11}_{-0.09}\) (representing a precision of 9.0%) and \(\Omega_{k}=0.48^{+1.09}_{-0.71}\). Note that in Ref. [14] the distances of the SGL systems were determined by fitting a third-order polynomial to the SN Ia data. Unlike the polynomial fit, which relies on an assumed parameterization, the ANN used in this work is a completely data-driven approach that can reconstruct a function from various data without assuming a parameterization of the function. Moreover, unlike the GP method, which relies on the assumption of Gaussian distributions for the observational random variables, the ANN method makes no assumptions about the data. More importantly, compared to previous results, our work yields comparable constraints on \(\gamma_{\rm PPN}\), which indicates the effectiveness of data-driven modeling based on the ANN.
Looking forward, the forthcoming Large Synoptic Survey Telescope (LSST) survey, with its excellent operation performance, holds great promise for detecting a large number of lenses, potentially reaching up to 120,000 in the most optimistic scenario [46]. By setting a prior on the curvature parameter \(-0.007<\Omega_{k}<0.006\), Ref. [10] showed that 53,000 simulated LSST strong lensing data would set a stringent constraint of \(\gamma_{\rm PPN}=1.000^{+0.009}_{-0.0011}\), reaching a precision of \(10^{-3}\sim 10^{-4}\). Similarly, Ref. [47] performed a robust extragalactic test of GR using a well-defined sample of 5,000
Figure 3: Same as Figure 2, except now for the scenario with a prior of \(\Omega_{k}=0\). The dashed line represents \(\gamma_{\rm PPN}=1\) predicted by GR.
simulated strong lenses from LSST, yielding an accuracy of 0.5%. In brief, much more stringent constraints on both \(\gamma_{\rm PPN}\) and \(\Omega_{k}\), as discussed in this work, can be expected with the help of future lens surveys.
###### Acknowledgements.
This work is partially supported by the National Natural Science Foundation of China (grant Nos. 12373053 and 12041306), the Key Research Program of Frontier Sciences (grant No. ZDBS-LY-7014) of Chinese Academy of Sciences, the Natural Science Foundation of Jiangsu Province (grant No. BK20221562), and the Young Elite Scientists Sponsorship Program of Jiangsu Association for Science and Technology.
|
2309.05865 | Force-directed graph embedding with hops distance | Graph embedding has become an increasingly important technique for analyzing
graph-structured data. By representing nodes in a graph as vectors in a
low-dimensional space, graph embedding enables efficient graph processing and
analysis tasks like node classification, link prediction, and visualization. In
this paper, we propose a novel force-directed graph embedding method that
utilizes the steady acceleration kinetic formula to embed nodes in a way that
preserves graph topology and structural features. Our method simulates a set of
customized attractive and repulsive forces between all node pairs with respect
to their hop distance. These forces are then used in Newton's second law to
obtain the acceleration of each node. The method is intuitive, parallelizable,
and highly scalable. We evaluate our method on several graph analysis tasks and
show that it achieves competitive performance compared to state-of-the-art
unsupervised embedding techniques. | Hamidreza Lotfalizadeh, Mohammad Al Hasan | 2023-09-11T23:08:03Z | http://arxiv.org/abs/2309.05865v1 | # Force-directed graph embedding with hops distance
###### Abstract
Graph embedding has become an increasingly important technique for analyzing graph-structured data. By representing nodes in a graph as vectors in a low-dimensional space, graph embedding enables efficient graph processing and analysis tasks like node classification, link prediction, and visualization. In this paper, we propose a novel force-directed graph embedding method that utilizes the steady acceleration kinetic formula to embed nodes in a way that preserves graph topology and structural features. Our method simulates a set of customized attractive and repulsive forces between all node pairs with respect to their hop distance. These forces are then used in Newton's second law to obtain the acceleration of each node. The method is intuitive, parallelizable, and highly scalable. We evaluate our method on several graph analysis tasks and show that it achieves competitive performance compared to state-of-the-art unsupervised embedding techniques.
Graph embedding, Force-directed, Unsupervised, Dimension reduction, Data representation
## I Introduction
In graph theory, a graph is a mathematical structure that represents relationships between objects. Objects and relationships between pairs of objects are represented by a set of vertices and edges respectively. Another conventional name for vertex and edge is node and link which are used interchangeably in this paper. Graphs are the widely used data structures for representing relationships between entities in various domains of applications, such as social networks, biological networks, knowledge graphs, and communication networks. Analyzing graphs can provide valuable insights into these domains. Nevertheless, working with graph data also poses significant computational challenges due to its complex, interconnected structure.
A key challenge when analyzing graphs is developing useful vector representations of the nodes, which can enable downstream machine learning tasks like node classification, link prediction, and clustering [1, 2, 3].
Graph embedding has emerged as an effective technique for converting graph-structured data into a form that is more amenable to analysis and computation. The key idea is to represent each node in the graph as a vector in a low-dimensional space. In other words, the objective of graph embedding on a graph with \(n\) nodes is to place each node in a d-dimensional space, with \(d\ll n\) such that the placements reflect important graph structure and connectivity information. The embedding is constructed to preserve node proximity, which refers to the proximity or similarity between a pair of nodes based on the graph structure. Nodes that are "close" in terms of hop distance in the original graph topology should be embedded closer together in the vector space.
Recent work has shown that low-dimensional embeddings of nodes in large graphs can encode useful information about network structure and node metadata [1, 2, 3, 4, 5, 6]. The basic methodology behind these node embedding approaches is to use dimensionality reduction techniques tailored for graph structures to distill high-dimensional graph information into a dense vector representation for each node.
Graph embedding methods can be categorized into two major sets of supervised and unsupervised embedding methods. In unsupervised embedding, the only information available during the embedding process is the node connectivity information in terms of edges. In supervised embedding however, we also may have other information such as node and edge features.
In this work, we present a novel unsupervised graph embedding method based on force-directed objective functions. Force-directed approaches have been extensively used for graph drawing and visualization, where node positions are optimized to reflect graph topology. We adopt the force-directed paradigm to learn graph embeddings that preserve structural proximity between nodes. Our method simulates attractive and repulsive forces between nodes to obtain a low-dimensional embedding that maintains the graph topology, node proximities, and graph connectivity information. We demonstrate the effectiveness of our technique on tasks like node classification, link prediction, and visualization on a diverse set of graph network datasets. Our force-directed embedding method outperforms existing unsupervised techniques, highlighting the utility of physics-based simulations for graph representation learning.
## II Previous Works
A variety of unsupervised graph embedding techniques have been developed that aim to preserve graph topology in the embedded space without relying on node attributes or labels. We briefly review some representative methods from the major categories discussed in the following.
Matrix factorization-based techniques like Locally Linear Embedding (LLE) [7] and Laplacian Eigenmaps [8] aim to
factorize a matrix representing node proximity, like the adjacency matrix or Laplacian matrix, to obtain the embeddings. For example, Laplacian Eigenmaps minimize a cost function that penalizes large distances between connected nodes in the embedding. While these methods encode global graph structure, the eigendecomposition becomes expensive for large graphs.
Edge reconstruction techniques like LINE [4] and SDNE [5] directly optimize an objective like edge reconstruction probability or representation likelihood over the embedding vectors. This is efficient but relies solely on direct first-order connections between nodes.
DeepWalk [1] and Node2vec [3] are random walk-based methods. In random walk-based methods, a set of nodes is sampled from the graph through randomized walks. The random traversal is supposed to reflect the connectivity features of the graph. The embedding of the sampled nodes is then optimized with respect to the co-occurrence probability of neighboring nodes in these walks. Node2vec expands on DeepWalk and uses a different random walk strategy by having a bias for BFS traversal over DFS. Both these works use the skip-gram model of word2vec [9] for their optimization objective.
Force-directed algorithms are among the most popular methods for visualizing graph topology and symmetries through 2-dimensional or 3-dimensional layouts. The key idea is to model the graph as a physical system with forces between nodes that determine their layout. Distance-based method [10] defines an ideal distance between nodes based on graph distance to minimize the difference between the ideal and actual distances. Electrical force models [11] use electrical repulsion forces between nodes. These basic force-directed methods work well for small graphs but often get trapped in local optima for larger graphs. Multilevel approaches [12, 13] use previous approaches while coarsening the graph recursively and refining the layout from coarse to fine. This approach helps avoid local minima. N-body simulations [14] use techniques like Barnes-Hut to approximate long-range forces efficiently which allows scaling. In [15] the graph is embedded in a high-dimensional space and projected to 2D or 3D.
While extensively studied for visualization, force-directed techniques have rarely been applied to graph embedding. Rahman et al. in a recent work [16, 17] proposed Force2Vec, which uses the spring electrical force model as a loss function. They calculate the repulsive force between all pairs of nodes and the attractive force between only pairs of nodes that are connected by an edge in the graph. In addition, they use stochastic gradient descent with negative sampling for optimization, along with optimizations such as batching and vector operations to scale training.
Our force-directed embedding method is based on the famous kinetic formula \(x=1/2at^{2}+v_{0}t+x_{0}\) to calculate the gradient for embedding at each step. The acceleration is calculated using Newton's \(2^{nd}\) law \(a=F/m\) by calculating the sum of all forces induced on each node. We demonstrate through experiments that this holistic approach outperforms existing techniques on several graph analysis tasks.
## III The Proposed Method
### _Background_
Let \(G(V,E)\) denote a graph with \(V\) and \(E\) sets representing its sets of \(n\) vertices and \(m\) edges respectively. The embedding process learns the mapping function \(f\colon V\longrightarrow\mathbb{R}^{d}\), \(d\ll n\) which maps each element of the set \(V\) to a d-dimensional vector space. In other words \(f(u_{i})=z_{i}\) for \(u_{i}\in V,z_{i}\in\mathbb{R}^{d}\). A path from node \(u_{i}\) to \(u_{j}\), \(i\neq j\) in a graph is a sequence of connected nodes with \(u_{i}\) at the head of the sequence and \(u_{j}\) at the end. A hops-distance matrix \(\mathbf{H}\in\mathbb{Z}_{\geq 0}^{n\times n}\) represents the shortest path distance length between nodes. \(\mathbf{H}_{ij}\in\mathbb{Z}_{\geq 0}\) represents the length of the shortest path from node \(u_{i}\) to \(u_{j}\) such that each traversal from one node to the next in the sequence counts as one hop. \(\mathbf{H}_{ij}=0\) if \(i=j\), i.e. link to self counts as zero-hop. \(\mathbf{H}_{ij}=\infty\) if there is no path between the two nodes.
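To make the hops-distance matrix \(\mathbf{H}\) concrete, the following sketch computes it by breadth-first search from every node. The adjacency-list representation and the function name are illustrative choices of ours, not part of the paper.

```python
from collections import deque

import numpy as np

def hops_distance_matrix(adj):
    """Shortest-path hop counts: H[i, j] = 0 if i == j, np.inf if unreachable.

    adj: dict mapping node index -> iterable of neighbor indices (undirected graph).
    """
    n = len(adj)
    H = np.full((n, n), np.inf)
    for source in range(n):
        H[source, source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if H[source, v] == np.inf:       # not visited yet
                    H[source, v] = H[source, u] + 1
                    queue.append(v)
    return H

# Example: path graph 0-1-2-3
print(hops_distance_matrix({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))
```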
### _Our proposed force-directed methodology_
The main premise of our methodology is to embed nodes in the d-dimensional space such that the Euclidean distance of node pairs is proportional to their hops-distance and the path count between them. We want the nodes that have shorter hop distances in the graph to be embedded proportionally closer in the d-dimensional space. In addition, we want the embedding to capture and reflect other graph structural and connectivity information such as community, bridge, path count, etc.
The force-directed approach comes into frame when considering the kinetic equation model and Newton's second law as follows.
\[x=\frac{1}{2}at^{2}+v_{0}t+x_{0} \tag{1}\] \[\frac{dx}{dt}=at+v_{0} \tag{2}\] \[\frac{dx}{dt}=at_{\text{step}}=a \tag{3}\] \[a=\frac{F}{m} \tag{4}\]
The equation (1) shows location \(x\in\mathbb{R}^{d}\) of a mobile object in a d-dimensional vector space at time \(t\) with acceleration rate \(a\in\mathbb{R}^{d}\) and initial velocity \(v_{0}\in\mathbb{R}^{d}\) and initial location \(x_{0}\in\mathbb{R}^{d}\). In equation (2) \(\frac{dx}{dt}\) is the rate of change of \(x\) at time \(t\) and is proportional to acceleration factor \(a\).
We wish to use \(dx\) from equation (2) as the gradient for embedding in a vector space in a step-wise fashion. At the beginning of each step, all nodes are halted which makes \(v_{0}=0\). In addition, we assume timesteps \(t_{\text{step}}=1\) so that we retain an approximately steady acceleration rate during the step and do not render the kinetic equation irrelevant.
According to Newton's second law (4), the acceleration of an object is directly proportional to the net force acting on the object, and inversely proportional to the mass of the object.
In our proposed method, we assume that all nodes of a graph are mutually exerting attractive and repulsive forces on
each other. Therefore, the net force imposed on a node can be calculated using (5). In this equation, \(F(u)\) is the net force on node \(u\), while \(F_{\text{attr}}(u)\) and \(F_{\text{repl}}(u)\) are the total attractive and repulsive forces acting on \(u\).
\[F(u)=F_{\text{attr}}(u)+F_{\text{repl}}(u) \tag{5}\]
We want a larger Euclidean distance between two embeddings to reduce the repulsive force and increase the attractive force, while smaller distances have the opposite effect. In addition, we want similar levels of attraction between node pairs within the same hops-distance group. We use equations (6) and (7) as the attractive and repulsive force models, respectively. In these equations, \(\|z_{uv}\|=\|z_{v}-z_{u}\|\) is the Euclidean distance between the embeddings of two nodes \(u\) and \(v\), \(h_{uv}\) is their hops-distance and \(0<\alpha<1\). \(V\) is the set of all nodes while \(N_{u}^{(h)}\) is the set of neighbors of \(u\) at \(h\)-hops distance. \(unit_{uv}=\frac{z_{uv}}{\|z_{uv}\|}\) is the unit vector along which the tension between \(u\) and \(v\) produces a force.
\[F_{\text{attr}}(u) =\sum_{h=1}^{\infty}\frac{1}{|N_{u}^{(h)}|}\sum_{v\in N_{u}^{(h) }}\alpha^{h_{uv}-1}\|z_{uv}\|\times unit_{uv} \tag{6}\] \[F_{\text{repl}}(u) =\frac{1}{|V|}\sum_{v\in V}h_{uv}e^{-\|z_{uv}\|}\times unit_{uv} \tag{7}\]
All in all, by substituting (5) into the gradient equation derived in (3), we arrive at equation (8). In this equation, \(dz_{u}\) is the gradient of the embedding of node \(u\). For the mass of node \(u\) we let \(m_{u}=\deg u\). The intuition behind this is that the gradient of a node with more edges should be discounted, increasing its inertia to alteration.
\[dz_{u}=\frac{F(u)}{m_{u}} \tag{8}\]
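As a rough illustration of Eqs. (6)-(8), the sketch below evaluates the attractive force, the repulsive force, and the resulting gradient for a single node with a dense pass over all other nodes, implementing the equations as written. It assumes a connected graph (all hop distances finite); the value of \(\alpha\) and the variable names are placeholders of ours.

```python
import numpy as np

def node_gradient(u, Z, H, deg, alpha=0.3):
    """Evaluate F_attr(u) (Eq. 6), F_repl(u) (Eq. 7) and dz_u = F(u) / m_u (Eq. 8)."""
    n, d = Z.shape
    hops = H[u]
    diff = Z - Z[u]                                    # z_uv = z_v - z_u for every v
    dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12
    unit = diff / dist                                 # unit_uv
    # Eq. (6): attraction grows with Euclidean distance, decays with hop distance
    F_attr = np.zeros(d)
    for h in range(1, int(hops.max()) + 1):
        N_h = hops == h                                # neighbors of u at h hops
        if N_h.any():
            F_attr += (alpha ** (h - 1)) * (dist[N_h] * unit[N_h]).mean(axis=0)
    # Eq. (7): repulsion over all nodes, damped exponentially in Euclidean distance
    F_repl = (hops[:, None] * np.exp(-dist) * unit).sum(axis=0) / n
    return (F_attr + F_repl) / deg[u]                  # Eq. (8) with m_u = deg(u)
```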
### _Avoiding local optima_
A force-directed system may converge to a local optimum point. To circumvent the local optima we adopt a random drop strategy. For each calculated gradient, we randomly pick some dimensions with 0.5 probability and zero them out. This strategy enhances the embedding results as shown later in this manuscript.
### _The algorithm_
```
Randomly initialize all z_u, u ∈ V
while Σ_{u∈V} ||F(u)|| > ε do
    for all u ∈ V do                ▷ Calculate the gradients first
        dz_u ← F(u) / deg(u)
    end for
    for all u ∈ V do                ▷ Update embeddings
        z_u ← z_u + RandomDrop(dz_u)
    end for
end while
```
**Algorithm 1** Force-directed Graph Embedding
The algorithm 1 shows the procedure of our force-directed embedding. The algorithm starts by initializing random embedding vectors for each node. It then starts a loop where it calculates the net force imposed on each node to obtain its embedding gradient. Finally, it applies a random drop function on the gradients and updates the corresponding embedding vectors. The loop stops when the force-directed system enters an equilibrium state where the sum of the magnitude of net forces on all nodes reaches a minimum level. The two inner loops can be parallelized using vector operations.
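A minimal end-to-end sketch of Algorithm 1, including the random-drop step described above, could look as follows. It reuses the helpers sketched earlier and runs for a fixed number of iterations instead of checking the force threshold, which is a simplification on our part.

```python
import numpy as np

def force_directed_embedding(adj, d=128, iters=100, drop_p=0.5, seed=0):
    """Sketch of Algorithm 1 with the RandomDrop step."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    H = hops_distance_matrix(adj)                     # sketched earlier
    deg = np.array([max(len(adj[u]), 1) for u in range(n)])
    Z = rng.normal(size=(n, d))                       # random initialization
    for _ in range(iters):                            # fixed budget instead of force threshold
        dZ = np.stack([node_gradient(u, Z, H, deg) for u in range(n)])
        mask = rng.random(dZ.shape) >= drop_p         # RandomDrop: zero each dim w.p. drop_p
        Z += dZ * mask
    return Z
```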
### _Algorithm complexity_
In this algorithm, forces between every pair of nodes must be calculated. Therefore, the time complexity of the algorithm is \(O(n^{2})\). However, since the algorithm is parallelizable, it is possible to trade off time against space complexity by using batching techniques.
## IV Evaluation Results
In this section, we discuss the evaluation results of our force-directed methodology and compare it against the best available methods prior to this work. All evaluations and comparisons are performed on 128-dimensional graph embeddings generated by the target methods. The two major graph tasks for evaluating the quality of a graph embedding are node classification and link prediction; they are thoroughly discussed in their corresponding sections. In these evaluations, we used RandomForestClassifier, the random forest implementation from the Scikit-learn package [18, 19], version 1.2.2. Along with the default parameters of RandomForestClassifier, we used 2% as the minimum fraction of samples for tree node splitting. The rationale behind using random forest is that it is one of the best-performing classification algorithms.
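As a sketch of this evaluation setup for node classification, assuming the stated 2% threshold corresponds to scikit-learn's min_samples_split parameter given as a fraction:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

def evaluate_node_classification(Z, labels, seed=0):
    """Z: (n, 128) node embeddings; labels: (n,) class labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        Z, labels, test_size=0.2, random_state=seed)
    clf = RandomForestClassifier(min_samples_split=0.02, random_state=seed)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    return accuracy_score(y_te, pred), f1_score(y_te, pred, average="macro")
```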
The implementation of this work and the evaluation methodology discussed in this section can be accessed at [https://github.com/HessamLa/forcedirected](https://github.com/HessamLa/forcedirected).
### _Datasets and baseline methods_
The following graph datasets were used for evaluating various embedding methods.
* Cora [20]; This dataset consists of 2708 scientific publications, each classified into one of seven classes. Every pair of citing and cited publications in this dataset is a link which accounts for 5429 citations.
* CiteSeer [20]; This dataset contains 3312 scientific publications, each classified into one of six topics. The publications have 4732 citations between them, forming a citation network.
* Pub-Med [21]; The Pubmed Diabetes dataset contains 19717 scientific publications related to diabetes, sourced from the PubMed database. These publications are categorized into one of three classes. Its citation network consists of 44338 citation links.
* Ego-Facebook [22]; This is a Facebook friends network collected from survey participants, representing ego-networks centered around individual users. Ten users provided access to their friend connections on Facebook, and manually labeled groups of friends into "circles" or communities. In total, the Ego-Facebook dataset comprises 4039 nodes representing friends, connected by 88234 links, and organized into 193 ground-truth circles labeled by the ego users. On average, each ego network has around 400 friends, divided into 19 circles of 22 friends each. The circles capture social contexts such as university friends, relatives, sports teams, etc. By capturing full ego-networks with ground-truth community assignments, the Ego-Facebook dataset enables analysis of community detection and graph mining algorithms, providing a benchmark for methods aiming to identify overlapping and hierarchical circles in social networks centered around individuals.
* Wiki1; This data is generated from scraping Wikipedia pages. It consists of 2405 Wikipedia pages and 17981 hyperlink redirections between them. The pages are exclusively classified into 19 categories. Footnote 1: [https://github.com/thunlp/MMDW/](https://github.com/thunlp/MMDW/) accessed on July 28, 2023
* CORA-Full [23]; This dataset (named CORA in the original paper) is a citation network containing 19793 scientific publications classified into 70 different classes. Each publication is represented as a 0/1-valued word vector indicating the absence/presence of 1433 unique dictionary words extracted from the paper abstracts. In total, the dataset contains 65311 links representing citations between the publications. The rich feature representation coupled with the citation linkage structure makes CORA well-suited for analyzing representation learning techniques on node classification tasks.
All of the above graphs were used for the link prediction task. Node classification, however, requires node labels; since the Ego-Facebook graph does not contain node labels, it was not used for node classification.
We compare our embedding method against LINE, DeepWalk, Node2vec and GraphSAGE methods in terms of accuracy and macro F-1 scores of link prediction and node classification tasks.
### _Link Prediction_
The objective of link prediction is to predict whether a link exists between two given nodes. Here, the embeddings of two nodes are passed to the predictor, and a boolean output is obtained that indicates the existence or non-existence of a link between the two corresponding nodes.
For the link prediction task in this evaluation, the negative sampling technique is used. 80% of the available node pairs are used for training and 20% for testing. For each (method, dataset) pair the evaluation is performed 7 times and the average is reported here. Tables I and II show the accuracy and macro F-1 score of the link prediction task with random forest. Our Force-Directed method outperformed the best available unsupervised embedding methods while maintaining a low standard deviation for the 7 evaluation runs.
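A hedged sketch of this link-prediction protocol is given below. Representing a node pair by concatenating the two embeddings, and the particular negative-sampling loop, are assumptions of ours, since the paper does not spell out these details.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

def evaluate_link_prediction(Z, edges, seed=0):
    """Z: (n, d) node embeddings; edges: list of (u, v) positive pairs."""
    rng = np.random.default_rng(seed)
    n = Z.shape[0]
    edge_set = {tuple(sorted(e)) for e in edges}
    negatives = []                                   # sample as many non-edges as edges
    while len(negatives) < len(edge_set):
        u, v = rng.integers(n, size=2)
        if u != v and tuple(sorted((int(u), int(v)))) not in edge_set:
            negatives.append((int(u), int(v)))
    pairs = list(edge_set) + negatives
    y = np.array([1] * len(edge_set) + [0] * len(negatives))
    X = np.array([np.concatenate([Z[u], Z[v]]) for u, v in pairs])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    clf = RandomForestClassifier(min_samples_split=0.02, random_state=seed)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    return accuracy_score(y_te, pred), f1_score(y_te, pred, average="macro")
```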
### _Node classification task_
The objective of node classification is to predict the label of a node. For this task, the classifier was trained on 80% of the nodes and tested on the remaining 20%. We performed 7 evaluation runs for this task and report the averages in Tables III and IV for accuracy and macro F-1 scores, respectively. As shown there, our method outperformed the others on all datasets.
### _Embedding progression_
The force-directed embedding starts from randomly generated embeddings in the vector space. As the algorithm progresses, the Euclidean distances between node-pair embeddings for each group of h-hop-distant nodes gradually converge to values that, on average, reflect the hop distance between the pairs with negligible standard deviation.
Figure 3 shows the progression of Euclidean distances when using the Force-Directed method for generating embeddings for the Pub-Med graph. In this figure, "hops(n)" indicates the set of node pairs at n-hop distance. For example, "hops1" is the set of node pairs that are immediate neighbors, while "hops7" indicates the set of pairs that are 7 hops away. The maximum distance on this graph is 18 hops. This figure shows that nodes in closer vicinity, i.e. in the same cluster, quickly converge to stable distances. However, Euclidean distances of further node pairs take more iterations to converge, indicating that clusters take more time to converge to stable distances.
Fig. 1: Accuracy comparison of link prediction task with decision tree classifier.
Fig. 2: Accuracy comparison of node classification task with random forest classifier.
This observation is pointed out in the Qualitative analysis subsection.
### _Qualitative analysis_
Figure 4 shows the 2D representation of Ego-Facebook network embeddings generated by the three best-performing methods, Force-Directed, DeepWalk, and Node2vec. For producing a 2D visualization of the embeddings, the 128-dimensional embeddings are reduced to 2 dimensions using PCA. In these graphs, nodes with higher degrees have brighter colors and are depicted larger. As seen in these graphs, while all methods perform rather well in keeping clusters of interconnected nodes together, the force-directed embedding does an even better job of producing more pronounced separation between clusters by distancing less interconnected clusters. In addition, the embedding reflects node types in terms of their structural role, such as hub, connector, and peripheral nodes. Hub nodes have a high degree and are connected to many other nodes in the network. They act as highly connected centers for information spread. Connector nodes link hubs from different communities, helping spread information between clusters. Peripheral nodes are at the edges of a cluster with fewer connections than inner members.
## V Conclusion
We presented a novel graph embedding technique using a force-directed approach that simulates physics-based forces between nodes to obtain embeddings preserving graph topology and connectivity patterns. Our intuitive method demonstrated strong performance on tasks like node classification and link prediction compared to existing unsupervised techniques. Important future work includes optimizing the complexity of
the algorithm and scaling the embedding method to massive graphs with billions of edges. Research into optimal force functions for the force-directed embedding method is a promising direction for future work.
|
2309.13961 | A hybrid quantum-classical approach to warm-starting optimization | The Quantum Approximate Optimization Algorithm (QAOA) is a promising
candidate for solving combinatorial optimization problems more efficiently than
classical computers. Recent studies have shown that warm-starting the standard
algorithm improves the performance. In this paper we compare the performance of
standard QAOA with that of warm-start QAOA in the context of portfolio
optimization and investigate the warm-start approach for different problem
instances. In particular, we analyze the extent to which the improved
performance of warm-start QAOA is due to quantum effects, and show that the
results can be reproduced or even surpassed by a purely classical preprocessing
of the original problem followed by standard QAOA. | Vanessa Dehn, Thomas Wellens | 2023-09-25T08:53:54Z | http://arxiv.org/abs/2309.13961v1 | # A hybrid quantum-classical approach to warm-starting optimization
###### Abstract
The Quantum Approximate Optimization Algorithm (QAOA) is a promising candidate for solving combinatorial optimization problems more efficiently than classical computers. Recent studies have shown that warm-starting the standard algorithm improves the performance. In this paper we compare the performance of standard QAOA with that of warm-start QAOA in the context of portfolio optimization and investigate the warm-start approach for different problem instances. In particular, we analyze the extent to which the improved performance of warm-start QAOA is due to quantum effects, and show that the results can be reproduced or even surpassed by a purely classical preprocessing of the original problem followed by standard QAOA.
## 1 Introduction
The quantum approximate optimization algorithm (QAOA) [1, 2] is often presented as a candidate for the efficient solution of combinatorial optimization problems in the current noisy intermediate-scale quantum (NISQ) era [3], i.e., the era of working with quantum hardware that has non-negligible error rates and limited size [4]. However, an advantage over classical algorithms has not yet been proven, so various modifications of QAOA such as warm-start QAOA, recursive QAOA, spanning tree QAOA [5, 6, 7, 8] or using an alternative objective function [9] have been proposed to further improve its performance.
The warm-start approach is a well-known recipe for reducing the time needed to solve an optimization problem by starting the optimization with an efficiently computable approximate solution [10]. Applying this concept to quantum optimization, it has been shown that warm-starting the QAOA based on the classically obtained solution of a relaxed continuous optimization problem shows an improvement at a lower depth [5, 11]. The warm-start QAOA has been applied and discussed in the context of the MaxCut problem [12, 13], where the key idea is to find the maximum cut of a graph. Various problems, such as unstructured clustering problems, can be mapped to such a graph optimization problem and are shown to be solved using the warm-start QAOA [14]. An extensive study of warm-start including the selection and optimization of hyperparameters (also applied to the MaxCut problem) is reported in [15]. In addition to the above mentioned versions of warm-start, referred to as the continuous warm-start procedure, Egger et al. [5] have proposed modifications such as the rounded warm-start QAOA, where the initial state is generated by randomly rounding the SDP (semidefinite programming) relaxation of the QUBO problem. Another approach, called classically-boosted quantum optimization algorithm (CBQOA) [16], also uses a rounded solution of the SDP relaxation as initial state, followed by an efficiently-implementable continuous-time quantum walk.
From a general perspective, warm-starting quantum optimization is an example of a hybrid approach, where classical tools for solving a combinatorial optimization problem are combined with quantum methods. The idea is to analyze the processing steps throughout an optimization algorithm and to evaluate which steps are more efficient with classical methods and in which steps quantum methods are more suitable [17, 18]. In this paper, we focus on the problem of portfolio optimization and investigate the original warm-start QAOA proposed in [5] to analyze what factors are responsible for its improved per
formance compared to its standard version. In particular, we compare the performance of warm-start QAOA and standard QAOA for different problem instances, depending on how well the classically obtained solution of the relaxed problem (serving as the starting point of warm-start QAOA) agrees with the desired solution of the combinatorial optimization problem. Moreover, we propose a classical preprocessing scheme for standard QAOA, thereby formulating a hybrid quantum-classical approach to warm-start optimization that reproduces or even outperforms the performance of the actual warm-start QAOA.
This paper is organized as follows: in Sec. 2, we give a short introduction to the portfolio optimization problem and a brief presentation of solving the problem for random problem instances by using the warm-start QAOA in comparison to the standard QAOA. We then subdivide our problem instances into "hot" and "cold" optimizable instances, i.e. instances with a relaxed solution that is either closer to or further from the optimal solution, and discuss the performance of warm-start vs. standard QAOA for different instances in Sec. 3. In Sec. 4, we present a classical preprocessing method for the standard QAOA routine and analyse its performance in comparison with standard and warm-start QAOA for random, hot and cold problem instances. Finally, in Sec.5 we conclude and give a brief perspective for further investigations.
## 2 Background
### Formulation of the Portfolio Problem
In the general quadratic unconstrained binary optimization (QUBO) problem, a cost function is defined on \(N\) binary variables \(F:\mathbb{B}^{N}\rightarrow\mathbb{R}\):
\[F(\mathbf{x})=\sum_{i,j=1}^{N}F_{ij}x_{i}x_{j}+\sum_{i=1}^{N}f_{i}x_{i}\, \tag{1}\]
with symmetric matrix \(F_{ij}\in\mathbb{R}^{N\times N}\), vector \(f_{i}\in\mathbb{R}^{N}\) and \(N\) binary variables \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{N})\in\{0,1\}^{N}\). Note that, since \(x_{i}=x_{i}^{2}\) for binary variables, the vector \(f_{i}\) can also be added to the diagonal of the matrix \(F_{ij}\). However, we will keep the linear term in the following, since it will be relevant for the continuous relaxation (see below). The solution of the problem is given by \(\mathbf{x}^{\text{opt}}\) that minimizes the above cost function.
Considering the portfolio problem as our problem model, we can directly identify the matrix \(F_{ij}\) with the covariance matrix of the stock returns \(\sigma_{ij}\) and the vector \(f_{i}\) with the expected return \(\mu_{i}\), respectively. The binary variables \(x_{i}\) represent the portfolio weights, which are 1 or 0, depending on the stock being chosen for the portfolio or not. Additionally, with the parameter \(q\in[0,1]\) the risk-preference, depending on whether the risk or the return is to be taken into account with larger weight, can be set. The function that has to be minimized thus reads:
\[F_{C}(\mathbf{x})=q\sum_{i,j=1}^{N}x_{i}x_{j}\sigma_{ij}-(1-q)\sum_{i=1}^{N}x_ {i}\mu_{i}. \tag{2}\]
The investment of a fixed amount of money can be addressed by introducing a budget constraint \(B=\sum_{i}x_{i}\), where \(B\) is the number of the assets selected from the \(N\) available assets for the portfolio. In order to deal with this constraint, a penalty term, denoted with \(A\), is added to the cost function. Finally, the cost function of our QUBO problem is obtained as:
\[F(\mathbf{x})=F_{C}(\mathbf{x})+A\left(B-\sum_{i=1}^{N}x_{i}\right)^{2}. \tag{3}\]
The routine that is used to determine a suitable factor for \(A\) is described in [19]. In the following, we refer to a created portfolio as "feasible" only if the budget constraint is met.
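To make Eqs. (2)-(3) concrete, a small sketch of the penalized cost function is shown below; the default values of \(q\), \(A\), and \(B\) are placeholders of ours.

```python
import numpy as np

def portfolio_cost(x, mu, sigma, q=0.5, A=1.0, B=5):
    """Penalized cost F(x) of Eq. (3) for a binary portfolio vector x."""
    x = np.asarray(x, dtype=float)
    risk = q * x @ sigma @ x                  # q * sum_ij x_i x_j sigma_ij
    ret = (1.0 - q) * mu @ x                  # (1 - q) * sum_i x_i mu_i
    penalty = A * (B - x.sum()) ** 2          # budget-constraint penalty
    return risk - ret + penalty
```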
### Standard and warm-start QAOA
In order to address the optimization problem with the quantum computer, the cost function has to be mapped onto a cost Hamiltonian \(\hat{F}\). This can be done by converting the binary variables to operators with \(x_{i}=(\hat{I}_{i}+\hat{Z}_{i})/2\), with \(\hat{I}_{i}\) and \(\hat{Z}_{i}\) being the identity and the Pauli-\(\hat{Z}\) operator acting on qubit \(i\) (\(i=1,2,\ldots,N\)). The QAOA circuit then generates the parametrized variational quantum state
\[\ket{\psi_{\gamma,\beta}}_{\text{std}}=\hat{U}_{\text{std}}(\beta_{p})e^{-i \gamma_{p}\hat{F}}...\ \hat{U}_{\text{std}}(\beta_{1})e^{-i\gamma_{1}\hat{F}}\ket{\psi_{0}}_{\text{ std}} \tag{4}\]
with parameters \(\vec{\gamma}=(\gamma_{1},...,\gamma_{p})\) and \(\vec{\beta}=(\beta_{1},...,\beta_{p})\) and number of iterations \(p\), and standard mixer \(\hat{U}_{\text{std}}\):
\[\hat{U}_{\text{std}}(\beta)=e^{i\beta\sum_{i=1}^{N}\hat{X}_{i}}. \tag{5}\]
The initial state is chosen to be \(\left|\psi_{0}\right\rangle=\left|+\right\rangle^{\otimes N}\), which is the minimum energy eigenstate of the mixing operator \(-\sum_{i}\hat{X}_{i}\). Further, all qubits are measured in the computational basis to determine the expectation value \(\left\langle\hat{F}\right\rangle\). This intermediate result is then passed on to a classical optimizer, which updates the parameters to minimize the expectation value.
The form of the generated quantum state of QAOA, see Eq. (4), is inspired by adiabatic quantum computing (AQC) in terms of starting in the ground state of the mixing Hamiltonian, which is then gradually transferred to the ground state of the cost Hamiltonian by approximating the adiabatic annealing path via Trotterization [20, 21] for iteration depth \(p\rightarrow\infty\). Therefore, the performance of QAOA improves with increasing \(p\)[22].
In contrast to the standard variant of QAOA, which starts from a uniform superposition \(\left|\psi_{0}\right\rangle=\left|+\right\rangle^{\otimes N}\) of all portfolios, the warm-start variant, introduced by [5, 13], starts with solving the continuous relaxation of the QUBO problem
\[\mathbf{x}^{*}=\underset{\mathbf{\tilde{x}}\in[0,1]^{N}}{\arg\min}\ F( \mathbf{\tilde{x}}) \tag{6}\]
where the variables \(\mathbf{\tilde{x}}\) are not binary, but real numbers \(\in[0,1]\). The continuous optimization problem (6) can be easily solved by classical optimization if the matrix \(F_{ij}\), see Eq. (1), is positive-semidefinite (leading to a convex quadratic problem). For our problem, this is the case, since \(F_{ij}=q\sigma_{ij}+A\delta_{ij}\), see Eqs. (1)-(3), with positive semidefinite \(\sigma_{ij}\) (as a covariance matrix) and \(q,A\geq 0\). In the general case, if \(F_{ij}\) is not positive semidefinite, the problem can be "convexified" either by changing the diagonal of \(F\) together with the linear term \(f_{i}\) such that the binary problem remains invariant (i.e. \(F_{ii}\to F_{ii}+c\), \(f_{i}\to f_{i}-c\)) [23] or by a semidefinite programming (SDP) procedure, see [5].
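Since the quadratic part is positive semidefinite here, the relaxation (6) is a convex box-constrained problem. A sketch using a generic gradient-based optimizer from SciPy is given below; the paper defers the concrete optimizer choice to [19], so this is only one possible realization with placeholder parameter values.

```python
import numpy as np
from scipy.optimize import minimize

def relaxed_solution(mu, sigma, q=0.5, A=1.0, B=5):
    """Solve the continuous relaxation (6) of the penalized cost (3) on [0, 1]^N."""
    N = len(mu)

    def cost(x):
        return q * x @ sigma @ x - (1 - q) * mu @ x + A * (B - x.sum()) ** 2

    def grad(x):
        return 2 * q * sigma @ x - (1 - q) * mu - 2 * A * (B - x.sum()) * np.ones(N)

    x0 = np.full(N, B / N)                       # start at the uniform feasible point
    res = minimize(cost, x0, jac=grad, bounds=[(0.0, 1.0)] * N, method="L-BFGS-B")
    return res.x
```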
The optimal solution \(\mathbf{x}^{*}=(x_{1}^{*},x_{2}^{*},\ldots,x_{N}^{*})\) of Eq. (6) can now be used to initialize the QAOA, thus warm-starting it. For the warm-start algorithm as introduced in [5], the initial state of the standard algorithm \(\left|\psi_{0}\right\rangle_{\text{std}}=\left|+\right\rangle^{\otimes N}\) is replaced by the state
\[\left|\psi_{0}\right\rangle_{\text{WS}}=\underset{i=1}{\overset{N}{\otimes}} \hat{R}_{Y}(\theta_{i})\left|0\right\rangle_{N} \tag{7}\]
with angle \(\theta_{i}=2\arcsin(\sqrt{x_{i}^{*}})\). Further, the mixer \(\hat{U}_{\text{std}}(\beta)\) is replaced by \(\hat{U}_{\text{ws}}(\beta)=e^{-i\beta\hat{M}_{\text{ws}}}\) with mixing operator
\[\hat{M}_{\text{ws}}=-\sum_{i=1}^{N}\left[\sin(\theta_{i})\hat{X}_{i}+\cos( \theta_{i})\hat{Z}_{i}\right] \tag{8}\]
whose ground state is the initial state \(\left|\psi_{0}\right\rangle_{\text{WS}}\). When a bit in the relaxed solution is exactly set to \(0\) or \(1\) (i.e. \(x_{i}^{*}=0\) or \(x_{i}^{*}=1\)), the respective qubit is then initialized either in the \(\left|0\right\rangle\) or \(\left|1\right\rangle\) state. Since the cost Hamiltonian only applies \(Z\)-gates, the qubits in the above mentioned states
Figure 1: Mean approximation ratio \(r\) (a) and ground state probability \(P\) (b), both as a function of the number \(p\) of QAOA iterations for standard (teal line) and warm-start (pink line) QAOA for an ensemble of 20 random portfolio instances consisting of \(N=10\) assets with budget constraint of \(B=5\). Compared to the standard mixer, the warm-start mixer yields a better performance (i.e. approximation ratio tends towards \(r=1\) and probabilities are higher) since the initial state (\(p=0\)) for the warm-start version, prepared with the relaxed solution, is already closer to the optimal solution than for the standard mixer.
will remain in these states during the optimization. To avoid this, a regularization parameter can be introduced [5] which, however, we will not consider in the present paper.
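For illustration, the product state of Eq. (7) can be written down directly as a state vector; the NumPy construction below is a sketch of ours, not the circuit implementation used in the paper.

```python
import numpy as np

def warm_start_state(x_star):
    """Build |psi_0>_WS of Eq. (7): qubit i is R_Y(theta_i)|0> with theta_i = 2*arcsin(sqrt(x_i*))."""
    thetas = 2 * np.arcsin(np.sqrt(np.clip(x_star, 0.0, 1.0)))
    state = np.array([1.0])
    for theta in thetas:
        qubit = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # R_Y(theta)|0>
        state = np.kron(state, qubit)
    return state   # amplitude vector of length 2^N
```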
To evaluate the performance of the optimization procedure, i.e. the quality of the solution, we use as a measure for both variants (standard and warm-start) the approximation ratio
\[r(x_{1},...,x_{N})=\begin{cases}\frac{F_{C}(x_{1},...,x_{N})-F_{C}^{\max}}{F_{C} ^{\min}-F_{C}^{\max}}&\text{if $\sum_{i}x_{i}=B$}\\ 0&\text{if $\sum_{i}x_{i}\neq B$}\end{cases} \tag{9}\]
averaged over the different measurement results \(x_{1},...,x_{N}\) obtained from the final state of the optimized QAOA circuit, where \(F_{C}^{\max}\) and \(F_{C}^{\min}\) indicate the worst and the best solution among all feasible solutions. Measuring the optimal solution yields an approximation ratio of 1, whereas measuring the worst feasible or an infeasible solution yields an approximation ratio of 0. Note that the approximation ratio is formulated with the cost function \(F_{C}\), see Eq. (2), to have a measure that is independent of the choice of the penalty term \(A\). As an alternative measure, we also consider the probability \(P\) of measuring the optimal solution (also called "ground state probability" in the following).
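A sketch of how the average approximation ratio (9) and the ground-state probability \(P\) could be computed from measured bitstring counts is shown below; mapping character \(i\) of the bitstring to asset \(i\) is an assumption of ours.

```python
import numpy as np

def mean_approximation_ratio(counts, F_C, B, F_min, F_max):
    """Average Eq. (9) over measured bitstrings.

    counts: dict mapping bitstring -> number of shots;
    F_C: callable evaluating Eq. (2) on a 0/1 array;
    F_min, F_max: best and worst feasible cost values.
    """
    total = sum(counts.values())
    r_sum = 0.0
    for bits, c in counts.items():
        x = np.array([int(b) for b in bits])
        if x.sum() == B:                               # feasible portfolio
            r_sum += c * (F_C(x) - F_max) / (F_min - F_max)
        # infeasible portfolios contribute r = 0
    return r_sum / total

def ground_state_probability(counts, optimal_bits):
    return counts.get(optimal_bits, 0) / sum(counts.values())
```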
Concerning our portfolio model, we use an ensemble of 20 randomly generated portfolio instances, each consisting of \(N=10\) assets (and budget constraint \(B=5\)), corresponding to \(N=10\) qubits in the quantum circuit. The solution of the relaxed problem, which is used to initialize the warm-start QAOA, is obtained using a gradient-based optimizer, whereas a gradient-free optimizer was chosen for the classical subroutine of QAOA (see [19] for details). The quantum circuits are executed using the qasm simulator with a fixed number of shots (here: 1000).
For both variants (standard and warm-start), we compare the mean approximation ratio \(r\) and the ground state probability \(P\) for increasing iteration depth \(p\) (up to \(p_{\max}=7\)), see Fig. 1. The standard deviation obtained from the 20 random instances is represented by the error bars. As expected, we find that both the approximation ratio \(r\) and the ground state probability \(P\) for warm-start QAOA are significantly higher than for standard QAOA, since the initial state (at \(p=0\)) for warm-start QAOA is already closer to the desired ground state energy of the problem Hamiltonian, i.e. the optimal solution. From the varying error bars, we also observe that the fluctuations between the 20 random instances are larger for warm-start QAOA (especially in case of the ground state probability \(P\)). Note that, in contrast to standard QAOA, warm-start QAOA exhibits some fluctuations already for the initial state (\(p=0\)), depending on how well the solutions of the relaxed and the binary problem agree with each other.
## 3 Warm-start QAOA for different Problem Instances
To clarify the behaviour for different problem instances, we generate a larger set consisting of 1000 random portfolio instances, each consisting of \(N=10\) assets. Then, we analyze the distance between the solutions \(\mathbf{x}^{*}\) and \(\mathbf{x}^{\text{opt}}\) of the relaxed and the binary problem in order to identify samples of "hot" and "cold" instances where, respectively, the starting point of warm-start QAOA is close (or not close) to the optimal solution of the binary problem.
To quantify the distance, we compare the two vectors \(\mathbf{x}^{*}\) and \(\mathbf{x}^{\text{opt}}\) and determine the maximum deviation per instance, denoted with \(\varepsilon\):
\[\varepsilon=\underset{i}{\max}|x_{i}^{*}-x_{i}^{\text{opt}}|. \tag{10}\]
A large deviation refers to what we call a "cold"
Figure 2: Deviations \(\varepsilon\) and \(\sigma\) between the solutions of the relaxed and binary problem for 1000 random instances (blue circles). For both measures (\(\varepsilon\) and \(\sigma\)), we identify ”hot” and ”cold” subsets of 20 instances exhibiting the smallest or largest deviation, respectively (“\(\sigma\)-hot”: red down facing triangles, “\(\varepsilon\)-hot”: tan hexagons, “\(\sigma\)-cold”: orange diamonds, and “\(\varepsilon\)-cold”: olive plus symbols). The yellow circles represent the 20 randomly chosen instances considered in Fig. 1.
instance, whereas for a small deviation the instances are called "hot".
In addition to the maximum deviation \(\varepsilon\), we also consider the root mean square error (RMSE)
\[\sigma=\sqrt{\sum_{i}\left(x_{i}^{*}-x_{i}^{\text{opt}}\right)^{2}/N} \tag{11}\]
as a second measure, which takes into account the differences between all pairs of values (not only the maximum). For both measures, we identify, from our ensemble of 1000 random instances, subsets of 20 instances exhibiting the smallest ("hot") or largest ("cold") deviation, respectively, see Fig. 2.
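The two deviation measures (10) and (11), and the selection of the 20 "hottest" and "coldest" instances, can be sketched as follows; the function names are ours.

```python
import numpy as np

def deviation_measures(x_star, x_opt):
    """Maximum deviation (Eq. 10) and RMSE (Eq. 11) between relaxed and binary optimum."""
    diff = np.abs(np.asarray(x_star) - np.asarray(x_opt))
    return diff.max(), np.sqrt(np.mean(diff ** 2))

def hot_and_cold(instances, k=20, use_rmse=False):
    """instances: list of (x_star, x_opt) pairs; returns indices of the k hottest
    (smallest deviation) and k coldest (largest deviation) instances."""
    scores = [deviation_measures(xs, xo)[1 if use_rmse else 0] for xs, xo in instances]
    order = np.argsort(scores)
    return order[:k], order[-k:]
```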
In Fig. 3 the approximation ratios as well as the ground state probabilities of the "hot" and "cold" instances for both QAOA variants are shown as a function of the iteration depth \(p\). For comparison, the values of the randomly chosen instances, which are already displayed in Fig. 1(a), are also shown. It is noticeable that, for the standard version, differentiating between the two subsets does not make a difference to the result, neither for the approximation ratio nor for the probability, i.e. all deviations are within the error bars, see Fig. 3(a)-(d). With respect to the warm-start, we see a clear improvement in performance in terms of the approximation ratio when using the "hot" instances (Fig. 3(a,c)). For the ground state probabilities, the increase is even significantly higher (Fig. 3(b,d)). However, the improved performance of warm-start QAOA as compared to standard QAOA is not restricted to hot instances, but still holds for cold instances.
## 4 Standard QAOA with Classical Preprocessing
Above, we have seen that warm-start QAOA displays a better performance (both, in terms of approximation ratio and ground state probability) than standard QAOA - especially (but not only) for those problem instances where the classically obtained relaxed solution is close to the optimal solution. In this chapter, we will show that a similar increase of performance can be obtained by a purely classical preprocessing of the original optimization problem, which can then be solved with standard QAOA.
#### 4.0.1 Elimination of Variables
The solution of the relaxed problem frequently exhibits some bits that are already exactly set to 0 or 1 (i.e., \(x_{i}^{*}=0\) or \(x_{i}^{*}=1\)). Since, as already mentioned above, their value is not changed by the warm-start QAOA algorithm, one can as well eliminate those bits from the original problem and run warm-start QAOA on a reduced set of qubits without changing its performance.
We will follow this idea and extend this procedure also to those bits where the relaxed solution \(x_{i}^{*}\) is not necessarily exactly 0 or 1, but close to it. For this purpose, we introduce two bounds \(\delta_{0}\) and \(\delta_{1}\) and round the relaxed solution as follows:
**Rounding scheme:**
* **if** \(x_{i}^{*}\leq\delta_{0}\) **then** \(x_{i}=0\)
* **else if** \(x_{i}^{*}\geq 1-\delta_{1}\) **then** \(x_{i}=1\)
Depending on the choice of \(\delta_{0}\) and \(\delta_{1}\), a certain subset of variables \(x_{i}\) is thereby fixed to either 0 or 1. We then look at the reduced problem depending only on those variables that have not been rounded, which we finally solve using standard QAOA. In the following, we test a set of different \(\delta_{0}\), \(\delta_{1}\) - combinations and evaluate their performance with increasing iteration depth \(p\). As a baseline, we consider in particular the case \(\delta_{0}=\delta_{1}=0.5\) corresponding to a purely classical naive rounding of all variables.
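A hedged sketch of this classical preprocessing step (rounding near-integral relaxed variables and building the reduced QUBO that standard QAOA then solves) is given below. Folding the fixed bits back into the reduced cost follows from expanding Eq. (1) and is our own bookkeeping, not code from the paper.

```python
import numpy as np

def reduce_qubo(F, f, x_star, delta0=0.25, delta1=0.25):
    """Fix near-integral bits of the relaxed solution and return the reduced QUBO.

    F, f: quadratic matrix and linear vector of Eq. (1); x_star: relaxed solution.
    Bits with x_i* <= delta0 are fixed to 0, bits with x_i* >= 1 - delta1 to 1;
    the remaining (free) bits form the reduced problem solved by standard QAOA.
    """
    x_star = np.asarray(x_star)
    fixed_one = x_star >= 1 - delta1
    free = (x_star > delta0) & ~fixed_one
    idx_free = np.where(free)[0]
    idx_fixed = np.where(~free)[0]
    x_fix = np.where(fixed_one, 1.0, 0.0)          # value of every fixed bit (0 on free bits)
    # substitute x = x_fix + x_free into x^T F x + f^T x (F symmetric)
    F_red = F[np.ix_(idx_free, idx_free)]
    f_red = f[idx_free] + 2.0 * F[np.ix_(idx_free, idx_fixed)] @ x_fix[idx_fixed]
    const = x_fix @ F @ x_fix + f @ x_fix
    return F_red, f_red, const, idx_free, x_fix
```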
For our simulations, we use as input our generated ensembles of 20 random instances, 20 "hot" and 20 "cold" instances classified by the \(\varepsilon\)-measure and by the RMSE (\(\sigma\)-measure) with \(N=10\) qubits and \(B=5\) selected assets. With this setup, we compute both the approximation ratio \(r\) and the probability \(P\) for an increasing number of QAOA iterations \(p\) (up to \(p_{\text{max}}=7\)) including the initial state at \(p=0\). For the classical preprocessing, we set various bound settings as follows: two symmetric bounds (\(\delta_{0}=\delta_{1}=0.01\) and \(\delta_{0}=\delta_{1}=0.25\)), one asymmetric bound (\(\delta_{0}=0.1,\delta_{1}=0.25\)), and the classical baseline as described above (\(\delta_{0}=\delta_{1}=0.5\)). For comparison, we again plot the results of standard QAOA (without preprocessing) and warm-start QAOA as already displayed above (see Figs. 1 and 3).
#### 4.0.2 Results for random instances
Fig. 4 shows the results for the 20 randomly chosen instances already considered in Fig. 1. For the classical baseline (\(\delta_{0}=\delta_{1}=0.5\)), we achieve a mean approximation ratio and mean probability of \(r=P=15\%\) (\(=\frac{3}{20}\)), i.e. 3 out of 20 solutions are in agreement with the optimal solution after the simple rounding (and hence \(r=P=1\)). All other cases, where the optimal solution was not found, violate the constraint (thus giving rise to \(r=P=0\)). Since the classical baseline involves no QAOA optimization, the result does not improve and remains constant over the iteration depth. For the two lower bounds (dashed black and beige lines) we achieve an approximation ratio (left) of about \(r\approx 80\%\) at \(p=7\) and thus an improvement over the standard QAOA. As can be seen from the initial state, this method provides a better starting point for the algorithm.
Also concerning the probabilities of obtaining the optimal solution (right), a slight increase over the standard version is visible, but for these three settings (standard and the two lower bounds) the probabilities are below our baseline probability, indicating that the bound settings are not large enough to round the values up or down towards the optimal bits. For a large bound setting (dashed green line) we can reproduce the approximation ratio of the warm-start variant of \(r\approx 92\%\) at depth \(p=7\) within the error bars, and even surpass it at lower depth. Only the probability of the warm-start is not achievable for the random problem instances. However, compared to the standard QAOA, a significant increase is accomplished. In summary, standard QAOA with preprocessing yields higher approximation ratios than warm-start QAOA, but lower ground state probabilities, indicating that alternative solutions are found that are very close to the optimal solution.
Figure 3: (a,b) Same as Fig. 1, but additionally for \(\sigma\)-hot and \(\sigma\)-cold problem instances. (c,d) Same as Fig. 1, but additionally for \(\varepsilon\)-hot and \(\varepsilon\)-cold problem instances. For both measures (\(\sigma\) and \(\varepsilon\)), the warm-start performance is either reduced or improved by choosing cold or hot instances. In particular, hot instances exhibit significantly higher values of the ground state probability. The advantage of warm-start QAOA compared to standard QAOA, however, still holds for cold instances. In contrast, the performance of standard QAOA is not sensitive to differentiating between cold or hot instances.
Figure 4: Mean approximation ratio (a) and ground state probability (b) obtained with the standard QAOA (teal line), the preprocessed standard QAOA using different bounds (dashed colored lines) and the warm-start QAOA (pink line), for 20 random problem instances with classical baseline (dashed horizontal line). For a bound setting of \(\delta_{0}=\delta_{1}=0.25\), the improved approximation ratio (a) of warm-start QAOA can be reproduced (or even outperformed at smaller \(p\)) by the preprocessed standard version. An increase in probability (right) over the standard QAOA is achieved, although not as high as with the warm-start.
Figure 5: Same as in Fig. 4, but for \(\varepsilon\)-cold (a,b) and \(\sigma\)-cold (c,d) instead of random problem instances. Again, for a large bound setting (\(\delta_{0}=\delta_{1}=0.25\)), the approximation ratio of the warm-start QAOA can be reproduced by the preprocessed standard version for both measures (a,c). However, the ground state probabilities are lower than those obtained with warm-start QAOA (b,d). For a different bound setting (\(\delta_{0}=0.1,\delta_{1}=0.25\)) a higher probability than for the warm-start QAOA can be achieved, but only using instances created with respect to the \(\varepsilon\)-measure, see (b).
#### 4.0.3 Results for cold instances
Fig. 5 shows the performance for the same settings as for the random problem instances, but computed for the "cold" instances classified by the \(\varepsilon\)- and \(\sigma\)-measures respectively. From the baselines, we see that the optimal solution is never reached by simple rounding of the relaxed solution (since all cold instances exhibit \(\varepsilon>0.5\), see Fig. 2) and the budget constraint is never met. Besides that, the performance of warm-start can be reproduced (approximation ratio of \(r_{\varepsilon}\approx 89\%\) compared to \(r_{\varepsilon,\text{WS}}\approx 92\%\) and of \(r_{\sigma}\approx 83\%\) compared to \(r_{\sigma,\text{WS}}\approx 86\%\) at \(p=7\)) for both measures in the case of the larger symmetric bound, see the dashed green line in Fig. 5(a) and (c). For the \(\varepsilon\)-measure, see Fig. 5(a), it is also achieved with the asymmetric bound (dashed beige line) which, in addition, yields the highest ground state probability, see Fig. 5(b). In contrast, for the \(\sigma\)-measure, the approximation ratios of the lower symmetric and asymmetric bounds, see the dashed beige and black lines in Fig. 5(c), coincide within the error bars. Furthermore, the probabilities of finding the optimal solution are below warm-start for all cases, see Fig. 5(c).
Note that, in Fig. 5 (b), we observe extremely large error bars especially in case of the classically preprocessed standard QAOA with bound \(\delta_{0}=\delta_{1}=0.25\). This is due to the fact that, for most of the 20 \(\varepsilon\)-cold instances, the ground state probability strictly vanishes, since at least one bit is rounded to the wrong value during the preprocessing. For \(\delta_{0}=\delta_{1}=0.25\), this occurs if \(\varepsilon>0.75\), which, as evident from Fig. 2, concerns 16 out of the 20 \(\varepsilon\)-cold instances. In other words, the mean probability originates from only a few instances with \(P>0\), which leads to a large standard deviation.
#### 4.0.4 Results for hot instances
In Fig. 6, we now discuss the results for the "hot" problem instances. For these instances, the deviation between the relaxed and the discrete solution is small, so after simply rounding, the optimal solution is expected to be found in most cases. Concerning the \(\varepsilon\)-measure, see Fig. 6(a,b), the approximation ratio and probability for the simple rounding (baseline) are both at \(r=P=100\%\), reflecting exactly what has just been described. For the other measure (\(\sigma\)), depicted in Fig. 6(d,c), the baseline takes an approximation ratio and probability of \(r=P=85\%\) (\(=\frac{17}{20}\)), since 3 out of the 20 \(\sigma\)-hot instances exhibit \(\varepsilon>0.5\), see Fig. 2, and violate the constraint after naive rounding.
For the symmetric bound setting with the largest rounding (dashed green line), we obtain a maximum approximation ratio of \(r_{\varepsilon}\approx 96.7\%\) and a probability of \(P_{\varepsilon}\approx 60\%\) (compared to \(r_{\varepsilon,\text{WS}}\approx 96.9\%\) and \(P_{\varepsilon,\text{WS}}\approx 77\%\)) for the \(\varepsilon\)-measure, see Fig. 6(a,b), and an approximation ratio of \(r_{\sigma}\approx 98\%\) and a probability of \(P_{\sigma}\approx 86\%\) (compared to \(r_{\sigma,\text{WS}}\approx 96\%\) and \(P_{\sigma,\text{WS}}\approx 83\%\)) for the \(\sigma\)-measure, see Fig. 6(c,d). With this result, we conclude that, for sufficiently large bound settings, the improved performance of the warm-start QAOA can be reproduced and even be surpassed by standard QAOA with classical preprocessing.
## 5 Conclusion and Outlook
We presented a classical preprocessing approach to warm-start optimization on different problem instances. For this purpose, we applied the standard and the warm-start version of the QAOA algorithm to the portfolio optimization problem using an ensemble of 20 random, "hot", and "cold" instances with \(N=10\) assets, where the two latter ones are created by comparing the relaxed and discrete solution of 1000 random instances in terms of the maximum deviation per instance \(\varepsilon\) and the root mean square error \(\sigma\).
Concerning the performance of the standard QAOA, we have found that splitting the instances into the two subsets yields neither better nor worse performance than using the random instances. In contrast, for the warm-start QAOA, we clearly see that the "hot" instances are easier to optimize than the "cold" instances, although an advantage over standard QAOA still holds for the latter.
We introduced a classical preprocessing approach to warm-start optimization by first using a rounding scheme on the relaxed solution of the continuous problem for values close to 0 or 1 and then solving the reduced problem with standard QAOA. As a result, the performance of the standard QAOA, in terms of the approximation ratio, can be boosted by applying this classical preprocessing and, especially for smaller \(p\), it also outperforms the results of warm-start QAOA if the
bounds for the rounding process are large enough. For the ground state probability, we observe an increase compared to the values generated by the standard QAOA, but, in general, we find them to be lower than those of warm-start QAOA.
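To illustrate the preprocessing step, a minimal sketch of the rounding scheme is given below. The threshold parameters mirror the bound settings \(\delta_{0},\delta_{1}\) used above, and the exact inclusive/exclusive handling of the thresholds as well as the construction of the reduced problem from the fixed bits are assumptions made for brevity:

```python
def preprocess_round(x_relaxed, delta0=0.25, delta1=0.25):
    """Classically fix bits whose relaxed value lies close to 0 or 1.

    Returns the fixed bits (index -> 0/1) and the indices of the variables
    that remain free; the reduced problem over the free variables is then
    handed to standard QAOA.
    """
    fixed, free = {}, []
    for i, xi in enumerate(x_relaxed):
        if xi <= delta0:            # close to 0 -> round down
            fixed[i] = 0
        elif xi >= 1.0 - delta1:    # close to 1 -> round up
            fixed[i] = 1
        else:
            free.append(i)          # left for the quantum routine
    return fixed, free
```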
Finally, we conclude that the improved performance of warm-start can be reproduced by classical methods and is thus not a result of quantum effects alone. In future work, this insight could be useful in order to explore further ways of splitting an optimization routine into classical and quantum processing parts and thereby to realize a quantum advantage over purely classical methods. A better understanding of which steps in the optimization routine can be replaced classically will be helpful to achieve a better performance through more targeted use of quantum computing.
## 6 Acknowledgement
This work is funded by the Ministry of Economic Affairs, Labour and Tourism Baden-Württemberg in the frame of the Competence Center Quantum Computing Baden-Württemberg (project 'QORA II').
## Appendix A Portfolio data
In Tables 1 and 2, the return vector and covariance matrix are given as an example for one of the random portfolio instances used in Section 3. Other instances can be generated as described in [19] (supplementary material).
| FRE.DE | DTE.DE | IFX.DE | SIE.DE | ALV.DE | BAS.DE | HEN3.DE | LIN.DE | RWE.DE | MUV2.DE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| -0.07998594 | 0.0444345 | 0.20639829 | 0.10283742 | 0.1030686 | 0.05094806 | 0.00832845 | 0.26801758 | 0.30300314 | 0.1128935 |

Table 1: Return vector \(\mu_{i}\) for 10 assets chosen from the German DAX 30
Figure 6: Same as in Figs. 4 and 5 for \(\varepsilon\)-hot (a,b) and \(\sigma\)-hot (c,d) problem instances. As for the cold instances, the preprocessed QAOA with largest bound setting (\(\delta_{0}=\delta_{1}=0.25\), dashed green line) shows the highest approximation ratio (a,c). In particular, it surpasses warm-start (solid pink line) especially at smaller \(p\). In case of the \(\sigma\)-hot instances, it also displays the largest ground-state probability, see (d). |
2309.16819 | Multi-Bellman operator for convergence of $Q$-learning with linear
function approximation | We study the convergence of $Q$-learning with linear function approximation.
Our key contribution is the introduction of a novel multi-Bellman operator that
extends the traditional Bellman operator. By exploring the properties of this
operator, we identify conditions under which the projected multi-Bellman
operator becomes contractive, providing improved fixed-point guarantees
compared to the Bellman operator. To leverage these insights, we propose the
multi $Q$-learning algorithm with linear function approximation. We demonstrate
that this algorithm converges to the fixed-point of the projected multi-Bellman
operator, yielding solutions of arbitrary accuracy. Finally, we validate our
approach by applying it to well-known environments, showcasing the
effectiveness and applicability of our findings. | Diogo S. Carvalho, Pedro A. Santos, Francisco S. Melo | 2023-09-28T19:56:31Z | http://arxiv.org/abs/2309.16819v1 | # Multi-Bellman operator for convergence of \(Q\)-learning with linear function approximation
###### Abstract
We study the convergence of \(Q\)-learning with linear function approximation. Our key contribution is the introduction of a novel multi-Bellman operator that extends the traditional Bellman operator. By exploring the properties of this operator, we identify conditions under which the projected multi-Bellman operator becomes contractive, providing improved fixed-point guarantees compared to the Bellman operator. To leverage these insights, we propose the multi \(Q\)-learning algorithm with linear function approximation. We demonstrate that this algorithm converges to the fixed-point of the projected multi-Bellman operator, yielding solutions of arbitrary accuracy. Finally, we validate our approach by applying it to well-known environments, showcasing the effectiveness and applicability of our findings.
## 1 Introduction
Reinforcement learning aims to approximate the value of actions in different states, considering the expected sum of time-discounted rewards in a Markovian environment. The importance of this task cannot be overstated, as an accurate value function enables an agent to make optimal decisions by selecting actions with the highest value in a given state (Puterman, 2005). Additionally, the value function facilitates environment evaluation and enables comparisons between different environments.
If we can use a tabular representation for the value function, meaning that we can store the value of performing each action on each state individually, the \(Q\)-learning algorithm converges to the correct value function (Watkins and Dayan, 1992). When it is not possible or desirable to store values in a table, for example when there are too many states or actions, \(Q\)-learning can be combined with function approximation (Melo and Ribeiro, 2007). Unfortunately, the combination of \(Q\)-learning and function approximation is troublesome. Even when the function approximation space is linear, the algorithm is not guaranteed to converge (Sutton and Barto, 2018). In fact, the approximation problem that \(Q\)-learning addresses does not have, in general, a solution (Melo et al., 2008).
To address these limitations, we propose an alternative algorithm that effectively solves the function approximation problem, unlike the original \(Q\)-learning. Both the problem and the algorithm proposed can be seen as extensions of the original. In this work, our contributions are as follows:
* Introduction of the multi-Bellman operator and analysis of its functional properties.
* Identification of conditions under which the projected multi-Bellman operator is contractive.
* Proposal of the multi \(Q\)-learning algorithm with linear function approximation.
* Theoretical and empirical demonstrations of the convergence of multi \(Q\)-learning.
* Theoretical and empirical evidence that the obtained solution can achieve arbitrary precision.
## 2 Background
A Markov decision problem (MDP) is a tuple \((\mathcal{X},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\), where \(\mathcal{X}\) is a discrete set of states (the state space), \(\mathcal{A}\) is a finite set of actions (the action space), \(\mathcal{P}\) is a set of distributions over \(\mathcal{X}\) for each state and action (the transitions), \(\mathcal{R}\) is a set of distributions over a bounded subset of \(\mathbb{R}\) for each state and action with expected value \(r\) (the rewards) and \(\gamma\) is a real in \([0,1)\) (the discount).
Given an MDP, the value of a policy \(\pi:\mathcal{X}\to\Delta\left(\mathcal{A}\right)\) is the function \(q:\mathcal{X}\times\mathcal{A}\to\mathbb{R}\)
\[q(x,a)=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\mid x_{0}=x,a_{0}=a \right],\]
where the expectation is with respect to rewards \(r_{t}\) that are obtained from performing action \(a_{t}\) on state \(x_{t}\), states \(x_{t+1}\) that are obtained by performing action \(a_{t}\) on state \(x_{t}\), and actions \(a_{t}\) that are selected according to \(\pi\). There is at least one policy \(\pi^{*}\) that maximizes \(q_{\pi}\) on every state and action (Puterman, 2005) and we refer to its value as \(q^{*}\). The value \(q^{*}\) satisfies the Bellman equation
\[q^{*}=\mathbf{H}q^{*},\]
where \(\mathbf{H}\) is the Bellman operator defined for arbitrary \(q:\mathcal{X}\times\mathcal{A}\to\mathbb{R}\) as
\[(\mathbf{H}q)(x,a)=\mathbb{E}\left[r(x,a)+\gamma\max_{a^{\prime}\in\mathcal{A }}q\left(x^{\prime},a^{\prime}\right)\right],\]
where the expectation is with respect to the next state \(x^{\prime}\) obtained by performing action \(a\) on state \(x\).
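As a concrete illustration, the Bellman operator can be written as a one-line backup for a tabular MDP; the following minimal sketch assumes the transition probabilities and expected rewards are available as NumPy arrays:

```python
import numpy as np

def bellman_operator(q, P, r, gamma):
    """(Hq)(x,a) = r(x,a) + gamma * E_{x'~P(.|x,a)}[ max_{a'} q(x',a') ].

    q, r: arrays of shape (num_states, num_actions);
    P: array of shape (num_states, num_actions, num_states).
    """
    return r + gamma * P @ q.max(axis=1)
```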
Our goal is to approximate \(q^{*}\). In other words, we want to find a good parameterized representation of the value of an optimal policy for a given MDP given a function approximation space.
### Function approximation
Let us consider a differentiable function space \(\mathcal{H}=\{h_{\omega}:\mathcal{Z}\to\mathbb{R},\omega\in\mathbb{R}^{k}\}\), a distribution \(\mu\) over a random \(z\) in a discrete set \(\mathcal{Z}\) and a loss \(l(\omega)=\frac{1}{2}\left\|h-h_{\omega}\right\|^{2}\) with the \(\mu\)-norm \(\left\|h\right\|=\sqrt{\mathbb{E}_{\mu}\left[h^{2}(z)\right]}\). To approximate a function \(h\) is to find an element in the subset \(\operatorname{Proj}h\) of \(\mathcal{H}\) defined as
\[\operatorname{Proj}h=\operatorname*{argmin}_{h_{\omega}\in\mathcal{H}}l(\omega).\]
Any \(\omega\) parameterizing a \(h_{\omega}\) in \(\operatorname{Proj}h\) is a critical point of \(l\) and must verify \(\nabla_{\omega}l(\omega)=0\).
**Linear function approximation.** Given features \(\phi:\mathcal{Z}\to\mathbb{R}^{k}\), a linear function approximation space is given by the functions such that \(h_{\omega}(z)=\phi^{T}(z)\omega\). In this case, the gradient of the loss is
\[\nabla_{\omega}l(\omega)=-\mathbb{E}\left[\phi(z)\left(h(z)-h_{\omega}(z) \right)\right].\]
Solving for \(\nabla_{\omega}l(\omega)=0\) we obtain that
\[\omega=\mathbb{E}\left[\phi(z)\phi^{T}(z)\right]^{-1}\mathbb{E}\left[\phi(z)h (z)\right].\]
Thus, if the inverted matrix above exists, the set \(\operatorname{Proj}h\) has a single element and we can refer to both as \(h_{\tilde{\omega}}\). The solution \(\tilde{\omega}\) is also the globally asymptotically stable equilibrium of the dynamical system
\[\dot{\omega}=\nabla_{\omega}l(\omega),\]
and the limit of the sequence of \(\omega_{t}\) obtained by performing a discretized update with \(z_{t}\) i.i.d. from \(\mu\)
\[\omega_{t+1}=\omega_{t}+\alpha_{t}\left[\phi\left(z_{t}\right)\left(h(z_{t})-h _{\omega_{t}}(z_{t})\right)\right].\]
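A minimal sketch of this closed-form projection, assuming a finite set \(\mathcal{Z}\) so that the expectations reduce to weighted sums over the rows of a feature matrix:

```python
import numpy as np

def linear_projection(phi, h_values, mu):
    """omega = E_mu[phi(z) phi(z)^T]^{-1} E_mu[phi(z) h(z)].

    phi: (num_z, k) feature matrix, h_values: (num_z,) targets, mu: (num_z,) weights.
    Assumes the covariance matrix is invertible.
    """
    cov = phi.T @ (mu[:, None] * phi)   # E_mu[phi phi^T]
    b = phi.T @ (mu * h_values)         # E_mu[phi h]
    return np.linalg.solve(cov, b)
```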
**Stochastic approximation.** Without access to \(h\), stochastic approximation performs the update
\[\omega_{t+1}=\omega_{t}+\alpha_{t}\left[\phi\left(z_{t}\right)\left(\tau_{t+1}- h_{\omega_{t}}(z_{t})\right)\right],\]
with \(\tau_{t+1}\) a possibly \(\omega_{t}\)-dependent estimate of \(h\left(z_{t}\right)\) called the target and \(\alpha_{t}\) a small positive real called the learning rate. If there exists an equilibrium for the ordinary differential equation (o.d.e.)
\[\dot{\omega}=\mathbb{E}\left[\phi\left(x,a\right)\left(\tau(\omega)-h_{\omega}( z)\right)\right],\]
where \(\tau(\omega)=\mathbb{E}\left[\tau_{t+1}\mid z_{t}\right]\), and it is globally asymptotically stable, well-established conditions guarantee the stochastic approximation update converges to such equilibrium (Borkar, 2008, Chapter 2).
### \(Q\)-learning with linear function approximation
We consider now features \(\phi:\mathcal{X}\times\mathcal{A}\to\mathbb{R}^{k}\), parameters \(\omega\) in \(\mathbb{R}^{k}\), a distribution over states and actions \(\mu\) and linearly parameterized functions \(q_{\omega}:\mathcal{X}\times\mathcal{A}\to\mathbb{R}\) such that \(q_{\omega}(x,a)=\phi^{T}(x,a)\omega\).
In reinforcement learning we want to approximate \(q^{*}\). Formally, we want to compute \(\omega^{*}\) such that
\[q_{\omega^{*}}=\operatorname{Proj}q^{*}.\]
We have no knowledge of \(q^{*}\) and are unable to perform an exact stochastic approximation update
\[\omega_{t+1}=\omega_{t}+\alpha_{t}\left[\phi\left(x_{t},a_{t}\right)\left(q^{* }(x_{t},a_{t})-q_{\omega_{t}}(x_{t},a_{t})\right)\right].\]
Thinking of the identity \(q^{*}=\mathbf{H}q^{*}\), \(Q\)-learning performs instead the stochastic approximation update
\[\omega_{t+1}=\omega_{t}+\alpha_{t}\left[\phi\left(x_{t},a_{t}\right)\left( \tau_{t+1}-q_{\omega_{t}}(x_{t},a_{t})\right)\right],\]
where the target \(\tau_{t+1}\) is a sample for \(\left(\mathbf{H}q_{\omega_{t}}\right)\left(x_{t},a_{t}\right)\) such that
\[\tau_{t+1}=r_{t}+\gamma\max_{a_{t+1}\in\mathcal{A}}q_{\omega_{t}}\left(x_{t+1},a_{t+1}\right).\]
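A minimal sketch of this semi-gradient update, assuming `phi(x, a)` is a function returning the feature vector of a state-action pair:

```python
import numpy as np

def q_learning_step(omega, phi, x, a, r, x_next, actions, gamma, alpha):
    """One Q-learning update with linear function approximation."""
    target = r + gamma * max(phi(x_next, a_next) @ omega for a_next in actions)
    td_error = target - phi(x, a) @ omega
    return omega + alpha * phi(x, a) * td_error
```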
A solution to \(Q\)-learning with linear function approximation must then be the fixed-point of the projected Bellman operator, verifying the equation
\[q_{\omega}=\operatorname{Proj}\left(\mathbf{H}q_{\omega}\right).\]
Unfortunately, in general such fixed-point equation does not have a solution and, even when it does, the solution may not be an asymptotically stable equilibrium of the associated dynamical system
\[\dot{\omega}=\mathbb{E}\left[\phi\left(x,a\right)\left(\tau(\omega)-q_{\omega }(x,a)\right)\right].\]
Consequently, \(Q\)-learning with linear function approximation can diverge.
The divergence of \(Q\)-learning is evidenced in classic counter-examples where the parameters of the approximator do not approach any solution, either oscillating within a window (Boyan and Moore, 1995; Gordon, 2001) or growing without bound (Tsitsiklis and Van Roy, 1996; Baird, 1995). In practice, there is also evidence of phenomena of catastrophic forgetting (Cahill, 2011) and of convergence to incompetent solutions (van Hasselt et al., 2018). Currently, the theoretical results that establish convergence of \(Q\)-learning restrict the data or the features too much (Szepesvari and Smart, 2004; Melo et al., 2008), and the proposed variants of \(Q\)-learning that are guaranteed to converge under more general conditions (Carvalho et al., 2020; Zhang et al., 2021; Lim et al., 2022) converge to biased limit solutions that do not hold good performance guarantees (Chen et al., 2022). Differently, our approach does not bias the solution nor restricts the data and the features too much.
## 3 Multi-Bellman operator
Let us define a multi-Bellman operator \(\mathbf{H}^{n}\) from the space of functions \(q:\mathcal{X}\times\mathcal{A}\to\mathbb{R}\) to itself such that \(\mathbf{H}^{n+1}q=\mathbf{H}\left(\mathbf{H}^{n}q\right)\) and \(\mathbf{H}^{1}q=\mathbf{H}q\). For example, we have that
\[\left(\mathbf{H}^{2}q\right)\left(x_{0},a_{0}\right)=\mathbb{E}\left[r\left( x_{0},a_{0}\right)+\gamma\max_{a_{1}\in\mathcal{A}}\mathbb{E}\left[r(x_{1},a_{1})+ \gamma\max_{a_{2}\in\mathcal{A}}q\left(x_{2},a_{2}\right)\right]\right].\]
and, more generally,
\[\left(\mathbf{H}^{n}q\right)\left(x_{0},a_{0}\right)=\mathbb{E}\left[r\left( x_{0},a_{0}\right)+\gamma\max_{a_{1}\in\mathcal{A}}\mathbb{E}\left[r\left(x_{1},a_{1} \right)+\gamma\max_{a_{2}\in\mathcal{A}}\mathbb{E}\left[\cdots+\gamma\max_{a _{n}}q\left(x_{n},a_{n}\right)\right]\right]\right].\]
Since \(q^{*}=\mathbf{H}q^{*}\), also \(q^{*}=\mathbf{H}^{2}q^{*}\) and, for every \(n\) in \(\mathbb{N}\), we have that
\[q^{*}=\mathbf{H}^{n}q^{*}.\]
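In the tabular case, \(\mathbf{H}^{n}\) is simply \(n\) compositions of the Bellman backup; a minimal self-contained sketch (same array conventions as in the earlier Bellman backup sketch):

```python
import numpy as np

def multi_bellman_operator(q, P, r, gamma, n):
    """(H^n q): apply the Bellman operator n times, H^{n+1} q = H(H^n q)."""
    for _ in range(n):
        q = r + gamma * P @ q.max(axis=1)   # one Bellman backup
    return q
```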
First, we present the following lemma.
**Lemma 1**.: _The operator \(\mathbf{H}^{n}\) is a contraction in the \(\infty\)-norm with contraction factor \(\gamma^{n}\)._
While unsurprising, the result has profound implications on the use of linear function approximation.
### Linear function approximation
We consider the following assumption.
**Assumption 1**.: The features are such that the covariance matrix \(\mathbb{E}_{\mu}\left[\phi(x,a)\phi^{T}(x,a)\right]\) is invertible.
Under the assumption above, we set the following result.
**Proposition 1**.: _There exists \(N\in\mathbb{N}\) such that, for all \(n\geq N\), \(\mathrm{Proj}\,\mathbf{H}^{n}\) is a contraction in the \(\mu\)-norm._
**Corollary 1**.: _There exists \(N\in\mathbb{N}\) such that, for all \(n\geq N\), there exists a unique solution \(\tilde{\omega}^{n}\) to_
\[q_{\omega}=\mathrm{Proj}(\mathbf{H}^{n}q_{\omega}). \tag{1}\]
The result states that, while the projected Bellman operator may not be contractive in the \(\mu\)-norm, the projected multi-Bellman operator is contractive in the \(\mu\)-norm for sufficiently large \(n\). As a consequence, whereas the fixed-point equation of the projected Bellman operator may fail to have any solution, the fixed-point equation of the projected multi-Bellman operator has a unique solution.
The previous result establishes existence and uniqueness of solution. However, it says nothing about the quality of such solution. In the following, we analyze \(\tilde{\omega}^{n}\) in comparison with the projected \(q_{\omega^{*}}\).
**Proposition 2**.: _For all \(n\geq N\), where \(N\) is identified in Corollary 1, \(\tilde{\omega}^{n}\) is such that_
\[\left\|q^{*}-q_{\tilde{\omega}^{n}}\right\|\leq\frac{1}{1-\frac{\sigma_{\max}\phi_{\max}^{2}}{\mu_{\min}}\gamma^{n}}\left\|q^{*}-q_{\omega^{*}}\right\|, \tag{2}\]
_where \(\sigma_{\max}=\left\|\mathbb{E}\left[\phi(x,a)\phi^{T}(x,a)\right]^{-1}\right\|\) and \(\phi_{\max}=\max_{x,a}\left\|\phi(x,a)\right\|_{2}\)._
**Corollary 2**.: _The sequence \(\{\tilde{\omega}^{n}\}_{n\geq N}\) such that \(N\) and \(\tilde{\omega}^{n}\) are identified in Corollary 1 gives_
\[\lim_{n\to\infty}\left\|\omega^{*}-\tilde{\omega}^{n}\right\|_{2}=0.\]
The result states that, as \(n\) increases, \(\tilde{\omega}^{n}\) (resp. \(q_{\tilde{\omega}^{n}}\)) becomes arbitrarily close to \(\omega^{*}\) (resp. \(q_{\omega^{*}}\)).
In the following, we propose a stochastic approximation algorithm for computing \(\tilde{\omega}^{n}\) for given \(n\).
## 4 Multi \(Q\)-learning
We want to solve the fixed-point equation
\[\omega=\mathrm{Proj}(\mathbf{H}^{n}q_{\omega}).\]
for arbitrary \(n\in\mathbb{N}\). We propose to perform the multi \(Q\)-learning update
\[\omega^{n}_{t+1}=\omega^{n}_{t}+\alpha_{t}\left[\phi(x_{t},a_{t})\left(\tau^ {n}_{t+1}-q_{\omega^{n}_{t}}(x_{t},a_{t})\right)\right],\]
where the target \(\tau^{n}_{t+1}\) is a sample for \(\left(\mathbf{H}^{n}q_{\omega^{n}_{t}}\right)(x_{t},a_{t})\) such that
\[\tau^{n}_{t+1}=r_{t}+\gamma\max_{\bar{a}\in\mathcal{A}^{n}}\left[r_{t+1}+ \gamma^{1}r_{t+2}+\ldots+\gamma^{n-1}q_{\omega^{n}_{t}}(x_{t+n},a_{t+n})\right],\]
with \(\bar{a}=(a_{t+1},a_{t+2},\ldots,a_{t+n})\), where \(r_{t+m}\) and \(x_{t+m+1}\) are obtained by performing action \(a_{t+m}\) on state \(x_{t+m}\) for \(m\) between \(1\) and \(n\). In practice, at time step \(t\), instead of building a \(1\)-step greedy target, multi \(Q\)-learning builds an \(n\)-step target that is obtained by trying every action on every state encountered along an \(n\)-step trajectory starting at \(x_{t+1}\). If we were to use terminology from the planning literature, we could say the algorithm searches with fixed depth and full breadth, a rather unexplored setting in reinforcement learning according to Moerland et al. (2023, Section 5.3).
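A minimal sketch of how such a target can be computed by a fixed-depth, full-breadth recursion; the `sim.step(x, a) -> (next_state, reward)` interface is an assumed simulator, not part of the original implementation:

```python
def full_breadth_value(sim, x, steps_left, q_omega, actions, gamma):
    """Max over length-`steps_left` action sequences of discounted rewards,
    bootstrapping with q_omega at the final state-action pair."""
    if steps_left == 1:
        return max(q_omega(x, a) for a in actions)
    best = float("-inf")
    for a in actions:
        x_next, reward = sim.step(x, a)
        value = reward + gamma * full_breadth_value(
            sim, x_next, steps_left - 1, q_omega, actions, gamma)
        best = max(best, value)
    return best

def multi_q_target(sim, x_t, a_t, q_omega, actions, gamma, n):
    """Sampled target for (H^n q_omega)(x_t, a_t); for n = 1 it reduces to
    the standard Q-learning target."""
    # In the algorithm, (x_{t+1}, r_t) come from the actual interaction;
    # here they are sampled from the simulator for illustration.
    x_next, r_t = sim.step(x_t, a_t)
    return r_t + gamma * full_breadth_value(sim, x_next, n, q_omega, actions, gamma)
```

The parameters are then updated with this target in place of the \(1\)-step target, exactly as in the linear \(Q\)-learning sketch above.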
Before we present our convergence result, let us consider a couple more of assumptions.
**Assumption 2**.: For all \(t\in\mathbb{N}\), states and actions are i.i.d. from \(\mu\) and \(\mu(x,a)\geq\mu_{\min}\), with \(\mu_{\min}>0\).
**Assumption 3**.: The learning rates satisfy the conditions \(\sum_{t=0}^{\infty}\alpha_{t}=\infty\) and \(\sum_{t=0}^{\infty}\alpha_{t}^{2}<\infty\).
The existence of a solution is guaranteed by Corollary 1, whose proof resorts to Banach's fixed-point theorem and Proposition 1. To establish global asymptotic stability, we make use of a Lyapunov argument. We then have the following result.
**Theorem 1**.: _There exists \(N\) such that, for all \(n\) larger than \(N\), the sequence of \(\omega_{t}^{n}\) is such that_
\[\lim_{t\to\infty}\left\|\tilde{\omega}^{n}-\omega_{t}^{n}\right\|_{2}=0.\]
The result establishes conditions under which multi \(Q\)-learning converges to the unique solution of the fixed-point equation of the projected multi-Bellman operator (1). The assumptions are commonplace (Tsitsiklis and Van Roy, 1996; Carvalho et al., 2020; Melo et al., 2008). Assumptions 1 and 3 are mild. Assumption 2 is the most restrictive of the three, as it requires the data distribution to have no shift during training. In practice, that is usually not the case. For example, it is common that, throughout the interaction of the agent with the environment, the policy used to collect data changes in response to changes in the approximated value function. Consequently, in general, the samples are not i.i.d. from a fixed distribution. Nevertheless, we can use replay buffers to slow down the data distribution shift. In the limiting case of offline reinforcement learning, where the replay buffer is filled before reinforcement learning starts, the distribution is indeed fixed and samples are independent and identically distributed. Still, we believe our result holds if the data distribution, while changing in response to the reinforcement learning, converges.
For performing the maximization in the target for the update of multi \(Q\)-learning, we require the ability to simulate transitions and rewards. For our experiments, we assume access to a simulator. We highlight the limitation in Section 7.1. Nevertheless, we highlight that it is possible to use a learned model of the transitions and rewards. Such use would not harm the convergence guarantees of multi \(Q\)-learning. In fact, it is even possible to use non-parametric models, such as replay buffers, to sample transitions and rewards, without prejudice of the convergence established. It is also possible to concurrently learn parametric models in a supervised way, even though the theoretical analysis for the extension to this model-learning interaction would require additional mathematical machinery.
We finish the section with a sketch of the proof of Theorem 1, referring to the appendix for details.
Proof.: Under assumptions 1, 2 and 3, respectively on the data distribution, the features and the learning rates, multi \(Q\)-learning satisfies conditions under which a stochastic approximation algorithm converges to the equilibrium of the associated dynamical system described in Section 2.
We consider the result of Borkar (2008) that we reproduce in the supplementary material as Theorem 0. Therein, we identify four conditions that we need to prove that hold for multi \(Q\)-learning.
First, the expected update must be a smooth function of the parameters \(\omega\). To verify this condition, we consider, for some \(n\in\mathbb{N}\), the expected update map \(g:\mathbb{R}^{k}\to\mathbb{R}^{k}\) such that
\[g(\omega)=\mathbb{E}\left[\phi(x,a)\left(\tau^{n}(\omega)-q_{\omega}(x,a) \right)\right]\]
and show that it is a Lipschitz function of the parameters in Lemma 2.
Second, the expected difference between the expected update and the actual update, i.e., the noise, must equal zero when conditioned on the past, have bounded expectation and bounded variance. We consider the noise sequence \(\{m_{t}\}_{t\in\mathbb{N}}\) such that
\[m_{t+1}=\phi(x_{t},a_{t})\left(\tau_{t+1}^{n}-q_{\omega_{t}}(x_{t},a_{t}) \right)-g(\omega_{t})\]
and verify it forms a martingale difference sequence with bounded variance in Lemma 3.
Third, in Lemma 4, we show that, for sufficiently large \(N\) and \(n\geq N\), the o.d.e.
\[\dot{\omega}=g(\omega)\]
has a unique and globally asymptotically stable equilibrium \(\tilde{\omega}^{n}\) that solves the fixed-point equation
\[\omega=\mathbb{E}\left[\phi\left(x,a\right)\phi^{T}\left(x,a\right)\right]^{- 1}\mathbb{E}\left[\phi\left(x,a\right)\tau^{n}(\omega)\right].\]
To establish the existence and uniqueness of a solution we use Corollary 1, which in turn uses Banach's fixed-point theorem and Proposition 1. To establish global asymptotic stability, we make a Lyapunov argument. We establish the result for all \(n\geq N\) with \(N=-\log_{\gamma}\left(\frac{\sigma_{\max}\phi_{\max}^{2}}{\mu_{\min}}\right)\). We do not say, however, that this is the minimum \(N\) such that the result would hold. Specifically, we do not guarantee our convergence result is the tightest possible.
The first three conditions ensure that, if the updates remain bounded, they converge to \(\tilde{\omega}^{n}\)(Borkar, 2008; Chapter 2). The fourth condition, that we verify in Lemma 5, ensures such boundedness.
Having verified the conditions of Theorem 0, we are able to conclude that, for sufficiently large \(N\), the sequence of \(\omega_{t}^{n}\) generated by multi \(Q\)-learning converges to \(\tilde{\omega}^{n}\) with probability 1.
## 5 Experiments
We first evaluate multi \(Q\)-learning for growing \(n\) on the task of approximating \(q^{*}\), using the classic counter-examples for the convergence of \(Q\)-learning with linear function approximation. Then, we consider the task of using a learned approximation of \(q^{*}\) to select actions, using the classic control environments. For the function approximation space, we use discretized Gaussian features. We use an \(\epsilon\)-greedy policy where \(\epsilon\) decays linearly from 100% to 5% during the first half of interactions and remains constant afterwards. We use a replay buffer whose size is 20% of the total number of timesteps used for the environment. We further detail the hyperparameters used for each environment in the corresponding paragraph. We average the results across five runs, with standard deviation intervals, and plot a moving average of the last \(5\%\) of the total number of time steps.
### Classic counter-examples
**\(\omega\to 2\omega\).** The \(\omega\to 2\omega\) classic counter-example is due to Tsitsiklis and Van Roy (1996). Here, the MDP has two states \(y_{1}\) and \(y_{2}\) and only one action \(b_{1}\). Performing the action on any of the two states always takes the agent to the second state. The reward received is always zero. Therefore, \(q^{*}\) is zero. To approximate \(q^{*}\) we consider features \(\phi:\mathcal{X}\times\mathcal{A}\to\mathbb{R}\) such that \(\phi(y_{1},b_{1})=1\) and \(\phi(y_{2},b_{1})=2\). The projection of \(q^{*}\) is \(\omega^{*}=0\) and \(q_{\omega^{*}}\) equals the solution \(q^{*}\). \(\omega^{*}\) also verifies the Bellman fixed-point equation \(q_{\omega^{*}}=\operatorname{Proj}\left(\mathbf{H}q_{\omega^{*}}\right)\). Regardless, considering a discount factor of \(0.9\) and a learning rate of \(10^{-2}\), when the distribution over states and actions is uniform, the parameters of \(Q\)-learning, that is multi \(Q\)-learning with \(n=1\), diverge to infinity. Figure 1(a) shows that for sufficiently large \(n\), specifically \(n=4\), we have convergence to the correct solution.
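The divergence is easy to reproduce numerically; the following is a minimal sketch of the counter-example under the setting described above (states sampled uniformly, reward always zero, the single action always leading to the second state):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.array([1.0, 2.0])    # phi(y1, b1) = 1, phi(y2, b1) = 2
gamma, alpha = 0.9, 1e-2
omega = 1.0                   # any non-zero initialisation

for _ in range(5000):
    x = rng.integers(2)                       # state sampled uniformly
    target = 0.0 + gamma * phi[1] * omega     # next state is always y2, reward 0
    omega += alpha * phi[x] * (target - phi[x] * omega)

print(omega)  # grows without bound, although the correct solution is omega = 0
```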
**Star.** The Star classic counter-example is due to Baird (1995). Here, the MDP has six states \(y_{1}\) to \(y_{6}\) and two actions \(b_{1}\) and \(b_{2}\). The first action always takes the agent to the last state, the second action takes the agent to any of the first five states uniformly. The reward received is always zero. Therefore, \(q^{*}\) is zero. To approximate \(q^{*}\) we consider features \(\phi:\mathcal{X}\times\mathcal{A}\to\mathbb{R}^{13}\) such that, for \(j\) between \(1\) and \(6\): for all \(i\) between \(1\) and \(13\), \(\phi_{i}(y_{j},b_{2})=\mathbf{1}(i=j+1)\); for \(i\) between \(2\) and \(6\), \(\phi_{i}(y_{j},b_{1})=2\cdot\mathbf{1}(i=j+1)\); for \(j\) between \(1\) and \(5\), \(\phi_{1}(y_{j},b_{1})=\mathbf{1}(j\leq 5)\) and \(\phi_{1}(y_{6},b_{1})=2\); and \(\phi_{i}(y_{j},b_{1})=0\) otherwise. The projection of \(q^{*}\) is \(\omega^{*}=0\) and \(q_{\omega^{*}}\) equals the solution \(q^{*}\). \(\omega^{*}\) also verifies the Bellman fixed-point equation \(q_{\omega^{*}}=\operatorname{Proj}\left(\mathbf{H}q_{\omega^{*}}\right)\). Despite the apparently benign conditions, considering a discount factor of \(0.995\) and a learning rate of \(10^{-2}\), when the distribution over states and actions is generated by selecting the first action one sixth of the time and the second action five sixths of the time, the parameters of \(Q\)-learning, that is multi \(Q\)-learning with \(n=1\), diverge to infinity. Figure 1(b) shows that for sufficiently large \(n\), specifically \(n=4\), we have convergence to the correct identically zero solution.
Figure 1: Classic counter-examples. The \(x\)-axis shows the number of environment steps. The \(y\)-axis shows the estimate of \(q\)-values of the state visited. In both environments, \(n=4\) was necessary and sufficient to achieve convergence to the exact optimal \(q\)-values which are identically zero.
### Classic control
**Cartpole.** Cartpole is a classic control problem proposed by Barto et al. (1983), where a cart balances a pole. The actions of the agent are to push the cart left or right. The state space is a four-tuple with the position and velocity of the cart and the angle and angular velocity of the pole. The reward is always one unless the pole falls and the agent reaches a terminal state with reward zero. During training, episodes last at most five hundred interactions. We use Gaussian features in \(\mathbb{R}^{16}\), obtained by discretizing each dimension of the state space into two, a discount factor of \(0.99\) and a learning rate of \(3\cdot 10^{-2}\). Figure 2(a) shows the results. \(Q\)-learning cannot solve the task. When \(n\geq 2\), multi \(Q\)-learning is able to balance the pole and collect rewards.
**Mountaincar.** Mountaincar is a classic control problem proposed by Moore (1990) where a car must climb out of a valley. The actions of the agent are to push the car left, push the car right or do nothing. The state space is a pair consisting of the position and velocity of the car. The reward is always minus one unless the car is at the top of the valley, after which the agent reaches a terminal state with reward zero. During training, episodes last a maximum of two hundred interactions. We use Gaussian features in \(\mathbb{R}^{256}\) obtained by discretizing each dimension of the state space into sixteen, a discount factor of \(0.99\) and a learning rate of \(3\cdot 10^{-3}\). Figure 2(b) shows the results. For all \(n\), multi \(Q\)-learning is able to select actions to successfully climb the hill and solve the problem.
**Acrobot.** Acrobot is a classic control problem proposed by Sutton (1995) where a joint actuates two links such that one end is fixed and the other is free. The actions of the agent are to apply a negative torque to the joint, apply a positive torque to the joint or do nothing. The state space is the six-tuple composed of the sine, cosine and angular velocity of each link. The reward is always minus one unless the free end of the links reaches a target height, after which the agent reaches a terminal state with reward zero. During training, each episode lasts at most five hundred interactions. We use Gaussian features in \(\mathbb{R}^{4096}\) obtained by discretizing each dimension of the state space into four, a discount factor of \(0.99\) and a learning rate of \(3\cdot 10^{-3}\). Figure 2(c) shows the results. As \(n\) increases, the performance of multi \(Q\)-learning increases and becomes more stable.
## 6 Related work
### Convergence results
We analyze results on the convergence of \(Q\)-learning with linear function approximation. A few works provide conditions under which the standard \(Q\)-learning method converges. Others propose a different learning objective and thus result in significantly modified variants of \(Q\)-learning, which are the gradient-TD methods. More recently, other works exploit the finding that regularization can ensure convergence of the \(Q\)-learning updates. From a different perspective, a few works have analyzed the impact of lookahead in policy improvement and evaluation steps.
**Convergence conditions.** Around the time the divergence of \(Q\)-learning was established, some works aimed at identifying conditions under which \(Q\)-learning or small variants of it would converge.
Figure 2: Classic control problems. The \(x\)-axis shows the number of environment steps. The \(y\)-axis shows the average return achieved over 30 evaluation episodes. We can see that, as \(n\) increases, the performance of multi \(Q\)-learning with linear function approximation remains or improves.
The first works proposed restrictions of the function approximation spaces themselves. They establish that with some specific choices of features, \(Q\)-learning is guaranteed to converge (Singh et al., 1994; Ormoneit and Sen, 2002; Szepesvari and Smart, 2004b). The linear architectures considered across the three works are extensions of what would be the one-hot representation in the tabular case. Afterwards, another work considered again general linear function approximation architectures. Melo et al. (2008) prove that \(Q\)-learning with linear function approximation converges if the distribution of state-action pairs the agent uses to learn is sufficiently close to the distribution that the optimal policy would induce. Such case restricts the convergence of \(Q\)-learning closer to on-policy settings.
**Gradient-TD methods.** Instead of finding conditions under which \(Q\)-learning converges, a different line of works proposes to take a step back and modify the objective that \(Q\)-learning is trying to solve. Maei et al. (2010) propose to perform full-gradient descent on a different objective, called the projected Bellman error. The resulting algorithm, called Greedy-GQ, is part of a class of gradient-TD methods. Being a full gradient method, Greedy-GQ is provably convergent to a minimum in the linear approximation space. However, the method can converge to strictly local minima and there is no guarantee that the resulting greedy policy is a good control policy (Scherrer, 2010). Gradient-TD methods are also less efficient than semi-gradient methods (Mahadevan et al., 2014; Du et al., 2017).
**Regularized methods.** The problem of divergence of \(Q\)-learning with function approximation was significantly revived after an empirical success story of \(Q\)-learning with deep neural networks (Mnih et al., 2015). One of the components of the renowned deep \(Q\)-network (DQN) therein is a target network that aims at compensating the instability generated by the \(Q\)-learning updates. While the DQN is not provably convergent, its empirical success inspired theoretical results. The works of Carvalho et al. (2020), Zhang et al. (2021) and Chen et al. (2022) provided convergence guarantees for variants with target networks. Additionally, another work points out that the target network can be seen as a regularizer (Piche et al., 2021). As hinted by Farahmand (2011), several works prove that various forms of regularization of the \(Q\)-values or the parameters themselves can stabilize \(Q\)-learning, resulting in convergent algorithms (Zhang et al., 2021; Lim et al., 2022; Agarwal et al., 2021). However, the introduction of any of the regularizers biases the limit solution encountered.
**Non-linear function approximation.** The behaviour of \(Q\)-learning with linear function approximation has been the focus of several theoretical works. In practice, however, \(Q\)-learning is mostly used with non-linear function approximation, especially through neural networks (Mnih et al., 2015). Still, there are works that address this more general setting. A recent work suggests a loss function that is decreasing over time, assuming the neural network converges to a target network at each step (Wang and Ueda, 2021). However, having a loss function that is monotonically decreasing does not imply that either the parameters of the approximator or the \(Q\)-values are converging. Cai et al. (2019) and Xu and Gu (2020) provide finite-time analysis of \(Q\)-learning with over-parameterized neural networks that imply that as the size of the network grows to infinity, convergence is guaranteed at a sub-linear rate. We note that as the size of the network grows to infinity, the learning architecture also grows closer to a tabular representation. Therefore, despite being interesting, the results do not imply convergence when a practical neural network is employed.
### Lookahead, planning and model-based reinforcement learning
The multi-Bellman operator considered relates to other operators that lookahead. The multi \(Q\)-learning algorithm relates to other planning and model-based reinforcement learning algorithms.
**Lookahead.** In the context of policy iteration, several works hinted at the theoretical benefits of lookahead in the tabular case (De Asis et al., 2018; Efroni et al., 2018). Efroni et al. (2018) then identified a problem with soft policy improvement for lookahead policies, which happens if function approximation is used. Specifically, contrary to what happens with a single-step policy improvement step, a multistep policy improvement step is not necessarily monotonically increasing. However, Efroni et al. (2020) and Winnicki and Srikant (2022) respectively provide finite-time and asymptotic results for lookahead in approximate policy iteration. Our work differs both on the problem side and on the solution side. On the problem side, we focus on the problem of divergence of \(Q\)-learning, a value-based algorithm, when used with linear function approximation architectures that are not tables. The projected multi-Bellman operator that we introduced differs from those in the referred works in that it is designed for evaluation of the optimal policy, not evaluating or improving on a given policy.
**Planning and model-based reinforcement learning.** Multi \(Q\)-learning can be cast under the umbrella of multi-step real time dynamic programming algorithms, as defined by Efroni et al. (2018) and discussed in Moerland et al. (2020). Such algorithms integrate planning and learning and have several successful practical applications (Silver et al., 2017, 2018). We refer to the survey of Moerland et al. (2023) for varied interesting algorithms that include \(Q(\sigma)\) (De Asis et al., 2018), tree-backup (Precup, 2000) and multi-step expected SARSA (Sutton and Barto, 2018). While most such algorithms are policy-based and work on-policy, requiring separate value and policy networks and behavior-dependent solutions, in our case, we have an off-policy, value-based algorithm. Moreover, multi \(Q\)-learning plans exclusively at training time--planning is not necessary in order to select an action to execute. Multi \(Q\)-learning plans with fixed depth and full breadth, which is a rather unexplored setting according to Moerland et al. (2023). Specifically, the depth is neither adaptive nor full, and every action is tried on every state along a planning tree. Unfortunately, we do not believe adaptive depth or limited breadth would result in convergence. The setting could, however, bring computational benefits that could make multi \(Q\)-learning more practical.
## 7 Conclusion
In conclusion, our work has made significant contributions to addressing the convergence challenge in \(Q\)-learning with linear function approximation. By introducing the multi-Bellman operator and demonstrating its contractive nature, we have paved the way for improved convergence properties. The proposed multi \(Q\)-learning algorithm effectively approximates the fixed-point solution of the projected multi-Bellman operator. Importantly, we have shown that the algorithm converges under relatively mild conditions. The implications of our findings extend beyond this study, as they represent a substantial breakthrough in the problem of convergence in \(Q\)-learning with linear function approximation. Our work has the potential to inspire further advancements in theory and algorithm development within the field of reinforcement learning research.
We highlight two limitations of our work below. Afterwards, we link them with future work.
### Limitations
Even though our algorithm can be combined with models that are learned concurrently with the reinforcement learning update, in our experiments, we used a known model of the environment. Such a known model is not always available in applications.
Besides, multi \(Q\)-learning plans by performing every action on every reached state along trajectories of \(n\) steps. Thus, the computational cost of performing an update grows exponentially with \(n\), where the base for the exponent is the size of the action space.
### Future work
There are several techniques for learning a model of the environment. For example, we can learn the model previously or concurrently to reinforcement learning; the model can be exact or inexact; the model can be parametric or non-parametric. Multi \(Q\)-learning can be combined with any of the mentioned model-based approaches in order to remove the need for a known model.
Multi \(Q\)-learning no longer enjoys such benign properties when we are unable to perform every action on every state along \(n\)-step trajectories. In case only some actions are performed on each state, we cannot show convergence to the solution of an appropriate fixed-point equation. Nevertheless, we expect that, in some cases, the computational benefits of not performing every action on every state will outweigh the theoretical comfort. A practical analysis of this trade-off would be valuable.
It would also be interesting to analyze multi \(Q\)-learning with non-linear function approximation, theoretically and empirically. Specifically, under which conditions and function approximation architectures would our convergence result still hold? How would multi \(Q\)-learning with non-linear function approximation compare, in practice, with relevant policy-free reinforcement learning algorithms such as the DQN (Mnih et al., 2015)? |
2310.20204 | General-Purpose Retrieval-Enhanced Medical Prediction Model Using
Near-Infinite History | Machine learning (ML) has recently shown promising results in medical
predictions using electronic health records (EHRs). However, since ML models
typically have a limited capability in terms of input sizes, selecting specific
medical events from EHRs for use as input is necessary. This selection process,
often relying on expert opinion, can cause bottlenecks in development. We
propose Retrieval-Enhanced Medical prediction model (REMed) to address such
challenges. REMed can essentially evaluate unlimited medical events, select the
relevant ones, and make predictions. This allows for an unrestricted input
size, eliminating the need for manual event selection. We verified these
properties through experiments involving 27 clinical prediction tasks across
four independent cohorts, where REMed outperformed the baselines. Notably, we
found that the preferences of REMed align closely with those of medical
experts. We expect our approach to significantly expedite the development of
EHR prediction models by minimizing clinicians' need for manual involvement. | Junu Kim, Chaeeun Shim, Bosco Seong Kyu Yang, Chami Im, Sung Yoon Lim, Han-Gil Jeong, Edward Choi | 2023-10-31T06:04:18Z | http://arxiv.org/abs/2310.20204v4 | # General-Purpose Retrieval-Enhanced Medical Prediction Model Using Near-Infinite History
###### Abstract
Machine learning (ML) has recently shown promising results in medical predictions using electronic health records (EHRs). However, since ML models typically have a limited capability in terms of input sizes, selecting specific medical events from EHRs for use as input is necessary. This selection process, often relying on expert opinion, can cause bottlenecks in development. We propose Retrieval-Enhanced Medical prediction model (REMed) to address such challenges. REMed can essentially evaluate unlimited medical events, select the relevant ones, and make predictions. This allows for an unrestricted input size, eliminating the need for manual event selection. We verified these properties through experiments involving 27 clinical prediction tasks across two independent cohorts, where REMed outperformed the baselines. Notably, we found that the preferences of REMed align closely with those of medical experts. We expect our approach to significantly expedite the development of EHR prediction models by minimizing clinicians' need for manual involvement.
**Keywords:** Medical Prediction, EHR, Retrieval
## 1 Introduction
A patient's medical records in a hospital are archived as a sequence of medical events (_e.g._, lab measurements, prescriptions, procedures) in electronic health records (EHRs). In recent years, machine learning (ML) has shown remarkable potential in predicting various medical outcomes (_e.g._, mortality, length of stay) using EHR data [1, 2, 3]. However, the sheer volume of events in EHRs presents a significant challenge for developing predictive models. For instance, a patient in an intensive care unit (ICU) typically generates thousands of events daily [4]. The computational requirements of ML models scale with the size of the input [5, 6], making it challenging to effectively harness all this information, even with efficient modern architectures specialized to handle long input [7, 8, 9].
Accordingly, heuristic event selection is required to reduce the input size. This process typically relies on human decisions made by domain experts, such as experienced clinicians, which is costly and time-consuming. This acts as a significant bottleneck in the model development process. While some recent studies have explored methods to alleviate the need for event selection, none have addressed the issue of limited input size, a fundamental reason for such selection [2, 10, 11, 12, 13, 14]. As a result, none have completely eliminated the need for domain experts' involvement. Therefore, our main objective is to develop a model capable of handling a near-infinite number of events, thereby eliminating this bottleneck.
We tackle this challenge by employing a Retrieval-Based Approach (RBA). RBA, which has been widely explored in the natural language processing (NLP) question-answering (QA) domain, operates in two primary steps: 1) retrieving a collection of documents relevant to a specific question and 2) using these documents to make informed predictions [18, 19]. Inspired by RBA's capability to efficiently process
millions of documents [20], we adopt its methodology for managing virtually infinite medical events. Our model, named _Retrieval-Enhanced Medical prediction model_ (REMed), 1) retrieves events that are useful for predicting the target outcome, and 2) performs a prediction by leveraging the correlations among these selected events (Figure 1). As a result, REMed can process a near-infinite number of events, thereby eliminating the need for event selection, ultimately minimizing the domain expert involvement in the development process.
We trained REMed on 27 clinical prediction tasks, including mortality, length of stay, creatinine, and platelets prediction, using two independent cohorts from publicly available EHR datasets: MIMIC-IV [16] and eICU [17]. From this comprehensive evaluation, REMed showcased its superior performance compared to various baselines. Notably, REMed's retrieval result is compatible with established medical knowledge. Our contributions can thus be summarized as follows:
* We propose REMed, the first attempt to introduce the Retrieval-Based Approach to the medical prediction task. REMed can handle virtually an unlimited number of events and demonstrates superior performance in handling a large number of events.
* REMed eliminates the fundamental need for event selection due to its ability to manage unlimited events. We empirically demonstrated that abstaining from such selection does not compromise the prediction performance.
* REMed can identify and retrieve clinically relevant events. We verified that the retrieval results are compatible with established clinical knowledge.
Figure 1: Model Architecture: REMed receives a series of event vectors as input, continuously identifies important events, retrieves them, and makes predictions. To ensure both Retriever and Predictor are trainable, our model alternates between two forward paths during the training stage. Note that the timestamps are omitted in this figure.
In this study, we utilized open-source datasets only and made all our code accessible to the public 1, guaranteeing transparency and reproducibility of our results. We believe that REMed can accelerate the development of medical prediction models by minimizing the involvement of domain experts.
Footnote 1: [https://github.com/starmpcc/REMed](https://github.com/starmpcc/REMed)
## 2 Backgrounds
### Problem Definition
Formally, a patient's medical history \(H\) can be represented as:
\[H=\{(e_{1},t_{1}),(e_{2},t_{2}),\ldots,(e_{i},t_{i}),\ldots\}, \tag{1}\]
where \(e_{i}\) is the \(i^{th}\) medical event and \(t_{i}\) is its corresponding timestamp. Medical prediction aims to predict the specific outcomes of a patient (_e.g._, mortality) at a certain time-point using the patient's medical history, such that
\[\hat{y}=f(\{(e_{i},t_{i})|t_{i}<T\}), \tag{2}\]
where \(f\) is a prediction model, and T denotes the moment the prediction is carried out (_i.e._, prediction time).
### Event Selection
A medical event \(e_{i}\) occurring at timestamp \(t_{i}\) is typically composed of a medical code \(c_{i}\) that provides high-level information (_e.g._, a medical code "L123" denotes "Lab measure of white blood cells"), and accompanied details \(d_{i}\) (_e.g._, "Value=3.7", "Unit of Measurement=K/uL", "Flag=abnormal"). There are two primary strategies in event selection: 1) Feature selection - This strategy focuses on selecting a particular set of \(c_{i}\)'s that are considered relevant to the prediction target; 2) Observation window selection - This strategy often selects recent events based on their \(t_{i}\)'s. However, since both strategies rely on the heuristic decision of domain experts, it acts as a bottleneck in the model development process.
### Feature Selection-Free Methods
Recent studies have explored methods to eliminate the need for feature selection [10, 11, 12, 13, 2, 14]. Notably, empirical findings from some of these studies suggest that models incorporating more features often outperform those with selected features [10, 12, 13, 2]. In general, there are two dominant approaches to achieve this. The first is to map each \(c_{i}\) and \(d_{i}\) to a unique vocabulary [10, 11, 12, 2]. However, given that a typical EHR contains tens of thousands of unique \(c_{i}\) and \(d_{i}\)[15, 16, 17], having unique embeddings for all \(c_{i}\) and \(d_{i}\) not only increases computational burden, but also hinders learning embeddings for rare \(c_{i}\) and \(d_{i}\). The second approach treats both the \(c_{i}\) and \(d_{i}\) as text, mapping them to a natural language space [13, 14]. This method ensures that \(c_{i}\) and \(d_{i}\) with similar
meanings (e.g., the frequently occurred code "Non-invasive blood pressure systolic" and the less common "Manual blood pressure systolic") are represented similarly, often outperforming the first approach [13, 14]. Among them, GenHPF [13] achieved superior performance by utilizing all information of \(d_{i}\).
However, existing approaches have not addressed the fundamental issue of limited input size, which necessitates event selection. Consequently, they still depend on domain experts to decide the observation window. Furthermore, to the best of our knowledge, there have been no direct attempts to tackle the problem of observation window selection. Therefore, our aim is to develop a model that eliminates the need for both feature and window selection.
## 3 Retrieval-Enhanced Medical Prediction Model
This section explains our model architecture, as illustrated in Fig 1. While conventional approaches necessitate feature or observation window selection to reduce the number of events, we aim to build a model without such conditions. Consequently, as mentioned earlier, we start with GenHPF, which has demonstrated superior performance among the feature selection-free methods. Following this, we first convert each event \(e_{i}\) to its text representation \(r_{i}\) by first converting the code \(c_{i}\) to its description (e.g., "L123" \(\rightarrow\) "Lab measure for white blood cells") and then concatenating with its accompanied details \(d_{i}\) (e.g., "Lab measure for white blood cells, Value 3.7, Unit of Measurement K/uL, Flag abnormal").
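A minimal sketch of this conversion is given below; the code-to-description mapping and the detail keys are illustrative placeholders rather than the exact tables used in our implementation:

```python
def event_to_text(code, details, code_descriptions):
    """Convert a medical event (code c_i plus details d_i) into its text r_i."""
    parts = [code_descriptions.get(code, code)]          # e.g. "L123" -> description
    parts += [f"{key} {value}" for key, value in details.items()]
    return ", ".join(parts)

# Example (hypothetical values):
# event_to_text("L123",
#               {"Value": 3.7, "Unit of Measurement": "K/uL", "Flag": "abnormal"},
#               {"L123": "Lab measure for white blood cells"})
# -> "Lab measure for white blood cells, Value 3.7, Unit of Measurement K/uL, Flag abnormal"
```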
Similar to the typical Retrieval-Based Approach (RBA) in question-answering (QA), each \(r_{i}\) is initially encoded into a dense vector \(v_{i}\) using a pre-trained text encoder \(\textit{Enc}_{\text{PT}}\)[13, 21].
\[v_{i}=\textit{Enc}_{\text{PT}}(r_{i}) \tag{3}\]
In a typical question-answering task, each document's importance is calculated by comparing each document to the given query, which has theoretically an infinite degree of freedom (_i.e._, a user can ask anything.). In contrast, for medical prediction tasks, typically a set of prediction targets (_e.g._, mortality, readmission) is fixed2. As a result, evaluating the events with respect to queries that vary for each prediction is not required. Instead, we directly assess the scalar importance \(s_{i}\) of each event vector \(v_{i}\) while considering its timestamp \(t_{i}\) using Retriever \(R\), which is implemented with a multi-layer perceptron (MLP),
Footnote 2: One could try prompt-based medical prediction with a large language model (LLM), thus having unfixed prediction targets. Further discussion is provided in Section 5.
\[s_{i}=R(v_{i},t_{i}). \tag{4}\]
In this way, the information related to the prediction targets is embedded in the parameters of \(R\). Following this, the top-\(k\) event vectors \(v_{j}\), ranked by their scores \(s_{j}\), are retrieved and fed into the Predictor \(P\) along with their respective timestamps \(t_{j}\), which is implemented with Transformer [6]. \(P\) interprets the meaning of events in relation to their surrounding events, making a prediction \(\hat{y}\).
\[\hat{y}=P(\{v_{j},t_{j}\}) \tag{5}\]
While processing all events simultaneously with a Transformer is impractical due to its high computational requirements, evaluating all events _independently_ with an MLP is feasible. By limiting the input into the Transformer to only the most relevant events, we can harness the powerful predictive performance of the Transformer while also ensuring computational efficiency.
Our training objectives are twofold: To train \(R\) to understand the significance of each event, and to train \(P\) to exploit the correlations among events. It is, however, not straightforward to train \(R\) and \(P\) in an end-to-end manner, which requires that \(s_{j}\) directly affect \(\hat{y}\) while acting as an event importance indicator. Feeding \(s_{j}\) into equation 5 will only partially satisfy this requirement (see Section 6.1.3 for further discussion on this point), and therefore we devise a new training strategy that involves two paths, namely the \(R\)_Path_, and the \(P\)_Path_. In the \(R\) Path, we make predictions using each event independently, then combine them based on their importance score to make a final prediction.
\[\hat{y}=\sum_{j}s_{j}\hat{y}_{j},\;\;\text{where}\;\;\hat{y}_{j}=P(v_{j},t_{j}) \tag{6}\]
In this way, \(s_{j}\) directly affects \(\hat{y}\) while acting as an event importance indicator, since \(R\) would be trained to increase \(s_{j}\) when \(\hat{y}_{j}\) is consistent with the label \(y\). Therefore, \(R\) can learn to calculate the importance of each event. However, the \(R\) Path alone can cause \(P\) to be biased towards making predictions based on individual events, which is far from our intention. Therefore, in the \(P\) Path, we train \(P\) based on all retrieved events (_i.e._, equation 5), so that it can consider correlations among multiple events. Throughout the training process, we alternate between these two paths. Note that in the \(R\) Path, we only update the parameters of \(R\) while keeping \(P\) frozen. In the \(P\) Path, \(R\) is naturally frozen, since \(s_{j}\) does not take part in equation 5. During the evaluation, REMed relies solely on the \(P\) Path, drawing from the combined strengths of both \(R\) and \(P\).
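A schematic version of this alternating procedure is sketched below, assuming the Retriever and Predictor interfaces from the previous sketch and a multi-label binary cross-entropy objective. The softmax normalization of the top-\(k\) scores in the \(R\) Path is an assumption made for the sketch, not a statement about the exact implementation.

```python
# Sketch of the two-path training strategy (equations (5) and (6)).
# R and P are assumed to follow the interfaces of the previous sketch.
import torch
import torch.nn.functional as F

def r_path_loss(R, P, v, t, y, k=128):
    """R Path: per-event predictions combined by importance scores (eq. (6)).
    Only R receives gradients; P is kept frozen."""
    s = R(v, t)
    idx = s.topk(min(k, len(s))).indices
    s_top = torch.softmax(s[idx], dim=0)             # normalized importance (assumption)
    with torch.no_grad():                             # freeze P
        # each retrieved event is treated as its own length-1 "sequence"
        y_j = torch.sigmoid(P(v[idx].unsqueeze(1), t[idx].unsqueeze(1)))
    y_hat = (s_top.unsqueeze(-1) * y_j).sum(dim=0)    # weighted combination
    return F.binary_cross_entropy(y_hat.clamp(1e-6, 1 - 1e-6), y)

def p_path_loss(R, P, v, t, y, k=128):
    """P Path: joint prediction from all retrieved events (eq. (5)).
    The top-k selection is non-differentiable, so R is naturally frozen."""
    with torch.no_grad():
        s = R(v, t)
        idx = s.topk(min(k, len(s))).indices
    logits = P(v[idx].unsqueeze(0), t[idx].unsqueeze(0)).squeeze(0)
    return F.binary_cross_entropy_with_logits(logits, y)

# Training alternates between the two paths, e.g.:
# for step, (v, t, y) in enumerate(loader):
#     loss = r_path_loss(R, P, v, t, y) if step % 2 == 0 else p_path_loss(R, P, v, t, y)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```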
In this manner, we successfully built a powerful prediction model that can process a virtually unlimited number of events. By resolving the fundamental cause of event selection, REMed significantly reduces the need for a domain expert's involvement, ultimately resolving a significant bottleneck in the model-building process. For additional details, please refer to Section 6.1.
## 4 Results
### Experimental Settings
Even if REMed can bypass event selection, its practicality will be limited if this bypass decreases its prediction performance. While some research suggests that abstaining from feature selection does not compromise performance [12, 13, 2, 14], there are no such results for observation window selection. Accordingly, we aim to demonstrate the following two key properties of REMed: 1) REMed can effectively handle long inputs compared to multiple baselines, and 2) the performance of REMed is not compromised when the observation window selection is bypassed. We validated these properties
through extensive experiments using two publicly available EHR datasets: MIMIC-IV [16] and eICU [17]. These datasets are commonly employed in medical prediction research [13, 14, 22], and their wide accessibility guarantees the reproducibility of our experiments by the research community. Furthermore, these datasets consist of EHRs from ICU-admitted patients, meaning the events are densely recorded (_i.e._, a large number of medical events). This characteristic is advantageous for showcasing REMed's strength in processing long inputs. Detailed experimental setups can be found in Section 6.2.
We demonstrated REMed's robust capabilities under various conditions. First, we examined REMed and compared it with baselines using two datasets. Second, we tested our model at two prediction times: 24 hours and 48 hours after ICU admission. Third, we trained and evaluated our model on ten categories and 27 tailored medical prediction tasks (Table 1), in a multi-task manner3. In addition to the administrative prediction tasks commonly used in prior research [2, 3, 11, 13, 14, 22], we further added frequent lab measurement prediction tasks that are closely related to a patient's overall status.
Footnote 3: While examining REMed for each task can also showcase its robustness, building multiple models corresponding to each task comes with severe overhead in practical scenarios. Further discussion is provided in Section 6.1.4.
We compared REMed with various baselines, including GenHPF [13] and its variants. _GenHPF_, the basis of our model, uses two Transformer [6] models in an end-to-end manner, one for encoding each medical event into a vector representation and another for making predictions. _Flattened_ model, a variant proposed in the same paper [13], concatenates all \(r_{i}\) in chronological order, and passes them to a single Transformer model. Additionally, we introduce _Cached_ model, which uses the event vectors \(v_{i}\) as input to a single Transformer model, similar to REMed. Since all these baselines can only process a limited number of events, we prioritized the most recent events as input when the input size reaches the computational limit. Additionally, to partially
\begin{table}
\begin{tabular}{l l} \hline \hline
**Category** & **Description** \\ \hline \hline Mortality & Whether the patient will die within 1/2/3/7/14 days\({}^{*}\). \\ \hline Length of Stay (LOS) & Whether the length of ICU stay will be longer than 7/14 days\({}^{*}\). \\ \hline Readmission & Whether the patient will be readmitted to the ICU within the same hospital stay\({}^{*}\). \\ \hline Diagnosis & Predict all categories of diagnosis codes of the hospital admission\({}^{**}\). \\ \hline Creatinine & Predict creatinine measurement value closest to 1/2/3 days\({}^{***}\). \\ \hline Platelets & Predict platelets measurement value closest to 1/2/3 days\({}^{***}\). \\ \hline White Blood Cells & Predict white blood cells measurement value closest to 1/2/3 days\({}^{***}\). \\ \hline Hemoglobin & Predict hemoglobin measurement value closest to 1/2/3 days\({}^{***}\). \\ \hline Bicarbonate & Predict bicarbonate measurement value closest to 1/2/3 days\({}^{***}\). \\ \hline Sodium & Predict sodium measurement value closest to 1/2/3 days\({}^{***}\). \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of tasks. For instance, there are two tailored tasks in the Length of Stay category: LOS-7day and LOS-14day. \({}^{*}\), \({}^{**}\), and \({}^{***}\) represent binary, multi-class, and multi-label classification tasks, respectively.
alleviate this input size restriction, we replaced their backbone Transformer with modern, efficient architectures that are specialized for processing long inputs. Specifically, we selected Performer [7], S4 [8], and MEGA [9], which have demonstrated state-of-the-art performance in the benchmark for long input [23]. This modification enables the baselines to handle a larger number of events. We also considered RMT [24] as a backbone; however, it failed to converge without the specific curriculum learning they proposed (Section 6.3). Details about these baselines can be found in Section 6.4.
### Performance Analysis
The results are displayed in Figure 2. First, REMed outperformed all baseline models in most settings, regardless of whether the long or short observation window was used. To statistically affirm the superior prediction performance of REMed, we employed a one-sided Mann-Whitney U test [25] on each dataset, prediction time, and observation window size, comparing it against the best baseline performance at each setting. The results substantiated REMed's superiority in all settings (\(p<0.05\)), barring two cases (MIMIC-IV, prediction time 24h, observation windows 6h and 24h). Even for those two cases, our model's performance was still on par with the best baselines. Therefore, we can conclude that REMed processes long input more effectively than the baselines.
We noticed a monotonic increase in our model's prediction performance as the observation window length was extended. This is supported by the Kendall-Tau test [26], with \(p\)-values ranging from 0.001 to 0.01 across all four graphs. Even though the performance plateaued in MIMIC-IV, no decrease was observed with longer windows. In contrast, the performance of most baseline models either plateaued or declined. Even for those that occasionally demonstrated monotonically increasing performance, their results were inconsistent across the four settings. We conclude that bypassing the observation window selection with REMed does not compromise the performance; the unlimited observation window consistently yields the best performance. We believe it is all the more valuable that REMed outperformed the baselines in the unlimited observation window scenarios.
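For reference, both significance tests can be run with SciPy as sketched below; the AUROC values shown are hypothetical placeholders, not the numbers behind Figure 2.

```python
# Sketch of the two significance tests used above, with illustrative values.
from scipy.stats import mannwhitneyu, kendalltau

# One-sided Mann-Whitney U: is REMed's AUROC (over seeds) stochastically
# greater than the best baseline's?
remed_auroc    = [0.903, 0.901, 0.904]   # hypothetical seed-level scores
baseline_auroc = [0.896, 0.894, 0.897]
u_stat, p_value = mannwhitneyu(remed_auroc, baseline_auroc, alternative="greater")
print(f"Mann-Whitney U p-value: {p_value:.4f}")

# Kendall-Tau: does AUROC increase monotonically with the observation window?
window_hours     = [6, 12, 24, 48, 10_000]          # "unlimited" as a large value
auroc_per_window = [0.885, 0.891, 0.896, 0.900, 0.903]
tau, p_value = kendalltau(window_hours, auroc_per_window)
print(f"Kendall tau = {tau:.2f}, p-value = {p_value:.4f}")
```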
We also conducted these analyses under a different model configuration and found that these two properties still held (Section 6.5). Owing to this robust and powerful capability to process near-infinite events, REMed can minimize the need for manual involvement of domain experts, a common bottleneck in developing medical prediction models.
### Retrieval Result Analysis
While retrieval-based models in the general domain measure their retrieval performance using labeled data [18, 19], there is no such labeled data in medical prediction tasks (_i.e._, no ground-truth label indicating which event(s) must be retrieved). Therefore, we indirectly measured the retrieval performance of the Retriever \(R\) by analyzing whether its behavior is compatible with established clinical knowledge.
We first checked which medical codes \(c_{i}\) were frequently retrieved. We calculated the average number of events corresponding to a specific code retrieved every time the model makes a prediction for all test set samples. We narrowed our analysis to
the 250 medical codes that occurred most frequently in the test set and used our best-performing models for each cohort. These best models were trained on a 48-hour prediction time and an unlimited observation window. Table 2 displays the top-30 most frequently retrieved codes from each cohort. Our model frequently retrieved codes related to core lab measurements, vital signs, neurologic status, analgosedative drugs, ventilation data, and input/output records.
Figure 2: Performance Analysis Result. We evaluated REMed and the baselines on two datasets, two prediction times, and multiple observation window sizes. The y-axis corresponds to the micro-average AUROC over tasks. The error bar represents the standard error mean.
To check whether these codes were truly useful for predicting the target tasks, we conducted an expert test involving two professors and a clinical fellow, all with expertise in ICU. For each dataset, we showed them the same 250 codes and asked them to
\begin{table}
\begin{tabular}{l r r} \hline \hline
**Table** & **Code** & **Avg. Ret** \\ \hline labevents & **Hemoglobin** & 4.41 \\ \hline labevents & **Hematocrit** & 3.61 \\ \hline chartevents & **Hematocrit (serum)** & 3.46 \\ \hline labevents & **Bicarbonate** & 2.8 \\ \hline labevents & **Platelet Count** & 2.64 \\ \hline chartevents & **HCO\({}_{3}\) (serum)** & 2.52 \\ \hline labevents & **Creatinine** & 2.42 \\ \hline chartevents & **Respiratory Rate** & 2.39 \\ \hline labevents & **White Blood Cells** & 2.36 \\ \hline labevents & **Sodium** & 2.35 \\ \hline chartevents & **WBC\({}^{1}\)** & 2.29 \\ \hline chartevents & **Heart Rhythm** & 2.27 \\ \hline chartevents & Sodium (serum) & 2.26 \\ \hline chartevents & **Creatinine (serum)** & 2.15 \\ \hline labevents & **Calculated Total CO\({}_{2}\)** & 2.09 \\ \hline chartevents & **GCS\({}^{2}\)-Motor Response** & 1.95 \\ \hline chartevents & Non Invasive BP\({}^{3}\)Diastolic & 1.9 \\ \hline chartevents & **GCS\({}^{2}\)-Verbal Response** & 1.73 \\ \hline chartevents & **Non Invasive BP\({}^{3}\)Systolic** & 1.48 \\ \hline labevents & Chloride & 1.46 \\ \hline chartevents & Chloride (serum) & 1.45 \\ \hline labevents & MCHC\({}^{4}\) & 1.33 \\ \hline labevents & Red Blood Cells & 1.31 \\ \hline chartevents & Pulmonary Artery Pressure Mean & 1.1 \\ \hline chartevents & **Mean Airway Pressure** & 1.08 \\ \hline chartevents & \(\rm O_{2}\) Flow & 1.05 \\ \hline chartevents & **Anion Gap** & 0.99 \\ \hline chartevents & BUN\({}^{5}\) & 0.99 \\ \hline outputevents & Foley & 0.98 \\ \hline chartevents & **Non Invasive BP\({}^{3}\)Mean** & 0.96 \\ \hline \hline \end{tabular}
\begin{tabular}{l l r} \hline \hline
**Table** & **Code** & **Avg. Ret** \\ \hline vitalPeriodic & **vitalPeriodic** & 56.87 \\ \hline vitalAperiodic & **vitalAperiodic** & 7.99 \\ \hline lab & **Hgb\({}^{6}\)** & 4.88 \\ \hline lab & **Hct\({}^{7}\)** & 4.
identify the 30 most significant ones. The average overlap of the top-30 codes between any two clinicians was 12.8 out of 30. While this overlap value might seem low, it does not necessarily indicate a lack of agreement. In fact, this reflects the complex nature of clinical decision-making, where multiple valid perspectives can exist. Clinicians may prioritize different codes based on their unique experiences and specialties, leading to a degree of variability in the selection. The degree of overlap between the top 30 codes of our model and each clinician's selection averaged 10.9 out of 30. Although the agreement between the model and clinicians was slightly lower than that between two clinicians, this discrepancy may be due to inherent differences between models and human judgment. For instance, when both a high-level and low-level code (_e.g._, a chart event code _heart rate alert_ and a vital sign code _heart rate_) are available, clinicians tend to prefer the former, while REMed prefers the latter. Given this, the alignment of our model's choices with those of the clinicians was roughly equivalent to the alignment observed between different clinicians. This suggests that \(R\) can identify codes useful for the target task.
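The overlap statistics above amount to simple set intersections over top-30 code lists; a toy sketch with hypothetical selections is given below for clarity.

```python
# Sketch of computing average top-k overlap, with hypothetical code sets
# standing in for the clinicians' and the model's actual selections.
from itertools import combinations

selections = {
    "clinician_A": {"Hemoglobin", "Creatinine", "Heart Rate", "Sodium"},
    "clinician_B": {"Hemoglobin", "Creatinine", "GCS-Motor", "Platelets"},
    "clinician_C": {"Hemoglobin", "Heart Rate", "GCS-Motor", "Lactate"},
    "model":       {"Hemoglobin", "Creatinine", "Heart Rate", "Hematocrit"},
}

clinicians = ["clinician_A", "clinician_B", "clinician_C"]
inter_clinician = [len(selections[a] & selections[b]) for a, b in combinations(clinicians, 2)]
model_vs_clinician = [len(selections["model"] & selections[c]) for c in clinicians]

print("avg clinician-clinician overlap:", sum(inter_clinician) / len(inter_clinician))
print("avg model-clinician overlap:   ", sum(model_vs_clinician) / len(model_vs_clinician))
```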
In addition to analyzing the medical code \(c_{i}\), we explored the effects of the accompanied details \(d_{i}\) and the timestamp \(t_{i}\) on the behavior of the Retriever \(R\). \(d_{i}\) is composed of various fields (_e.g._, value, unit of measurement, flag, comment), and the composition varies based on the category of events, such as lab measurements or prescriptions,
Figure 3: Retriever Analysis Result. The left column is for MIMIC-IV, and the right column is for eICU. (a) Allocated importance scores of platelets events against their timestamps and “value” fields. (b) Example of vital events from both datasets.
complicating the analysis. We specifically focused on lab measurement events associated with a platelets code, allowing us to clarify the fields of \(d_{i}\). Typically, \(d_{i}\) for a lab measurement event includes "value", "unit of measurement", and "flag" fields. The "unit of measurement" remains consistent for the same code, and the "flag" is often derived from the "value". Hence, we analyzed the scalar importance \(s_{i}\) in relation to \(t_{i}\) and the "value" for events associated with the platelets code. Figure 3 (a) presents the results. \(R\) assigned high scores to recent events or those with values in the abnormal range. These events are also regarded as important based on clinical knowledge. While peaks are observed around the ICU admission time, especially in the case of eICU, the lab results at this moment are typically regarded as pivotal in predicting future outcomes [27].
From the analysis presented above, the overall trends in both datasets are similar, but there are a few notable differences. These variations can be attributed to the unique characteristics inherent in each dataset. For example, the "vitalPeriodic" code is frequently retrieved from eICU as seen in Table 2, whereas no single dominant code exists in MIMIC-IV. In the eICU EHR system, 16 types of vital signs--such as respiratory rate, heart rate, and blood pressure--are consolidated under a single event with the code "VitalPeriodic" (Figure 3 (b, right)). On the other hand, in MIMIC-IV, each vital sign is recorded as a separate event with its unique code (Figure 3 (b, left)). This leads to the frequent retrieval of events with the "vitalPeriodic" code in eICU, while in MIMIC-IV, events associated with various vital sign codes are retrieved more evenly. This behavior not only aligns with established clinical knowledge but also suggests REMed's potential adaptability across different datasets.
In conclusion, the Retriever \(R\) can correctly identify useful events for predicting the target tasks, based on \(c_{i}\), \(d_{i}\), and \(t_{i}\), and its behavior was compatible with established clinical knowledge. Additionally, REMed showcased its adaptability to various characteristics of datasets.
## 5 Discussion
By proposing REMed and conducting extensive experiments, we have demonstrated REMed can perform favorably compared to modern baselines while minimizing the involvement of domain experts.
One of the major limitations of REMed is its inability to account for the correlation between events when evaluating the importance score \(s_{i}\). Although the Predictor \(P\) can partially mitigate this limitation by considering the correlations in its predictions, this may not be sufficient for complex tasks where understanding the relationship between events is vital.
Another limitation of REMed is its need for retraining to adapt to new tasks, but there is room for improvement. Recently, the zero-shot [28] and few-shot [29] capabilities of large language models (LLMs) have been demonstrated for general domain tasks. This suggests that integrating LLMs with REMed might be able to lessen the burden of additional training for new tasks. However, smaller, supervised models often outperform LLMs on specific tasks since LLMs are primarily designed to predict the next natural language tokens autoregressively [28, 29]. Therefore, integrating medical
prediction with LLMs remains a challenging task. Nevertheless, considering the rapid development of LLMs, it is worth exploring their potential for such applications.
We emphasize that our research relies solely on publicly available datasets, and we have made our code publicly accessible for transparency and reproducibility. We believe REMed can expedite the development of medical prediction models by reducing the dependency on domain experts.
## 6 Methods
### Model Design
In this section, we describe the design choices of REMed. Unless specifically mentioned, all experiments described in this section are performed using the MIMIC-IV dataset, a 48-hour prediction time, and an unlimited observation window setting. All of the experiments were performed with a single A6000 48G GPU.
#### 6.1.1 Event Encoding
We investigated both unsupervised and supervised models for the event encoder \(Enc_{\text{PT}}\), which encodes each event \(e_{i}\) into a vector \(v_{i}\). First, we employed Bio-ClinicalBERT [21], a derivative of BERT further pre-trained in an unsupervised manner on biomedical and clinical domain literature. Despite its widespread use for clinical text encoding, it was trained on MIMIC-III clinical notes, which presents two issues. 1) Since MIMIC-III and MIMIC-IV have overlapping patients, some of the samples in our test set might have been exposed during the training of the model. 2) There is a potential discrepancy in the distribution between note data and event data. To evaluate an unsupervised event encoder without these issues, we additionally trained a Transformer [6] from scratch with a masked language modeling objective. This model is trained using the text representations \(r_{i}\)'s as input, which originated from the training set of our MIMIC-IV cohort. Lastly, we used GenHPF [13], a supervised medical prediction model that employs two Transformers for event encoding and prediction. For our purposes, we trained GenHPF and employed the first Transformer as the event encoder.
We trained REMed using the \(v_{i}\)'s encoded by these models, respectively. From the preliminary evaluation, the version of REMed using \(v_{i}\)'s encoded with GenHPF achieved the best AUROC of 0.8747, compared to Bio-ClinicalBERT (0.7901) and the MLM-based approach (0.8456).
However, GenHPF is primarily designed to predict based on a limited number of recent events. As a result, it struggles to encode events that occurred far back in a patient's history, such as emergency department events. To mitigate this, we randomly sampled events from the patient's entire history and fed them as input, thereby achieving an AUROC 0.9027 (top-right of Figure 2). We adopted this modified version of the GenHPF event encoder in all other experiments presented in this paper.
#### 6.1.2 Importance Scoring
In the Retrieval-Based Approach, both the question given by the user and documents are encoded as vectors. The cosine similarity between these vectors is then computed to determine the relevance of the documents to the question. To adapt this methodology for medical prediction, one might consider using a trainable vector that represents the predefined task (_e.g._, mortality prediction) and then measuring its cosine similarity with the event vectors. We preliminarily compared the cosine similarity method with the Multi-Layer Perceptron (MLP) method. For simplicity, we ignored \(t_{i}\) and used \(v_{i}\) exclusively as input in this comparison. The results indicated that the MLP method outperformed the cosine similarity method, with scores of 0.8898 versus 0.8849. Moreover, we encountered challenges when trying to incorporate temporal information, which can significantly affect the performance, into the cosine similarity method. In contrast, when we concatenated \(v_{i}\) and \(t_{i}\) and input them into the MLP, there was a noticeable performance improvement, reaching an average AUROC of 0.9027. Using this scalar importance \(s_{i}\), REMed retrieves the top-\(k\) event vectors \(v_{i}\). Empirical testing on the validation set revealed that setting \(k\) to 128 consistently delivered the best performance.
#### 6.1.3 Training Path
Using \(s_{i}\) to retrieve the top-\(k\) events cannot make the gradients reach the Retriever \(R\). Hence, \(s_{i}\) must be directly involved in the final prediction to render \(R\) trainable. Furthermore, \(s_{i}\) must indicate the event's importance to be used for top-\(k\) retrieval. Incorporating \(s_{i}\) naively into equation 5 can satisfy the first requirement. This integration enables backpropagation from the prediction loss to \(R\), allowing both \(R\) and \(P\) to be trainable end-to-end. However, because the top-\(k\) retrieval operation does not propagate the gradient, \(P\) cannot recognize that \(s_{j}\) should reflect the importance of events, thus failing to meet the second requirement.
In contrast, our proposed method, \(R\) Path, effectively addresses both of these challenges. While \(s_{i}\) is directly involved in the final prediction, \(R\) is trained to increase \(s_{i}\) when \(\hat{y}_{i}\) is consistent with \(y\). The \(s_{i}\) trained in this manner signifies the event's importance, and can therefore be used for top-\(k\) retrieval.
#### 6.1.4 Multi-Task Prediction
As previously mentioned, though training and evaluating medical prediction models for each prediction task is possible, this approach is impractical in real-world scenarios. The overhead involved in developing and operating numerous models makes using a single, multi-task model a more pragmatic choice. Thus, we evaluated our model in a multi-task setting to validate its robustness across various tasks and its practicality in real-world scenarios. For comparison, we also provide the model's performance in a single-task setting. When we trained REMed for each task and averaged the AUROC, it yielded 0.8978. In contrast, the multi-task version of the model achieved 0.9027.
#### 6.1.5 Model Complexity
REMed consists of an MLP Retriever \(R\) and a Transformer Predictor \(P\). \(R\) evaluates each event vector independently, and each evaluation demands a constant amount of computation and memory. This means processing a patient's history with \(R\) is linear in computational requirements relative to the number of events. On the other hand, although Transformer demands quadratic computational resources based on the input size [6], \(P\) always receives a fixed number of event vectors \(v_{i}\), ensuring constant computational needs. Hence, REMed achieves linear complexity in computation and memory with the number of events, making it even more efficient than the contemporary architectures [7, 8, 9].
Under the finite observation windows, REMed's memory consumption remained below 2GB. During the training of REMed with our longest input, composed of 267k medical events, the peak memory usage was roughly 37GB. In the evaluation mode, REMed processed up to \(2^{20}\) dummy events within the memory restriction of an A6000 48G GPU.
### Experimental Detail
To maximize data utilization, we applied minimal filtering to our datasets: patients had to be over 18 years of age, and their ICU stays needed to exceed 48 hours. Additionally, we treated each ICU admission within a single hospital stay as a separate model input. For instance, under the 48-hour prediction time setting, if a patient was admitted to the ICU twice during a single hospital stay and each ICU stay exceeded 48 hours, we generated two separate model inputs. The first input spanned from the time of hospital admission to 48 hours after the first ICU admission. The second input spanned from the time of hospital admission to 48 hours after the second ICU admission, including the duration of the first ICU stay. This method enabled us to construct cohorts of 25,801 patients and 32,449 ICU stays for MIMIC-IV [16], and 64,276 patients and 77,718 ICU stays for eICU [17], respectively. We divided the cohorts into an 8:1:1 ratio for training, validation, and test sets. We also ensured that all ICU stays from a single patient were grouped into the same partition to prevent potential test set leakage. The statistics and label distribution for the datasets are provided in Extended Tables 3-5.
For MIMIC-IV, we used the following tables: hosp/labevents, hosp/prescriptions, hosp/microbiologyevents, icu/inputevents, icu/chartevents, icu/outputevents, icu/procedurevents, ed/medrecon, ed/pyxis, ed/vitalsign, ed/diagnosis, and ed/triage. For eICU, we used the following tables: lab, medication, microLab, infusionDrug, intakeOutput, nurseCharting, nurseCare, nurseAssessment, treatment, vitalAperiodic, and vitalPeriodic. Note that events from the emergency department are only available in MIMIC-IV.
For both the baseline models and REMed, we conducted a grid search for the learning rate, ranging from 1e-6 to 1e-3. We utilized a constant learning rate scheduler and included 500 warm-up steps. Early stopping was employed based on the validation AUROC, with patience set to 3 epochs. All experiments were performed using an A6000 48G GPU with BF16 mixed precision, and each experiment was repeated using three different random seeds. Detailed hyperparameters for the models are provided in Extended Table 6.
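The tuning procedure described above can be summarized by the following schematic sketch; `model_factory`, `train_one_epoch`, and `validation_auroc` are placeholders for the actual training and evaluation routines, and the dummy stubs at the bottom only demonstrate the control flow.

```python
# Sketch of the learning-rate grid search with early stopping on validation
# AUROC (patience 3). All callables are placeholders, not the actual code.
import numpy as np

def tune(model_factory, train_one_epoch, validation_auroc,
         lr_grid=(1e-6, 1e-5, 1e-4, 1e-3), patience=3, max_epochs=100):
    best = {"lr": None, "auroc": -np.inf}
    for lr in lr_grid:
        model = model_factory()
        best_auroc, epochs_without_improvement = -np.inf, 0
        for epoch in range(max_epochs):
            train_one_epoch(model, lr)
            auroc = validation_auroc(model)
            if auroc > best_auroc:
                best_auroc, epochs_without_improvement = auroc, 0
            else:
                epochs_without_improvement += 1
                if epochs_without_improvement >= patience:   # early stopping
                    break
        if best_auroc > best["auroc"]:
            best = {"lr": lr, "auroc": best_auroc}
    return best

if __name__ == "__main__":
    # toy demonstration with dummy stubs
    rng = np.random.default_rng(0)
    result = tune(model_factory=lambda: {},
                  train_one_epoch=lambda m, lr: None,
                  validation_auroc=lambda m: float(rng.uniform(0.85, 0.91)))
    print(result)
```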
### Recurrent Memory Transformer
We also considered using the Recurrent Memory Transformer (RMT) [24], an architecture that can process virtually unlimited input with constant memory, as the backbone for our baselines. However, baselines with RMT did not converge unless we adopted a specific training method as the authors suggested [30]. Using this method, which involves learning rate scheduling and curriculum learning, we compared REMed to baselines with RMT. We evaluated those on MIMIC-IV with a 48-hour prediction time setting, which has the longest average input sequence length in our studies.
The results are illustrated in Figure 4 (a). REMed's performance remained relatively stable regardless of the training method used, and it consistently surpassed both the _Flattened_ and _Cached_ RMT (Mann-Whitney U test, \(p<0.01\)). Furthermore, as the observation window size expanded, REMed showed a monotonic performance increase, even with the addition of curriculum learning and scheduling (Kendall-Tau test, \(p<0.01\)), while the baseline performances often decreased.
### Baselines
_GenHPF_[13]: This approach exploits the inherent hierarchies in EHR data. It employs two Transformers: the first one (_Enc_) encodes each \(r_{i}\) to a vector \(v_{i}\), while the second one (_P_) aggregates these vectors for predictions.
\[\hat{y}_{GenHPF}=P(\{\textit{Enc}(r_{i}),t_{i}\}) \tag{7}\]
_Flattened_ model [13]: This approach chronologically concatenates all \(r_{i}\)'s and then feeds them into a Transformer (_P_) for predictions.
\[\hat{y}_{Flattened}=P(\text{Concat}(\{r_{i}\}),\{t_{i}\}) \tag{8}\]
_Cached_ model: This approach utilizes \(v_{i}\)'s encoded with the pre-trained text encoder \(\textit{Enc}_{\text{PT}}\), similar to that used in REMed. The predictor \(P\) receives these vectors
Figure 4: (a, left) Comparison with RMT. (b, right) Result for small model size ablation study. The error bar represents the standard error mean.
as input and then makes a prediction. The absence of a trainable encoder reduces the computational demands, allowing the model to handle longer sequences.
\[v_{i}=\mathit{Enc}_{\mathrm{PT}}(r_{i}),\ \hat{y}_{\mathit{Cached}}=P(\{v_{i},t_{i}\}) \tag{9}\]
To make these models able to handle longer sequences, we used contemporary, efficient architectures [7, 8, 9] as their backbone (_i.e._, replacing the vanilla Transformer). Theoretically, 12 baselines can be derived from these combinations, including the original Transformer version. However, not all combinations are practical. For _GenHPF_, the computational bottleneck arises during the event encoding step. In this step, the encoder processes numerous \(r_{i}\)'s independently, each consisting of several dozen tokens. Since the efficient architectures do not offer advantages for processing short inputs compared to Transformer, employing them for _GenHPF_ is not beneficial. As a result, we did not replace the Transformer backbone of _GenHPF_ with any contemporary architectures. On the other hand, for the _Flattened_ model, using the Transformer backbone is impractical. The model's strategy--to concatenate all \(r_{i}\)'s--yields inputs with at least a few thousand tokens. Given the quadratic computational complexity of the Transformer, it is infeasible to manage such long inputs using this backbone. Therefore, we only used contemporary architectures for the _Flattened_ baseline. In summary, we constructed eight baselines: _GenHPF_-Transformer; _Flattened_-Performer, -S4, and -MEGA; and _Cached_-Transformer, -Performer, -S4, and -MEGA.
### Extended Performance Analysis on Different Configuration
In order to assess REMed's robustness with respect to configuration, we expanded our experiment to another model size. For simplification, our analysis focused on the MIMIC-IV with a 48-hour prediction time, which has the longest average input length among our test scenarios. Furthermore, we only considered _Cached_ baselines, previously shown to outperform others (_i.e._, GenHPF and Flattened) in prior experiments. We configured REMed and the baselines with a hidden dimension of 128 and 4 heads, and conducted the same learning rate grid search for each model. The maximum sequence length for each baseline was adjusted to fit within a 12GB maximum memory allowance.
The results are presented in Figure 4 (b). Despite a reduced model size, REMed outperformed all baselines in every setting. The Mann-Whitney U test [25] confirmed its superior performance over the best-performing baselines in each setting (\(p<0.05\)). Furthermore, the Kendall-Tau test [26] verified a monotonic improvement in REMed's performance by increasing the observation window length (\(p<0.01\)). These results suggest that REMed's key properties hold across different configurations.
|
2310.20562 | A construction of solutions of an integrable deformation of a
commutative Lie algebra of skew hermitian $\mathbb{Z} \times \mathbb{Z}
$-matrices | Inside the algebra $LT_{\mathbb{Z}}(R)$ of $\mathbb{Z} \times
\mathbb{Z}$-matrices with coefficients from a commutative $\mathbb{C}$-algebra
$R$ that have only a finite number of nonzero diagonals above the central
diagonal, we consider a deformation of a commutative Lie algebra
$\mathcal{C}_{sh}(\mathbb{C})$ of finite band skew hermitian matrices that is
different from the Lie subalgebras that were deformed at the discrete KP
hierarchy and its strict version. The evolution equations that the deformed
generators of $\mathcal{C}_{sh}(\mathbb{C})$ have to satisfy are determined by
the decomposition of $LT_{\mathbb{Z}}(R)$ in the direct sum of an algebra of
lower triangular matrices and the finite band skew hermitian matrices. This
yields then the $\mathcal{C}_{sh}(\mathbb{C})$-hierarchy. We show that the
projections of a solution satisfy zero curvature relations and that it suffices
to solve an associated Cauchy problem. Solutions of this type can be obtained
by finding appropriate vectors in the $LT_{\mathbb{Z}}(R)$-module of
oscillating matrices, the so-called wave matrices, that satisfy a set of
equations in the oscillating matrices, called the linearization of the
$\mathcal{C}_{sh}(\mathbb{C})$-hierarchy. Finally, a Hilbert Lie group will be
introduced from which wave matrices for the
$\mathcal{C}_{sh}(\mathbb{C})$-hierarchy are constructed. There is a real
analogue of the $\mathcal{C}_{sh}(\mathbb{C})$-hierarchy called the
$\mathcal{C}_{as}(\mathbb{R})$-hierarchy. It consists of a deformation of a
commutative Lie algebra $\mathcal{C}_{as}(\mathbb{R})$ of anti-symmetric
matrices. We will properly introduce it here too on the way and mention
everywhere the corresponding result for this hierarchy, but we leave its proofs
mostly to the reader. | Aloysius Helminck, Gerardus Helminck | 2023-10-31T15:52:57Z | http://arxiv.org/abs/2310.20562v2 | A construction of solutions of an integrable deformation of a commutative Lie algebra of skew Hermitian \(\mathbb{Z}\times\mathbb{Z}\)-matrices
###### Abstract.
Inside the algebra \(LT_{\mathbb{Z}}(R)\) of \(\mathbb{Z}\times\mathbb{Z}\)-matrices with coefficients from a commutative \(\mathbb{C}\)-algebra \(R\) that have only a finite number of nonzero diagonals above the central diagonal, we consider a deformation of a commutative Lie algebra \(\mathcal{C}_{sh}(\mathbb{C})\) of finite band skew hermitian matrices that is different from the Lie subalgebras that were deformed at the discrete KP hierarchy and its strict version. The evolution equations that the deformed generators of \(\mathcal{C}_{sh}(\mathbb{C})\) have to satisfy are determined by the decomposition of \(LT_{\mathbb{Z}}(R)\) in the direct sum of an algebra of lower triangular matrices and the finite band skew hermitian matrices. This yields then the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. We show that the projections of a solution satisfy zero curvature relations and that it suffices to solve an associated Cauchy problem. Solutions of this type can be obtained by finding appropriate vectors in the \(LT_{\mathbb{Z}}(R)\)-module of oscillating matrices, the so-called wave matrices, that satisfy a set of equations in the oscillating matrices, called the linearization of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. Finally, a Hilbert Lie group will be introduced from which wave matrices for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy are constructed. There is a real analogue of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy called the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy. It consists of a deformation of a commutative Lie algebra \(\mathcal{C}_{as}(\mathbb{R})\) of anti-symmetric matrices. We will properly introduce it here too on the way and mention everywhere the corresponding result for this hierarchy, but we leave its proofs mostly to the reader.
Dedicated to the memory of G. van Dijk
**Subject classification: 22E65, 35Q58, 37K10, 58B25**.
**Keywords: The \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy, Lax form, zero curvature relations, Cauchy problem, linearization, wave matrices**
## 1. Introduction
The major part of the scientific work of G. van Dijk centered around representation-theoretic questions connected with vector bundles over homogeneous spaces \(G/H\), where \(G\) is a locally compact group and \(H\) a closed subgroup of \(G\). A main goal in this field is to decompose a given representation into components that cannot be split any further, the so-called irreducible components. This leads to a wide range of research problems to which he contributed significantly. We mention a few: the continuous search for new ways and techniques to decompose, the role of the structure of the space \(G/H\) in the decomposition, and the question which of the known representations fit into the new one. The following selection illustrates the width of his interests: [4], [5], [6], [7], [8], [9] and [10]. Gerrit mostly worked with concrete spaces, and favorite examples of the spaces \(G/H\) were the symmetric spaces, see [16], [11], [3], [12] and [17]. Defined in the most general way, symmetric spaces are spaces \(G/H\), with \(G\) a topological group and \(H\) the fixed point group of a continuous involution \(\sigma\) of \(G\).
In the present paper we present another role of symmetric spaces, namely as a means to construct solutions for compatible systems of Lax equations, so-called integrable hierarchies,
for difference operators or pseudo difference operators. A wide collection of compatible systems has been discussed, both on the physical side, see [18], [19] and [20], and on the mathematical side, see e.g. [23], [22], [1] and [14]. Here we discuss a decomposition of \(LT_{\mathbb{Z}}(R)\) into the direct sum of two Lie algebras different from the ones used in [1] and [14] to get the discrete KP hierarchy and its strict version, and we deform a commutative Lie algebra \(\mathcal{C}_{sh}(\mathbb{C})\) of skew hermitian matrices, leading to the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. The projections of a solution of the hierarchy are shown to satisfy zero curvature relations and we present a Cauchy problem whose solutions are sufficient to produce solutions of the hierarchy. Solutions of this Cauchy problem can be obtained by finding appropriate vectors in the \(LT_{\mathbb{Z}}(R)\)-module of oscillating matrices. These vectors are called wave matrices and satisfy a set of equations in the oscillating matrices, called the linearization of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. Finally, a Hilbert Lie group \(G(2)(\mathbb{C})\) will be introduced from which wave matrices for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy are constructed.
The contents of the various sections are as follows: Section 2 describes the algebra \(LT_{\mathbb{Z}}(R)\), its relevant decomposition and the commutative algebra \(\mathcal{C}_{sh}(\mathbb{C})\). Further we present there the type of deformation that will be considered in \(LT_{\mathbb{Z}}(R)\). In Section 3 we cluster a number of Lax equations that the deformation has to satisfy into an integrable hierarchy and show that the relevant projections of the deformed basic directions satisfy zero curvature relations. Further we present a Cauchy problem, whose solutions lead directly to solutions of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. A good context to produce these solutions is the \(LT_{\mathbb{Z}}(R)\)-module of oscillating matrices. Appropriate vectors in this module, the so-called wave matrices, satisfy a set of equations in the oscillating matrices, the linearization of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy, that leads to solutions of the hierarchy. Finally, one introduces a Hilbert Lie group \(G(2)(\mathbb{C})\) and its unitary subgroup \(U(2)\) and constructs for each coset \(gU(2),g\in G(2)(\mathbb{C})\), a wave matrix of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy.
## 2. The algebra \(LT_{\mathbb{Z}}(R)\)
The algebra where the central deformation of this paper takes place is that of the complex pseudo difference operators \(\mathrm{Ps}\Delta\). We use its realization \(LT_{\mathbb{Z}}(R)\) as a subset of the space \(M_{\mathbb{Z}}(R)\) of \(\mathbb{Z}\times\mathbb{Z}\)-matrices with coefficients from a commutative \(\mathbb{C}\)-algebra \(R\). The algebras \(R\) we work with throughout this paper are the complexifications of an algebra of real-valued functions \(R(\mathbb{R})\), i.e. \(R=\mathbb{C}\otimes_{\mathbb{R}}R(\mathbb{R})\) and we will denote \(\alpha\otimes f\), \(\alpha\in\mathbb{C}\) and \(f\in R(\mathbb{R})\), simply by \(\alpha f\). On \(R\) complex conjugation is defined by \(\overline{\alpha f}:=\overline{\alpha}f\). An element \(f\) from \(R(\mathbb{R})\) is called positive, if all its values are, and we write then \(f>0\). We start by recalling a number of basic notations in the algebra \(LT_{\mathbb{Z}}(R)\)
Each \(A\in M_{\mathbb{Z}}(R)\) will be denoted as \(A=(a_{ij})\) or as \(A=(a_{(i,j)})\) if confusion in the labeling might occur. On the space \(M_{\mathbb{Z}}(R)\) we use the ordering of columns and rows as in the finite dimensional case. The transpose \(A^{T}\) of a matrix \(A\in M_{\mathbb{Z}}(R)\) is given by the matrix \((a_{ji})\) and the adjoint \(A^{*}\) of \(A\) is the matrix \((\overline{a_{ji}})\). Any \(A\in M_{\mathbb{Z}}(R)\) corresponds to an \(R\)-linear map. Consider thereto the space of all \(1\times\mathbb{Z}\)-matrices with coefficients from \(R\)
\[V=R^{\mathbb{Z}}=\{\vec{x}=(x_{n})=\big{(}\quad\ldots\quad x_{n-1}\quad x_{n} \quad x_{n+1}\quad\ldots\big{)}\mid x_{n}\in R\}\]
and its subspace
\[V_{\mathrm{fin}}=\{\vec{x}=(x_{n})\in V\mid x_{n}\neq 0\text{ for only a finite number of }n\}.\]
Define for each \(i\in\mathbb{Z}\) the vector \(\vec{e}\,(i)\) in \(V_{\mathrm{fin}}\) by requiring its \(i\)-th coordinate to be equal to one and its remaining coordinates to be zero. Then \(V_{\mathrm{fin}}\) is a free \(R\)-module with basis the \(\{\vec{e}\,(i)\mid i\in\mathbb{Z}\}\). On \(V_{\mathrm{fin}}\) we can define an \(R\)-linear action \(M_{A}\) of \(A=(a_{nm})\) by
\[M_{A}(\vec{x}):=\vec{x}A. \tag{1}\]
Hence, the matrix \(A\) determines the \(R\)-linear map \(M_{A}\in\mathrm{Hom}_{R}(V_{\mathrm{fin}},V)\).
Next we present two types of matrices \(A\) that generate the algebra \(\mathrm{Ps}\Delta\) and for which \(M_{A}\) is even defined on \(V\). The first class is that of the diagonal matrices. Given a collection of elements \(\{d(s)\mid s\in\mathbb{Z}\}\) from \(R\), this defines the diagonal matrix \(\mathrm{diag}(d(s))\) in \(M_{\mathbb{Z}}(R)\) with \(d(s)\) as its \((s,s)\)-entry. The algebra of all diagonal matrices in \(M_{\mathbb{Z}}(R)\) is denoted by \(\mathcal{D}_{1}(R)\) and its group of units by \(\mathcal{D}_{1}(R)^{*}\), i.e. all \(\mathrm{diag}(d(s))\) with \(d(s)\in R^{*}\) for all \(s\in\mathbb{Z}.\) We get an embedding \(j_{1}:R\to\mathcal{D}_{1}(R)\) by putting \(j_{1}(r)=r\,\mathrm{Id}\) for all \(r\in R\).
The second class of examples is formed by the shift matrix \(\Lambda\), its inverse \(\Lambda^{-1}\) and their powers, where the first corresponds to \(M_{\Lambda}(\vec{e}\,(i))=\vec{e}\,(i+1)\). The group \(\{\Lambda^{m}\mid m\in\mathbb{Z}\}\) normalizes \(\mathcal{D}_{1}(R)\), for there holds for all \(d\in\mathcal{D}_{1}(R)\)
\[\Lambda^{m}d\Lambda^{-m}=\Lambda^{m}\mathrm{diag}(d(s))\Lambda^{-m}=\mathrm{ diag}(d(s+m)). \tag{2}\]
A convenient tool in \(M_{\mathbb{Z}}(R)\) is decomposing a matrix \(A=(a_{ij})\in M_{\mathbb{Z}}(R)\) in its diagonals. If \(m\in\mathbb{Z}\), then the \(m\)-th _diagonal_ of \(A\) is by definition
\[d_{m}(A)\Lambda^{m},\text{ with }d_{m}(A)=\mathrm{diag}(a_{(s,s+m)})\in \mathcal{D}_{1}(R),\]
and the sign of \(m\) determines whether the diagonal is called positive or negative. Then each matrix can be split as
\[A=\sum_{m\in\mathbb{Z}}d_{m}(A)\Lambda^{m} \tag{3}\]
Let \(LT_{\mathbb{Z}}(R)\) be the collection of all matrices in \(M_{\mathbb{Z}}(R)\) that have only a finite number of nonzero positive diagonals. Relation (2) implies now the following property
**Lemma 2.1**.: _If \(A\in LT_{\mathbb{Z}}(R)\) is equal to its \(\ell\)-th diagonal and \(B\in LT_{\mathbb{Z}}(R)\) is equal to its \(n\)-the diagonal, then \(AB\) is equal to its \(\ell+n\)-th diagonal. In particular, \(LT_{\mathbb{Z}}(R)\) is an algebra w.r.t. matrix multiplication._
We use the decomposition in (3) to assign a degree to elements of \(LT_{\mathbb{Z}}(R)\). For a nonzero \(A\) in \(LT_{\mathbb{Z}}(R)\) the degree is equal to \(m\) if its highest nonzero diagonal is the \(m\)-th and the degree of the zero element is \(-\infty\).
The algebra \(LT_{\mathbb{Z}}(R)\) possesses a large collection of invertible elements. For, let \(V\in LT_{\mathbb{Z}}(R)\) have the form \(V=\sum_{i\leqslant m}v_{i}\Lambda^{i}\), with \(v_{m}\in\mathcal{D}_{1}(R)^{*}\). Then one shows recursively
**Lemma 2.2**.: _Each element \(V\) in \(LT_{\mathbb{Z}}(R)\) with an invertible leading coefficient is invertible in \(LT_{\mathbb{Z}}(R)\). This class of invertible elements in \(LT_{\mathbb{Z}}(R)\) forms the group \(I(LT_{\mathbb{Z}}(R))\)._
Next we discuss the relevant decompositions of \(LT_{\mathbb{Z}}(R)\). In the complex case we consider the real Lie subalgebra \(\mathcal{SH}(R)\) of skew hermitian matrices in \(LT_{\mathbb{Z}}(R)\), i.e.
\[\mathcal{SH}(R)=\{A\mid A\in LT_{\mathbb{Z}}(R),A^{*}=-A\}.\]
A general element \(A\) in \(\mathcal{SH}(R)\) has the form
\[A=\sum_{j=1}^{N}d_{j}(A)\Lambda^{j}+d_{0}(A)-\sum_{j=1}^{N}\Lambda^{-j}d_{j}(A) ^{*}, \tag{4}\]
with \(d_{0}(A)\in i\mathcal{D}_{1}(R(\mathbb{R}))\) and all remaining \(d_{j}(A)\in\mathcal{D}_{1}(R)\). Inside \(\mathcal{SH}(\mathbb{C})\) we consider the real Lie subalgebra \(\mathcal{C}_{sh}(\mathbb{C})\) spanned by the elements
\[G_{j1}=\Lambda^{j}-\Lambda^{-j},j\geqslant 1,G_{02}=i\,\mathrm{Id},G_{j2}=i( \Lambda^{j}+\Lambda^{-j}),j\geqslant 1. \tag{5}\]
It is convenient to introduce a notation for the index set of the basis (5) of \(\mathcal{C}_{sh}(\mathbb{C})\). We write \(\Sigma_{1}=\{j1\mid j\geqslant 1\}\), \(\Sigma_{2}=\{j2\mid j\geqslant 0\}\) and \(\Sigma=\Sigma_{1}\cup\Sigma_{2}\). The Lie algebra \(\mathcal{C}_{sh}(\mathbb{C})\) is clearly commutative. Moreover, it is maximal in \(\mathcal{SH}(\mathbb{C})\) with respect to this property. For, let \(A\) in \(\mathcal{SH}(\mathbb{C})\) be an element that commutes with all the \(\{G_{\sigma}\mid\sigma\in\Sigma\}\). The matrix \(A\) has the form (4), where all diagonal matrices \(\{d_{j}(A)\mid j\geqslant 0\}\) are written as \(c_{j}+id_{j}\) with \(c_{j}\) and \(d_{j}\in\mathcal{D}_{1}(\mathbb{R})\) and \(c_{0}=0\). Because of Lemma 2.1, the fact that \(A\) and \(G_{11}\) commute implies that their leading terms commute. If the leading term in \(\Lambda\) is \(d_{0}(A)\), then \(d_{0}(A)=ij_{1}(s_{0}),s_{0}\in\mathbb{R}\)
and \(A=s_{0}G_{02}\in\mathcal{C}_{sh}(\mathbb{C})\). If the leading term in \(\Lambda\) is \(d_{N}(A)\Lambda^{N},N\geqslant 1\), then this implies that \(d_{N}(A)\) has the form \(d_{N}(A)=j_{1}(r_{N})+ij_{1}(s_{N})\) with \(r_{N}\) and \(s_{N}\in\mathbb{R}\). Consider now the element \(A-r_{N}G_{N1}-s_{N}G_{N2}\). It still commutes with all the \(\{G_{\sigma}\mid\sigma\in\Sigma\}\) and has a leading term in \(\Lambda\) of degree lower than \(N\). Thus we have shown by induction with respect to \(N\)
**Lemma 2.3**.: _The Lie algebra \(\mathcal{C}_{sh}(\mathbb{C})\) is a maximal commutative Lie subalgebra of \(\mathcal{SH}(\mathbb{C})\)._
So, \(\mathcal{C}_{sh}(\mathbb{C})\) is optimal as for commutativity. We call the basis \(\{G_{\sigma}\mid\sigma\in\Sigma\}\) of \(\mathcal{C}_{sh}(\mathbb{C})\) the _basic directions_ of this space. From the multiplication rules for diagonals and the general form of the elements of \(\mathcal{SH}(R)\), one sees that the space
\[P_{-}(\mathbb{R})=\{P=\sum_{j\leqslant 0}d_{j}(P)\Lambda^{j}\mid d_{0}(P)\in \mathcal{D}_{1}(R(\mathbb{R})),d_{j}(P)\in\mathcal{D}_{1}(R)\text{ for }j<0\}\]
is a real Lie subalgebra of \(LT_{\mathbb{Z}}(R)\) that complements \(\mathcal{SH}(R)\), i.e.
\[LT_{\mathbb{Z}}(R)=P_{-}(\mathbb{R})\oplus\mathcal{SH}(R). \tag{6}\]
Let \(P=\sum_{j\leqslant N}d_{j}(P)\Lambda^{j}\) be a general element of \(LT_{\mathbb{Z}}(R)\). Decompose each \(d_{j}(P)\) as \(d_{j}(P)=a_{j}+ib_{j}\) with \(a_{j}\) and \(b_{j}\in\mathcal{D}_{1}(R(\mathbb{R}))\). Then the projection \(\pi_{-}\) from \(LT_{\mathbb{Z}}(R)\) onto \(P_{-}(\mathbb{R})\) in the decomposition (6) is given by
\[\pi_{-}(P)=a_{0}+\sum_{j<0}d_{j}(P)\Lambda^{j}+\sum_{j<0}\Lambda^{j}d_{-j}(P)^ {*}\]
and the projection \(\pi_{sh}\) from \(LT_{\mathbb{Z}}(R)\) onto the second component in the decomposition (6) is given by
\[\pi_{sh}(P)=P-\pi_{-}(P)=ib_{0}+\sum_{j>0}d_{j}(P)\Lambda^{j}-\sum_{j>0} \Lambda^{-j}d_{j}(P)^{*}\]
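Since both projections only rearrange diagonals, they can be checked entrywise on a truncated block. The following sketch verifies on such a finite stand-in that \(\pi_{-}(P)+\pi_{sh}(P)=P\), that \(\pi_{sh}(P)\) is skew hermitian, and that \(\pi_{-}(P)\) is lower triangular with a real central diagonal.

```python
# Finite-size sketch of the decomposition (6) and the projections pi_- and
# pi_sh, acting entrywise on a truncated N x N block.
import numpy as np

def pi_minus(P):
    lower = np.tril(P, k=-1)                   # strictly lower part of P
    upper = np.triu(P, k=1)                    # strictly upper part of P
    real_diag = np.diag(np.real(np.diag(P)))   # a_0: real part of d_0(P)
    # = a_0 + sum_{j<0} d_j(P) Lam^j + sum_{j<0} Lam^j d_{-j}(P)^*
    return real_diag + lower + upper.conj().T

def pi_sh(P):
    upper = np.triu(P, k=1)
    imag_diag = 1j * np.diag(np.imag(np.diag(P)))  # i b_0
    # = i b_0 + sum_{j>0} d_j(P) Lam^j - sum_{j>0} Lam^{-j} d_j(P)^*
    return imag_diag + upper - upper.conj().T

rng = np.random.default_rng(1)
P = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
assert np.allclose(pi_minus(P) + pi_sh(P), P)
assert np.allclose(pi_sh(P), -pi_sh(P).conj().T)        # skew hermitian
assert np.allclose(pi_minus(P), np.tril(pi_minus(P)))   # lower triangular
assert np.allclose(np.imag(np.diag(pi_minus(P))), 0)    # real central diagonal
print("decomposition (6) verified on a truncated block")
```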
Next we assign a group to the Lie algebra \(P_{-}(\mathbb{R})\). For, if the exponential map is well defined on \(P_{-}(\mathbb{R})\), then the image under \(\exp\) of a \(P=\sum_{j\leqslant 0}d_{j}(P)\Lambda^{j}\in P_{-}(\mathbb{R})\) is a lower triangular matrix with leading term \(\exp(d_{0}(P))\), which is invertible in \(\mathcal{D}_{1}(R(\mathbb{R}))\) with inverse \(\exp(-d_{0}(P))\) and moreover \(\exp(d_{0}(P))>0\), i.e., if \(\exp(d_{0}(P))=\)diag\((d(s))\), then \(d(s)>0\) for all \(s\in\mathbb{Z}\). In particular the image of \(\exp\) belongs to the group inside \(LT_{\mathbb{Z}}(R)\) given by
\[\mathcal{P}_{-}(\mathbb{R})=\{G=\sum_{j\leqslant 0}d_{j}(G)\Lambda^{j}\in P_{-} (\mathbb{R})\mid d_{0}(G)\in\mathcal{D}_{1}(R(\mathbb{R}))^{*}\text{ and }d_{0}(G)>0\}\]
Therefore we see \(\mathcal{P}_{-}(\mathbb{R})\) as the group associated with the Lie algebra \(P_{-}(\mathbb{R})\). In the sequel we will look at deformations of \(\mathcal{C}_{sh}(\mathbb{C})\) by conjugating with elements from \(\mathcal{P}_{-}(\mathbb{R})\). Therefore we introduce now the following notion:
**Definition 2.4**.: A \(\mathcal{P}_{-}(\mathbb{R})\)-deformation of the \(\{G_{\sigma}\mid\sigma\in\Sigma\}\) is a collection of matrices \(\{\mathcal{G}_{\sigma}\mid\sigma\in\Sigma\}\) in \(LT_{\mathbb{Z}}(R)\) such that for all \(\sigma\in\Sigma\), \(\mathcal{G}_{\sigma}=gG_{\sigma}g^{-1}\), for some \(g\in\mathcal{P}_{-}(\mathbb{R})\). We call \(g\) the _dressing operator_ of the deformation.
Since the element \(G_{02}\) remains the same at this type of deformations, it suffices to focus on the deformation of the remaining basic directions \(\{G_{\sigma}\mid\sigma\in\Sigma_{0}\}\) of \(\mathcal{C}_{sh}(\mathbb{C})\), where \(\Sigma_{0}=\Sigma_{1}\cup\{\sigma=j2\mid j\geqslant 1\}\).
_Remark 2.5_.: There is a real analogue of the deformation described above. It takes place in \(LT_{\mathbb{Z}}(R_{as})\), where \(R_{as}\) is a commutative algebra of real-valued functions. Now we consider the real Lie subalgebra \(\mathcal{AS}(R_{as})\) of antisymmetric matrices in \(LT_{\mathbb{Z}}(R_{as})\), i.e.
\[\mathcal{AS}(R_{as})=\{A\mid A\in LT_{\mathbb{Z}}(R_{as}),A^{T}=-A\}.\]
A general element \(A\) in \(\mathcal{AS}(R_{as})\) has the form
\[A=\sum_{j=1}^{N}d_{j}(A)\Lambda^{j}-\sum_{j=1}^{N}\Lambda^{-j}d_{j}(A), \tag{7}\]
with all \(d_{j}(A)\in\mathcal{D}_{1}(R_{as})\). Inside \(\mathcal{AS}(\mathbb{R})\) we consider the real Lie subalgebra \(\mathcal{C}_{as}(\mathbb{R})\) spanned by the elements
\[F_{j}=\Lambda^{j}-\Lambda^{-j},j\geqslant 1. \tag{8}\]
We write \(\mathcal{J}=\{j\in\mathbb{N}\mid j\geqslant 1\}\). Similarly to \(\mathcal{C}_{sh}(\mathbb{C})\), the Lie algebra \(\mathcal{C}_{as}(\mathbb{R})\) is a maximal commutative Lie subalgebra of \(\mathcal{AS}(\mathbb{R})\). We call the basis \(\{F_{j}\mid j\in\mathcal{J}\}\) also the _basic directions_ of \(\mathcal{C}_{as}(\mathbb{R})\). From (7) one sees directly that the space of lower triangular matrices in \(LT_{\mathbb{Z}}(R_{as})\)
\[P_{\leqslant 0}=\{P=\sum_{j\leqslant 0}d_{j}(P)\Lambda^{j}\mid d_{j}(P)\in \mathcal{D}_{1}(R_{as})\text{ for all }j\leqslant 0\}\]
is a real Lie subalgebra of \(LT_{\mathbb{Z}}(R_{as})\) that complements \(\mathcal{AS}(R_{as})\), i.e.
\[LT_{\mathbb{Z}}(R_{as})=P_{\leqslant 0}\oplus\mathcal{AS}(R_{as}). \tag{9}\]
Let \(P=\sum_{j\leqslant N}d_{j}(P)\Lambda^{j}\) be a general element of \(LT_{\mathbb{Z}}(R_{as})\). Then the projection \(\pi_{\leqslant 0}\) from \(LT_{\mathbb{Z}}(R_{as})\) onto \(P_{\leqslant 0}\) in the decomposition (9) is given by
\[\pi_{\leqslant 0}(P)=\sum_{j\leqslant 0}d_{j}(P)\Lambda^{j}+\sum_{j<0} \Lambda^{j}d_{-j}(P)\]
and the projection \(\pi_{as}\) from \(LT_{\mathbb{Z}}(R)\) onto the second component in the decomposition (9) is given by
\[\pi_{as}(P)=P-\pi_{\leqslant 0}(P)=\sum_{j>0}d_{j}(P)\Lambda^{j}-\sum_{j>0} \Lambda^{-j}d_{j}(P)\]
Inside \(LT_{\mathbb{Z}}(R_{as})\) we have the group
\[\mathcal{P}_{\leqslant 0}=\{P=\sum_{j\leqslant 0}d_{j}(P)\Lambda^{j}\in P_{\leqslant 0}\mid d_{0}(P)\in\mathcal{D}_{1}(R_{as})^{*}\text{ and }d_{0}(P)>0\}\]
that we see as the group associated with the Lie algebra \(P_{\leqslant 0}\) for a similar reason as for \(\mathcal{P}_{-}(\mathbb{R})\). Our interest is again in deformations of \(\mathcal{C}_{as}(\mathbb{R})\) by conjugating with elements from \(\mathcal{P}_{\leqslant 0}\) and we call a set of matrices \(\{\mathcal{F}_{j}\mid j\in\mathcal{J}\}\) in \(LT_{\mathbb{Z}}(R)\) a \(\mathcal{P}_{\leqslant 0}\)-_deformation_ of the \(\{F_{j}\mid j\in\mathcal{J}\}\), if there is a \(g\in\mathcal{P}_{\leqslant 0}\), such that for all \(j\in\mathcal{J}\), \(\mathcal{F}_{j}=gF_{j}g^{-1}.\) The element \(g\) is called again the _dressing operator_ of the deformation.
## 3. The integrable hierarchies
In this section we discuss first the evolution equations that we require of a \(\mathcal{P}_{-}(\mathbb{R})\)-deformation of the basis \(\{G_{\sigma}\mid\sigma\in\Sigma\}\) of \(\mathcal{C}_{sh}(\mathbb{C})\). We need then that \(R(\mathbb{R})\) is equipped with a set of commuting \(\mathbb{R}\)-linear derivations \(\{\partial_{\sigma}\mid\sigma\in\Sigma\}\). Each \(\partial_{\sigma}\) is an algebraic substitute for differentiation with respect to the flow parameter of the flow corresponding to \(G_{\sigma}\). We extend all \(\{\partial_{\sigma}\}\) to \(\mathbb{C}\)-linear derivations of \(R\) by putting \(\partial_{\sigma}(\alpha f):=\alpha\partial_{\sigma}(f)\) for all \(\alpha\in\mathbb{C}\) and \(f\in R(\mathbb{R})\). By letting each \(\partial_{\sigma}\) act on the matrix coefficients of an element of \(LT_{\mathbb{Z}}(R)\) one gets a \(\mathbb{C}\)-linear derivation of this algebra which is denoted by the same symbol. The evolution equations we require of a \(\mathcal{P}_{-}(\mathbb{R})\)-deformation \(\{\mathcal{G}_{\sigma}\mid\sigma\in\Sigma\}\) of the basic directions of \(\mathcal{C}_{sh}(\mathbb{C})\), are determined by the decomposition (6) and consist of all equations
\[\partial_{\sigma_{1}}(\mathcal{G}_{\sigma_{2}})=[\pi_{sh}(\mathcal{G}_{\sigma_{ 1}}),\mathcal{G}_{\sigma_{2}}]=[\mathcal{G}_{\sigma_{2}},\pi_{-}(\mathcal{G}_{ \sigma_{1}})] \tag{10}\]
for all \(\sigma_{1}\) and \(\sigma_{2}\) from \(\Sigma\). The second equality in (10) is a direct consequence of the fact that all \(\mathcal{G}_{\sigma_{1}}\) and \(\mathcal{G}_{\sigma_{2}}\) commute. The data \((R,\{\partial_{\sigma}\})\) we call a _setting_ in which we can consider these deformations and their evolution equations. A \(\mathcal{P}_{-}(\mathbb{R})\)-deformation of the \(\{G_{\sigma}\mid\sigma\in\Sigma\}\) that satisfies all the equations (10), is called a _solution_ in the setting \((R,\{\partial_{\sigma}\})\) of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy after the commutative Lie algebra that gets deformed. The equations (10) are called the _Lax equations_ of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. These equations always have the trivial solution \(\{\mathcal{G}_{\sigma}=G_{\sigma}\mid\sigma\in\Sigma\}\). Since at each \(\mathcal{P}_{-}(\mathbb{R})\)-deformation \(\{\mathcal{G}_{\sigma}\}\) there holds
\(\mathcal{G}_{02}=G_{02}=i\operatorname{Id}\), the derivation \(\partial_{02}\) is zero on all \(\{\mathcal{G}_{\sigma}\}\) and it suffices to prove the equations (10) for all \(\sigma_{1}\) and \(\sigma_{2}\) from \(\Sigma_{0}=\{\sigma\in\Sigma\mid\sigma\neq 02\}\).
_Remark 3.1_.: For the \(\mathcal{P}_{\leqslant 0}\)-deformation of the basic directions of \(\mathcal{C}_{as}(\mathbb{R})\) from Remark (2.5) there is also a natural set of evolution equations that one can consider. We need now that \(R_{as}\) is equipped with a set of commuting \(\mathbb{R}\)-linear derivations \(\{\partial_{j}\mid j\in\mathcal{J}\}\). Each \(\partial_{j}\) is an algebraic substitute for differentiation with respect to the flow parameter of the flow corresponding to \(F_{j}\). By letting each \(\partial_{j}\) act on the matrix coefficients of an element of \(LT_{\mathbb{Z}}(R_{as})\) one gets a \(\mathbb{R}\)-linear derivation of this algebra which is also denoted by \(\partial_{j}\). The evolution equations we require of a \(\mathcal{P}_{\leqslant 0}\)-deformation \(\{\mathcal{F}_{\sigma}\mid\sigma\in\Sigma\}\) of the basic directions of \(\mathcal{C}_{as}(\mathbb{R})\), are determined by the decomposition (9) and consist of all equations
\[\partial_{j_{1}}(\mathcal{F}_{j_{2}})=[\pi_{as}(\mathcal{F}_{j_{1}}), \mathcal{F}_{j_{2}}]=[\mathcal{F}_{j_{2}},\pi_{\leqslant 0}(\mathcal{F}_{j_{1}})] \tag{11}\]
for all \(j_{1}\) and \(j_{2}\) from \(\mathcal{J}\). The second equality in (11) follows again directly from the fact that all \(\mathcal{F}_{j_{1}}\) and \(\mathcal{F}_{j_{2}}\) commute. Also the data \((R_{as},\{\partial_{j}\})\) we call a _setting_ in which we can consider the present deformations and their evolution equations. A \(\mathcal{P}_{\leqslant 0}\)-deformation of the \(\{F_{j}\mid j\in\mathcal{J}\}\) that satisfies all the equations (11), is called a _solution_ in the setting \((R_{as},\{\partial_{j}\})\) of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy after the commutative Lie algebra that gets deformed. The equations (11) are called the _Lax equations_ of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy. Also these equations always have a trivial solution \(\{\mathcal{F}_{j}=F_{j}\mid j\in\mathcal{J}\}\).
_Example 3.2_.: To give an idea of what to expect of Lax equations like (10) and (11) we present here an example of a simple system that can be put in this form. Recall that the so-called _infinite Toda chain_ consists of an infinite number of particles on a straight line labeled by \(\mathbb{Z}\). We assume for simplicity that they all have the same mass equal to one and that their equations of motion are given by
\[\frac{dq_{n}}{dt}=p_{n}\;\;\text{and}\;\;\frac{dp_{n}}{dt}=2e^{2(q_{n-1}-q_{n })}-2e^{2(q_{n}-q_{n+1})},\;\;n\in\mathbb{Z}. \tag{12}\]
Here \(q_{n}\) is the displacement of the \(n\)-th particle, \(p_{n}\) its momentum and the two exponential factors in equation (12) describe the forces exerted on the \(n\)-th particle by each of its neighbors. These equations can be rewritten as an equality between \(\mathbb{Z}\times\mathbb{Z}\)-matrices. Thereto we put
\[a_{n}:=e^{q_{n}-q_{n+1}}.\]
The equations (12) get then the following form
\[\frac{da_{n}}{dt}=a_{n}(p_{n}-p_{n+1})\;\;\text{and}\;\;\frac{dp_{n}}{dt}=2(a _{n-1}^{2}-a_{n}^{2}),\;\;n\in\mathbb{Z}. \tag{13}\]
Consider now the \(\mathbb{Z}\times\mathbb{Z}\)-matrices \(L\) and \(M\) of the form
\[L=\begin{pmatrix}\ddots&\ddots&\ddots&&0\\ \ddots&p_{n-1}&a_{n-1}&0&\ddots\\ \ddots&a_{n-1}&p_{n}&a_{n}&\ddots\\ &&0&a_{n}&p_{n+1}&\ddots\\ 0&&\ddots&\ddots&\ddots\end{pmatrix}\;\;\text{and}\;\;M=\begin{pmatrix}\ddots& \ddots&\ddots&&0\\ \ddots&0&a_{n-1}&0&\ddots\\ \ddots&-a_{n-1}&0&a_{n}&\ddots\\ &&0&-a_{n}&0&\ddots\\ 0&&\ddots&\ddots&\ddots\end{pmatrix}.\]
The matrices \(L\) and \(M\) decompose as follows in diagonal matrices and powers of \(\Lambda\):
\[L =\operatorname{diag}(a_{n})\Lambda+\operatorname{diag}(p_{n}) \operatorname{Id}+\Lambda^{-1}\operatorname{diag}(a_{n})\] \[=M+\operatorname{diag}(p_{n})\operatorname{Id}+\Lambda^{-1} \operatorname{diag}(2a_{n}), \tag{14}\]
where \(M=\operatorname{diag}(a_{n})\Lambda-\Lambda^{-1}\operatorname{diag}(a_{n})= \pi_{as}(L)\). Since all \(a_{n}>0\) the diagonal matrix \(\operatorname{diag}(a_{n})\) belongs to \(\mathcal{D}_{1}(R)^{*}\) and it was shown in [14] that any matrix in \(LT_{\mathbb{Z}}(R)\) of degree
one with an invertible leading diagonal component can be obtained by dressing \(\Lambda\) with an element of \(\mathcal{P}_{\leqslant 0}\). This holds also for \(F_{11}\). Hence \(L\) can be obtained by dressing \(F_{11}\) with an element of \(\mathcal{P}_{\leqslant 0}\). One can express the equations (13) in terms of relations for the diagonal components before the different powers of \(\Lambda\) in \(L\) in (14). Thus we get:
\[\frac{d}{dt}(\operatorname{diag}(a_{n})) =\operatorname{diag}(a_{n})(\operatorname{diag}(p_{n})-\Lambda \operatorname{diag}(p_{n})\Lambda^{-1})\] \[=\operatorname{diag}(a_{n})(\operatorname{diag}(p_{n})- \operatorname{diag}(p_{n+1}))\] \[\frac{d}{dt}(\operatorname{diag}(p_{n})) =2(\Lambda^{-1}\operatorname{diag}(a_{n}^{2})\Lambda- \operatorname{diag}(a_{n}^{2}))\] \[=2(\operatorname{diag}(a_{n-1}^{2})-\operatorname{diag}(a_{n}^{2}))\]
and comparing these formulas with the expression of
\[[L,M]=[\operatorname{diag}(p_{n})\operatorname{Id}+\Lambda^{-1}\operatorname {diag}(2a_{n}),\operatorname{diag}(a_{n})\Lambda-\Lambda^{-1}\operatorname{ diag}(a_{n})]\]
in diagonal matrices and powers of \(\Lambda\) yields that the following Lax equation for \(L\):
\[\frac{dL}{dt}=[L,M], \tag{15}\]
is equivalent with the equations (13) of the infinite Toda chain.
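To see the equivalence concretely, here is a small numerical sketch (our own addition, not part of the original text) using a finite truncation to \(N\) particles, which amounts to setting \(a_{-1}=a_{N-1}=0\) as for the open finite Toda lattice; the size \(N=8\) and the random initial data are purely illustrative.

```python
import numpy as np

# Finite truncation of the infinite Toda chain: indices n = 0,...,N-1,
# with the convention a_{-1} = a_{N-1} = 0 (open chain).
N = 8
rng = np.random.default_rng(0)
q = rng.normal(size=N)               # displacements q_n
p = rng.normal(size=N)               # momenta p_n
a = np.exp(q[:-1] - q[1:])           # a_n = exp(q_n - q_{n+1}) > 0

# Truncated L = diag(a_n) Lambda + diag(p_n) Id + Lambda^{-1} diag(a_n)
# and its anti-symmetric part M = pi_as(L), as in (14).
L = np.diag(p) + np.diag(a, 1) + np.diag(a, -1)
M = np.diag(a, 1) - np.diag(a, -1)

# Right-hand sides of the equations of motion (13).
a_pad = np.concatenate(([0.0], a, [0.0]))            # pad with a_{-1} = a_{N-1} = 0
dp = 2.0 * (a_pad[:-1] ** 2 - a_pad[1:] ** 2)        # dp_n/dt = 2(a_{n-1}^2 - a_n^2)
da = a * (p[:-1] - p[1:])                            # da_n/dt = a_n(p_n - p_{n+1})
dL = np.diag(dp) + np.diag(da, 1) + np.diag(da, -1)

# The Lax equation dL/dt = [L, M] encodes exactly these equations.
print(np.allclose(dL, L @ M - M @ L))                # True
```

The check succeeds entry by entry: the diagonal of \([L,M]\) gives the momentum equations and the two off-diagonals give the equations for the \(a_{n}\).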
The solutions to both hierarchies possess still another useful property. There holds namely
**Proposition 3.3**.: _Both the Lax equations (10) of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy and those (11) of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy are so-called compatible systems, i.e. the projections \(\{\mathcal{B}_{\sigma}:=\pi_{sh}(\mathcal{G}_{\sigma})\mid\sigma\in\Sigma\}\) of a solution \(\{\mathcal{G}_{\sigma}\}\) of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy satisfy the zero curvature relations_
\[\partial_{\sigma_{1}}(\mathcal{B}_{\sigma_{2}})-\partial_{\sigma_{2}}( \mathcal{B}_{\sigma_{1}})-[\mathcal{B}_{\sigma_{1}},\mathcal{B}_{\sigma_{2}}]=0 \tag{16}\]
_and the projections \(\{\mathcal{C}_{j}:=\pi_{as}(\mathcal{F}_{j})\mid j\in\mathcal{J}\}\) of a solution \(\{\mathcal{F}_{j}\}\) of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy satisfy the zero curvature relations_
\[\partial_{j_{1}}(\mathcal{C}_{j_{2}})-\partial_{j_{2}}(\mathcal{C}_{j_{1}})-[ \mathcal{C}_{j_{1}},\mathcal{C}_{j_{2}}]=0. \tag{17}\]
Proof.: The idea is to show that the left hand side of (16) resp. (17) belongs to
\[\pi_{sh}(LT_{\mathbb{Z}}(R))\cap\pi_{-}(LT_{\mathbb{Z}}(R))\text{ resp. }\pi_{as}(LT_{\mathbb{Z}}(R))\cap\pi_{\leqslant 0}(LT_{ \mathbb{Z}}(R))\]
and thus has to be zero. We give the proof for the \(\{\mathcal{B}_{\sigma}\}\), that for the \(\{\mathcal{C}_{j}\}\) is similar and is left to the reader. The inclusion in the first factor is clear as both \(\mathcal{B}_{\sigma}\) and \(\partial_{\sigma_{1}}(\mathcal{B}_{\sigma_{2}})\) belong to the Lie subalgebra \(\pi_{sh}(LT_{\mathbb{Z}}(R)).\) To show the other one, we use the Lax equations (10). By substituting \(\mathcal{B}_{\sigma_{k}}=\mathcal{G}_{\sigma_{k}}-\pi_{-}(\mathcal{G}_{\sigma _{k}}),k=1,2\), we get for
\[\partial_{\sigma_{1}}(\mathcal{B}_{\sigma_{2}})-\partial_{\sigma_{2}}(\mathcal{B}_{\sigma_{1}}) =\partial_{\sigma_{1}}(\mathcal{G}_{\sigma_{2}})-\partial_{\sigma_{1}}(\pi_{-}(\mathcal{G}_{\sigma_{2}}))\] \[\quad-\partial_{\sigma_{2}}(\mathcal{G}_{\sigma_{1}})+\partial_{\sigma_{2}}(\pi_{-}(\mathcal{G}_{\sigma_{1}}))\] \[=[\mathcal{B}_{\sigma_{1}},\mathcal{G}_{\sigma_{2}}]-[\mathcal{B}_{\sigma_{2}},\mathcal{G}_{\sigma_{1}}]\] \[\quad-\partial_{\sigma_{1}}(\pi_{-}(\mathcal{G}_{\sigma_{2}}))+\partial_{\sigma_{2}}(\pi_{-}(\mathcal{G}_{\sigma_{1}}))\]
and for
\[[\mathcal{B}_{\sigma_{1}},\mathcal{B}_{\sigma_{2}}] =[\mathcal{G}_{\sigma_{1}}-\pi_{-}(\mathcal{G}_{\sigma_{1}}), \mathcal{G}_{\sigma_{2}}-\pi_{-}(\mathcal{G}_{\sigma_{2}})]\] \[=-\left[\pi_{-}(\mathcal{G}_{\sigma_{1}}),\mathcal{G}_{\sigma_{2}} \right]+\left[\pi_{-}(\mathcal{G}_{\sigma_{2}}),\mathcal{G}_{\sigma_{1}}\right]\] \[\quad+\left[\pi_{-}(\mathcal{G}_{\sigma_{1}}),\pi_{-}(\mathcal{G }_{\sigma_{2}})\right].\]
Taking into account the second identity in (10), we see that the left hand side of (16) is equal to
\[-\partial_{\sigma_{1}}(\pi_{-}(\mathcal{G}_{\sigma_{2}}))+\partial_{\sigma_{2}}( \pi_{-}(\mathcal{G}_{\sigma_{1}}))-[\pi_{-}(\mathcal{G}_{\sigma_{1}}),\pi_{-}( \mathcal{G}_{\sigma_{2}})].\]
This element belongs to the Lie subalgebra \(\pi_{-}(LT_{\mathbb{Z}}(R))\) and that proves the claim.
Besides the zero curvature relations for the projections \(\{\mathcal{B}_{\sigma}\}\) resp. \(\{\mathcal{C}_{j}\}\) corresponding to respectively a solution \(\{\mathcal{G}_{\sigma}\}\) of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy and a solution \(\{\mathcal{F}_{j}\}\) of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy, also other parts satisfy such relations. Introduce for any \(\mathcal{P}_{-}(\mathbb{R})\)-deformation \(\{\mathcal{G}_{\sigma}\}\) of the \(\{G_{\sigma}\}\) and any \(\mathcal{P}_{\leqslant 0}\)-deformation \(\{\mathcal{F}_{j}\}\) of the \(\{F_{j}\}\) in \(LT_{\mathbb{Z}}(R)\) the notations
\[\mathcal{A}_{\sigma}:=\mathcal{B}_{\sigma}-\mathcal{G}_{\sigma}=-\pi_{-}( \mathcal{G}_{\sigma}),\sigma\in\Sigma,\text{ and }\mathcal{D}_{j}:=\mathcal{C}_{j}- \mathcal{F}_{j}=-\pi_{\leqslant 0}(\mathcal{F}_{j}),j\in\mathcal{J}. \tag{18}\]
Then we can say
**Corollary 3.4**.: _The following relations hold:_
* _The parts_ \(\{\mathcal{A}_{\sigma}\mid\sigma\in\Sigma\}\) _of a solution_ \(\{\mathcal{G}_{\sigma}\}\) _of the_ \(\mathcal{C}_{sh}(\mathbb{C})\)_-hierarchy satisfy_ \[\partial_{\sigma_{1}}(\mathcal{A}_{\sigma_{2}})-\partial_{\sigma_{2}}( \mathcal{A}_{\sigma_{1}})-[\mathcal{A}_{\sigma_{1}},\mathcal{A}_{\sigma_{2}}]=0.\]
* _The parts_ \(\{\mathcal{D}_{j}\mid j\in\mathcal{J}\}\) _of a solution_ \(\{\mathcal{F}_{j}\}\) _of the_ \(\mathcal{C}_{as}(\mathbb{R})\)_-hierarchy satisfy_ \[\partial_{j_{1}}(\mathcal{D}_{j_{2}})-\partial_{j_{2}}(\mathcal{D}_{j_{1}})- [\mathcal{D}_{j_{1}},\mathcal{D}_{j_{2}}]=0\]
Proof.: Again we show the result only for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. Now we substitute in the zero curvature relations for the \(\{\mathcal{B}_{\sigma}\}\) everywhere the relation \(\mathcal{B}_{\sigma}=\mathcal{A}_{\sigma}+\mathcal{G}_{\sigma}\), use the second equality in the Lax equations (10) and the fact that all the \(\{\mathcal{G}_{\sigma}\}\) commute. This gives the desired result.
Let \((R,\{\partial_{\sigma}\})\) denote a setting for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy and let \((R_{as},\{\partial_{j}\})\) denote a setting for the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy. For both hierarchies there also holds an analogue of the Sato-Wilson equations of the KP hierarchy
**Proposition 3.5**.: _Let \(\{\mathcal{G}_{\sigma}=KG_{\sigma}K^{-1}\}\) be a \(\mathcal{P}_{-}(\mathbb{R})\)-deformation of the \(\{G_{\sigma}\}\) in \(LT_{\mathbb{Z}}(R)\) and likewise let \(\{\mathcal{F}_{j}=PF_{j}P^{-1}\}\) be a \(\mathcal{P}_{\leqslant 0}\)-deformation of the \(\{F_{j}\}\) in \(LT_{\mathbb{Z}}(R_{as})\). Define the \(\{\mathcal{A}_{\sigma}\}\) and the \(\{\mathcal{D}_{j}\}\) as in (18). Then there holds:_
* _If the dressing operator_ \(K\) _of the_ \(\{\mathcal{G}_{\sigma}\}\) _satisfies the equations_ (19) \[\partial_{\sigma}(K)=\mathcal{A}_{\sigma}K,\text{ for all }\sigma\in\Sigma,\] _then_ \(\{\mathcal{G}_{\sigma}\}\) _is a solution of the_ \(\mathcal{C}_{sh}(\mathbb{C})\)_-hierarchy. Note that it makes sense to consider equations (_19_), because both sides have degree zero in_ \(\Lambda\) _or lower._
* _Similarly, if the dressing operator_ \(P\) _of the_ \(\{\mathcal{F}_{j}\}\) _satisfies the equations_ (20) \[\partial_{j}(P)=\mathcal{D}_{j}P,\text{ for all }j\in\mathcal{J},\] _then_ \(\{\mathcal{F}_{j}\}\) _is a solution of the_ \(\mathcal{C}_{as}(\mathbb{R})\)_-hierarchy. Note that both sides of the equations (_20_) are of degree zero in_ \(\Lambda\) _or lower._
_The equations (19) resp. (20) are called the Sato-Wilson equations of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy resp. the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy._
Proof.: We again present the proof for the case (19); that for (20) is similar. Since \(G_{\sigma_{2}}\) is constant w.r.t. the \(\{\partial_{\sigma}\}\), we have in general
\[\partial_{\sigma_{1}}(KG_{\sigma_{2}}K^{-1})=\partial_{\sigma_{1}}(K)K^{-1}KG_{ \sigma_{2}}K^{-1}-KG_{\sigma_{2}}K^{-1}\partial_{\sigma_{1}}(K)K^{-1}=[ \partial_{\sigma_{1}}(K)K^{-1},\mathcal{G}_{\sigma_{2}}].\]
According to (19) each \(\partial_{\sigma_{1}}(K)K^{-1}\) equals \(\mathcal{A}_{\sigma_{1}}\) and substitution in the last equation results in
\[\partial_{\sigma_{1}}(\mathcal{G}_{\sigma_{2}})=[\mathcal{A}_{\sigma_{1}}, \mathcal{G}_{\sigma_{2}}]=[\pi_{sh}(\mathcal{G}_{\sigma_{1}})-\mathcal{G}_{ \sigma_{1}},\mathcal{G}_{\sigma_{2}}]=[\pi_{sh}(\mathcal{G}_{\sigma_{1}}), \mathcal{G}_{\sigma_{2}}],\]
which proves the claim.
In both cases solutions of the same system differ by an element of the dressing group that is constant. We illustrate that for system (19). If \(K_{1}\) is another solution of (19) for the same set of \(\{\mathcal{A}_{\sigma}\}\), then \(K_{1}=KK_{0}\), where \(K_{0}\) is a matrix in \(LT_{\mathbb{Z}}(R)\) that is constant for all the \(\{\partial_{\sigma}\}\), i.e. \(\partial_{\sigma}(K_{0})=0\), for all \(\sigma\in\Sigma\). There holds namely,
\[\partial_{\sigma}(K_{1})=\mathcal{A}_{\sigma}K_{1}=\partial_{\sigma}(K)K^{-1}K_{ 1}+K\partial_{\sigma}(K_{0})=\mathcal{A}_{\sigma}K_{1}+K\partial_{\sigma}(K_{0}).\]
This implies \(K\partial_{\sigma}(K_{0})=0\) and, as \(K\) is invertible, the desired identity holds. Reversely, for any \(K_{0}\in P_{-}(\mathbb{R})\) satisfying \(\partial_{\sigma}(K_{0})=0\) for all \(\sigma\in\Sigma\) and any solution \(K\) of system (19), the operator \(KK_{0}\) is another solution of (19). Proposition 3.5 offers the possibility to construct solutions of respectively the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy and the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy by finding dressing matrices that satisfy the equations (19) resp. (20). This can be achieved for each of the two hierarchies by constructing special vectors, called wave matrices, in appropriate left \(LT_{\mathbb{Z}}(R)\)- and \(LT_{\mathbb{Z}}(R_{as})\)-modules that satisfy a set of relations, the linearization of the hierarchy in question, in this module. This will be the topic of the next section.
## 4. Linearizations and wave matrices
We start out in this section with a \(\mathcal{P}_{-}(\mathbb{R})\)-deformation \(\{\mathcal{G}_{\sigma}\}\) of the \(\{G_{\sigma}\}\) together with the projections \(\{\mathcal{B}_{\sigma}:=\pi_{sh}(\mathcal{G}_{\sigma})\mid\sigma\in\Sigma\}\) in the setting \((R,\{\partial_{\sigma}\})\) and a \(\mathcal{P}_{\leqslant 0}\)-deformation \(\{\mathcal{F}_{j}\}\) of the \(\{F_{j}\}\) together with the projections \(\{\mathcal{C}_{j}:=\pi_{as}(\mathcal{F}_{j})\mid j\in\mathcal{J}\}\) in the setting \((R_{as},\{\partial_{j}\})\). Before giving the formal description of each of the two modules, we first present two sets of equations for respectively the \(\{\mathcal{G}_{\sigma}\}\) and the \(\{\mathcal{F}_{j}\}\) and a number of manipulations with these equations that lead to the Lax equations (10) respectively (11) for them. Like this it will be clear that these sets of equations are connected to the hierarchy for which they deliver the Lax equations. Later on, we present the formal framework in which these manipulations are justified. For the \(\{\mathcal{G}_{\sigma}\}\) one searches an appropriate \(\varphi\) in the module such that the following set of equations holds in the module
\[\mathcal{G}_{\sigma}\varphi=\varphi G_{\sigma}\text{ for all }\sigma\in\Sigma, \tag{21}\] \[\partial_{\sigma}(\varphi)=\pi_{sh}(\mathcal{G}_{\sigma})\varphi\text{ for all }\sigma\in\Sigma. \tag{22}\]
This set is called the _linearization of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy_. To get the Lax equations of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy we apply \(\partial_{\sigma_{1}}\) to the equation (21) for \(\sigma_{2}\), use a Leibnitz rule for products and substitute twice equation (22). This yields
\[\partial_{\sigma_{1}}(\mathcal{G}_{\sigma_{2}}\varphi-\varphi G_{\sigma_{2}})=\partial_{\sigma_{1}}(\mathcal{G}_{\sigma_{2}})\varphi+\mathcal{G}_{\sigma_{2}}(\partial_{\sigma_{1}}(\varphi))-(\partial_{\sigma_{1}}(\varphi))G_{\sigma_{2}}\] \[=\partial_{\sigma_{1}}(\mathcal{G}_{\sigma_{2}})\varphi+\mathcal{G}_{\sigma_{2}}\pi_{sh}(\mathcal{G}_{\sigma_{1}})\varphi-\pi_{sh}(\mathcal{G}_{\sigma_{1}})\varphi G_{\sigma_{2}}\] \[=\{\partial_{\sigma_{1}}(\mathcal{G}_{\sigma_{2}})-[\pi_{sh}(\mathcal{G}_{\sigma_{1}}),\mathcal{G}_{\sigma_{2}}]\}\varphi=0. \tag{23}\]
Hence, if the annihilator of \(\varphi\) in the \(LT_{\mathbb{Z}}(R)\)-module is equal to zero, then the \(\{\mathcal{G}_{\sigma}\}\) satisfy the Lax equations of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy.
_Remark 4.1_.: The form of the _linearization of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy_ is as follows:
\[\mathcal{F}_{j}\psi=\psi F_{j}\text{ for all }j\in\mathcal{J}, \tag{24}\] \[\partial_{j}(\psi)=\pi_{as}(\mathcal{F}_{j})\psi\text{ for all }j\in\mathcal{J}. \tag{25}\]
Here \(\psi\) is a vector in the left \(LT_{\mathbb{Z}}(R_{as})\)-module corresponding to the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy. A similar set of manipulations as above yields then the Lax equations for the \(\{\mathcal{F}_{j}\}\), if the annihilator of \(\psi\) is zero.
Now we discuss the \(LT_{\mathbb{Z}}(R)\)-module for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. Recall from the manipulations that we need in (21) and (22) that one can multiply \(\varphi\) from the left with elements from \(LT_{\mathbb{Z}}(R)\) like \(\mathcal{G}_{\sigma}\) and \(\pi_{sh}(\mathcal{G}_{\sigma})\) and from the right with all the basic matrices \(\{G_{\sigma}\}\). Further, there should be a left action of each \(\partial_{\sigma}\) on \(\varphi\) that satisfies Leibnitz with respect to the left \(LT_{\mathbb{Z}}(R)\)-action and finally the annihilator of \(\varphi\) should be zero. To realize the first, one builds a suitable left \(LT_{\mathbb{Z}}(R)\)-module, where also the other actions can be given sense. The actual form of the elements in the module is guided by the \(\varphi_{0}\) corresponding to the trivial solution \(\{\mathcal{G}_{\sigma}=G_{\sigma}\}\) of the hierarchy. In that case the equations (21) and (22) reduce to
\[G_{\sigma}\varphi_{0}=\varphi_{0}G_{\sigma},\text{ and }\partial_{\sigma}( \varphi_{0})=G_{\sigma}\varphi_{0}. \tag{26}\]
If one thinks of \(\varphi_{0}\) as a \(\mathbb{Z}\times\mathbb{Z}\)-matrix then the first equation in (26) tells you that \(\varphi_{0}\) commutes with all \(\{G_{\sigma}\}\) and, since \(\partial_{\sigma}\) is the algebraic substitute for differentiation w.r.t. the flow parameter corresponding to \(G_{\sigma}\), the second equation of (26) yields for all \(\sigma\in\Sigma\) that \(\tilde{\varphi}_{0}:=\exp(-t_{\sigma}G_{\sigma})\varphi_{0}\) is constant for \(\partial_{\sigma}\), i.e. \(\partial_{\sigma}(\tilde{\varphi}_{0})=0\). This leads one to consider the formal series
\[\varphi_{0}:=\exp(\sum_{\sigma\in\Sigma}t_{\sigma}G_{\sigma}). \tag{27}\]
Under suitable convergence conditions, see Section 5, this series corresponds to a well-defined \(\mathbb{Z}\times\mathbb{Z}\)-matrix, it commutes with all the \(\{G_{\sigma}\}\) and, if one lets \(\partial_{\sigma}\) act on \(\varphi_{0}\) as \(\frac{\partial}{\partial t_{\sigma}}\), then it satisfies the second equation in (26). The module for the linearization will consist of formal perturbations of this trivial solution \(\varphi_{0}\) by formal multiplication with elements from \(LT_{\mathbb{Z}}(R)\) from the left. Consider namely the collection \(\mathcal{O}_{sh}\) of formal products
\[\{\sum_{r=-\infty}^{N}d_{r}\Lambda^{r}\}\exp(\sum_{\sigma\in\Sigma}t_{\sigma} G_{\sigma}),\text{ where all }d_{r}\in\mathcal{D}_{1}(R). \tag{28}\]
Notice that, even if \(\varphi_{0}\) is a well-defined \(\mathbb{Z}\times\mathbb{Z}\)-matrix, then the product in (28) of the perturbation factor from \(LT_{\mathbb{Z}}(R)\) and \(\varphi_{0}\) is in general not a well-defined \(\mathbb{Z}\times\mathbb{Z}\)-matrix. So, it is necessary to keep the two factors separate. Analogously to the terminology used at the KP hierarchy, we define
**Definition 4.2**.: The elements of \(\mathcal{O}_{sh}\) are called _oscillating matrices_ for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy.
Despite the fact that the product in (28) is formal, there is a well-defined left action of \(LT_{\mathbb{Z}}(R)\) on it. For all \(\ell_{1}\) and \(\ell_{2}\in LT_{\mathbb{Z}}(R)\) one puts namely
\[\ell_{1}\{\ell_{2}\}\varphi_{0}=\{\ell_{1}\ell_{2}\}\varphi_{0}. \tag{29}\]
Also the right multiplication with \(\{G_{\sigma}\}\) is well-defined on elements of \(\mathcal{O}_{sh}\)
\[\{\ell\}\varphi_{0}G_{\sigma}:=\{\ell G_{\sigma}\}\varphi_{0}. \tag{30}\]
An action of the derivations \(\partial_{\sigma}\) on \(\mathcal{O}_{sh}\) is defined as if the product in the module \(\mathcal{O}_{sh}\) is a real one
\[\partial_{\sigma}(\{\sum_{j=-\infty}^{N}d_{j}\Lambda^{j}\}\varphi_{0})=\{\sum _{j=-\infty}^{N}\partial_{\sigma}(d_{j})\Lambda^{j}+\sum_{j=-\infty}^{N}d_{j} \Lambda^{j}G_{\sigma}\}\varphi_{0}. \tag{31}\]
It is a direct verification that this action of \(\partial_{\sigma}\) satisfies the Leibnitz rule used in the manipulations in (23). Note that \(\mathcal{O}_{sh}\) is a free \(LT_{\mathbb{Z}}(R)\)-module with generator \(\varphi_{0}\). Hence scratching \(\varphi\) from the equations (21) and (22) is permitted as soon as one knows that \(\varphi=\hat{\varphi}\varphi_{0}\) with \(\hat{\varphi}\in I(LT_{\mathbb{Z}}(R))\). Moreover, the equations \(\mathcal{G}_{\sigma}\varphi=\varphi G_{\sigma}\) imply then that \(\mathcal{G}_{\sigma}=\hat{\varphi}G_{\sigma}\hat{\varphi}^{-1}\). Since we are interested in \(\mathcal{P}_{-}(\mathbb{R})\)-deformations of the \(\{G_{\sigma}\}\), this brings us to the following
**Definition 4.3**.: An oscillating matrix \(\varphi=\hat{\varphi}\varphi_{0}\), with \(\hat{\varphi}\in\mathcal{P}_{-}(R)\), is called _a wave matrix_ for the matrices \(\{\mathcal{G}_{\sigma}\}\), if \(\varphi\) and the \(\{\mathcal{G}_{\sigma}\}\) satisfy the equations (21) and (22).
Since the manipulations to get the Lax equations are well-defined on such a \(\varphi\), the \(\{\mathcal{G}_{\sigma}\}\) form a solution of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. This follows also from Proposition 3.5, as \(\hat{\varphi}\) satisfies the Sato-Wilson equations (19) of this hierarchy. Substitute namely \(\varphi=\hat{\varphi}\varphi_{0}\) in equation (22) and one gets
\[\partial_{\sigma}(\varphi)=(\partial_{\sigma}(\hat{\varphi})\hat{\varphi}^{-1}+ \hat{\varphi}G_{\sigma}\hat{\varphi}^{-1})\varphi=\mathcal{B}_{\sigma}\varphi\]
and, since \(\mathcal{O}_{sh}\) is a free \(LT_{\mathbb{Z}}(R)\)-module with generator \(\varphi\), one obtains \(\partial_{\sigma}(\hat{\varphi})\hat{\varphi}^{-1}=\mathcal{A}_{\sigma}\).
_Remark 4.4_.: For the form of the \(LT_{\mathbb{Z}}(R_{as})\)-module one looks also first at the solution \(\psi_{0}\) of the linearization of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy corresponding to the trivial solution \(\{\mathcal{F}_{j}=F_{j}\}\). Thus one obtains the series
\[\psi_{0}=\exp(\sum_{j\in\mathcal{J}}t_{j}F_{j}),\]
which determines under suitable convergence conditions a well-defined \(\mathbb{Z}\times\mathbb{Z}\)-matrix. The module \(\mathcal{O}_{as}\) of _oscillating matrices_ for the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy consists of all formal products of a perturbation factor from \(LT_{\mathbb{Z}}(R_{as})\) and \(\psi_{0}\). If you replace in (29) everywhere \(\varphi_{0}\) by \(\psi_{0}\) and \(LT_{\mathbb{Z}}(R)\) by \(LT_{\mathbb{Z}}(R_{as})\), then you get the \(LT_{\mathbb{Z}}(R_{as})\)-module structure on \(\mathcal{O}_{as}\). The right action of the basis of \(\mathcal{C}_{as}(\mathbb{R})\) you obtain by altering in (30) each \(\varphi_{0}\) into a \(\psi_{0}\) and each \(G_{\sigma}\) into an \(F_{j}\). Finally, the action of the derivations \(\{\partial_{j}\}\) on \(\mathcal{O}_{as}\) you obtain if you replace in (31) each \(\partial_{\sigma}\) by a \(\partial_{j}\), each \(\varphi_{0}\) by a \(\psi_{0}\) and each \(G_{\sigma}\) by an \(F_{j}\). \(\mathcal{O}_{as}\) is a free \(LT_{\mathbb{Z}}(R_{as})\)-module with generator \(\psi_{0}\). The elements \(\psi=\hat{\psi}\psi_{0}\) in \(\mathcal{O}_{as}\), where \(\hat{\psi}\) is invertible in \(LT_{\mathbb{Z}}(R_{as})\), have a zero annihilator. Let \(\psi=\hat{\psi}\psi_{0}\) be an element of \(\mathcal{O}_{as}\) such that \(\hat{\psi}\in\mathcal{P}_{\leqslant 0}\). Then \(\psi\) is called a _wave matrix_ for the \(\{\mathcal{F}_{j}=\hat{\psi}F_{j}\hat{\psi}^{-1}\}\), if \(\psi\) and the \(\{\mathcal{F}_{j}\}\) satisfy the equations (24) and (25). In particular, the \(\{\mathcal{F}_{j}\}\) are then a solution of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy. This is once more confirmed by the fact that the perturbation factor \(\hat{\psi}\) of the wave matrix of the \(\{\mathcal{F}_{j}\}\) satisfies the Sato-Wilson equations (20).
If one wants to prove the equations (21) and (22) for an oscillating matrix \(\varphi\) for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy of the right form or the equations (24) and (25) for an oscillating matrix \(\psi\) for the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy of the appropriate shape, it suffices to prove weaker results, as the next Proposition demonstrates
**Proposition 4.5**.: _For each hierarchy we have_
* _Let_ \(\varphi=\hat{\varphi}\varphi_{0}\)_, with_ \(\hat{\varphi}\in\mathcal{P}_{-}(\mathbb{R})\)_, be an oscillating matrix for the_ \(\mathcal{C}_{sh}(\mathbb{C})\)_-hierarchy. If it satisfies for all_ \(\sigma\in\Sigma\)__ \[\partial_{\sigma}(\varphi)=S_{\sigma}\varphi,\text{ with }S_{\sigma}\in\mathcal{SH}(R),\] _then_ \(S_{\sigma}=\pi_{sh}(\mathcal{G}_{\sigma})\)_, where_ \(\mathcal{G}_{\sigma}:=\hat{\varphi}G_{\sigma}\hat{\varphi}^{-1}\)_. In particular the_ \(\{\mathcal{G}_{\sigma}\}\) _form a solution to the_ \(\mathcal{C}_{sh}(\mathbb{C})\)_-hierarchy and_ \(\varphi\) _is a wave matrix for this solution._
* _Let_ \(\psi=\hat{\psi}\psi_{0}\)_, with_ \(\hat{\psi}\in\mathcal{P}_{\leqslant 0}\)_, be an oscillating matrix for the_ \(\mathcal{C}_{as}(\mathbb{R})\)_-hierarchy. If it satisfies for all_ \(j\in\mathcal{J}\)__ \[\partial_{j}(\psi)=A_{j}\psi,\text{ with }A_{j}\in\mathcal{AS}(R_{as}),\] _then_ \(A_{j}=\pi_{as}(\mathcal{F}_{j})\)_, where_ \(\mathcal{F}_{j}:=\hat{\psi}F_{j}\hat{\psi}^{-1}\)_. In particular the_ \(\{\mathcal{F}_{j}\}\) _form a solution to the_ \(\mathcal{C}_{as}(\mathbb{R})\)_-hierarchy and_ \(\psi\) _is a wave matrix for this solution._
Proof.: We just give the proof for part (a), that of (b) is similar and left to the reader. From the definition of the action of \(\partial_{\sigma}\) on \(\mathcal{O}_{sh}\) and the fact that \(\mathcal{O}_{sh}\) is a free \(LT_{\mathbb{Z}}(R)\)-module with generator \(\varphi_{0}\), one gets the operator equation
\[\partial_{\sigma}(\hat{\varphi})+\hat{\varphi}G_{\sigma}=S_{\sigma}\hat{\varphi}. \tag{32}\]
Multiplying this equation from the right with \(\hat{\varphi}^{-1}\) and applying the projection \(\pi_{sh}\) gives the desired result.
_Remark 4.6_.: The wave matrices for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy and those for the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy are in their context the analogues of the Baker-Akhiezer functions for the \(KP\)-hierarchy.
It might happen for both the wave matrices for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy as for those of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy that different wave matrices give the same solution of the hierarchy. We discuss here the freedom one has and we start with two wave matrices \(\varphi_{1}=\hat{\varphi}_{1}\varphi_{0}\) and
\(\varphi_{2}=\hat{\varphi}_{2}\varphi_{0}\) that give the same solution of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. So both \(\hat{\varphi}_{1}\) and \(\hat{\varphi}_{2}\) belong to \(\mathcal{P}_{-}(\mathbb{R})\) and there holds for all \(\sigma\in\Sigma\) that
\[\mathcal{G}_{\sigma}=\hat{\varphi}_{1}G_{\sigma}\hat{\varphi}_{1}^{-1}=\hat{\varphi}_{2}G_{\sigma}\hat{\varphi}_{2}^{-1}.\]
Then one has first of all that \(\hat{\varphi}_{1}^{-1}\hat{\varphi}_{2}\) commutes with all the \(\{G_{\sigma}\},\sigma\in\Sigma.\) As we have seen in the proof of Lemma 2.3, commuting in \(LT_{\mathbb{Z}}(R)\) with the basis of \(\mathcal{C}_{sh}(\mathbb{C})\) is equivalent with commuting with \(\Lambda\). Therefore we get
\[\hat{\varphi}_{1}^{-1}\hat{\varphi}_{2}=\sum_{i\leqslant 0}j_{1}(a_{i}) \Lambda^{i}\in\mathcal{P}_{-}(\mathbb{R}).\]
Hence \(a_{0}\in R(\mathbb{R})^{*},a_{0}>0,\) and all other \(a_{i}\in R\). One has seen in the proof of Proposition 4.5 that for all \(\sigma\in\Sigma\) and \(i=1,2\), there holds
\[\partial_{\sigma}(\hat{\varphi}_{i})=\pi_{sh}(\mathcal{G}_{\sigma})\hat{ \varphi}_{i}-\hat{\varphi}_{i}G_{\sigma}.\]
Hence, if one applies the operator \(\partial_{\sigma}\) to the equality \(\hat{\varphi}_{2}=\hat{\varphi}_{1}\sum_{i\leqslant 0}j_{1}(a_{i})\Lambda^{i}\), then one obtains
\[\partial_{\sigma}(\hat{\varphi}_{2})=\partial_{\sigma}(\hat{\varphi}_{1})\sum_{i\leqslant 0}j_{1}(a_{i})\Lambda^{i}+\hat{\varphi}_{1}\sum_{i\leqslant 0}j_{1}(\partial_{\sigma}(a_{i}))\Lambda^{i}= \tag{33}\] \[\big{(}\pi_{sh}(\mathcal{G}_{\sigma})\hat{\varphi}_{1}-\hat{\varphi}_{1}G_{\sigma}\big{)}\sum_{i\leqslant 0}j_{1}(a_{i})\Lambda^{i}+\hat{\varphi}_{1}\sum_{i\leqslant 0}j_{1}(\partial_{\sigma}(a_{i}))\Lambda^{i}= \tag{34}\] \[\pi_{sh}(\mathcal{G}_{\sigma})\hat{\varphi}_{2}-\hat{\varphi}_{2}G_{\sigma}+\hat{\varphi}_{1}\sum_{i\leqslant 0}j_{1}(\partial_{\sigma}(a_{i}))\Lambda^{i}= \tag{35}\] \[\partial_{\sigma}(\hat{\varphi}_{2})+\hat{\varphi}_{1}\sum_{i\leqslant 0}j_{1}(\partial_{\sigma}(a_{i}))\Lambda^{i} \tag{36}\]
Hence, the expression \(\hat{\varphi}_{1}\sum_{i\leqslant 0}j_{1}(\partial_{\sigma}(a_{i}))\Lambda^{i}\) has to be zero. Since \(\hat{\varphi}_{1}\) is invertible in \(LT_{\mathbb{Z}}(R)\), one must have for all \(i\leqslant 0\) and all \(\sigma\in\Sigma\) that \(\partial_{\sigma}(a_{i})=0.\) One proceeds in the same way with two wave matrices \(\psi_{1}=\hat{\psi}_{1}\psi_{0}\) and \(\psi_{2}=\hat{\psi}_{2}\psi_{0}\) that give the same solution of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy. We state the results in the following Corollary:
**Corollary 4.7**.: _The freedom for each hierarchy in the wave matrices that yield the same solution is given by_
* _Assume_ \(\varphi_{1}\) _and_ \(\varphi_{2}\) _are wave matrices corresponding to the same solution_ \(\{\mathcal{G}_{\sigma}\}\) _of the_ \(\mathcal{C}_{sh}(\mathbb{C})\)_-hierarchy. Then there is an element_ \(\sum_{i\leqslant 0}j_{1}(a_{i})\Lambda^{i}\) _in_ \(\mathcal{P}_{-}(\mathbb{R})\) _such that_ \[\hat{\varphi}_{2}=\hat{\varphi}_{1}\sum_{i\leqslant 0}j_{1}(a_{i}) \Lambda^{i},\text{ with }a_{0}\in R(\mathbb{R})^{*},a_{0}>0,\text{ and all other }a_{i}\in R.\] _Moreover, all_ \(a_{i}\) _are constant for the derivations_ \(\partial_{\sigma}\)_, i.e._ \(\partial_{\sigma}(a_{i})=0.\)__
* _Assume that_ \(\psi_{1}\) _and_ \(\psi_{2}\) _are wave matrices corresponding to the same solution_ \(\{\mathcal{F}_{j}\}\) _of the_ \(\mathcal{C}_{as}(\mathbb{R})\)_-hierarchy. Then there is an element_ \(\sum_{i\leqslant 0}j_{1}(b_{i})\Lambda^{i}\) _in_ \(\mathcal{P}_{\leqslant 0}\) _such that_ \[\hat{\psi}_{2}=\hat{\psi}_{1}\sum_{i\leqslant 0}j_{1}(b_{i})\Lambda^{i}, \text{ with }b_{0}\in R_{as}^{*},b_{0}>0,\text{ and all other }b_{i}\in R_{as}.\] _Moreover, all the_ \(b_{i}\) _are constant for the derivations_ \(\partial_{j}\)_, i.e._ \(\partial_{j}(b_{i})=0.\)__
If one wants to use the set-up of oscillating matrices, linearizations and wave matrices, to construct concrete solutions of both hierarchies, then one has to take care that \(\varphi_{0}\) and \(\psi_{0}\) are well-defined \(\mathbb{Z}\times\mathbb{Z}\)-matrices and that the product between the perturbation factor and \(\varphi_{0}\) or \(\psi_{0}\) is also well-defined and yields a \(\mathbb{Z}\times\mathbb{Z}\)-matrix. In the next section we present for both hierarchies such a convergent framework in the style of [21].
## 5. The construction of solutions
A natural way to obtain real or complex \(\mathbb{Z}\times\mathbb{Z}\)-matrices is the following: take a real or complex Hilbert space \(H\) with a Hilbert basis \(\{e_{i}\mid i\in\mathbb{Z}\}\). Then there corresponds to each bounded operator \(b:H\to H\), a real or complex \(\mathbb{Z}\times\mathbb{Z}\)-matrix \([b]=(b_{ij})\) by the formula
\[b(e_{j})=\sum_{i\in\mathbb{Z}}b_{ij}e_{i}.\]
The wave matrices for both hierarchies that we will produce in this section, are constructed from a geometric context by using this principle. The relevant Hilbert spaces for this paper consist of the spaces \(\mathcal{H}(k),k=\mathbb{R}\) or \(\mathbb{C}\), of \(\mathbb{Z}\times 1\)-matrices with coefficients from \(k\) given by
\[\mathcal{H}(k)=\{\vec{x}=\sum_{n\in\mathbb{Z}}x_{n}\vec{e}\,(n)\mid x_{n}\in k,\,\sum_{n\in\mathbb{Z}}|x_{n}|^{2}<\infty\}.\]
We put on \(\mathcal{H}(k)\) the standard inner product
\[<\vec{x}\mid\vec{y}>=\sum_{n\in\mathbb{Z}}x_{n}\overline{y}_{n}\]
so that the \(\{\vec{e}\,(n)\mid n\in\mathbb{Z}\}\) form an orthonormal basis of \(\mathcal{H}(k)\) and, for each \(b\in B(\mathcal{H}(k))\), the bounded \(k\)-linear operators from \(\mathcal{H}(k)\) to itself, the matrix \([b]\) is taken w.r.t. this Hilbert basis. In particular, the action of \(b\) on \(\vec{x}\) is multiplication from the left with \([b]\), denoted \(M_{[b]}\). The commuting flows that play a role in the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy resp. the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy come from the commuting directions \(\{G_{\sigma}\}\) resp. \(\{F_{j}\}\) we started with, and they appeared in the \(LT_{\mathbb{Z}}(R)\)-module \(\mathcal{O}_{sh}\) resp. the \(LT_{\mathbb{Z}}(R_{as})\)-module \(\mathcal{O}_{as}\) as the formal exponential factor. We will now give both of them a convergent footing. Note that for all \(\sigma\in\Sigma\) and all \(j\in\mathcal{J}\), the operator norms of the \(M_{G_{\sigma}}\) and the \(M_{F_{j}}\) satisfy
\[||M_{G_{\sigma}}||\leqslant 2\text{ and }||M_{F_{j}}||\leqslant 2.\]
Therefore we choose our parameters \(t_{\Sigma}=(t_{\sigma})\) and \(t_{\mathcal{J}}=(t_{j})\) in respectively the spaces \(\ell_{1}(\Sigma)\) and \(\ell_{1}(\mathcal{J})\) defined by
\[\ell_{1}(\Sigma)=\{t_{\Sigma}=(t_{\sigma})\mid\text{ all }t_{\sigma}\in \mathbb{R}\text{ and }\sum_{\sigma\in\Sigma}|t_{\sigma}|<\infty\}\]
and
\[\ell_{1}(\mathcal{J})=\{t_{\mathcal{J}}=(t_{j})\mid\text{ all }t_{j}\in \mathbb{R}\text{ and }\sum_{j\in\mathcal{J}}|t_{j}|<\infty\}.\]
We equip these two spaces respectively with the norms
\[||t_{\Sigma}||_{1}=\sum_{\sigma\in\Sigma}|t_{\sigma}|\text{ and }||t_{\mathcal{J}}|| _{1}=\sum_{j\in\mathcal{J}}|t_{j}|.\]
Now we define two analytic maps. The first is \(t_{\Sigma}\to\gamma_{\Sigma}(t_{\Sigma})\), with
\[\gamma_{\Sigma}(t_{\Sigma})=\exp(\sum_{\sigma\in\Sigma}t_{\sigma}M_{G_{\sigma }})\]
and maps \(\ell_{1}(\Sigma)\) to \(\operatorname{GL}(\mathcal{H}(\mathbb{C}))\). The second map is \(t_{\mathcal{J}}\to\gamma_{\mathcal{J}}(t_{\mathcal{J}})\), where
\[\gamma_{\mathcal{J}}(t_{\mathcal{J}})=\exp(\sum_{j\in\mathcal{J}}t_{j}M_{F_{j}})\]
and maps \(\ell_{1}(\mathcal{J})\) to \(\operatorname{GL}(\mathcal{H}(\mathbb{R}))\). The images of the maps \(\gamma_{\Sigma}\) and \(\gamma_{\mathcal{J}}\) we denote respectively by \(\Gamma_{\Sigma}\) and \(\Gamma_{\mathcal{J}}\). Now we specify for both hierarchies the settings we work in. For the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy we choose for the algebra \(R(\mathbb{R})\) the \(C^{\infty}\)-functions on the space \(\ell_{1}(\Sigma)\) with values
in \(\mathbb{R}\). Then \(R\) consists of all \(\mathbb{C}\)-valued \(C^{\infty}\)-functions on \(\ell_{1}(\Sigma)\), \([\gamma_{\Sigma}(t_{\Sigma})]\) is well-defined and all the coefficients of
\[[\gamma_{\Sigma}(t_{\Sigma})]=\exp(\sum_{\sigma\in\Sigma}t_{\sigma}G_{\sigma})\]
belong to \(R\). For each derivation \(\partial_{\sigma},\sigma\in\Sigma\), we choose \(\partial_{\sigma}=\frac{\partial}{\partial t_{\sigma}}\). In the case of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy we choose for the algebra \(R_{as}\) the \(\mathbb{R}\)-valued \(C^{\infty}\)-functions on the space \(\ell_{1}(\mathcal{J})\). The matrix \([\gamma_{\mathcal{J}}(t_{\mathcal{J}})]\) is now well-defined and all the coefficients of
\[[\gamma_{\mathcal{J}}(t_{\mathcal{J}})]=\exp(\sum_{j\in\mathcal{J}}t_{j}F_{j})\]
belong to \(R_{as}\). For each derivation \(\partial_{j},j\in\mathcal{J}\), we choose \(\partial_{j}=\frac{\partial}{\partial t_{j}}\).
The next step is the description of the group of transformations of the \(\mathcal{H}(k)\), whose homogeneous space leads to solutions of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy and the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy. Our choice of the class of groups is similar to the ones that worked for the KP hierarchy [21], its strict version [13] and the discrete KP hierarchy and its strict version [15] and makes use of the two-sided ideal \(S_{2}(\mathcal{H}(k))\) of Hilbert-Schmidt operators on \(\mathcal{H}(k)\). Recall it is defined by
\[S_{2}(\mathcal{H}(k))=\{A\in B(\mathcal{H}(k))\mid[A]=(a_{ij}),\sum_{i\in \mathbb{Z}}\sum_{j\in\mathbb{Z}}|a_{ij}|^{2}<\infty\}.\]
By taking the transpose or the adjoint \(S_{2}(\mathcal{H}(k))\) is clearly mapped bijectively onto itself and becomes a Hilbert space w.r.t. the inner product
\[(A,B)_{2}:=\sum_{n\in\mathbb{Z}}<\vec{e}\,(n)\mid A^{*}B\vec{e}\,(n)>.\]
From the fact that the Hilbert-Schmidt operators form a two-sided ideal of compact operators, it follows that one can introduce the group \(G(2)(k)\) by
\[G(2)(k)=\left\{g\in\operatorname{GL}(\mathcal{H}(k))\,\middle|\,\,g- \operatorname{Id}\in S_{2}(\mathcal{H}(k))\right\}.\]
It is a normal subgroup of \(\operatorname{GL}(\mathcal{H}(k))\), since \(S_{2}(\mathcal{H}(k))\) is a two-sided ideal of \(B(\mathcal{H}(k))\). One verifies directly that the space \(S_{2}(\mathcal{H}(k))\) can be identified with the Lie algebra \(\mathcal{G}(2)(k)\) of \(G(2)(k)\) and that makes \(G(2)(k)\) a Hilbert Lie group. Each \(S_{2}(\mathcal{H}(k))\), \(k=\mathbb{R}\text{ or }\mathbb{C}\), can be split into the direct sum of two Lie subalgebras corresponding to the decompositions of \(LT_{\mathbb{Z}}(R)\) and \(LT_{\mathbb{Z}}(R_{as})\) as treated in Section 2. For \(k=\mathbb{C}\), the two subalgebras are
\[S_{2}(\mathcal{H}(\mathbb{C}))_{sh}=\{A\mid A^{*}=-A\}\text{ and}\] \[S_{2}(\mathcal{H}(\mathbb{C}))_{-}=\{A\mid[A]=\sum_{i\leqslant 0}d_{i}\Lambda^{i},d_{0}\in\mathcal{D}_{1}(\mathbb{R}),d_{i}\in\mathcal{D}_{1}(\mathbb{C})\text{ for }i<0\}. \tag{37}\]
and for \(k=\mathbb{R}\), the two subalgebras are
\[S_{2}(\mathcal{H}(\mathbb{R}))_{as}=\{A\mid A^{T}=-A\}\text{ and}\] \[S_{2}(\mathcal{H}(\mathbb{R}))_{\leqslant 0}=\{A\mid[A]=\sum_{i \leqslant 0}d_{i}\Lambda^{i},\text{ for all }i\leqslant 0,d_{i}\in\mathcal{D}_{1}(\mathbb{R})\}. \tag{38}\]
To the first of the two Lie subalgebras in (37) corresponds the subgroup \(U(2)\) of all unitary operators in \(G(2)(\mathbb{C})\) and to the second the group \(P_{-}\) of all lower triangular matrices in \(G(2)(\mathbb{C})\) that have on the central diagonal only real strictly positive entries. Since \(U(2)\cap P_{-}=\operatorname{Id}\), the map \(I_{\mathbb{C}}:P_{-}\times U(2)\to G(2)(\mathbb{C})\) defined by \(I_{\mathbb{C}}(p_{-},u)=p_{-}u\) is injective and locally a diffeomorphism. Beltita has shown in [2] that this map is also surjective. The decomposition of \(G(2)(\mathbb{C})\) determined by the inverse of \(I_{\mathbb{C}}\) is called its _Iwasawa decomposition_. The result of Beltita also implies that \(\tilde{I}_{\mathbb{C}}(p_{-},u)=p_{-}^{-1}u\) is a diffeomorphism between \(P_{-}\times U(2)\) and \(G(2)(\mathbb{C})\). We will use the twisted Iwasawa decomposition determined by the inverse of \(\tilde{I}_{\mathbb{C}}\): for each \(g\in G(2)(\mathbb{C})\) there exist a \(p_{-}(g)\in P_{-}\) and a \(u(g)\in U(2)\) so that \(g=p_{-}(g)^{-1}u(g)\).
Also both Lie subalgebras in (38) correspond to Lie subgroups of \(G(2)(\mathbb{R})\). The first is the Lie algebra of the orthogonal transformations \(O(2)\) in \(G(2)(\mathbb{R})\) and the second is the Lie algebra of the transformations \(P_{\leqslant 0}\) in \(G(2)(\mathbb{R})\) possessing a lower triangular matrix with a strictly positive central diagonal. Also in this case we have \(O(2)\cap P_{\leqslant 0}=\mathrm{Id}\) and thus the map \(I_{\mathbb{R}}:P_{\leqslant 0}\times O(2)\to G(2)(\mathbb{R})\) defined by \(I_{\mathbb{R}}(p_{\leqslant 0},o)=p_{\leqslant 0}\,o\) is injective and locally a diffeomorphism. In the same paper Beltita proved the surjectivity of \(I_{\mathbb{R}}\) and the inverse of \(I_{\mathbb{R}}\) is the Iwasawa decomposition of \(G(2)(\mathbb{R})\). As in the complex case this implies that \(\tilde{I}_{\mathbb{R}}(p_{\leqslant 0},o)=p_{\leqslant 0}^{-1}o\) is a diffeomorphism between the same varieties and we use the twisted Iwasawa decomposition determined by the inverse of \(\tilde{I}_{\mathbb{R}}\): for each \(g\in G(2)(\mathbb{R})\) there exist a \(p_{\leqslant 0}(g)\in P_{\leqslant 0}\) and an \(o(g)\in O(2)\) so that \(g=p_{\leqslant 0}(g)^{-1}o(g)\).
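Purely as an illustration of the algebra behind these factorizations, the sketch below (our own addition; an arbitrary \(6\times 6\) matrix is used, so the Hilbert-Schmidt condition plays no role here) computes the twisted decomposition \(g=p_{-}^{-1}u\) in finite dimensions. Since \(uu^{*}=\operatorname{Id}\), one has \(gg^{*}=p_{-}^{-1}(p_{-}^{-1})^{*}\), so \(p_{-}^{-1}\) can be taken to be the Cholesky factor of \(gg^{*}\); the real case with \(O(2)\) and \(P_{\leqslant 0}\) is analogous with a real Cholesky factorization.

```python
import numpy as np

# Finite-dimensional toy version of the twisted Iwasawa decomposition
# g = p_-^{-1} u with p_- lower triangular (positive diagonal) and u unitary.
n = 6
rng = np.random.default_rng(1)
g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # generic invertible g

# g g^* = p_-^{-1} (p_-^{-1})^*, hence p_-^{-1} is the Cholesky factor of g g^*.
C = np.linalg.cholesky(g @ g.conj().T)   # lower triangular, real positive diagonal
p_minus = np.linalg.inv(C)
u = p_minus @ g

print(np.allclose(u @ u.conj().T, np.eye(n)))        # u is unitary: True
print(np.allclose(np.linalg.inv(p_minus) @ u, g))    # g = p_-^{-1} u: True
print(np.allclose(p_minus, np.tril(p_minus)),        # p_- is lower triangular ...
      np.allclose(np.diag(p_minus).imag, 0.0))       # ... with a real diagonal
```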
Now we come to the construction of the wave matrices for both hierarchies. We start with the skew hermitian case. Take a \(g\in G(2)(\mathbb{C})\); then conjugation of \(g\) with an operator \(\gamma_{\Sigma}(t_{\Sigma})\in\Gamma_{\Sigma}\) yields a new element in \(G(2)(\mathbb{C})\) to which we apply the twisted Iwasawa decomposition. Then we have for all \(t_{\Sigma}\in\ell_{1}(\Sigma)\)
\[\gamma_{\Sigma}(t_{\Sigma})g\gamma_{\Sigma}(t_{\Sigma})^{-1}=p_{-}(g)(t_{ \Sigma})^{-1}u(g)(t_{\Sigma})\]
and both \(p_{-}(g)(t_{\Sigma})\) and \(u(g)(t_{\Sigma})\) depend in a \(C^{\infty}\)-fashion on \(t_{\Sigma}\). On the matrix level this identity yields
\[\Phi:=[p_{-}(g)(t_{\Sigma})][\gamma_{\Sigma}(t_{\Sigma})]=[p_{-}(g)(t_{ \Sigma})]\varphi_{0}=[u(g)(t_{\Sigma})][\gamma_{\Sigma}(t_{\Sigma})][g]^{-1}. \tag{39}\]
This shows that \(\Phi\) is a candidate wave matrix for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy, where all the products in \(\Phi\) are no longer formal but real. Thanks to Proposition 4.5 we know that it suffices to show for all \(\sigma\in\Sigma\) that
\[\partial_{\sigma}(\Phi)=S_{\sigma}\Phi,\text{ with }S_{\sigma}\in\mathcal{SH}(R).\]
On one hand we have that
\[\partial_{\sigma}(\Phi) =\{\partial_{\sigma}([p(g)(t_{\Sigma})])[p(g)(t_{\Sigma})^{-1}]+[ p(g)(t_{\Sigma})](G_{\sigma})[p(g)(t_{\Sigma})^{-1}]\}\Phi\] \[=S_{\sigma}\Phi,\text{ with }S_{\sigma}\in LT_{\mathbb{Z}}(R).\]
On the other hand, if one applies \(\partial_{\sigma}\) to the right hand side of equation (39), then we get
\[\partial_{\sigma}(\Phi) =\{\partial_{\sigma}([u(g)(t_{\Sigma})])[u(g)(t_{\Sigma})]^{-1}+[u(g)(t_{\Sigma})]G_{\sigma}[u(g)(t_{\Sigma})]^{-1}\}\Phi\] \[=\{\partial_{\sigma}([u(g)(t_{\Sigma})])[u(g)(t_{\Sigma})]^{*}+[u(g)(t_{\Sigma})]G_{\sigma}[u(g)(t_{\Sigma})]^{-1}\}\Phi \tag{40}\]
Since the matrix \([u(g)(t_{\Sigma})]\) is unitary, applying \(\partial_{\sigma}\) to the identity
\[[u(g)(t_{\Sigma})][u(g)(t_{\Sigma})]^{*}=\mathrm{Id}\]
shows that the matrix \(\partial_{\sigma}([u(g)(t_{\Sigma})])[u(g)(t_{\Sigma})]^{*}\) is skew hermitian. Further, conjugating a skew hermitian matrix with a unitary one yields again a skew hermitian matrix. Hence, also \([u(g)(t_{\Sigma})]G_{\sigma}[u(g)(t_{\Sigma})]^{-1}\) is skew hermitian and thus the right hand side of equation (40) is the product of a skew hermitian matrix and \(\Phi\). So, all the \(S_{\sigma}\) are in \(\mathcal{SH}(R)\) and \(\Phi\) is a wave matrix for the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy. If we apply the construction to the element \(gu\), with \(u\in U(2)\), then we get
\[\gamma_{\Sigma}(t_{\Sigma})gu\gamma_{\Sigma}(t_{\Sigma})^{-1} =\gamma_{\Sigma}(t_{\Sigma})g\gamma_{\Sigma}(t_{\Sigma})^{-1} \gamma_{\Sigma}(t_{\Sigma})u\gamma_{\Sigma}(t_{\Sigma})^{-1}\] \[=p(g)(t_{\Sigma})^{-1}u(g)(t_{\Sigma})\gamma_{\Sigma}(t_{\Sigma} )u\gamma_{\Sigma}(t_{\Sigma})^{-1}.\]
Now each \(\gamma_{\Sigma}(t_{\Sigma})\) is a unitary matrix in \(\mathrm{GL}(\mathcal{H}(\mathbb{C}))\) and \(S_{2}(\mathcal{H}(\mathbb{C}))\) is a two sided ideal in \(B(\mathcal{H}(\mathbb{C}))\), so conjugating \(u\) with \(\gamma_{\Sigma}(t_{\Sigma})\) results in an element of \(U(2)\). Hence we may conclude \(p(gu)(t_{\Sigma})=p(g)(t_{\Sigma})\) and \(u(gu)(t_{\Sigma})=u(g)(t_{\Sigma})\gamma_{\Sigma}(t_{\Sigma})u\gamma_{\Sigma}(t_ {\Sigma})^{-1}\). So, both \(g\) and \(gu\) generate the same solution of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy.
The construction of wave matrices for the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy starts with an element \(g\in G(2)(\mathbb{R})\) and follows the same line of approach as in the skew hermitian case. We conjugate \(g\) with an operator \(\gamma_{\mathcal{J}}(t_{\mathcal{J}})\) from \(\Gamma_{\mathcal{J}}\) in \(G(2)(\mathbb{R})\) to which we apply the twisted Iwasawa decomposition of this group. Then we have for all \(t_{\mathcal{J}}\in\ell_{1}(\mathcal{J})\)
\[\gamma_{\mathcal{J}}(t_{\mathcal{J}})g\gamma_{\mathcal{J}}(t_{\mathcal{J}})^{- 1}=p_{\leqslant 0}(g)(t_{\mathcal{J}})^{-1}o(g)(t_{\mathcal{J}})\]
and both \(p_{\leqslant 0}(g)(t_{\mathcal{J}})\) and \(o(g)(t_{\mathcal{J}})\) depend in a \(C^{\infty}\)-fashion on \(t_{\mathcal{J}}\). On the matrix level this identity yields
\[\Psi:=[p_{\leqslant 0}(g)(t_{\mathcal{J}})][\gamma_{\mathcal{J}}(t_{\mathcal{J}})]=[p_{\leqslant 0}(g)(t_{\mathcal{J}})]\psi_{0}=[o(g)(t_{\mathcal{J}})][\gamma_{\mathcal{J}}(t_{\mathcal{J}})][g]^{-1}. \tag{41}\]
This shows that \(\Psi\) is a candidate wave matrix for the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy, where all the products in both expressions for \(\Psi\) are real. Thanks to Proposition 4.5 we know that it suffices to show for all \(j\in\mathcal{J}\) that
\[\partial_{j}(\Psi)=A_{j}\Psi,\text{ with }A_{j}\in\mathcal{AS}(R_{as}).\]
On one hand we have that
\[\partial_{j}(\Psi) =\{\partial_{j}([p_{\leqslant 0}(g)(t_{\mathcal{J}})])[p_{\leqslant 0}(g)(t_{\mathcal{J}})]^{-1}+[p_{\leqslant 0}(g)(t_{\mathcal{J}})]F_{j}[p_{\leqslant 0}(g)(t_{\mathcal{J}})]^{-1}\}\Psi\] \[=A_{j}\Psi,\text{ with }A_{j}\in LT_{\mathbb{Z}}(R_{as}).\]
On the other hand, if one applies \(\partial_{j}\) to the right hand side of equation (41), then we get
\[\partial_{j}(\Psi) =\{\partial_{j}([o(g)(t_{\mathcal{J}})])[o(g)(t_{\mathcal{J}})]^{-1}+[o(g)(t_{\mathcal{J}})]F_{j}[o(g)(t_{\mathcal{J}})]^{-1}\}\Psi\] \[=\{\partial_{j}([o(g)(t_{\mathcal{J}})])[o(g)(t_{\mathcal{J}})]^{T}+[o(g)(t_{\mathcal{J}})]F_{j}[o(g)(t_{\mathcal{J}})]^{-1}\}\Psi \tag{42}\]
Since the matrix \([o(g)(t_{\mathcal{J}})]\) is orthogonal, applying \(\partial_{j}\) to the identity
\[[o(g)(t_{\mathcal{J}})][o(g)(t_{\mathcal{J}})]^{T}=\text{Id}\]
shows that the matrix \(\partial_{j}([o(g)(t_{\mathcal{J}})])[o(g)(t_{\mathcal{J}})]^{T}\) is anti-symmetric. Further, conjugating an anti-symmetric matrix with an orthogonal one, yields again an anti-symmetric matrix. Hence, also \([o(g)(t_{\mathcal{J}})]F_{j}[o(g)(t_{\mathcal{J}})]^{-1}\) is anti-symmetric and thus the right hand side of equation (42) is the product of an anti-symmetric matrix and \(\Psi\). So, all the \(A_{j}\) are in \(\mathcal{AS}(R_{as})\) and \(\Psi\) is a wave matrix for the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy. If we apply the construction to the element \(go\), with \(o\in O(2)\), then we get
\[\gamma_{\mathcal{J}}(t_{\mathcal{J}})go\gamma_{\mathcal{J}}(t_{\mathcal{J}})^{-1} =\gamma_{\mathcal{J}}(t_{\mathcal{J}})g\gamma_{\mathcal{J}}(t_{\mathcal{J}})^{-1}\gamma_{\mathcal{J}}(t_{\mathcal{J}})o\gamma_{\mathcal{J}}(t_{\mathcal{J}})^{-1}\] \[=p_{\leqslant 0}(g)(t_{\mathcal{J}})^{-1}o(g)(t_{\mathcal{J}})\gamma_{\mathcal{J}}(t_{\mathcal{J}})o\gamma_{\mathcal{J}}(t_{\mathcal{J}})^{-1}.\]
Since \(\gamma_{\mathcal{J}}(t_{\mathcal{J}})\) is an orthogonal element of \(\text{GL}(\mathcal{H}(\mathbb{R}))\) and \(S_{2}(\mathcal{H}(\mathbb{R}))\) is a two sided ideal in \(B(\mathcal{H}(\mathbb{R}))\), conjugation of \(o\) with \(\gamma_{\mathcal{J}}(t_{\mathcal{J}})\) yields an element of \(O(2)\). Hence there follows \(p_{\leqslant 0}(go)(t_{\mathcal{J}})=p_{\leqslant 0}(g)(t_{\mathcal{J}})\) and \(o(go)(t_{\mathcal{J}})=o(g)(t_{\mathcal{J}})\gamma_{\mathcal{J}}(t_{\mathcal{J} })o\gamma_{\mathcal{J}}(t_{\mathcal{J}})^{-1}\). So, the elements \(g\) and \(go\) generate the same solution of the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy. We summarize the results in a
**Theorem 5.1**.: _Solutions of the \(\mathcal{C}_{sh}(\mathbb{C})\)-hierarchy and the \(\mathcal{C}_{as}(\mathbb{R})\)-hierarchy can be obtained as follows:_
* _For each_ \(g\in G(2)(\mathbb{C})\) _and each_ \(\sigma\in\Sigma\)_, consider the perturbation_ \[\mathcal{G}_{\sigma}(g):=[p_{-}(g)(t_{\Sigma})]G_{\sigma}[p_{-}(g)(t_{\Sigma}) ]^{-1}\] _of the basic direction_ \(G_{\sigma}\)_, with the_ \(\mathbb{Z}\times\mathbb{Z}\)_-matrix_ \([p_{-}(g)(t_{\Sigma})]\) _from (_39_). Then the_ \(\{\mathcal{G}_{\sigma}(g)\}\) _are a solution of the_ \(\mathcal{C}_{sh}(\mathbb{C})\)_-hierarchy. This solution does not change if one replaces_ \(g\) _by_ \(gu\)_, with_ \(u\in U(2)\)_. Hence the symmetric space_ \(G(2)(\mathbb{C})/U(2)\) _determines the solutions of the_ \(\mathcal{C}_{sh}(\mathbb{C})\)_-hierarchy._
* _For each_ \(g\in G(2)(\mathbb{R})\) _and each_ \(j\in\mathcal{J}\)_, consider the perturbation_ \[\mathcal{F}_{j}(g):=[p_{\leqslant 0}(g)(t_{\mathcal{J}})]F_{j}[p_{\leqslant 0}(g)(t_{ \mathcal{J}})]^{-1}\] _of the basic direction_ \(F_{j}\)_, with the_ \(\mathbb{Z}\times\mathbb{Z}\)_-matrix_ \([p_{\leqslant 0}(g)(t_{\mathcal{J}})]\) _from (_41_). Then the_ \(\{\mathcal{F}_{j}(g)\}\) _are a solution of the_ \(\mathcal{C}_{\text{as}}(\mathbb{R})\)_-hierarchy. This solution does not change if one replaces_ \(g\) _by_ \(go\)_, with_ \(o\in O(2)\)_. Hence the symmetric space_ \(G(2)(\mathbb{R})/O(2)\) _determines the solutions of the_ \(\mathcal{C}_{\text{as}}(\mathbb{R})\)_-hierarchy._
|
2306.00154 | Dynamics of vortex cap solutions on the rotating unit sphere | In this work, we analytically study the existence of periodic vortex cap
solutions for the homogeneous and incompressible Euler equations on the
rotating unit 2-sphere, which was numerically conjectured by Dritschel-Polvani
and Kim-Sakajo-Sohn. Such solutions are piecewise constant vorticity
distributions, subject to the Gauss constraint and rotating uniformly around
the vertical axis. The proof is based on the bifurcation from zonal solutions
given by spherical caps. For the one--interface case, the bifurcation
eigenvalues correspond to Burbea's frequencies obtained in the planar case but
shifted by the rotation speed of the sphere. The two--interfaces case (also
called band type or strip type solutions) is more delicate. Though, for any
fixed large enough symmetry, and under some non-degeneracy conditions to avoid
spectral collisions, we achieve the existence of at most two branches of
bifurcation. | Claudia Garcia, Zineb Hassainia, Emeric Roulley | 2023-05-31T19:53:28Z | http://arxiv.org/abs/2306.00154v1 | # Dynamics of vortex cap solutions on the rotating unit sphere
###### Abstract
In this work, we analytically study the existence of periodic vortex cap solutions for the homogeneous and incompressible Euler equations on the rotating unit 2-sphere, which was numerically conjectured in [28, 29, 60, 61]. Such solutions are piecewise constant vorticity distributions, subject to the Gauss constraint and rotating uniformly around the vertical axis. The proof is based on the bifurcation from zonal solutions given by spherical caps. For the one-interface case, the bifurcation eigenvalues correspond to Burbea's frequencies obtained in the planar case but shifted by the rotation speed of the sphere. The two-interfaces case (also called band type or strip type solutions) is more delicate. Though, for any fixed large enough symmetry, and under some non-degeneracy conditions to avoid spectral collisions, we achieve the existence of at most two branches of bifurcation.
###### Contents
* 1 Introduction
* 1.1 Euler equations on the rotating unit sphere
* 1.2 Historical discussion
* 1.3 Dynamics of vortex cap solutions
* 1.4 Main results
* 1.5 Organization of the paper
* 2 The one-interface case
* 2.1 Equation of interest
* 2.2 Bifurcation study
* 3 The two-interfaces case: vorticity bands
* 3.1 Equations of interest
* 3.2 Spectral properties and proof of the main result
* A Appendix
* A.1 An integral
* A.2 Potential theory
* A.3 Crandall-Rabinowitz theorem
* A.4 Proof of Lemma 1.2
## 1 Introduction
The purpose of this paper is to give an analytical proof of the emergence of uniformly rotating (around the \(z\)-axis) vortex caps with \(\mathbf{m}\)-fold symmetries (\(\mathbf{m}\in\mathbb{N}^{*}\)) for the incompressible Euler equations set on the rotating unit sphere \(\mathbb{S}^{2}\) defined by
\[\mathbb{S}^{2}\triangleq\Big{\{}(x,y,z)\in\mathbb{R}^{3}\quad\text{s.t.}\quad x ^{2}+y^{2}+z^{2}=1\Big{\}}.\]
In particular, we shall implement bifurcation techniques in order to find non-trivial vortex cap solutions close to the trivial stationary flat (spherical) ones, which was numerically conjectured in [28, 29, 61, 60]. In this introduction we present the model of interest, discuss some historical background, derive the contour dynamics equations, expose our results and give the organization of this work.
### Euler equations on the rotating unit sphere
This study deals with the homogeneous incompressible Euler equations on the two dimensional unit sphere in rotation around the vertical axis. Such a model, sometimes called _barotropic_, is commonly used in geophysical fluid dynamics for meteorological predictions or to study the motion of planets' atmosphere. We may refer the reader to [43, Sec. 13.4.1] and [66] for a rather complete introduction to these equations. In order to present the model, we shall first recall some basic notions in differential calculus/geometry. The set \(\mathbb{S}^{2}\) is endowed with a smooth manifold structure described by the following two charts
\[\begin{array}{rcl}C_{1}:(0,\pi)\times(0,2\pi)&\to&\mathbb{R}^{3}\\ (\theta,\varphi)&\mapsto&\big{(}\sin(\theta)\cos(\varphi)\,,\,\sin(\theta)\sin (\varphi)\,,\,\cos(\theta)\big{)},\\ C_{2}:(0,\pi)\times(0,2\pi)&\to&\mathbb{R}^{3}\\ (\vartheta,\phi)&\mapsto&\big{(}-\sin(\vartheta)\cos(\phi)\,,\,-\cos(\vartheta )\,,\,-\sin(\vartheta)\sin(\phi)\big{)}.\end{array}\]
For our purpose, we shall mainly work in the chart \(C_{1}\) where the variables \(\theta\) and \(\varphi\) are called _colatitude_ and _longitude_, respectively. Notice that the physical literature mentioned above rather considers the latitude/longitude convention, but we find it more convenient to work with the other one. In the colatitude/longitude chart \(C_{1}\), the Riemannian metric of \(\mathbb{S}^{2}\) is given by
\[\mathsf{g}_{\mathbb{S}^{2}}(\theta,\varphi)\triangleq d\theta^{2}+\sin^{2}( \theta)d\varphi^{2}. \tag{1.1}\]
Therefore, denoting \(N\) and \(S\) the north and south poles, we have that for any \(p\in\mathbb{S}^{2}\setminus\{N,S\}\), an orthonormal basis of the tangent space \(T_{p}\mathbb{S}^{2}\) is given by
\[\mathbf{e}_{\theta}\triangleq\partial_{\theta},\qquad\mathbf{e}_{\varphi} \triangleq\frac{1}{\sin(\theta)}\partial_{\varphi}.\]
We have used the classical identification between tangent vectors and directional differentiations. In these coordinates, the Riemannian volume is given by
\[d\sigma=\sin(\theta)d\theta d\varphi.\]
Therefore, for any function \(\mathtt{f}:\mathbb{S}^{2}\to\mathbb{R}\), we define
\[f(\theta,\varphi)\triangleq\mathtt{f}\big{(}C_{1}(\theta,\varphi)\big{)}, \qquad\int_{\mathbb{S}^{2}}\mathtt{f}(\xi)d\sigma(\xi)\triangleq\int_{0}^{2 \pi}\int_{0}^{\pi}f(\theta,\varphi)\sin(\theta)d\theta d\varphi. \tag{1.2}\]
In the sequel, with a small abuse of notation, we shall denote \(f\) for both \(\mathtt{f}\) or \(f\) with no possible confusion according to the context, since one is in the cartesian variables \(\xi\) and the other one is in the spherical coordinates \((\theta,\varphi).\) The gradient of \(f\) is defined as follows
\[\nabla f(\theta,\varphi)\triangleq\partial_{\theta}f(\theta,\varphi)\mathbf{e }_{\theta}+\frac{\partial_{\varphi}f(\theta,\varphi)}{\sin(\theta)}\mathbf{e} _{\varphi}.\]
Similarly, we define its orthogonal as
\[\nabla^{\perp}f(\theta,\varphi)\triangleq J\nabla f(\theta,\varphi),\qquad \operatorname*{Mat}_{(\mathbf{e}_{\theta},\mathbf{e}_{\varphi})}(J)=\begin{pmatrix} 0&1\\ -1&0\end{pmatrix}.\]
The Laplace-Beltrami operator applied to \(f\) is defined by
\[\Delta f(\theta,\varphi)\triangleq\frac{1}{\sin(\theta)}\partial_{\theta}\big{[} \sin(\theta)\partial_{\theta}f(\theta,\varphi)\big{]}+\frac{1}{\sin^{2}(\theta )}\partial_{\varphi}^{2}f(\theta,\varphi).\]
For a vector field \(U(\theta,\varphi)=U_{\theta}(\theta,\varphi)\mathbf{e}_{\theta}+U_{\varphi}( \theta,\varphi)\mathbf{e}_{\varphi}\), the divergence expresses as
\[(\nabla\cdot U)(\theta,\varphi)\triangleq\frac{1}{\sin(\theta)}\partial_{ \theta}\big{[}\sin(\theta)U_{\theta}(\theta,\varphi)\big{]}+\frac{1}{\sin( \theta)}\partial_{\varphi}U_{\varphi}(\theta,\varphi).\]
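As a quick sanity check of these coordinate formulas, the short symbolic computation below (our own addition, not from the paper) verifies that the Laplace-Beltrami operator above acts on the degree-one spherical harmonics \(\cos(\theta)\) and \(\sin(\theta)\cos(\varphi)\) with the expected eigenvalue \(-\ell(\ell+1)=-2\).

```python
import sympy as sp

theta, varphi = sp.symbols('theta varphi')

def laplace_beltrami(f):
    # Laplace-Beltrami operator on the unit sphere in (theta, varphi) coordinates,
    # as in the formula above.
    return (sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / sp.sin(theta)
            + sp.diff(f, varphi, 2) / sp.sin(theta) ** 2)

# Degree-one spherical harmonics satisfy Delta Y = -2 Y.
for Y in (sp.cos(theta), sp.sin(theta) * sp.cos(varphi)):
    print(sp.simplify(laplace_beltrami(Y) + 2 * Y))   # prints 0 twice
```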
The incompressible Euler equations on the rotating sphere \(\mathbb{S}^{2}\) with angular rotation speed \(\widetilde{\gamma}\) are given by
\[(E_{\widetilde{\gamma}})\begin{cases}\partial_{t}\Omega(t,\theta,\varphi)+U(t,\theta,\varphi)\cdot\nabla\big{(}\Omega(t,\theta,\varphi)-2\widetilde{\gamma}\cos(\theta)\big{)}=0,\\ U(t,\theta,\varphi)=\nabla^{\perp}\Psi(t,\theta,\varphi),\\ \Delta\Psi(t,\theta,\varphi)=\Omega(t,\theta,\varphi).\end{cases} \tag{1.3}\]
We mention that the term \(-2\widetilde{\gamma}\,U(t,\theta,\varphi)\cdot\nabla\cos(\theta)\) corresponds to the Coriolis force coming from the rotation of the sphere. In the sequel, we shall work with the following quantity called _absolute vorticity_
\[\overline{\Omega}(t,\theta,\varphi)\triangleq\Omega(t,\theta,\varphi)-2 \widetilde{\gamma}\cos(\theta). \tag{1.4}\]
The second equation in (1.3) states that the velocity field \(U\) is divergence-free. Then, the divergence theorem implies the following so-called _Gauss constraint_
\[\forall t\geqslant 0,\quad\int_{\mathbb{S}^{2}}\overline{\Omega}(t,\xi)d \sigma(\xi)=\int_{\mathbb{S}^{2}}\Omega(t,\xi)d\sigma(\xi)=0. \tag{1.5}\]
Notice that the first equality above is justified by
\[\int_{\mathbb{S}^{2}}\big{[}\overline{\Omega}(t,\xi)-\Omega(t,\xi)\big{]}d \sigma(\xi)=-4\pi\widetilde{\gamma}\int_{0}^{\pi}\cos(\theta)\sin(\theta)d \theta=\big{[}\pi\widetilde{\gamma}\cos(2\theta)\big{]}_{0}^{\pi}=0.\]
According to [4], the stream function \(\Psi\) can be computed from the vorticity \(\Omega\) through the following integral representation
\[\Psi(t,\xi)=\int_{\mathbb{S}^{2}}G(\xi,\xi^{\prime})\Omega(t,\xi^{\prime})d \sigma(\xi^{\prime}),\qquad G(\xi,\xi^{\prime})\triangleq\frac{1}{2\pi}\log \left(\frac{|\xi-\xi^{\prime}|_{\mathbb{R}^{3}}}{2}\right), \tag{1.6}\]
where \(|\cdot|_{\mathbb{R}^{3}}\) is the usual Euclidean norm in \(\mathbb{R}^{3}.\) In the colatitude/longitude coordinates, we have
\[G(\theta,\varphi,\theta^{\prime},\varphi^{\prime})=\frac{1}{4\pi}\log\Big{(}1 -\cos(\theta)\cos(\theta^{\prime})-\sin(\theta)\sin(\theta^{\prime})\cos( \varphi-\varphi^{\prime})\Big{)}-\frac{\log(2)}{4\pi}. \tag{1.7}\]
In what follows, we shall denote for simplicity
\[D(\theta,\theta^{\prime},\varphi,\varphi^{\prime})\triangleq 1-\cos(\theta) \cos(\theta^{\prime})-\sin(\theta)\sin(\theta^{\prime})\cos(\varphi-\varphi^{ \prime}). \tag{1.8}\]
Observe that we can write
\[D(\theta,\theta^{\prime},\varphi,\varphi^{\prime}) =1-\cos(\theta-\theta^{\prime})+\sin(\theta)\sin(\theta^{\prime} )\big{(}1-\cos(\varphi-\varphi^{\prime})\big{)}\] \[=2\Big{[}\sin^{2}\left(\frac{\theta-\theta^{\prime}}{2}\right)+ \sin(\theta)\sin(\theta^{\prime})\sin^{2}\left(\frac{\varphi-\varphi^{\prime} }{2}\right)\Big{]}. \tag{1.9}\]
The above function \(D\) will play an important role since it describes the singularity of the integral operator defining the stream function. Indeed, with this last expression, we recover that
\[D(\theta,\theta^{\prime},\varphi,\varphi^{\prime})\geqslant 0\qquad\text{and} \qquad D(\theta,\theta^{\prime},\varphi,\varphi^{\prime})=0\quad\Leftrightarrow \quad\theta=\theta^{\prime}\text{ and }\big{(}\varphi=\varphi^{\prime}\text{ or }\theta\in\{0,\pi\}\text{ or }\theta^{\prime}\in\{0,\pi\}\big{)}.\]
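These identities can also be checked symbolically. The short sketch below (assuming sympy is available) verifies that the two expressions (1.8) and (1.9) for \(D\) coincide and that \(|\xi-\xi^{\prime}|_{\mathbb{R}^{3}}^{2}=2D\), which reconciles the two formulas (1.6) and (1.7) for the Green function.

```python
import sympy as sp

th, thp, ph, php = sp.symbols('theta theta_p phi phi_p', real=True)

D_18 = 1 - sp.cos(th)*sp.cos(thp) - sp.sin(th)*sp.sin(thp)*sp.cos(ph - php)
D_19 = 2*(sp.sin((th - thp)/2)**2 + sp.sin(th)*sp.sin(thp)*sp.sin((ph - php)/2)**2)

xi  = sp.Matrix([sp.sin(th)*sp.cos(ph),   sp.sin(th)*sp.sin(ph),   sp.cos(th)])
xip = sp.Matrix([sp.sin(thp)*sp.cos(php), sp.sin(thp)*sp.sin(php), sp.cos(thp)])

assert (D_18 - D_19).equals(0)                         # (1.8) and (1.9) agree
assert ((xi - xip).dot(xi - xip) - 2*D_18).equals(0)   # |xi - xi'|^2 = 2*D
```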
Notice that once one works in the colatitude/longitude coordinates instead of the physical ones, the Euclidean norm is deformed, which makes the Green kernel anisotropic and turns the north and south poles into degenerate points. This will generate extra complexity later when dealing with the regularity of the stream function in the new coordinates.
Figure 1: Colatitude/longitude convention for spherical coordinates.
### Historical discussion
We shall expose here some relevant results linked to our work.
**Vortex patches in the plane**
Recall that the planar homogeneous incompressible Euler equations write
\[\begin{cases}\partial_{t}\mathbf{\omega}(t,x_{1},x_{2})+\mathbf{v}(t,x_{1},x_{2}) \cdot\nabla\mathbf{\omega}(t,x_{1},x_{2})=0,\\ \mathbf{v}(t,x_{1},x_{2})=\nabla^{\perp}_{\mathbb{R}^{2}}\mathbf{\Psi}(t,x_{1},x_ {2}),\\ \Delta_{\mathbb{R}^{2}}\mathbf{\Psi}(t,x_{1},x_{2})=\mathbf{\omega}(t,x_{1},x_{2}), \end{cases}\nabla^{\perp}_{\mathbb{R}^{2}}\triangleq\begin{pmatrix}-\partial_{x _{2}}\\ \partial_{x_{1}}\end{pmatrix},\qquad\Delta_{\mathbb{R}^{2}}\triangleq\partial^{2 }_{x_{1}}+\partial^{2}_{x_{2}}. \tag{1.10}\]
Vortex patches are weak solutions to (1.10) in the Yudovich class taking the form \(t\mapsto\mathbf{1}_{D_{t}}\), where \(D_{t}\) is a bounded planar domain. The vorticity jump here is normalized to \(1\). The dynamics is described by the evolution of the boundary \(\partial D_{t}\). More precisely, according to [55, p. 174], if \(z(t,\cdot):\mathbb{T}\to\partial D_{t}\) is a parametrization of (one connected component of) the boundary at time \(t\), then it must solve the following contour dynamics equation
\[\mathrm{Im}\Big{(}\partial_{t}z(t,x)\overline{\partial_{x}z(t,x)}\Big{)}= \partial_{x}\Big{(}\mathbf{\Psi}\big{(}t,z(t,x)\big{)}\Big{)}. \tag{1.11}\]
The solutions that we construct in this study are analogous to the V-states obtained in the planar case. The latter are particular vortex patches whose dynamics is given by a uniform rotation of the initial domain around its center of mass (fixed to be the origin), namely \(D_{t}=e^{\mathrm{i}\Omega t}D_{0}\) with \(\Omega\in\mathbb{R}.\) As remarked by Rankine in 1858, any radial domain is a V-state rotating with any angular velocity \(\Omega\in\mathbb{R}\). Later, Kirchhoff [62] discovered non-trivial explicit examples of V-states with elliptic shapes. Close to the unit disc, such solutions were first obtained numerically by Deem and Zabusky [26] and then analytically by Burbea [5] through bifurcation techniques. He constructed a countable family of local curves of V-states with \(\mathbf{m}\)-fold symmetries (i.e. invariance by a \(\frac{2\pi}{\mathbf{m}}\) angular rotation) bifurcating from the disc at the angular velocities
\[\Omega_{\mathbf{m}}=\frac{\mathbf{m}-1}{2\mathbf{m}}. \tag{1.12}\]
The global continuation of Burbea's bifurcation branches was obtained in [48] through global bifurcation arguments. Some rigidity results giving necessary conditions on the angular velocity to obtain non-trivial V-states can be found in [30, 51, 41]. In the last decade, there have been intensive rigorous studies on the subject, giving more properties of the V-states and exploring their existence around different topological structures. In particular, the bifurcation from the annulus
\[A_{b}\triangleq\big{\{}z\in\mathbb{C}\quad\text{s.t.}\quad b<|z|<1\big{\}}, \quad b\in(0,1),\]
has been studied by Hmidi-de la Hoz-Mateu-Verdera [25]. They proved that under the condition
\[\mathfrak{p}(b,\mathbf{m})\triangleq 1+b^{\mathbf{m}}-\frac{1-b^{2}}{2} \mathbf{m}<0,\]
there are exactly two branches of periodic vortex patches, with two interfaces, emerging at the angular velocities
\[\Omega^{\pm}_{\mathbf{m}}(b)\triangleq\frac{1-b^{2}}{4}\pm\frac{1}{2\mathbf{ m}}\sqrt{\left(\frac{1-b^{2}}{2}\mathbf{m}-1\right)^{2}-b^{2\mathbf{m}}}. \tag{1.13}\]
The degenerate case \(\mathfrak{p}(b,\mathbf{m})=0\) has been discussed in [53, 68]. Periodic multiple patches were found via a desingularization of an appropriate distribution of point vortices. The first work studying the desingularization of two point vortices using contour dynamics equations is due to Hmidi-Mateu [54]. Other extensions, in the same spirit, have been achieved in [31, 32, 45, 50]. Similar results were obtained using variational arguments in [6, 7, 10, 67] or via gluing techniques in [21, 22]. Recently, the authors in [33] studied the global continuation of the desingularization of the vortex pairs. The existence of a non-uniform rotating vorticity distribution has been treated in [16, 37, 36]. In [40], Gomez-Serrano, Park and Shi have constructed stationary configurations of multi-layered patches with finite kinetic energy using Nash-Moser techniques. Some additional references can be found in [15, 24, 55, 56, 52].
Many of the results, mentioned above, apply not only to the planar Euler equations but also to other active scalar equations such as the generalized surface quasi-geostrophic equation or the quasi-geostrophic shallow-water equations, see [1, 8, 9, 11, 14, 16, 17, 23, 27, 31, 32, 33, 34, 35, 37, 38, 44, 45, 48, 50, 52, 53, 54, 56, 58, 64, 65].
We end this discussion by very recent new perspectives concerning the existence of quasi-periodic patches using the Nash-Moser scheme together with KAM theory, see [3, 39, 46, 47, 49, 57, 64].
**Around stationary solutions on the rotating unit sphere**
Looking for stationary solutions to (1.3) is equivalent to solving the following equation on the stream function
\[\nabla^{\perp}\Psi(\theta,\varphi)\cdot\nabla\Big{(}\Delta\Psi(\theta,\varphi)-2 \widetilde{\gamma}\cos(\theta)\Big{)}=0. \tag{1.14}\]
If \(\Psi(\theta,\varphi)=\Psi(\theta)\) is longitude independent, then it automatically solves (1.14) because in this case, \(\nabla^{\perp}\Psi(\theta)\) is collinear to \(\mathbf{e}_{\varphi}\) and \(\nabla\Big{(}\Delta\Psi(\theta)-2\widetilde{\gamma}\cos(\theta)\Big{)}\) is collinear to \(\mathbf{e}_{\theta}.\) Such solutions are called _zonal solutions_. Also, one can easily check that any solution of the semilinear elliptic problem
\[\Delta\Psi(\theta,\varphi)-2\widetilde{\gamma}\cos(\theta)=F\big{(}\Psi(\theta,\varphi)\big{)}, \tag{1.15}\]
with \(F\in C^{1}(\mathbb{R},\mathbb{R}),\) solves (1.14), but the converse is not true in general. Constantin and Germain [19] showed that any solution to (1.15) with \(F^{\prime}>-6\) must be zonal (modulo rotation) and stable in \(H^{2}(\mathbb{S}^{2})\) provided the additional constraint \(F^{\prime}<0.\) Notice that \(-6\) is the second eigenvalue of the Laplace-Beltrami operator. The zonal Rossby-Haurwitz stream functions of degree \(n\in\mathbb{N}\) are special stationary solutions of the form
\[\Psi_{n}(\theta)=\beta Y_{n}^{0}(\theta)+\frac{2\widetilde{\gamma}}{n(n+1)-2} \cos(\theta),\qquad\beta\in\mathbb{R}^{*},\]
where \(Y_{n}^{0}\) is the spherical harmonic. We refer the reader to [2] for an introduction to spherical harmonics. In [19], the authors also discussed the local and global bifurcation of non-zonal solutions to (1.15) from Rossby-Haurwitz waves. They also proved the stability in \(H^{2}(\mathbb{S}^{2})\) of Rossby-Haurwitz zonal solutions of degree \(2\) as well as the instability in \(H^{2}(\mathbb{S}^{2})\) of more general non-zonal Rossby-Haurwitz type solutions. Very recently, the stability of the degree \(2\) Rossby-Haurwitz waves in \(L^{p}(\mathbb{S}^{2})\) spaces with \(p\in(1,\infty)\) has been obtained by Cao-Wang-Zuo [12]. Recently, Nualart [63] proved the existence of non-zonal stationary solutions Gevrey-close to the zonal Rossby-Haurwitz stream functions of degree \(2\). The Lyapunov stability of zonal monotone vorticities in \(L^{p}(\mathbb{S}^{2})\) for \(p\in(2,\infty)\) was discussed by Caprino-Marchioro [13]. We mention that stationary solutions to (1.3) correspond to traveling solutions of the non-rotating case (\(E_{0}\)). More generally, we have the following result, see [19, 63].
**Lemma 1.1**.: _We consider two vorticities \(\Omega,\widetilde{\Omega}\) related through_
\[\Omega(t,\theta,\varphi)=-2c\cos(\theta)+\widetilde{\Omega}(t,\theta,\varphi- ct),\]
_with \(c\in\mathbb{R}\), and associated to stream functions \(\Psi,\widetilde{\Psi}\) and velocity fields \(U,\widetilde{U}\):_
\[\Psi(t,\theta,\varphi)=c\cos(\theta)+\widetilde{\Psi}(t,\theta,\varphi-ct), \qquad U(t,\theta,\varphi)=c\sin(\theta)\mathbf{e}_{\varphi}+\widetilde{U}(t, \theta,\varphi-ct).\]
_Then, the following are equivalent_
1. \((\Omega,\Psi,U)\) _is a solution to_ \((E_{\widetilde{\gamma}}).\)__
2. \((\widetilde{\Omega},\widetilde{\Psi},\widetilde{U})\) _is a solution to_ \((E_{\widetilde{\gamma}+c}).\)__
For later purposes, we shall prove that a longitude independent (absolute) vorticity generates a zonal (hence stationary) flow. This is given by the following lemma, whose proof is postponed to Appendix A.4.
**Lemma 1.2**.: _For any \(\alpha\in\mathbb{R},\) we introduce the rotation of angle \(\alpha\) around the \(z\) axis_
\[\mathcal{R}(\alpha)\triangleq\begin{pmatrix}\cos(\alpha)&-\sin(\alpha)&0\\ \sin(\alpha)&\cos(\alpha)&0\\ 0&0&1\end{pmatrix}\in SO_{3}(\mathbb{R}).\]
_Assume that_
\[\forall\alpha\in\mathbb{R},\quad\forall\xi\in\mathbb{S}^{2},\quad\Omega \big{(}\mathcal{R}(\alpha)\xi\big{)}=\Omega(\xi),\]
_or equivalently_
\[\forall\alpha\in\mathbb{R},\quad\forall\xi\in\mathbb{S}^{2},\quad\overline{ \Omega}\big{(}\mathcal{R}(\alpha)\xi\big{)}=\overline{\Omega}(\xi),\]
_then_
\[\forall\alpha\in\mathbb{R},\quad\forall\xi\in\mathbb{S}^{2},\quad\Psi\big{(} \mathcal{R}(\alpha)\xi\big{)}=\Psi(\xi).\]
_This means that the flow is zonal (and thus stationary)._
**Patch type solutions on the rotating \(2\)-sphere**
Our analysis is strongly motivated by previous numerical works concerning the existence of patch type solutions for the rotating \(2\)-sphere. The pioneering results are due to Dritschel-Polvani [28, 29] where they considered the sphere at rest \((\widetilde{\gamma}=0)\) and found, numerically, vortex cap solutions with one and two interfaces. They also studied the numerical nonlinear stability. Later Kim [59] described the free boundary problem for patch type solutions in the rotating \(2\)-sphere \((\widetilde{\gamma}\neq 0)\) by using the stereographic projection. In [61], Kim, Sakajo and Sohn numerically observed the existence of vortex caps (only one interface) and vortex bands (two interfaces) [60]. They also showed the linear stability of those solutions.
### Dynamics of vortex cap solutions
Here, we introduce the notion of vortex cap solutions on the sphere, which are the analogue of vortex patches in the plane. Then, we derive the fundamental contour dynamics equations, introduced in this work to track the evolution of the cap interfaces. Our formulation (1.21) follows the ideas of [55, p. 174], implemented in the context of vortex patches, but adapted to the non-Euclidean geometry of the sphere.
Fix \(M\in\mathbb{N}\setminus\{0,1\}\) and \((\omega_{k})_{1\leqslant k\leqslant M}\in\mathbb{R}^{M}\) such that
\[\forall k\in\llbracket 1,M-1\rrbracket,\quad\omega_{k}\neq\omega_{k+1}. \tag{1.16}\]
Consider a partition of the unit sphere in the form
\[\mathbb{S}^{2}=\bigsqcup_{k=1}^{M}\mathscr{C}_{k}(0),\]
where for any \(k\in\llbracket 1,M-1\rrbracket\), the boundary \(\Gamma_{k}(0)\triangleq\partial\mathscr{C}_{k}(0)\cap\partial\mathscr{C}_{k+1 }(0)\) is diffeomorphic to a circle. Take an initial condition in the form
\[\overline{\Omega}(0,\cdot)=\sum_{k=1}^{M}\omega_{k}\mathbf{1}_{\mathscr{C}_{k} (0)}. \tag{1.17}\]
The Gauss constraint (1.5) requires the following additional condition
\[\sum_{k=1}^{M}\omega_{k}\sigma\big{(}\mathscr{C}_{k}(0)\big{)}=0. \tag{1.18}\]
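In the simplest case \(M=2\), with the interface given by a circle of colatitude \(\theta_{0}\), condition (1.18) becomes completely explicit. The following symbolic sketch (assuming sympy is available) verifies that it reduces to the relation \(\cos(\theta_{0})=\frac{\omega_{N}+\omega_{S}}{\omega_{N}-\omega_{S}}\) appearing in Theorem 1.1 and Lemma 2.1 below.

```python
import sympy as sp

th, th0, wN, wS = sp.symbols('theta theta_0 omega_N omega_S', real=True)

# Areas of the two spherical caps separated by the colatitude circle {theta = theta0}.
sigma_N = sp.integrate(2*sp.pi*sp.sin(th), (th, 0, th0))       # 2*pi*(1 - cos(theta0))
sigma_S = sp.integrate(2*sp.pi*sp.sin(th), (th, th0, sp.pi))   # 2*pi*(1 + cos(theta0))

gauss = wN*sigma_N + wS*sigma_S                                # condition (1.18) for M = 2
assert sp.expand(gauss - 2*sp.pi*((wN + wS) - (wN - wS)*sp.cos(th0))) == 0
# Hence (1.18) holds if and only if cos(theta0) = (omega_N + omega_S)/(omega_N - omega_S).
```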
Observe that, by virtue of (1.3)-(1.4), the absolute vorticity \(\overline{\Omega}\) solves the nonlinear transport equation
\[\partial_{t}\overline{\Omega}+U\cdot\nabla\overline{\Omega}=0.\]
Since the singularity of the Green function in (1.6) is logarithmic, similarly to Yudovich's theory [69] in the planar case, one can expect to obtain existence and uniqueness of a global-in-time weak solution which is Lagrangian, namely
\[\forall t\geqslant 0,\quad\forall\xi\in\mathbb{S}^{2},\quad\overline{\Omega} (t,\xi)=\overline{\Omega}\big{(}0,\Phi_{t}^{-1}(\xi)\big{)},\]
where
\[\forall\xi\in\mathbb{S}^{2},\quad\Phi_{t}(\xi)=\xi+\int_{0}^{t}U\big{(}s,\Phi _{s}(\xi)\big{)}ds.\]
Applying this remark with the initial condition (1.17) gives the following structure of the solution at any later time \(t\geqslant 0\)
\[\overline{\Omega}(t,\cdot)=\sum_{k=1}^{M}\omega_{k}\mathbf{1}_{\mathscr{C}_{k }(t)},\qquad\text{with}\qquad\forall k\in\llbracket 1,M\rrbracket,\quad \mathscr{C}_{k}(t)\triangleq\Phi_{t}\big{(}\mathscr{C}_{k}(0)\big{)}. \tag{1.19}\]
Since \(U\) is solenoidal, then the flow \(t\mapsto\Phi_{t}\) is measure preserving
\[\forall k\in\llbracket 1,M\rrbracket,\quad\sigma\big{(}\mathscr{C}_{k}(t) \big{)}=\sigma\big{(}\mathscr{C}_{k}(0)\big{)}.\]
Any solution in the form (1.19) satisfying the conditions (1.16) and (1.18) is called a _vortex cap solution_. From now on, we fix a \(k\in\llbracket 1,M-1\rrbracket\). Assume that the initial boundary \(\Gamma_{k}(0)\) can be described as the zero level set of a certain \(C^{1}\) regular function \(\mathtt{h}_{k}:\mathbb{S}^{2}\to\mathbb{R}\), namely
\[\Gamma_{k}(0)=\big{\{}\xi\in\mathbb{S}^{2}\quad\text{s.t.}\quad\mathtt{h}_{k} (\xi)=0\big{\}}.\]
Let us consider the following quantity
\[\forall t\geqslant 0,\quad\forall\xi\in\mathbb{S}^{2},\quad F_{k}(t,\xi) \triangleq\mathtt{h}_{k}\big{(}\Phi_{t}^{-1}(\xi)\big{)}. \tag{1.20}\]
Then, by construction \(F_{k}(t,\cdot)\) describes the boundary \(\Gamma_{k}(t)\triangleq\partial\mathscr{C}_{k}(t)\cap\partial\mathscr{C}_{k+1 }(t)\). More precisely,
\[\Gamma_{k}(t)=\big{\{}\xi\in\mathbb{S}^{2}\quad\text{s.t.}\quad F_{k}(t,\xi)= 0\big{\}}.\]
Differentiating the relation (1.20) with respect to time yields
\[\forall t\geqslant 0,\quad\forall\xi\in\mathbb{S}^{2},\quad\partial_{t}F_{k}\big{(}t,\Phi_{t}(\xi)\big{)}+U\big{(}t,\Phi_{t}(\xi)\big{)}\cdot\nabla F_{k}\big{(}t,\Phi_{t}(\xi)\big{)}=0.\]
Now, take a parametrization \(z_{k}(t,\cdot):\mathbb{T}\to\Gamma_{k}(t).\) We have
\[\forall t\geqslant 0,\quad\forall x\in\mathbb{T},\quad F_{k}\big{(}t,z_{k}(t,x) \big{)}=0.\]
Differentiating the previous relation with respect to time implies
\[\forall t\geqslant 0,\quad\forall x\in\mathbb{T},\quad\partial_{t}F_{k}\big{(}t, z_{k}(t,x)\big{)}+\partial_{t}z_{k}(t,x)\cdot\nabla F_{k}\big{(}t,z_{k}(t,x) \big{)}=0.\]
Putting together the foregoing calculations, we deduce that
\[\Big{[}\partial_{t}z_{k}(t,x)-U\big{(}t,z_{k}(t,x)\big{)}\Big{]}\cdot\nabla F_ {k}\big{(}t,z_{k}(t,x)\big{)}=0.\]
The above scalar products can be understood either as taken in the tangent space \(T_{z_{k}(t,x)}\mathbb{S}^{2}\cong\mathbb{R}^{2}\) or in the classical Euclidean space \(\mathbb{R}^{3}\), both are equivalent. Indeed, in the spherical coordinates the Euclidean metric writes
\[\mathtt{g}_{\mathbb{R}^{3}}(r,\theta,\varphi)\triangleq dr^{2}+r^{2}\big{(}d \theta^{2}+\sin^{2}(\theta)d\varphi^{2}\big{)}.\]
Since the sphere is described by the equation \(r=1\), then the induced metric of \(\mathtt{g}_{\mathbb{R}^{3}}\) on \(\mathbb{S}^{2}\) is indeed \(\mathtt{g}_{\mathbb{S}^{2}}\) as defined in (1.1). On \(T_{z_{k}(t,x)}\mathbb{S}^{2}\), the operator \(J\) acts as a rotation of \(-\frac{\pi}{2}.\) Consequently, since \(\partial_{x}z_{k}(t,x)\) is tangential to \(\Gamma_{k}(t)\) and contained in \(T_{z_{k}(t,x)}\mathbb{S}^{2}\), then \(J\partial_{x}z_{k}(t,x)\) is orthogonal to \(\Gamma_{k}(t)\) and contained in \(T_{z_{k}(t,x)}\mathbb{S}^{2}.\) In addition, since \(\Gamma_{k}(t)\) is a level set of \(F_{k}(t,\cdot)\), then \(\nabla F_{k}\big{(}t,z_{k}(t,x)\big{)}\) is also orthogonal to \(\Gamma_{k}(t)\) and contained in \(T_{z_{k}(t,x)}\mathbb{S}^{2}.\) We deduce that \(J\partial_{x}z_{k}(t,x)\) and \(\nabla F_{k}\big{(}t,z_{k}(t,x)\big{)}\) are proportional, which leads to
\[\forall t\geqslant 0,\quad\forall x\in\mathbb{T},\quad\Big{[}\partial_{t}z_{k} (t,x)-U\big{(}t,z_{k}(t,x)\big{)}\Big{]}\cdot\big{(}J\partial_{x}z_{k}(t,x) \big{)}=0.\]
But, using that \(J^{T}=J^{-1}=-J\) and \(\nabla^{\perp}=J\nabla\), we obtain
\[U\big{(}t,z_{k}(t,x)\big{)}\cdot\big{(}J\partial_{x}z_{k}(t,x) \big{)} =\nabla^{\perp}\Psi\big{(}t,z_{k}(t,x)\big{)}\cdot\big{(}J\partial _{x}z_{k}(t,x)\big{)}\] \[=-\Big{(}J\nabla^{\perp}\Psi\big{(}t,z_{k}(t,x)\big{)}\Big{)} \cdot\partial_{x}z_{k}(t,x)\] \[=\nabla\Psi\big{(}t,z_{k}(t,x)\big{)}\cdot\partial_{x}z_{k}(t,x)\] \[=\partial_{x}\Big{(}\Psi\big{(}t,z_{k}(t,x)\big{)}\Big{)}.\]
Hence the contour dynamics equations for the vortex cap solutions are
\[\forall k\in\llbracket 1,M-1\rrbracket,\quad\forall t\geqslant 0,\quad\forall x \in\mathbb{T},\quad\partial_{t}z_{k}(t,x)\cdot\big{(}J\partial_{x}z_{k}(t,x) \big{)}=\partial_{x}\Big{(}\Psi\big{(}t,z_{k}(t,x)\big{)}\Big{)}, \tag{1.21}\]
which are comparable to (1.11).
### Main results
Let us now present our main results. First observe that Lemma 1.2 implies that any spherical vortex cap in the form
\[\overline{\Omega}(\theta)=\omega_{1}\mathbf{1}_{0<\theta<\theta_{1}}+\omega_{ 2}\mathbf{1}_{\theta_{1}\leqslant\theta<\theta_{2}}+\ldots+\omega_{M-1} \mathbf{1}_{\theta_{M-2}\leqslant\theta<\theta_{M-1}}+\omega_{M}\mathbf{1}_{ \theta_{M-1}\leqslant\theta<\pi},\]
with
\[M\in\mathbb{N}\setminus\{0,1\},\qquad\theta_{0}\triangleq 0<\theta_{1}<\ldots< \theta_{M-1}<\pi\triangleq\theta_{M},\qquad\forall k\in\llbracket 1,M-1\rrbracket,\quad \omega_{k}\neq\omega_{k+1},\]
and supplemented by the Gauss condition
\[\sum_{k=1}^{M}\omega_{k}\big{(}\cos(\theta_{k-1})-\cos(\theta_{k})\big{)}=0,\]
generates a stationary vortex cap solution to (1.3). In the sequel, we shall focus on the cases \(M=2\) and \(M=3\) (the latter will also be referred to as _vortex bands_ or _vortex strips_). More precisely, we study the existence of non-trivial periodic solutions living close to these structures. Due to the symmetry of the stationary vortex caps, we will look for rotating solutions around the vertical axis at uniform velocity \(c\), that is
\[\forall t\geqslant 0,\quad\forall\xi\in\mathbb{S}^{2},\quad\overline{\Omega}(t, \xi)=\overline{\Omega}\big{(}0,\mathcal{R}(ct)\xi\big{)},\]
and satisfy for some fixed \(\mathbf{m}\in\mathbb{N}^{*}\) the \(\mathbf{m}\)-fold property
\[\forall\xi\in\mathbb{S}^{2},\quad\overline{\Omega}\big{(}0,\mathcal{R}( \tfrac{2\pi}{\mathbf{m}})\xi\big{)}=\overline{\Omega}(0,\xi).\]
Our first result concerns the one-interface case \(M=2\) and reads as follows.
**Theorem 1.1**.: _Let \(\widetilde{\gamma}\in\mathbb{R}\), \(\mathbf{m}\in\mathbb{N}^{*}\) and \(\theta_{0}\in(0,\pi).\) Consider \(\omega_{N},\omega_{S}\in\mathbb{R}\) such that_
\[\frac{\omega_{N}+\omega_{S}}{\omega_{N}-\omega_{S}}=\cos(\theta_{0}).\]
_There exists a branch of \(\mathbf{m}\)-fold uniformly rotating vortex cap solutions to (1.3) with one interface bifurcating from_
\[\overline{\Omega}_{\mathrm{rc}}(\theta)\triangleq\omega_{N}\mathbf{1}_{0< \theta<\theta_{0}}+\omega_{S}\mathbf{1}_{\theta_{0}\leqslant\theta<\pi},\]
_at the velocity_
\[c_{\mathbf{m}}(\widetilde{\gamma})\triangleq\widetilde{\gamma}-(\omega_{N}- \omega_{S})\frac{\mathbf{m}-1}{2\mathbf{m}}.\]
**Remark 1.1**.: _Let us make the following remarks._
1. _The bifurcation points_ \(c_{\mathbf{m}}(\widetilde{\gamma})\) _correspond to a shift by the rotation speed_ \(\widetilde{\gamma}\) _of Burbea's frequencies (_1.12_) with a vorticity jump_ \([\overline{\Omega}]\triangleq\omega_{N}-\omega_{S}=-1\)_. Hence, in the rotation frame of the sphere, the solutions bifurcate at Burbea's frequencies._
2. _The local curve is parametrized in particular with_ \(\varepsilon\in(-\varepsilon_{0},\varepsilon_{0})\mapsto c_{\mathbf{m}}^{ \varepsilon}(\widetilde{\gamma})\in\mathbb{R}\) _for some small_ \(\varepsilon_{0}>0\)_. Denoting_ \(\widetilde{\gamma}_{\mathbf{m}}\triangleq(\omega_{N}-\omega_{S})\frac{ \mathbf{m}-1}{2\mathbf{m}},\) _we have_ \(c_{\mathbf{m}}^{0}(\widetilde{\gamma}_{\mathbf{m}})=0.\) _Since the dependence of_ \(c_{\mathbf{m}}(\widetilde{\gamma})\) _in_ \(\widetilde{\gamma}\) _is affine, an application of the implicit function theorem allows to construct a curve_ \(\varepsilon\in(-\varepsilon_{0},\varepsilon_{0})\mapsto\widetilde{\gamma}_{ \mathbf{m}}^{\varepsilon}\in\mathbb{R}\) _such that (up to reducing the size of_ \(\varepsilon_{0}\)_)_ \[\forall\varepsilon\in(-\varepsilon_{0},\varepsilon_{0}),\quad c_{\mathbf{m}}^ {\varepsilon}\left(\widetilde{\gamma}_{\mathbf{m}}^{\varepsilon}\right)=0.\] _This means that we can construct a branch of non-trivial_ \(\mathbf{m}\)_-fold solutions which are stationary in the geocentric frame. One could also obtain these solutions by implementing bifurcation theory with the parameter_ \(\widetilde{\gamma}.\)__
3. _The bifurcation analysis is performed in Holder spaces, but similarly to the planar case, we expect our solutions to be analytic._
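The following short numerical sketch (assuming numpy is available; the vorticity values are arbitrary illustrations, not data from this work) evaluates the bifurcation velocities \(c_{\mathbf{m}}(\widetilde{\gamma})\) of Theorem 1.1 together with the rotation speeds \(\widetilde{\gamma}_{\mathbf{m}}\) of Remark 1.1-2 at which the \(\mathbf{m}\)-fold branch becomes stationary in the geocentric frame.

```python
import numpy as np

def c_m(m, gamma, wN, wS):
    # Bifurcation velocity of Theorem 1.1: gamma - (wN - wS)*(m - 1)/(2*m).
    return gamma - (wN - wS)*(m - 1)/(2*m)

wN, wS = 1.0, -3.0                          # illustrative values; (wN + wS)/(wN - wS) = -1/2
theta0 = np.arccos((wN + wS)/(wN - wS))     # compatible interface colatitude, here 2*pi/3

for m in range(2, 7):
    gamma_m = (wN - wS)*(m - 1)/(2*m)       # Remark 1.1-2: speed making the m-fold branch stationary
    print(m, c_m(m, 0.0, wN, wS), c_m(m, gamma_m, wN, wS))  # last entry vanishes by construction
```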
Next, we shall present our second result dealing with two interfaces (\(M=3\)). In this case, the computations in the spectral study are much more involved. Our result is the following.
**Theorem 1.2**.: _Let \(\widetilde{\gamma}\in\mathbb{R}\) and \(0<\theta_{1}<\theta_{2}<\pi.\) Fix \(\omega_{N},\omega_{C},\omega_{S}\in\mathbb{R}\) such that_
\[\omega_{N}+\omega_{S}=(\omega_{N}-\omega_{C})\cos(\theta_{1})+(\omega_{C}- \omega_{S})\cos(\theta_{2}). \tag{1.22}\]
_Consider the following non-degeneracy conditions_
1. \(\omega_{S}\cos^{2}\left(\frac{\theta_{1}}{2}\right)+\omega_{N}\sin^{2}\left( \frac{\theta_{2}}{2}\right)\neq 0;\)__
2. \(\omega_{S}\cos^{2}\left(\frac{\theta_{1}}{2}\right)+\omega_{N}\sin^{2}\left( \frac{\theta_{2}}{2}\right)=0\) _supplemented by one of the following properties_ \[\begin{array}{llllll}\mathbf{(H1+)}&\omega_{C}>0,&\omega_{N}>0,&\omega_{S}< 0,\\ \mathbf{(H2+)}&\omega_{C}>0,&\omega_{N}<0,&\omega_{S}>0\qquad\text{and}&2\cos^{ 2}\left(\frac{\theta_{1}}{2}\right)>\sin^{2}\left(\frac{\theta_{2}}{2}\right),\\ \mathbf{(H3+)}&\omega_{C}<0,&\omega_{N}>0,&\omega_{S}<0,&\\ \mathbf{(H4+)}&\omega_{C}<0,&\omega_{N}<0,&\omega_{S}>0\qquad\text{and}&2\sin^{ 2}\left(\frac{\theta_{2}}{2}\right)>\cos^{2}\left(\frac{\theta_{1}}{2}\right),\\ \mathbf{(H1-)}&\omega_{C}>0,&\omega_{N}<0,&\omega_{S}>0,&\\ \mathbf{(H2-)}&\omega_{C}>0,&\omega_{N}>0,&\omega_{S}<0\qquad\text{and}&2\sin^{ 2}\left(\frac{\theta_{2}}{2}\right)>\cos^{2}\left(\frac{\theta_{1}}{2}\right),\\ \mathbf{(H3-)}&\omega_{C}<0,&\omega_{N}<0,&\omega_{S}>0,&\\ \mathbf{(H4-)}&\omega_{C}<0,&\omega_{N}>0,&\omega_{S}<0\qquad\text{and}&2\cos^{ 2}\left(\frac{\theta_{1}}{2}\right)>\sin^{2}\left(\frac{\theta_{2}}{2}\right).\end{array}\]
_Let \(\kappa\in\{+,-\}.\) Assume that either condition 1 holds, or condition 2 holds together with \((\mathbf{Hk}\kappa)\) for some \(k\in[\![1,4]\!]\). There exists \(N(\theta_{1},\theta_{2})\triangleq N(\theta_{1},\theta_{2},\omega_{N},\omega_{S},\omega_{C})\in\mathbb{N}^{*}\) such that for any \(\mathbf{m}\in\mathbb{N}^{*}\) with \(\mathbf{m}\geqslant N(\theta_{1},\theta_{2})\), there exists a branch of \(\mathbf{m}\)-fold uniformly rotating vortex strips for (1.3) bifurcating from_
\[\overline{\Omega}_{\mathrm{rc}2}(\theta)\triangleq\omega_{N}\mathbf{1}_{0< \theta<\theta_{1}}+\omega_{C}\mathbf{1}_{\theta_{1}\leqslant\theta<\theta_{2}}+ \omega_{S}\mathbf{1}_{\theta_{2}\leqslant\theta<\pi},\]
_at the velocity_
\[c_{\mathbf{m}}^{\kappa}(\widetilde{\gamma},\theta_{1},\theta_{2}) \triangleq\widetilde{\gamma}+\frac{\omega_{S}}{4\sin^{2}\left(\frac{\theta_{1}}{2} \right)}-\frac{\omega_{N}}{4\cos^{2}\left(\frac{\theta_{1}}{2}\right)}+\frac{ \omega_{N}-\omega_{S}}{4\mathbf{m}}\] \[\qquad+\frac{\kappa}{4}\sqrt{\left(\frac{\omega_{S}}{\sin^{2}\left( \frac{\theta_{2}}{2}\right)}+\frac{\omega_{N}}{\cos^{2}\left(\frac{\theta_{1}}{2} \right)}-\frac{\omega_{N}+\omega_{S}-2\omega_{C}}{\mathbf{m}}\right)^{2}+ \frac{1}{\mathbf{m}^{2}}(\omega_{N}-\omega_{C})(\omega_{C}-\omega_{S})\tan^{2 \mathbf{m}}\left(\frac{\theta_{1}}{2}\right)\cot^{2\mathbf{m}}\left(\frac{ \theta_{2}}{2}\right)}.\]
**Remark 1.2**.: _Let us make the following remarks._
1. _The non-degeneracy conditions are required to avoid spectral collisions between the two spectral components_ \((c_{n}^{+})_{n\geq N(\theta_{1},\theta_{2})}\) _and_ \((c_{n}^{-})_{n\geq N(\theta_{1},\theta_{2})}\) _when applying bifurcation techniques. This situation does not appear in the planar case [25], where both parts of the spectrum are well separated._
2. _A priori, in this case the bifurcation points of Theorem_ 1.2 _are independent from the planar case (_1.13_) even if they globally have a similar structure._
3. _Similarly to Remark_ 1.1_-_2_, we can construct, by implicit function theorem, at most two curves of non-trivial_ \(\mathbf{m}\)_-fold solutions which are stationary in the geocentric frame._
4. _The cases with more interfaces (_\(M\geqslant 4\)_) seem much more involved to study._
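As an illustration of how admissible data for Theorem 1.2 may be produced, the sketch below (assuming numpy is available; all values are arbitrary) fixes \(\theta_{1}<\theta_{2}\) and \((\omega_{N},\omega_{C})\), solves the constraint (1.22) for \(\omega_{S}\), and then tests the non-degeneracy condition 1.

```python
import numpy as np

theta1, theta2 = 0.8, 2.0      # illustrative angles with 0 < theta1 < theta2 < pi
wN, wC = 1.0, -0.5             # illustrative vorticity values

# Solve the constraint (1.22) for omega_S:
# wN + wS = (wN - wC)*cos(theta1) + (wC - wS)*cos(theta2).
wS = ((wN - wC)*np.cos(theta1) + wC*np.cos(theta2) - wN) / (1.0 + np.cos(theta2))
assert np.isclose(wN + wS, (wN - wC)*np.cos(theta1) + (wC - wS)*np.cos(theta2))

# Non-degeneracy condition 1 of Theorem 1.2.
nd = wS*np.cos(theta1/2)**2 + wN*np.sin(theta2/2)**2
print("condition 1 holds:", not np.isclose(nd, 0.0))
```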
### Organization of the paper
This work is organized as follows. In Section 2 we provide the proof of Theorem 1.1 showing the analytical existence of non-trivial vortex caps with one interface bifurcating from the trivial one. First, we characterize the existence of such solutions through the non-trivial roots of a nonlinear and nonlocal functional. Later, we give the spectral properties of this functional and conclude the proof of the first main result. In Section 3 we analyze the two-interface problem (Theorem 1.2), which is more delicate since one has to study a coupled nonlinear system. The spectral study is more complex in this case, and we can show the existence of non-trivial vortex strips with large \(\mathbf{m}\)-fold symmetries under non-degeneracy conditions. Finally, the Appendix gives the proofs of some technical lemmas and states the Crandall-Rabinowitz Theorem.
### Acknowledgments
The authors would like to thank David G. Dritschel for suggesting the problem. The work of Claudia Garcia has been supported by the MINECO-Feder (Spain) research grant number RTI2018-098850-B-I00, the Junta de Andalucia (Spain) Project FQM 954, the Severo Ochoa Programme for Centres of Excellence in R&D (CEX2019-000904-S) and by the PID2021-124195NB-C32. The work of Zineb Hassainia has been supported by Tamkeen under the NYU Abu Dhabi Research Institute grant of the center SITE. The work of Emeric Roulley has been supported by PRIN 2020XB3EFL, "Hamiltonian and Dispersive PDEs".
## 2 The one-interface case
This section is devoted to the proof of Theorem 1.1 dealing with the case of one interface (\(M=2\)). We first discuss the stationary flat cap solution. Then, by choosing a suitable ansatz, we rewrite the vortex cap equation (1.21), which leads us to reformulate the problem as the search for zeros of a nonlinear and nonlocal functional, see (2.4). Finally, we implement bifurcation arguments in order to show the existence of non-trivial roots of this functional. Hence, the proof relies on checking all the hypotheses of the Crandall-Rabinowitz Theorem, see Appendix A.3.
Figure 2: Representation of admissible couples \((\theta_{1},\theta_{2})\in(0,\pi)^{2}\) with \(\theta_{1}<\theta_{2}\).
### Equation of interest
First, we shall discuss some properties of the spherical stationary solutions with one interface.
**Lemma 2.1**.: _Let \(\theta_{0}\in(0,\pi).\) For any \(\omega_{N},\omega_{S}\in\mathbb{R}\) such that_
\[\frac{\omega_{N}+\omega_{S}}{\omega_{N}-\omega_{S}}=\cos(\theta_{0}), \tag{2.1}\]
_the following function describing the flat cap (FC)_
\[\overline{\Omega}_{\textsc{FC}}(\theta)\triangleq\omega_{N}\mathbf{1}_{0<\theta<\theta_{0}}+\omega_{S}\mathbf{1}_{\theta_{0}\leqslant\theta<\pi},\]
_is a stationary solution to Euler equations. In addition,_
\[\partial_{\theta}\Psi_{\textsc{FC}}(\theta_{0})=\left(\frac{\omega_{N}-\omega _{S}}{2}-\widetilde{\gamma}\right)\sin(\theta_{0}). \tag{2.2}\]
Proof.: \(\blacktriangleright\) Observe that
\[\forall\alpha\in\mathbb{R},\quad\forall\xi\in\mathbb{S}^{2},\quad\overline{ \Omega}_{\textsc{FC}}\big{(}\mathcal{R}(\alpha)\xi\big{)}=\overline{\Omega}_ {\textsc{FC}}(\xi).\]
Hence, Lemma 1.2 applies and proves that this is a stationary solution.
\(\blacktriangleright\) Notice that the constraint (2.1) is required since (1.5) and (1.2) imply
\[0=\int_{\mathbb{S}^{2}}\Omega_{\textsc{FC}}(t,\xi)d\sigma(\xi) =\int_{0}^{2\pi}\int_{0}^{\pi}\Omega_{\textsc{FC}}(t,\theta, \varphi)\sin(\theta)d\theta d\varphi\] \[=2\pi\left(\omega_{N}\int_{0}^{\theta_{0}}\sin(\theta)d\theta+ \omega_{S}\int_{\theta_{0}}^{\pi}\sin(\theta)d\theta\right)\] \[=2\pi\Big{[}\omega_{N}\big{(}1-\cos(\theta_{0})\big{)}+\omega_{S} \big{(}1+\cos(\theta_{0})\big{)}\Big{]}.\]
\(\blacktriangleright\) The potential velocity solves the elliptic equation
\[\Delta\Psi_{\textsc{FC}}=\overline{\Omega}_{\textsc{FC}}+2\widetilde{\gamma}\cos(\theta),\qquad\text{i.e.}\qquad\partial_{\theta}\big{[}\sin(\theta)\partial_{\theta}\Psi_{\textsc{FC}}(\theta)\big{]}=\sin(\theta)\Big{(}\omega_{N}\mathbf{1}_{0<\theta<\theta_{0}}+\omega_{S}\mathbf{1}_{\theta_{0}\leqslant\theta<\pi}\Big{)}+\widetilde{\gamma}\sin(2\theta).\]
Integrating the previous relation gives
\[\partial_{\theta}\Psi_{\textsc{FC}}(\theta)=\begin{cases}\frac{\omega_{N}}{ \sin(\theta)}\big{(}1-\cos(\theta)\big{)}-\frac{\widetilde{\gamma}\cos(2\theta )}{2\sin(\theta)}+\frac{c}{\sin(\theta)},&\text{if }\theta\in(0,\theta_{0}),\\ \frac{\omega_{N}}{\sin(\theta)}\big{(}1-\cos(\theta_{0})\big{)}+\frac{\omega_ {S}}{\sin(\theta)}\big{(}\cos(\theta_{0})-\cos(\theta)\big{)}-\frac{\widetilde {\gamma}\cos(2\theta)}{2\sin(\theta)}+\frac{c}{\sin(\theta)},&\text{if }\theta\in[ \theta_{0},\pi).\end{cases}\]
Since the flow is zonal, there is no velocity at the pole, which implies that
\[\lim_{\theta\to 0^{+}}\partial_{\theta}\Psi_{\textsc{FC}}(\theta)=0.\]
As a consequence, we must take the constant of integration \(c\) as follows
\[c\triangleq\frac{\widetilde{\gamma}}{2}.\]
Finally, using (2.1), we can write
\[\partial_{\theta}\Psi_{\textsc{FC}}(\theta)=\begin{cases}\frac{\omega_{N}}{ \sin(\theta)}\big{(}1-\cos(\theta)\big{)}-\widetilde{\gamma}\sin(\theta),& \text{if }\theta\in(0,\theta_{0}),\\ -\frac{\omega_{S}}{\sin(\theta)}\big{(}1+\cos(\theta)\big{)}-\widetilde{\gamma} \sin(\theta),&\text{if }\theta\in[\theta_{0},\pi).\end{cases}\]
At \(\theta=\theta_{0}\), using again (2.1), we find
\[\partial_{\theta}\Psi_{\textsc{FC}}(\theta_{0})=\frac{1}{2\sin(\theta_{0})}\Big{[}\omega_{N}\big{(}1-\cos(\theta_{0})\big{)}-\omega_{S}\big{(}1+\cos(\theta_{0})\big{)}\Big{]}-\widetilde{\gamma}\sin(\theta_{0})=\left(\frac{\omega_{N}-\omega_{S}}{2}-\widetilde{\gamma}\right)\sin(\theta_{0}).\]
The proof of Lemma 2.1 is now complete.
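The last algebraic simplification can also be checked symbolically. The sketch below (assuming sympy is available) verifies that, under the constraint (2.1), \(\omega_{N}\big{(}1-\cos(\theta_{0})\big{)}-\omega_{S}\big{(}1+\cos(\theta_{0})\big{)}=(\omega_{N}-\omega_{S})\sin^{2}(\theta_{0})\), from which (2.2) follows after dividing by \(2\sin(\theta_{0})\).

```python
import sympy as sp

wN, wS = sp.symbols('omega_N omega_S', real=True)
c0 = (wN + wS)/(wN - wS)                    # cos(theta0) given by the constraint (2.1)

bracket = wN*(1 - c0) - wS*(1 + c0)         # the bracket evaluated at theta = theta0
target  = (wN - wS)*(1 - c0**2)             # (omega_N - omega_S)*sin(theta0)**2
assert sp.cancel(bracket - target) == 0
```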
Now, fix \(\theta_{0}\in(0,\pi)\) and let us consider a vortex cap solution close to \(\overline{\Omega}_{\textsc{FC}}\) in the form
\[\overline{\Omega}(t,\theta,\varphi)=\omega_{N}\mathbf{1}_{0<\theta<\theta_{0}+f (t,\varphi)}+\omega_{S}\mathbf{1}_{\theta_{0}+f(t,\varphi)\leqslant\theta<\pi },\qquad|f(t,\varphi)|\ll 1,\]
with \(\omega_{N},\omega_{S}\in\mathbb{R}\) satisfying (2.1). The evolution is given by the dynamics of the interface which can be described (in the colatitude/longitude chart) through the following parametrization
\[z(t,\varphi)=C_{1}\big{(}\theta_{0}+f(t,\varphi),\varphi\big{)}=\begin{pmatrix} \sin\big{(}\theta_{0}+f(t,\varphi)\big{)}\cos(\varphi)\\ \sin\big{(}\theta_{0}+f(t,\varphi)\big{)}\sin(\varphi)\\ \cos\big{(}\theta_{0}+f(t,\varphi)\big{)}\end{pmatrix}.\]
Differentiating in time amounts to
\[\partial_{t}z(t,\varphi)=\partial_{t}f(t,\varphi)\begin{pmatrix}\cos\big{(} \theta_{0}+f(t,\varphi)\big{)}\cos(\varphi)\\ \cos\big{(}\theta_{0}+f(t,\varphi)\big{)}\sin(\varphi)\\ -\sin\big{(}\theta_{0}+f(t,\varphi)\big{)}\end{pmatrix},\]
and differentiating with respect to the longitude variable gives
\[\partial_{\varphi}z(t,\varphi)=\partial_{\varphi}f(t,\varphi)\begin{pmatrix} \cos\big{(}\theta_{0}+f(t,\varphi)\big{)}\cos(\varphi)\\ \cos\big{(}\theta_{0}+f(t,\varphi)\big{)}\sin(\varphi)\\ -\sin\big{(}\theta_{0}+f(t,\varphi)\big{)}\end{pmatrix}+\begin{pmatrix}-\sin \big{(}\theta_{0}+f(t,\varphi)\big{)}\sin(\varphi)\\ \sin\big{(}\theta_{0}+f(t,\varphi)\big{)}\cos(\varphi)\\ 0\end{pmatrix}.\]
The vector \(J\partial_{\varphi}z(t,\varphi)\) can be obtained using the cross product
\[J\partial_{\varphi}z(t,\varphi) =\partial_{\varphi}z(t,\varphi)\times z(t,\varphi)\] \[=\partial_{\varphi}f(t,\varphi)\begin{pmatrix}\sin(\varphi)\\ -\cos(\varphi)\\ 0\end{pmatrix}+\begin{pmatrix}\cos\big{(}\theta_{0}+f(t,\varphi)\big{)}\sin \big{(}\theta_{0}+f(t,\varphi)\big{)}\cos(\varphi)\\ \cos\big{(}\theta_{0}+f(t,\varphi)\big{)}\sin\big{(}\theta_{0}+f(t,\varphi) \big{)}\sin(\varphi)\\ -\sin^{2}\big{(}\theta_{0}+f(t,\varphi)\big{)}\end{pmatrix}.\]
Therefore,
\[\partial_{t}z(t,\varphi)\cdot\big{(}J\partial_{\varphi}z(t,\varphi)\big{)}= \sin\big{(}\theta_{0}+f(t,\varphi)\big{)}\partial_{t}f(t,\varphi).\]
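The chain of computations leading to this identity can be reproduced symbolically. The sketch below (assuming sympy is available) recovers \(\partial_{t}z\cdot\big{(}J\partial_{\varphi}z\big{)}=\sin\big{(}\theta_{0}+f\big{)}\partial_{t}f\) with \(J\partial_{\varphi}z=\partial_{\varphi}z\times z\).

```python
import sympy as sp

t, phi, theta0 = sp.symbols('t phi theta_0', real=True)
f = sp.Function('f')(t, phi)
a = theta0 + f

# Parametrization z(t, phi) = C_1(theta0 + f(t, phi), phi).
z = sp.Matrix([sp.sin(a)*sp.cos(phi), sp.sin(a)*sp.sin(phi), sp.cos(a)])
Jdphiz = z.diff(phi).cross(z)               # J d_phi z = d_phi z x z
lhs = z.diff(t).dot(Jdphiz)
rhs = sp.sin(a)*sp.diff(f, t)
assert sp.simplify(lhs - rhs) == 0
```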
Our ansatz corresponds to a vorticity in the form
\[\Omega(t,\theta,\varphi)=\omega_{N}\mathbf{1}_{0<\theta<\theta_{0}+f(t, \varphi)}+\omega_{S}\mathbf{1}_{\theta_{0}+f(t,\varphi)\leqslant\theta<\pi}+2 \widetilde{\gamma}\cos(\theta).\]
Consequently, according to (1.6), (1.7), (1.8) and (1.5), we have
\[\Psi\big{(}t,z(t,\varphi)\big{)} =\frac{1}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\log\Big{(}D\big{(} \theta_{0}+f(t,\varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}\Big{)} \Omega(t,\theta^{\prime},\varphi^{\prime})\sin(\theta^{\prime})d\theta^{\prime}d \varphi^{\prime}\] \[=\frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}+f(t, \varphi^{\prime})}\log\Big{(}D\big{(}\theta_{0}+f(t,\varphi),\theta^{\prime}, \varphi,\varphi^{\prime}\big{)}\Big{)}\sin(\theta^{\prime})d\theta^{\prime}d \varphi^{\prime}\] \[\quad+\frac{\widetilde{\gamma}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi }\log\Big{(}D\big{(}\theta_{0}+f(t,\varphi),\theta^{\prime},\varphi,\varphi^{ \prime}\big{)}\Big{)}\sin(2\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}.\]
Figure 3: Representation of a one-interface vortex cap solution (interface in red) with 6-fold symmetry.
Remark that the unperturbed stream function can be written as follows
\[\Psi_{\text{rc}}(\theta) =\frac{1}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\log\Big{(}D\big{(} \theta,\theta^{\prime},0,\varphi^{\prime}\big{)}\Big{)}\Omega_{\text{rc}}( \theta^{\prime})\sin(\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[=\frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}}\log \Big{(}D\big{(}\theta,\theta^{\prime},0,\varphi^{\prime}\big{)}\Big{)}\sin( \theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[\quad+\frac{\omega_{S}}{4\pi}\int_{0}^{2\pi}\int_{\theta_{0}}^{ \pi}\log\Big{(}D\big{(}\theta,\theta^{\prime},0,\varphi^{\prime}\big{)}\Big{)} \sin(\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[\quad+\frac{\widetilde{\gamma}}{4\pi}\int_{0}^{2\pi}\int_{0}^{ \pi}\log\Big{(}D\big{(}\theta,\theta^{\prime},0,\varphi^{\prime}\big{)}\Big{)} \sin(2\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}.\]
Thus, making appeal to Chasles' relation, we can write
\[\Psi\big{(}t,z(t,\varphi)\big{)} =\Psi_{\text{rc}}\big{(}\theta_{0}+f(t,\varphi)\big{)}+\Psi_{p}\{ f\}\big{(}\theta_{0}+f(t,\varphi),\varphi\big{)}\triangleq\Psi\{f\}\big{(} \theta_{0}+f(t,\varphi),\varphi\big{)},\] \[\Psi_{p}\{f\}(\theta,\varphi) \triangleq\frac{\omega_{N}-\omega_{S}}{4\pi}\int_{0}^{2\pi}\int_{ \theta_{0}}^{\theta_{0}+f(t,\varphi^{\prime})}\log\Big{(}D\big{(}\theta, \theta^{\prime},\varphi,\varphi^{\prime}\big{)}\Big{)}\sin(\theta^{\prime})d \theta^{\prime}d\varphi^{\prime}.\]
Therefore, the vortex cap equation (1.21) becomes
\[\partial_{t}f(t,\varphi)=\frac{\partial_{\varphi}\Big{(}\Psi_{\text{rc}} \big{(}\theta_{0}+f(t,\varphi)\big{)}+\Psi_{p}\{f\}\big{(}\theta_{0}+f(t, \varphi),\varphi\big{)}\Big{)}}{\sin\big{(}\theta_{0}+f(t,\varphi)\big{)}}. \tag{2.3}\]
Looking for traveling solutions at speed \(c\in\mathbb{R}\) leads to consider
\[f(t,\varphi)=f(\varphi-ct).\]
Inserting this into (2.3) gives
\[\mathscr{F}(c,f)(\varphi)\triangleq c\,\partial_{\varphi}f(\varphi)+\frac{ \partial_{\varphi}\Big{(}\Psi_{\text{rc}}\big{(}\theta_{0}+f(\varphi)\big{)}+ \Psi_{p}\{f\}\big{(}\theta_{0}+f(\varphi),\varphi\big{)}\Big{)}}{\sin\big{(} \theta_{0}+f(\varphi)\big{)}}=0. \tag{2.4}\]
Observe that
\[\forall c\in\mathbb{R},\quad\mathscr{F}(c,0)=0.\]
This gives a trivial line of roots of \(\mathscr{F}\), corresponding to the flat cap stationary solution associated with the angle \(\theta_{0}\). In order to find non-trivial roots, we shall use bifurcation arguments based on the Crandall-Rabinowitz Theorem. To this end, we shall study the regularity of \(\mathscr{F}\) and the spectral properties of its linearized operator.
### Bifurcation study
In this section, we shall check the hypotheses of the Crandall-Rabinowitz Theorem for the functional \(\mathscr{F}\) introduced in (2.4). First, let us introduce the function spaces, in terms of Holder regularity, that will be used in the bifurcation argument. Fix \(\alpha\in(0,1)\); then the Holder space \(C^{\alpha}(\mathbb{T})\) consists of \(2\pi\)-periodic functions \(f:\mathbb{T}\to\mathbb{R}\) such that the following norm is finite:
\[\|f\|_{C^{\alpha}(\mathbb{T})}\triangleq\|f\|_{L^{\infty}(\mathbb{T})}+\sup_{ \genfrac{}{}{0.0pt}{}{(\varphi,\varphi^{\prime})\in\mathbb{R}^{2}}{\varphi \neq\varphi^{\prime}}}\frac{|f(\varphi)-f(\varphi^{\prime})|}{|\varphi-\varphi ^{\prime}|^{\alpha}}.\]
The subspace \(C^{1+\alpha}(\mathbb{T})\) of regular functions is associated with the following norm
\[\|f\|_{C^{1+\alpha}(\mathbb{T})}\triangleq\|f\|_{L^{\infty}(\mathbb{T})}+\| \partial_{\varphi}f\|_{C^{\alpha}(\mathbb{T})}.\]
Define also the following subspaces taking into account parity and symmetries
\[X_{\mathbf{m}}^{1+\alpha} \triangleq\left\{f\in C^{1+\alpha}(\mathbb{T})\quad\text{s.t.}\quad \forall\varphi\in\mathbb{T},\,f(\varphi)=\sum_{n=1}^{\infty}f_{n}\cos( \mathbf{m}n\varphi),\quad f_{n}\in\mathbb{R}\right\},\] \[Y_{\mathbf{m}}^{\alpha} \triangleq\left\{g\in C^{\alpha}(\mathbb{T})\quad\text{s.t.}\quad \forall\varphi\in\mathbb{T},\,g(\varphi)=\sum_{n=1}^{\infty}g_{n}\sin( \mathbf{m}n\varphi),\quad g_{n}\in\mathbb{R}\right\},\] \[B_{r,\mathbf{m}}^{1+\alpha} \triangleq\left\{f\in X_{\mathbf{m}}^{1+\alpha}\quad\text{s.t.}\quad \|f\|_{C^{1+\alpha}(\mathbb{T})}<r\right\},\qquad r>0.\]
The next proposition gathers the regularity properties for the functional \(\mathscr{F}\) and gives the structure of its linearized operator at the flat cap.
**Proposition 2.1**.: _Let \(\alpha\in(0,1)\) and \(\mathbf{m}\in\mathbb{N}^{*}\). There exists \(r>0\) such that_
1. _The function_ \(\mathscr{F}:\mathbb{R}\times B^{1+\alpha}_{r,\mathbf{m}}\to Y^{\alpha}_{\mathbf{ m}}\) _is well-defined and of class_ \(C^{1}\)_._
2. _The partial derivative_ \(\partial_{c}d_{f}\mathscr{F}:\mathbb{R}\times B^{1+\alpha}_{r,\mathbf{m}} \to\mathcal{L}(X^{1+\alpha}_{\mathbf{m}},Y^{\alpha}_{\mathbf{m}})\) _exists and is continuous._
3. _At the equilibrium_ \(f=0\)_, the linearized operator admits the following Fourier representation_ \[d_{f}\mathscr{F}(c,0)\left[\sum_{n=1}^{\infty}h_{n}\cos(\mathbf{m}n\varphi) \right]=\sum_{n=1}^{\infty}\mathbf{m}n\left[-c-(\omega_{N}-\omega_{S})\frac{ \mathbf{m}n-1}{2\mathbf{m}n}+\widetilde{\gamma}\right]h_{n}\sin(\mathbf{m}n \varphi).\] (2.5) _In addition, if_ \(c\neq\frac{\omega_{N}-\omega_{S}}{2}-\widetilde{\gamma}\)_, then the operator_ \(d_{f}\mathscr{F}(c,0):X^{1+\alpha}_{\mathbf{m}}\to Y^{\alpha}_{\mathbf{m}}\) _is of Fredholm type with index zero._
Proof.: **(i)** First, notice that the oddness and \(\mathbf{m}\)-fold properties follow from the evenness and \(\mathbf{m}\)-fold properties of \(f\) and changes of variables in the non-local part. Now, we need to check that \(\mathscr{F}(c,f)\) belongs to \(C^{\alpha}(\mathbb{T})\) provided that \(f\in C^{1+\alpha}(\mathbb{T})\). Then, let us write \(\mathscr{F}\) as
\[\mathscr{F}(c,f)(\varphi)=cf^{\prime}(\varphi)+f^{\prime}(\varphi)\frac{( \partial_{\theta}\Psi\{f\})\big{(}\theta_{0}+f(\varphi),\varphi\big{)}}{\sin \big{(}\theta_{0}+f(\varphi)\big{)}}+\frac{(\partial_{\varphi}\Psi\{f\}) \big{(}\theta_{0}+f(\varphi),\varphi\big{)}}{\sin\big{(}\theta_{0}+f(\varphi) \big{)}}.\]
Notice that, since \(\theta_{0}\notin\{0,\pi\}\) and \(\|f\|_{C^{1+\alpha}(\mathbb{T})}<r\), by taking \(r\) small enough we find
\[\inf_{\varphi\in\mathbb{T}}\big{|}\sin\big{(}\theta_{0}+f(\varphi)\big{)} \big{|}\geqslant\delta_{0}>0,\qquad\delta_{0}\triangleq\inf_{x\in[\theta_{0}- r,\theta_{0}+r]}|\sin(x)|\in(0,1). \tag{2.6}\]
Thus, in order to check that \(\mathscr{F}\) is well-defined, it is enough to prove that
\[(\partial_{\theta}\Psi\{f\})\big{(}\theta_{0}+f(\cdot),\cdot\big{)}\in C^{ \alpha}(\mathbb{T})\quad\text{and}\quad(\partial_{\varphi}\Psi\{f\})\big{(} \theta_{0}+f(\cdot),\cdot\big{)}\in C^{\alpha}(\mathbb{T}). \tag{2.7}\]
Note that proving (2.7) uses the same techniques as the ones used to show (2.9). Thus, we shall skip the details and only check that \(f\mapsto d_{f}\mathscr{F}(c,f)\) is continuous, using the expression in (2.4). Indeed, we can compute the Gateaux derivative of \(\mathscr{F}\) and obtain
\[d_{f}\mathscr{F}(c,f)[h](\varphi) =c\,h^{\prime}(\varphi)-h(\varphi)\frac{\cos\big{(}\theta_{0}+f( \varphi)\big{)}}{\sin^{2}\big{(}\theta_{0}+f(\varphi)\big{)}}\partial_{ \varphi}\Big{(}\Psi\{f\}\big{(}\theta_{0}+f(\varphi),\varphi\big{)}\Big{)}\] \[\quad+\frac{1}{\sin\big{(}\theta_{0}+f(\varphi)\big{)}}\partial_{ \varphi}\Big{(}h(\varphi)(\partial_{\theta}\Psi\{f\})\big{(}\theta_{0}+f( \varphi),\varphi\big{)}\Big{)}\] \[\quad+\frac{1}{\sin\big{(}\theta_{0}+f(\varphi)\big{)}}\partial_{ \varphi}\Big{(}\big{(}d_{f}\Psi\{h\}[h]\big{)}\big{(}\theta_{0}+f(\varphi), \varphi\big{)}\Big{)},\]
with
\[\big{(}d_{f}\Psi\{f\}[h]\big{)}\big{(}\theta_{0}+f(\varphi), \varphi\big{)} =\big{(}d_{f}\Psi_{p}\{f\}[h]\big{)}\big{(}\theta_{0}+f(\varphi), \varphi\big{)}\] \[=\frac{\omega_{N}-\omega_{S}}{4\pi}\int_{0}^{2\pi}h(\varphi^{ \prime})\log\Big{(}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\Big{)}\sin\big{(}\theta_{0}+f(\varphi^{ \prime})\big{)}d\varphi^{\prime},\]
and
\[(\partial_{\theta}\Psi\{f\})(\theta,\varphi)= \frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}+f( \varphi^{\prime})}\frac{\partial_{\theta}D(\theta,\theta^{\prime},\varphi, \varphi^{\prime})\sin(\theta^{\prime})}{D(\theta,\theta^{\prime},\varphi, \varphi^{\prime})}d\theta^{\prime}d\varphi^{\prime}\] \[+\frac{\widetilde{\gamma}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\frac {\partial_{\theta}D(\theta,\theta^{\prime},\varphi,\varphi^{\prime})\sin(2 \theta^{\prime})}{D(\theta,\theta^{\prime},\varphi,\varphi^{\prime})}d\theta^{ \prime}d\varphi^{\prime}. \tag{2.8}\]
Notice that \(d_{f}\mathscr{F}(c,f)\) is continuous in \(f\) provided that the functions
\[f\mapsto\partial_{\varphi}\Big{(}(\partial_{\theta}\Psi\{f\}\big{(}\theta_{0}+f( \varphi),\varphi\big{)}\Big{)}\quad\text{and}\quad f\mapsto\partial_{\varphi} \Big{(}d_{f}\Psi_{p}\{f\}[h]\big{(}\theta_{0}+f(\varphi),\varphi\big{)}\Big{)} \tag{2.9}\]
are continuous. Let us start with the first condition in (2.9) and, since the analysis is similar, we shall only give details for just one of the terms. Note that from (2.8), we can write
\[(\partial_{\theta}\Psi\{f\})\big{(}\theta_{0}+f(\varphi),\varphi \big{)}= \frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}+f( \varphi^{\prime})}\frac{\partial_{\theta}D\big{(}\theta_{0}+f(\varphi),\theta^ {\prime},\varphi,\varphi^{\prime}\big{)}\sin(\theta^{\prime})}{D\big{(}\theta_ {0}+f(\varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}}d\theta^{\prime }d\varphi^{\prime}\] \[+\frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\frac{ \partial_{\theta}D\big{(}\theta_{0}+f(\varphi),\theta^{\prime},\varphi, \varphi^{\prime}\big{)}\sin(\theta^{\prime})}{D\big{(}\theta_{0}+f(\varphi), \theta^{\prime},\varphi,\varphi^{\prime}\big{)}}d\theta^{\prime}d\varphi^{\prime}\] \[+\frac{\widetilde{\gamma}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi} \frac{\partial_{\theta}D\big{(}\theta_{0}+f(\varphi),\theta^{\prime},\varphi, \varphi^{\prime}\big{)}\sin(2\theta^{\prime})}{D\big{(}\theta_{0}+f(\varphi), \theta^{\prime},\varphi,\varphi^{\prime}\big{)}}d\theta^{\prime}d\varphi^{\prime}\] \[\triangleq J_{1}\{f\}(\varphi)+J_{2}\{f\}(\varphi)+J_{3}\{f\}(\varphi).\]
We focus on the first term \(J_{1}\). Note that to check that the first condition in (2.9) is satisfied, we need to compute \(\partial_{\varphi}J_{1}\{f\}\). However, let us simplify \(J_{1}\) before differentiating. Observe that taking the derivative of (1.9) leads to
\[\partial_{\theta}D(\theta,\theta^{\prime},\varphi,\varphi^{ \prime})= 2\Big{[}\sin\Big{(}\tfrac{\theta-\theta^{\prime}}{2}\Big{)} \cos\Big{(}\tfrac{\theta-\theta^{\prime}}{2}\Big{)}+\cos(\theta)\sin(\theta^{ \prime})\sin^{2}\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2}\Big{)}\,\Big{]}, \tag{2.10}\] \[\partial_{\theta^{\prime}}D(\theta,\theta^{\prime},\varphi, \varphi^{\prime})= 2\Big{[}-\sin\Big{(}\tfrac{\theta-\theta^{\prime}}{2}\Big{)} \cos\Big{(}\tfrac{\theta-\theta^{\prime}}{2}\Big{)}+\sin(\theta)\cos(\theta^{ \prime})\sin^{2}\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2}\Big{)}\,\Big{]}. \tag{2.11}\]
Hence,
\[(\partial_{\theta}D+\partial_{\theta^{\prime}}D)(\theta,\theta^ {\prime},\varphi,\varphi^{\prime})= 2\big{[}\cos(\theta)\sin(\theta^{\prime})+\sin(\theta)\cos( \theta^{\prime})\big{]}\sin^{2}\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2} \Big{)}\] \[= 2\sin(\theta+\theta^{\prime})\sin^{2}\Big{(}\tfrac{\varphi- \varphi^{\prime}}{2}\Big{)}\,.\]
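The following quick symbolic check (assuming sympy is available) confirms this identity, obtained by summing (2.10) and (2.11).

```python
import sympy as sp

th, thp, ph, php = sp.symbols('theta theta_p phi phi_p', real=True)
D = 1 - sp.cos(th)*sp.cos(thp) - sp.sin(th)*sp.sin(thp)*sp.cos(ph - php)

lhs = sp.diff(D, th) + sp.diff(D, thp)
rhs = 2*sp.sin(th + thp)*sp.sin((ph - php)/2)**2
assert (lhs - rhs).equals(0)
```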
Thus, adding and subtracting \(\partial_{\theta^{\prime}}D\) appropriately and integrating by parts, we find
\[J_{1}\{f\}(\varphi)= \frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}+f( \varphi^{\prime})}\frac{\sin^{2}\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2} \Big{)}\sin\big{(}\theta^{\prime}+\theta_{0}+f(\varphi)\big{)}}{D(\theta_{0}+ f(\varphi),\theta^{\prime},\varphi,\varphi^{\prime})}\sin(\theta^{\prime})d \theta^{\prime}d\varphi^{\prime}\] \[-\frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}+f( \varphi^{\prime})}\frac{\partial_{\theta^{\prime}}D\big{(}\theta_{0}+f( \varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}}{D\big{(}\theta_{0}+ f(\varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}}\sin(\theta^{\prime})d \theta^{\prime}d\varphi^{\prime}\] \[= \frac{\omega_{N}}{2\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}+f( \varphi^{\prime})}\frac{\sin^{2}\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2} \Big{)}\sin\big{(}\theta^{\prime}+\theta_{0}+f(\varphi)\big{)}}{D\big{(} \theta_{0}+f(\varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}}\sin( \theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[-\frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\log\Big{(}D\big{(}\theta_ {0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)} \Big{)}\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}d\varphi^{\prime}\] \[+\frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}+f( \varphi^{\prime})}\log\Big{(}D\big{(}\theta_{0}+f(\varphi),\theta^{\prime}, \varphi,\varphi^{\prime}\big{)}\Big{)}\cos(\theta^{\prime})d\theta^{\prime}d \varphi^{\prime}\] \[\triangleq J_{1,1}\{f\}(\varphi)+J_{1,2}\{f\}(\varphi)+J_{1,3}\{f\}( \varphi).\]
Let us work with \(J_{1,1}\{f\}\), which is the most singular term. Now, we differentiate in \(\varphi\) and obtain
\[\partial_{\varphi}J_{1,1}\{f\}(\varphi)= \frac{\omega_{N}}{2\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}+f( \varphi^{\prime})}\frac{f^{\prime}(\varphi)\cos\big{(}\theta^{\prime}+\theta_{0} +f(\varphi)\big{)}\sin^{2}\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2}\Big{)}}{D( \theta_{0}+f(\varphi),\theta^{\prime},\varphi,\varphi^{\prime})}\sin(\theta^{ \prime})d\theta^{\prime}d\varphi^{\prime}\] \[+\frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}+f( \varphi^{\prime})}\frac{\sin\big{(}\theta^{\prime}+\theta_{0}+f(\varphi)\big{)} \sin(\varphi-\varphi^{\prime})}{D(\theta_{0}+f(\varphi),\theta^{\prime},\varphi, \varphi^{\prime})}\sin(\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[-\frac{\omega_{N}}{2\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{0}+f( \varphi^{\prime})}\frac{\sin\big{(}\theta^{\prime}+\theta_{0}+f(\varphi) \big{)}\sin^{2}\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2}\Big{)}}{D^{2}\big{(} \theta_{0}+f(\varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}}\partial_{ \varphi}\Big{(}D\big{(}\theta_{0}+f(\varphi),\theta^{\prime},\varphi,\varphi^{ \prime}\big{)}\Big{)}\sin(\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[\triangleq J_{1,1,1}\{f\}(\varphi)+J_{1,1,2}\{f\}(\varphi)+J_{1,1,3}\{f\}( \varphi).\]
Notice that the most singular integral is \(J_{1,1,3}\). Thus, we only deal with that term. Remark that
\[\partial_{\varphi}\Big{(}D\big{(}\theta_{0}+f(\varphi),\theta^ {\prime},\varphi,\varphi^{\prime}\big{)}\Big{)}= \,f^{\prime}(\varphi)\partial_{\theta}D\big{(}\theta_{0}+f(\varphi), \theta^{\prime},\varphi,\varphi^{\prime}\big{)}+\partial_{\varphi}D\big{(} \theta_{0}+f(\varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}\] \[= \,2f^{\prime}(\varphi)\Big{[}\tfrac{1}{2}\sin\big{(}\theta_{0}+f( \varphi)-\theta^{\prime}\big{)}+\cos\big{(}\theta_{0}+f(\varphi)\big{)}\sin( \theta^{\prime})\sin^{2}\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2}\Big{)}\, \Big{]}\] \[+\sin\big{(}\theta_{0}+f(\varphi)\big{)}\sin(\theta^{\prime})\sin( \varphi-\varphi^{\prime}).\]
We can make the change of variables \(\theta^{\prime}=t(\theta_{0}+f(\varphi^{\prime}))\) to simplify the integral obtaining
\[J_{1,1,3}\{f\}(\varphi)=-\frac{\omega_{N}}{2\pi}\int_{0}^{2\pi}\int_{0}^{1} \mathbb{K}\{f\}(t,\varphi,\varphi^{\prime})dtd\varphi^{\prime},\]
where
\[\mathbb{K}\{f\}(t,\varphi,\varphi^{\prime}) \triangleq\frac{\sin\left((1+t)\theta_{0}+f(\varphi)+tf(\varphi^{ \prime})\right)\sin^{2}\left(\frac{\varphi-\varphi^{\prime}}{2}\right)}{D^{2} \big{(}\theta_{0}+f(\varphi),t(\theta_{0}+f(\varphi^{\prime})),\varphi,\varphi ^{\prime}\big{)}}\sin\left(t(\theta_{0}+f(\varphi^{\prime}))\right)\big{(} \theta_{0}+f(\varphi^{\prime})\big{)}\] \[\times\Big{\{}2f^{\prime}(\varphi)\Big{[}\tfrac{1}{2}\sin\left((1 -t)\theta_{0}+f(\varphi)-tf(\varphi^{\prime})\right)+\cos\left(\theta_{0}+f( \varphi)\right)\sin\left(t(\theta_{0}+f(\varphi^{\prime}))\right)\sin^{2} \left(\frac{\varphi-\varphi^{\prime}}{2}\right)\Big{]}\] \[\quad+\sin\left(\theta_{0}+f(\varphi)\right)\sin\left(t(\theta_{0 }+f(\varphi))\right)\sin(\varphi-\varphi^{\prime})\Big{\}}.\]
Our goal is to check that \(f\mapsto J_{1,1,3}\{f\}\) is continuous. To this end, we take \(f_{1},f_{2}\in B_{r,\mathbf{m}}^{1+\alpha}\) and estimate the difference
\[J_{1,1,3}\{f_{2}\}(\varphi)-J_{1,1,3}\{f_{1}\}(\varphi).\]
In order to simplify the presentation, let us illustrate only one of the terms, since the order of singularity is the same for every term. For that, define
\[\tilde{J}_{1,1,3}\{f_{1},f_{2}\}(\varphi)\triangleq\int_{0}^{2\pi}\int_{0}^{1} \mathbb{K}\{f_{1},f_{2}\}(t,\varphi,\varphi^{\prime})dtd\varphi^{\prime},\]
where
\[\mathbb{K}\{f_{1},f_{2}\}(t,\varphi,\varphi^{\prime}) \triangleq\frac{\sin\left((1+t)\theta_{0}+f_{2}(\varphi)+tf_{2}( \varphi^{\prime})\right)\sin^{2}\left(\frac{\varphi-\varphi^{\prime}}{2} \right)}{D^{2}\big{(}\theta_{0}+f_{2}(\varphi),t(\theta_{0}+f_{2}(\varphi^{ \prime})),\varphi,\varphi^{\prime}\big{)}}\sin^{2}\big{(}t(\theta_{0}+f_{2}( \varphi^{\prime}))\big{)}\big{(}\theta_{0}+f_{2}(\varphi^{\prime})\big{)}\sin (\varphi-\varphi^{\prime})\] \[\quad\times\Big{[}\sin\big{(}\theta_{0}+f_{2}(\varphi)\big{)}- \sin\big{(}\theta_{0}+f_{1}(\varphi)\big{)}\Big{]}.\]
To estimate the previous term in the Holder space \(C^{\alpha}(\mathbb{T})\), we use Proposition A.1 with kernel
\[K(\varphi,\varphi^{\prime})\triangleq\int_{0}^{1}\mathbb{K}\{f_{1},f_{2}\}(t,\varphi,\varphi^{\prime})dt.\]
Remark that the 1-Lipschitz property of the function \(\sin\) implies
\[\Big{|}\sin\big{(}\theta_{0}+f_{2}(\varphi)\big{)}-\sin\big{(}\theta_{0}+f_{1 }(\varphi)\big{)}\Big{|}\leqslant\|f_{1}-f_{2}\|_{L^{\infty}(\mathbb{T})}.\]
Now, we choose
\[\begin{cases}r<\frac{1}{2}|\theta_{0}-\frac{\pi}{2}|,&\text{if }\theta_{0} \neq\frac{\pi}{2},\\ r<\frac{\pi}{4},&\text{if }\theta_{0}=\frac{\pi}{2}\end{cases}\]
and denote
\[\mathfrak{m}_{\theta_{0}}(r)\triangleq\begin{cases}\theta_{0}-r,&\text{if } \theta_{0}\leqslant\frac{\pi}{2},\\ \theta_{0}+r,&\text{if }\theta_{0}>\frac{\pi}{2},\end{cases}\qquad\quad\mathbb{M}_{ \theta_{0}}(r)\triangleq\begin{cases}\theta_{0}-r,&\text{if }\theta_{0}>\frac{\pi}{2},\\ 1,&\text{if }\theta_{0}=\frac{\pi}{2},\\ \theta_{0}+r,&\text{if }\theta_{0}<\frac{\pi}{2}.\end{cases}\]
Notice that \(\mathfrak{m}_{\theta_{0}}(r)\in(0,\pi)\) and \(\mathbb{M}_{\theta_{0}}(r)\in(0,\pi)\); thus, a concavity argument ensures that
\[\exists C_{1},C_{2}>0,\quad\forall t\in[0,1],\quad C_{1}t\leqslant\sin\big{(}t \,\mathfrak{m}_{\theta_{0}}(r)\big{)}\leqslant C_{2}t,\qquad C_{1}t\leqslant\sin \big{(}t\,\mathbb{M}_{\theta_{0}}(r)\big{)}\leqslant C_{2}t.\]
A direct estimation gives
\[\big{|}K(\varphi,\varphi^{\prime})\big{|} \leqslant C\|f_{1}-f_{2}\|_{L^{\infty}(\mathbb{T})}\int_{0}^{1} \frac{|\sin(\varphi-\varphi^{\prime})|\sin^{2}\left(\frac{\varphi-\varphi^{ \prime}}{2}\right)}{D^{2}\big{(}\theta_{0}+f_{2}(\varphi),t(\theta_{0}+f_{2}( \varphi^{\prime})),\varphi,\varphi^{\prime}\big{)}}\sin^{2}\big{(}t\,\mathbb{M} _{\theta_{0}}(r)\big{)}dt\] \[\leqslant C\|f_{1}-f_{2}\|_{L^{\infty}(\mathbb{T})}\int_{0}^{1} \frac{|\sin(\varphi-\varphi^{\prime})|\sin^{2}\left(\frac{\varphi-\varphi^{ \prime}}{2}\right)t^{2}}{D^{2}\big{(}\theta_{0}+f_{2}(\varphi),t(\theta_{0}+f_{2 }(\varphi^{\prime})),\varphi,\varphi^{\prime}\big{)}}dt.\]
We need to control the denominator. For any \(t\in[0,1]\), we have
\[D\big{(}\theta_{0}+f_{2}(\varphi),t(\theta_{0}+f_{2}(\varphi^{\prime})),\varphi,\varphi^{\prime}\big{)} \geqslant 2\delta_{0}\Big{[}\sin^{2}\left(\tfrac{(1-t)\theta_{0}+f_{2}(\varphi)-tf_{2}(\varphi^{\prime})}{2}\right)+\sin\big{(}t\,\mathfrak{m}_{\theta_{0}}(r)\big{)}\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\Big{]}\] \[\geqslant C\Big{[}\sin^{2}\left(\tfrac{(1-t)\theta_{0}+f_{2}(\varphi)-tf_{2}(\varphi^{\prime})}{2}\right)+t\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\Big{]}.\]
For \(r\) small enough, we have
\[\big{|}(1-t)\theta_{0}+f_{2}(\varphi)-tf_{2}(\varphi^{\prime})\big{|}\leqslant \theta_{0}+2\|f_{2}\|_{L^{\infty}(\mathbb{T})}\leqslant\theta_{0}+2r<\pi.\]
Hence, by concavity of the function \(\sin\) on \((0,\frac{\pi}{2})\), we infer
\[\sin^{2}\left(\tfrac{(1-t)\theta_{0}+f_{2}(\varphi)-tf_{2}( \varphi^{\prime})}{2}\right) \geqslant C\big{|}\theta_{0}+f_{2}(\varphi)-t\big{(}\theta_{0}+f _{2}(\varphi^{\prime})\big{)}\big{|}^{2}\] \[\geqslant C\big{|}(1-t)\big{(}\theta_{0}+f_{2}(\varphi)\big{)}+t \big{(}f_{2}(\varphi)-f_{2}(\varphi^{\prime})\big{)}\big{|}^{2}\] \[\geqslant C\Big{[}(1-t)-t\|f_{2}\|_{\mathrm{Lip}}\left|\sin\left( \tfrac{\varphi-\varphi^{\prime}}{2}\right)\right|\Big{]}^{2}\] \[\geqslant C\Big{[}\tfrac{(1-t)^{2}}{2}-t^{2}\|f_{2}\|_{\mathrm{ Lip}}^{2}\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\Big{]}.\]
To obtain the last estimate, we have used the following classical inequality
\[\forall(a,b)\in(\mathbb{R}_{+})^{2},\quad(a-b)^{2}\geqslant\tfrac{a^{2}}{2}- b^{2}.\]
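For the reader's convenience, this estimate follows from Young's inequality \(2ab\leqslant\tfrac{a^{2}}{2}+2b^{2}\), namely
\[(a-b)^{2}=a^{2}-2ab+b^{2}\geqslant a^{2}-\tfrac{a^{2}}{2}-2b^{2}+b^{2}=\tfrac{a^{2}}{2}-b^{2}.\]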
Consequently,
\[D\big{(}\theta_{0}+f_{2}(\varphi),t(\theta_{0}+f_{2}(\varphi^{ \prime})),\varphi,\varphi^{\prime}\big{)} \geqslant C\Big{[}\tfrac{(1-t)^{2}}{2}-t^{2}\|f_{2}\|_{\mathrm{ Lip}}^{2}\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)+t\sin^{2} \left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\Big{]}\] \[\geqslant C\Big{[}\tfrac{(1-t)^{2}}{2}+\big{(}1-tr^{2}\big{)}t \sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\Big{]}.\]
Therefore, for \(r\) small enough, we get for any \(t\in[0,1]\),
\[D\big{(}\theta_{0}+f_{2}(\varphi),t(\theta_{0}+f_{2}(\varphi^{\prime})), \varphi,\varphi^{\prime}\big{)}\geqslant C\Big{[}(1-t)^{2}+t\sin^{2}\left( \tfrac{\varphi-\varphi^{\prime}}{2}\right)\Big{]}.\]
Putting together the foregoing calculations yields
\[\big{|}K(\varphi,\varphi^{\prime})\big{|}\leqslant C\|f_{1}-f_{2}\|_{L^{ \infty}(\mathbb{T})}\int_{0}^{1}\frac{|\sin(\varphi-\varphi^{\prime})|\sin^{2} \left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)t^{2}}{\Big{[}(1-t)^{2}+t \sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\Big{]}^{2}}dt.\]
Now, observe that
\[\frac{t^{2}\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)}{(1-t)^{2 }+t\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)}\leqslant Ct\]
and then
\[\big{|}K(\varphi,\varphi^{\prime})\big{|}\leqslant C\|f_{1}-f_{2}\|_{L^{ \infty}(\mathbb{T})}\int_{0}^{1}\frac{t|\sin(\varphi-\varphi^{\prime})|}{(1-t) ^{2}+t\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)}dt.\]
Next, use that for any \(t\in[0,1]\),
\[\frac{t|\sin(\varphi-\varphi^{\prime})|}{(1-t)^{2}+t\sin^{2}\left(\tfrac{ \varphi-\varphi^{\prime}}{2}\right)} \leqslant\frac{t|\sin(\varphi-\varphi^{\prime})|}{\Big{[}(1-t)^{2 }+t\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\Big{]}^{\frac{1}{ 2}}\sqrt{t}\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\right|}\] \[\leqslant\Big{[}(1-t)^{2}+t\sin^{2}\left(\tfrac{\varphi-\varphi^{ \prime}}{2}\right)\Big{]}^{-\frac{1}{2}}\] \[\leqslant C\Big{[}|1-t|+\sqrt{t}\left|\sin\left(\tfrac{\varphi- \varphi^{\prime}}{2}\right)\right|\Big{]}^{-1}.\]
The last inequality follows from the following classical estimate
\[\forall(a,b)\in(\mathbb{R}_{+})^{2},\quad\sqrt{a^{2}+b^{2}}\geqslant\tfrac{1}{ \sqrt{2}}(a+b).\]
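Indeed, it is a direct consequence of
\[(a+b)^{2}\leqslant 2(a^{2}+b^{2}),\qquad\text{i.e.}\qquad 2ab\leqslant a^{2}+b^{2}.\]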
This implies in turn
\[|K(\varphi,\varphi^{\prime})| \leqslant C\|f_{1}-f_{2}\|_{L^{\infty}(\mathbb{T})}\int_{0}^{1} \Big{[}|1-t|+\sqrt{t}\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right) \right|\Big{]}^{-1}dt\] \[\leqslant C\|f_{1}-f_{2}\|_{L^{\infty}(\mathbb{T})}\int_{0}^{1} \left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\right|^{-(1-\alpha)} t^{-\frac{1-\alpha}{2}}|1-t|^{-\alpha}dt\] \[\leqslant C\|f_{1}-f_{2}\|_{L^{\infty}(\mathbb{T})}\left|\sin \left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\right|^{-(1-\alpha)},\]
where we have used the classical interpolation estimate
\[\forall\alpha\in(0,1),\quad\forall(a,b)\in(\mathbb{R}_{+})^{2},\quad(a+b)^{- 1}\leqslant a^{-\alpha}b^{-(1-\alpha)}.\]
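For completeness, this interpolation inequality follows from the weighted arithmetic–geometric mean inequality,
\[a+b\geqslant\alpha a+(1-\alpha)b\geqslant a^{\alpha}b^{1-\alpha},\]
after taking the inverse of both sides.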
The above computations allow us to conclude that the hypothesis (A.1) is satisfied for the kernel \(K\). Similarly, we can check that (A.2) is satisfied, and hence Proposition A.1 can be applied, yielding the continuity in \(f\). Let us continue with the second condition in (2.9). We can write
\[\partial_{\varphi}\Big{(}d_{f}\Psi_{p}\{f\}[h]\big{(}\theta_{0}+f(\varphi),\varphi\big{)}\Big{)}=\frac{\omega_{N}-\omega_{S}}{4\pi}\int_{0}^{2\pi}\frac{\partial_{\varphi}\big{[}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\big{]}}{D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}}\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}h(\varphi^{\prime})d\varphi^{\prime}.\]
By adding and subtracting \(\partial_{\varphi^{\prime}}\big{[}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\big{]}\) appropriately and integrating by parts, we find
\[\partial_{\varphi}\Big{(}d_{f}\Psi_{p}\{f\}[h]\big{(}\theta_{0}+f(\varphi),\varphi\big{)}\Big{)}\] \[=\frac{\omega_{N}-\omega_{S}}{4\pi}\int_{0}^{2\pi}\frac{\partial_{\varphi}\big{[}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\big{]}+\partial_{\varphi^{\prime}}\big{[}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\big{]}}{D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}}\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}h(\varphi^{\prime})d\varphi^{\prime}\] \[\quad+\frac{\omega_{N}-\omega_{S}}{4\pi}\int_{0}^{2\pi}\log\Big{(}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\Big{)}\partial_{\varphi^{\prime}}\Big{(}\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}h(\varphi^{\prime})\Big{)}d\varphi^{\prime}.\]
Using the definition of \(D\) in (1.8), we infer
\[\partial_{\varphi}D(\theta,\theta^{\prime},\varphi,\varphi^{\prime})=\sin( \theta)\sin(\theta^{\prime})\sin(\varphi-\varphi^{\prime})=-\partial_{\varphi^ {\prime}}D(\theta,\theta^{\prime},\varphi,\varphi^{\prime}).\]
Combined with (2.10)-(2.11), we get
\[\partial_{\varphi}\big{[}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\big{]}+\partial_{\varphi^{\prime}}\big{[}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\big{]}\] \[=f^{\prime}(\varphi)\partial_{\theta}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}+f^{\prime}(\varphi^{\prime})\partial_{\theta^{\prime}}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\] \[=\sin\big{(}f(\varphi)-f(\varphi^{\prime})\big{)}\big{(}f^{\prime}(\varphi)-f^{\prime}(\varphi^{\prime})\big{)}+2f^{\prime}(\varphi)\cos\big{(}\theta_{0}+f(\varphi)\big{)}\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\] \[\quad+2f^{\prime}(\varphi^{\prime})\cos\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}\sin\big{(}\theta_{0}+f(\varphi)\big{)}\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right).\]
As a consequence, we obtain
\[\partial_{\varphi}\Big{(}d_{f}\Psi_{p}\{f\}[h]\big{(}\theta_{0}+f(\varphi),\varphi\big{)}\Big{)}\] \[= \frac{\omega_{N}-\omega_{S}}{4\pi}\int_{0}^{2\pi}\frac{\sin\big{(}f(\varphi)-f(\varphi^{\prime})\big{)}\big{(}f^{\prime}(\varphi)-f^{\prime}(\varphi^{\prime})\big{)}}{D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}}\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}h(\varphi^{\prime})d\varphi^{\prime}\] \[+\frac{\omega_{N}-\omega_{S}}{2\pi}\int_{0}^{2\pi}\frac{\cos\big{(}\theta_{0}+f(\varphi)\big{)}f^{\prime}(\varphi)\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)}{D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}}\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}h(\varphi^{\prime})d\varphi^{\prime}\] \[+\frac{\omega_{N}-\omega_{S}}{2\pi}\int_{0}^{2\pi}\frac{\cos\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}f^{\prime}(\varphi^{\prime})\sin\big{(}\theta_{0}+f(\varphi)\big{)}\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)}{D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}}\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}h(\varphi^{\prime})d\varphi^{\prime}\] \[+\frac{\omega_{N}-\omega_{S}}{4\pi}\int_{0}^{2\pi}\log\Big{(}D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\Big{)}\partial_{\varphi^{\prime}}\Big{(}\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}h(\varphi^{\prime})\Big{)}d\varphi^{\prime}\] \[\triangleq \frac{\omega_{N}-\omega_{S}}{4\pi}\big{[}I_{1}\{f\}h(\varphi)+I_{2}\{f\}h(\varphi)+I_{3}\{f\}h(\varphi)+I_{4}\{f\}h(\varphi)\big{]}.\]
In the following, we show that \(f\mapsto I_{i}\{f\}\) is continuous by exhibiting a modulus of continuity. Let us just give the details for \(I_{1}\); the others follow similarly. For that, take \(f_{1},f_{2}\in B^{1+\alpha}_{r,\mathbf{m}}\) and estimate
\[I_{1}\{f_{1}\}h(\varphi)-I_{1}\{f_{2}\}h(\varphi)= \int_{0}^{2\pi}K_{1}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime})\big{(} f_{1}^{\prime}(\varphi)-f_{1}^{\prime}(\varphi^{\prime})\big{)}h(\varphi^{ \prime})d\varphi^{\prime}\] \[+\int_{0}^{2\pi}K_{2}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime}) \big{(}(f_{1}-f_{2})^{\prime}(\varphi)-(f_{1}-f_{2})^{\prime}(\varphi^{\prime} )\big{)}h(\varphi^{\prime})d\varphi^{\prime}\] \[+\int_{0}^{2\pi}K_{3}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime}) \big{(}f_{2}^{\prime}(\varphi)-f_{2}^{\prime}(\varphi^{\prime})\big{)}h( \varphi^{\prime})d\varphi^{\prime},\]
where
\[K_{1}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime}) \triangleq\frac{\sin\big{(}f_{1}(\varphi)-f_{1}(\varphi^{\prime} )\big{)}-\sin\big{(}f_{2}(\varphi)-f_{2}(\varphi^{\prime})\big{)}}{D\big{(} \theta_{0}+f_{1}(\varphi),\theta_{0}+f_{1}(\varphi^{\prime}),\varphi,\varphi^{ \prime}\big{)}}\sin\big{(}\theta_{0}+f_{1}(\varphi^{\prime})\big{)},\] \[K_{2}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime}) \triangleq\frac{\sin\big{(}f_{2}(\varphi)-f_{2}(\varphi^{\prime} )\big{)}}{D\big{(}\theta_{0}+f_{1}(\varphi),\theta_{0}+f_{1}(\varphi^{\prime} ),\varphi,\varphi^{\prime}\big{)}}\sin\big{(}\theta_{0}+f_{1}(\varphi^{\prime} )\big{)},\] \[K_{3}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime}) \triangleq\frac{\sin\big{(}f_{2}(\varphi)-f_{2}(\varphi^{\prime} )\big{)}}{D\big{(}\theta_{0}+f_{1}(\varphi),\theta_{0}+f_{1}(\varphi^{\prime} ),\varphi,\varphi^{\prime}\big{)}}\Big{[}\sin\big{(}\theta_{0}+f_{1}(\varphi^ {\prime})\big{)}-\sin\big{(}\theta_{0}+f_{2}(\varphi^{\prime})\big{)}\Big{]}\] \[\quad+\frac{\sin\big{(}f_{2}(\varphi)-f_{2}(\varphi^{\prime}) \big{)}}{D\big{(}\theta_{0}+f_{1}(\varphi),\theta_{0}+f_{1}(\varphi^{\prime} ),\varphi,\varphi^{\prime}\big{)}D\big{(}\theta_{0}+f_{2}(\varphi),\theta_{0}+ f_{2}(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}}\] \[\quad\times\Big{[}D\big{(}\theta_{0}+f_{2}(\varphi),\theta_{0}+f_{ 2}(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}-D\big{(}\theta_{0}+f_{1} (\varphi),\theta_{0}+f_{1}(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)} \Big{]}\sin\big{(}\theta_{0}+f_{2}(\varphi^{\prime})\big{)}.\]
Since the kernel of the integral operator contains a non-differentiable term, we shall use Proposition A.2. For that, let us estimate each kernel \(K_{i}\). First, note that using (1.8) and (2.6), we get
\[D\big{(}\theta_{0}+f(\varphi),\theta_{0}+f(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)} \geqslant 2\sin\big{(}\theta_{0}+f(\varphi)\big{)}\sin\big{(}\theta_{0}+f(\varphi^{\prime})\big{)}\sin^{2}\Big{(}\frac{\varphi-\varphi^{\prime}}{2}\Big{)}\] \[\geqslant 2\delta_{0}^{2}\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right). \tag{2.12}\]
Using (2.12), it is easy to check that for \(K_{1}\) we get
\[|K_{1}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime})| \leqslant C\big{|}(f_{1}-f_{2})(\varphi)-(f_{1}-f_{2})(\varphi^{ \prime})\big{|}\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right) \right|^{-2}\] \[\leqslant C\|f_{1}-f_{2}\|_{C^{1+\alpha}(\mathbb{T})}\left|\sin \left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\right|^{-1}\]
and
\[|\partial_{\varphi}K_{1}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime})| \leqslant C\|f_{1}-f_{2}\|_{C^{1+\alpha}(\mathbb{T})}\left|\sin\left(\tfrac{ \varphi-\varphi^{\prime}}{2}\right)\right|^{-2}.\]
Similarly for \(K_{2}\) we obtain
\[|K_{2}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime})| \leqslant C\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2} \right)\right|^{-1},\] \[|\partial_{\varphi}K_{2}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime})| \leqslant C\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right) \right|^{-2}.\]
Finally, we shall work with the last kernel \(K_{3}\). Note that
\[D\big{(}\theta_{0}+f_{2}(\varphi),\theta_{0}+f_{2}(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}-D\big{(}\theta_{0}+f_{1}(\varphi),\theta_{0}+f_{1}(\varphi^{\prime}),\varphi,\varphi^{\prime}\big{)}\] \[=2\sin^{2}\left(\tfrac{f_{2}(\varphi)-f_{2}(\varphi^{\prime})}{2}\right)-2\sin^{2}\left(\tfrac{f_{1}(\varphi)-f_{1}(\varphi^{\prime})}{2}\right)\] \[\quad+2\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\Big{[}\sin\left(\theta_{0}+f_{2}(\varphi)\right)\sin\left(\theta_{0}+f_{2}(\varphi^{\prime})\right)-\sin\left(\theta_{0}+f_{1}(\varphi)\right)\sin\left(\theta_{0}+f_{1}(\varphi^{\prime})\right)\Big{]}\] \[\leqslant 2\left|\sin\left(\tfrac{f_{2}(\varphi)-f_{2}(\varphi^{\prime})}{2}\right)\right|\left|\sin\left(\tfrac{f_{2}(\varphi)-f_{2}(\varphi^{\prime})}{2}\right)-\sin\left(\tfrac{f_{1}(\varphi)-f_{1}(\varphi^{\prime})}{2}\right)\right|\] \[\quad+2\left|\sin\left(\tfrac{f_{1}(\varphi)-f_{1}(\varphi^{\prime})}{2}\right)\right|\left|\sin\left(\tfrac{f_{2}(\varphi)-f_{2}(\varphi^{\prime})}{2}\right)-\sin\left(\tfrac{f_{1}(\varphi)-f_{1}(\varphi^{\prime})}{2}\right)\right|\] \[\quad+2\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\left|\sin\left(\theta_{0}+f_{2}(\varphi)\right)\right|\left|\sin\left(\theta_{0}+f_{2}(\varphi^{\prime})\right)-\sin\left(\theta_{0}+f_{1}(\varphi^{\prime})\right)\right|\] \[\quad+2\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\left|\sin\left(\theta_{0}+f_{1}(\varphi^{\prime})\right)\right|\left|\sin\left(\theta_{0}+f_{2}(\varphi)\right)-\sin\left(\theta_{0}+f_{1}(\varphi)\right)\right|\] \[\leqslant C\|f_{2}-f_{1}\|_{C^{1+\alpha}(\mathbb{T})}\sin^{2}\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right).\]
To get the last estimate, we have used the 1-Lipschitz property of the function \(\sin\) together with (A.5) and
\[\forall k\in\{1,2\},\quad\left|\sin\Big{(}\tfrac{f_{k}(\varphi)-f_{k }(\varphi^{\prime})}{2}\Big{)}\right| \leqslant|f_{k}(\varphi)-f_{k}(\varphi^{\prime})|\] \[\leqslant\|f_{k}\|_{C^{1+\alpha}(\mathbb{T})}|\varphi-\varphi^{ \prime}|\] \[\leqslant Cr\left|\sin\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2} \Big{)}\right|.\]
Hence
\[|K_{3}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime})|\leqslant C\|f_{2}-f_{1}\|_{C^ {1+\alpha}(\mathbb{T})}\left|\sin\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2} \Big{)}\right|^{-1}.\]
Then, differentiating we find
\[|\partial_{\varphi}K_{3}\{f_{1},f_{2}\}(\varphi,\varphi^{\prime})|\leqslant C \|f_{2}-f_{1}\|_{C^{1+\alpha}(\mathbb{T})}\left|\sin\Big{(}\tfrac{\varphi- \varphi^{\prime}}{2}\Big{)}\right|^{-2}.\]
Hence Proposition A.2 implies
\[\left\|\big{(}I_{1}\{f_{1}\}-I_{1}\{f_{2}\}\big{)}[h]\right\|_{C^{\alpha}(\mathbb{T})}\leqslant C\|f_{1}-f_{2}\|_{C^{1+\alpha}(\mathbb{T})}\|h\|_{L^{\infty}(\mathbb{T})},\]
concluding that \(f\mapsto I_{1}\{f\}\) is continuous.
**(ii)** It follows from
\[\partial_{c}d_{f}\mathscr{F}(c,f)[h]=\partial_{\varphi}h. \tag{2.13}\]
**(iii)** We assume now that \(f=0\). Since \(\Psi_{p}\{0\}=0\), we have
\[d_{f}\mathscr{F}(c,0)[h](\varphi)=c\,\partial_{\varphi}h(\varphi)+\frac{1}{ \sin(\theta_{0})}\partial_{\varphi}\Big{(}\partial_{\theta}\Psi_{\text{\tiny FC }}(\theta_{0})h(\varphi)+\big{(}d_{f}\Psi_{p}\{0\}[h]\big{)}(\theta_{0},\varphi )\Big{)}.\]
From (2.2), we deduce
\[\frac{1}{\sin(\theta_{0})}\partial_{\varphi}\Big{(}\partial_{\theta}\Psi_{ \text{\tiny FC}}(\theta_{0})h\Big{)}=\big{(}\tfrac{\omega_{N}-\omega_{S}}{2} -\widetilde{\gamma}\big{)}\,\partial_{\varphi}h=-\big{(}\tfrac{\omega_{N}- \omega_{S}}{2}-\widetilde{\gamma}\big{)}\sum_{n=1}^{\infty}\mathbf{m}nh_{n} \sin(\mathbf{m}n\varphi). \tag{2.14}\]
After straightforward simplifications and using Lemma A.1, we get
\[\big{(}d_{f}\Psi_{p}\{0\}[h]\big{)}(\theta_{0},\varphi) =\frac{\omega_{N}-\omega_{S}}{4\pi}\sin(\theta_{0})\int_{0}^{2\pi }h(\varphi^{\prime})\log\Big{(}1-\cos^{2}(\theta_{0})-\sin^{2}(\theta_{0})\cos (\varphi-\varphi^{\prime})\Big{)}d\varphi^{\prime}\] \[=\frac{\omega_{N}-\omega_{S}}{4\pi}\sin(\theta_{0})\sum_{n=1}^{ \infty}h_{n}\int_{0}^{2\pi}\log\Big{(}1-\cos^{2}(\theta_{0})-\sin^{2}(\theta _{0})\cos(\varphi^{\prime})\Big{)}\cos\big{(}\mathbf{m}n(\varphi-\varphi^{ \prime})\big{)}d\varphi^{\prime}\] \[=\frac{\omega_{N}-\omega_{S}}{2}\sin(\theta_{0})\sum_{n=1}^{ \infty}h_{n}I_{\mathbf{m}n}(\theta_{0},\theta_{0})\cos(\mathbf{m}n\varphi)\] \[=-\frac{\omega_{N}-\omega_{S}}{2}\sin(\theta_{0})\sum_{n=1}^{ \infty}\frac{h_{n}}{\mathbf{m}n}\cos(\mathbf{m}n\varphi).\]
Therefore,
\[\frac{1}{\sin(\theta_{0})}\partial_{\varphi}\Big{(}\big{(}d_{f}\Psi_{p}\{0\}[ h]\big{)}(\theta_{0},\varphi)\Big{)}=\frac{\omega_{N}-\omega_{S}}{2}\sum_{n=1}^{ \infty}h_{n}\sin(\mathbf{m}n\varphi). \tag{2.15}\]
Introducing the classical \(2\pi\)-periodic Hilbert transform \(\mathcal{H}\) defined by
\[\mathcal{H}h(\varphi)\triangleq\frac{1}{2\pi}\int_{0}^{2\pi}h(\varphi^{\prime })\cot\Big{(}\tfrac{\varphi-\varphi^{\prime}}{2}\Big{)}\,d\varphi^{\prime},\]
which acts on the cosine basis as
\[\forall n\in\mathbb{N}^{*},\quad\mathcal{H}\cos(n\varphi)=\sin(n\varphi),\]
we have
\[\frac{1}{\sin(\theta_{0})}\partial_{\varphi}\Big{(}\big{(}d_{f}\Psi_{p}\{0\}[ h]\big{)}(\theta_{0},\varphi)\Big{)}=\frac{\omega_{N}-\omega_{S}}{2}\, \mathcal{H}h(\varphi). \tag{2.16}\]
Combining (2.14), (2.15) and (2.16), we obtain the Fourier representation (2.5) or equivalently, the following structure for the linearized operator
\[d_{f}\mathscr{F}(c,0)=\Big{(}c+\frac{\omega_{N}-\omega_{S}}{2}-\widetilde{\gamma }\Big{)}\partial_{\varphi}+\frac{\omega_{N}-\omega_{S}}{2}\mathcal{H}. \tag{2.17}\]
Clearly, if \(c\neq\widetilde{\gamma}-\frac{\omega_{N}-\omega_{S}}{2}\), the operator \(\Big{(}c+\frac{\omega_{N}-\omega_{S}}{2}-\widetilde{\gamma}\Big{)}\partial_{\varphi}:X_{\mathbf{m}}^{1+\alpha}\to Y_{\mathbf{m}}^{\alpha}\) is an isomorphism. We shall prove the compactness of the Hilbert transform in the Hölder spaces. For that, we come back to the integral expression which can be rewritten as
\[\mathcal{H}h(\varphi)=\frac{1}{\pi}\int_{0}^{2\pi}K(\varphi,\varphi^{\prime})\partial_{\varphi^{\prime}}h(\varphi^{\prime})d\varphi^{\prime},\qquad K(\varphi,\varphi^{\prime})\triangleq\log\Big{(}\Big{|}\sin\big{(}\tfrac{\varphi-\varphi^{\prime}}{2}\big{)}\Big{|}\Big{)}.\]
For any \(\delta\in(0,1)\), we have
\[|K(\varphi,\varphi^{\prime})|\lesssim\Big{|}\sin\big{(}\tfrac{\varphi- \varphi^{\prime}}{2}\big{)}\Big{|}^{-\delta},\qquad|\partial_{\varphi}K( \varphi,\varphi^{\prime})|\lesssim\Big{|}\sin\big{(}\tfrac{\varphi-\varphi^ {\prime}}{2}\big{)}\Big{|}^{-(1+\delta)}.\]
Thus, we can apply Proposition A.1 (with \(\delta=1-\beta\)) and get
\[\forall\beta\in(\alpha,1),\quad\|\mathcal{H}h\|_{C^{\beta}(\mathbb{T})} \lesssim\|\partial_{\varphi}h\|_{L^{\infty}(\mathbb{T})}\lesssim\|h\|_{C^{1+ \alpha}(\mathbb{T})}.\]
Since, for \(\beta\in(\alpha,1)\), the injection \(C^{\beta}(\mathbb{T})\hookrightarrow C^{\alpha}(\mathbb{T})\) is compact, we deduce that the operator \(\mathcal{H}:C^{1+\alpha}(\mathbb{T})\to C^{\alpha}(\mathbb{T})\) is compact. Thus, (2.17) together with [18, Cor. 5.9] implies, for \(c\neq\widetilde{\gamma}-\frac{\omega_{N}-\omega_{S}}{2}\), the desired Fredholmness property. This proves Proposition 2.1.
According to (2.5), the candidates for bifurcation points define the following singular set
\[\mathcal{S}_{c}\triangleq\Big{\{}c_{\mathbf{m}}(\widetilde{\gamma})\triangleq \widetilde{\gamma}-(\omega_{N}-\omega_{S})\tfrac{\mathbf{m}-1}{2\mathbf{m}}, \quad\mathbf{m}\in\mathbb{N}^{*}\Big{\}}. \tag{2.18}\]
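For instance, the first candidate speeds are
\[c_{1}(\widetilde{\gamma})=\widetilde{\gamma},\qquad c_{2}(\widetilde{\gamma})=\widetilde{\gamma}-\frac{\omega_{N}-\omega_{S}}{4},\qquad c_{3}(\widetilde{\gamma})=\widetilde{\gamma}-\frac{\omega_{N}-\omega_{S}}{3},\]
and, provided \(\omega_{N}\neq\omega_{S}\), the sequence \(\big{(}c_{\mathbf{m}}(\widetilde{\gamma})\big{)}_{\mathbf{m}\in\mathbb{N}^{*}}\) is strictly monotone and converges to \(\widetilde{\gamma}-\frac{\omega_{N}-\omega_{S}}{2}\), so these candidate speeds are pairwise distinct.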
Finally, in the following proposition, we gather all the remaining conditions required to apply the Crandall-Rabinowitz Theorem. Then, Theorem 1.1 follows immediately from this proposition.
**Proposition 2.2**.: _Let \(\alpha\in(0,1)\), \(\widetilde{\gamma}\in\mathbb{R}\) and \(\mathbf{m}\in\mathbb{N}^{*}\)._
1. _The linear operator_ \(d_{f}\mathscr{F}\big{(}c_{\mathbf{m}}(\widetilde{\gamma}),0\big{)}:X_{ \mathbf{m}}^{1+\alpha}\to Y_{\mathbf{m}}^{\alpha}\) _is of Fredholm type with index zero._
2. _The kernel of_ \(d_{f}\mathscr{F}\big{(}c_{\mathbf{m}}(\widetilde{\gamma}),0\big{)}\) _is one dimensional. More precisely,_ \[\ker\Big{(}d_{f}\mathscr{F}\big{(}c_{\mathbf{m}}(\widetilde{\gamma}),0\big{)} \Big{)}=\mathsf{span}\big{(}\varphi\mapsto\cos(\mathbf{m}\varphi)\big{)}.\] (2.19)
3. _The transversality condition is satisfied, namely_ \[\partial_{c}d_{f}\mathscr{F}\big{(}c_{\mathbf{m}}(\widetilde{\gamma}),0\big{)}[\varphi\mapsto\cos(\mathbf{m}\varphi)]\not\in\mathrm{Im}\Big{(}d_{f}\mathscr{F}\big{(}c_{\mathbf{m}}(\widetilde{\gamma}),0\big{)}\Big{)}.\] (2.20)
Proof.: **(i)** By construction (2.18), our bifurcation points satisfy
\[c_{\mathbf{m}}(\widetilde{\gamma})=\widetilde{\gamma}-(\omega_{N}-\omega_{S}) \frac{\mathbf{m}-1}{2\mathbf{m}}\neq\widetilde{\gamma}-\frac{\omega_{N}- \omega_{S}}{2}.\]
Hence, Proposition 2.1-(iii) gives the desired Fredholmness property.
**(ii)** Since the sequence \(\big{(}c_{\mathbf{m}n}(\widetilde{\gamma})\big{)}_{n\in\mathbb{N}^{*}}\) is strictly monotone, (2.5) and (2.18) give that the kernel is one-dimensional and generated by \(\varphi\mapsto\cos(\mathbf{m}\varphi)\).
**(iii)** To prove the transversality condition, we first need to describe the range. For this aim, we introduce on \(Y_{\mathbf{m}}^{\alpha}\) the scalar product
\[\left(\sum_{n=1}^{\infty}a_{n}\sin(\mathbf{m}n\varphi)\Big{|}\sum_{n=1}^{ \infty}b_{n}\sin(\mathbf{m}n\varphi)\right)\triangleq\sum_{n=1}^{\infty}a_{ n}b_{n}.\]
Now we claim that
\[\mathrm{Im}\Big{(}d_{f}\mathscr{F}\big{(}c_{\mathbf{m}}(\widetilde{\gamma}),0 \big{)}\Big{)}=\mathsf{span}^{\perp_{(\cdot|\cdot)}}\big{(}\varphi\mapsto\sin (\mathbf{m}\varphi)\big{)}. \tag{2.21}\]
Indeed, the first inclusion is obvious from (2.5)-(2.18). The converse inclusion is obtained because the range is closed and of codimension \(1\), which results from the Fredholmness property with zero index and the one dimensional kernel condition. Now, it remains to check the transversality condition. In view of (2.13), we infer
\[\partial_{c}d_{f}\mathscr{F}\big{(}c_{\mathbf{m}}(\widetilde{\gamma}),0\big{)} [\cos(\mathbf{m}\varphi)]=-\mathbf{m}\sin(\mathbf{m}\varphi)\in\mathsf{span} \big{(}\varphi\mapsto\sin(\mathbf{m}\varphi)\big{)}. \tag{2.22}\]
Combining (2.21) and (2.22), the condition (2.20) follows. This completes the proof of Proposition 2.2.
## The two-interfaces case: vorticity bands
This section is devoted to the proof of Theorem 1.2 dealing with the case of two interfaces (\(M=3\)). As before, we shall reformulate the problem with a suitable functional and implement bifurcation techniques. The computations are more involved due to the interactions between the boundaries, which lead to a vectorial analysis.
### Equations of interest
We start again with some remarks on the flat solution.
**Lemma 3.1**.: _Let \(0<\theta_{1}<\theta_{2}<\pi.\) For any \(\omega_{N},\omega_{C},\omega_{S}\in\mathbb{R}\) such that_
\[\omega_{N}+\omega_{S}=(\omega_{N}-\omega_{C})\cos(\theta_{1})+(\omega_{C}- \omega_{S})\cos(\theta_{2}), \tag{3.1}\]
_the following function describing the flat cap (FC2)_
\[\overline{\Omega}_{\mathrm{rc2}}(\theta)\triangleq\omega_{N}\mathbf{1}_{0< \theta<\theta_{1}}+\omega_{C}\mathbf{1}_{\theta_{1}\leqslant\theta<\theta_{2 }}+\omega_{S}\mathbf{1}_{\theta_{2}\leqslant\theta<\pi}\]
_is a stationary solution to Euler equations. In addition,_
\[\partial_{\theta}\Psi_{\mathrm{rc2}}(\theta_{1})=\omega_{N}\tan\big{(}\tfrac{\theta_{1}}{2}\big{)}-\widetilde{\gamma}\sin(\theta_{1}),\qquad\partial_{\theta}\Psi_{\mathrm{rc2}}(\theta_{2})=-\omega_{S}\cot\big{(}\tfrac{\theta_{2}}{2}\big{)}-\widetilde{\gamma}\sin(\theta_{2}). \tag{3.2}\]
Proof.: \(\blacktriangleright\) Observe that
\[\forall\alpha\in\mathbb{R},\quad\forall\xi\in\mathbb{S}^{2},\quad\overline{ \Omega}_{\mathrm{rc2}}\big{(}\mathcal{R}(\alpha)\xi\big{)}=\overline{\Omega}_ {\mathrm{rc2}}(\xi).\]
Hence, Lemma 1.2 applies and proves that this is a stationary solution.
\(\blacktriangleright\) The constraint (3.1) follows again from (1.5) and (1.2), namely
\[0=\int_{\mathbb{S}^{2}}\Omega_{\mathrm{rc2}}(t,\xi)d\sigma(\xi) =\int_{0}^{2\pi}\int_{0}^{\pi}\Omega_{\mathrm{rc2}}(t,\theta,\varphi )\sin(\theta)d\theta d\varphi\] \[=2\pi\left(\omega_{N}\int_{0}^{\theta_{1}}\sin(\theta)d\theta+ \omega_{C}\int_{\theta_{1}}^{\theta_{2}}\sin(\theta)d\theta+\omega_{S}\int_{ \theta_{2}}^{\pi}\sin(\theta)d\theta\right)\] \[=2\pi\Big{[}\omega_{N}\big{(}1-\cos(\theta_{1})\big{)}+\omega_{C }\big{(}\cos(\theta_{1})-\cos(\theta_{2})\big{)}+\omega_{S}\big{(}1+\cos( \theta_{2})\big{)}\Big{]}.\]
\(\blacktriangleright\) The stream function solves the elliptic equation
\[\Delta\Psi_{\mathrm{rc2}}=\Omega_{\mathrm{rc2}},\qquad\text{i.e.}\qquad \partial_{\theta}\big{[}\sin(\theta)\partial_{\theta}\Psi_{\mathrm{rc2}}(\theta)\big{]}=\sin(\theta)\Big{(}\omega_{N}\mathbf{1}_{0<\theta<\theta_{1}}+\omega_{C}\mathbf{1}_{\theta_{1}\leqslant\theta<\theta_{2}}+\omega_{S}\mathbf{1}_{\theta_{2}\leqslant\theta<\pi}\Big{)}+\widetilde{\gamma}\sin(2\theta).\]
Integrating the previous relation and choosing the constant of integration as in Lemma 2.1 gives
\[\partial_{\theta}\Psi_{\mathrm{rc2}}(\theta)=\begin{cases}\frac{\omega_{N}}{ \sin(\theta)}\big{(}1-\cos(\theta)\big{)}-\widetilde{\gamma}\sin(\theta),& \text{if }\theta\in(0,\theta_{1}),\\ \frac{\omega_{N}}{\sin(\theta)}\big{(}1-\cos(\theta_{1})\big{)}+\frac{\omega_ {C}}{\sin(\theta)}\big{(}\cos(\theta_{1})-\cos(\theta)\big{)}-\widetilde{ \gamma}\sin(\theta),&\text{if }\theta\in[\theta_{1},\theta_{2}),\\ \frac{\omega_{N}}{\sin(\theta)}\big{(}1-\cos(\theta_{1})\big{)}+\frac{\omega_ {C}}{\sin(\theta)}\big{(}\cos(\theta_{1})-\cos(\theta_{2})\big{)}+\frac{ \omega_{S}}{\sin(\theta)}\big{(}\cos(\theta_{2})-\cos(\theta)\big{)}- \widetilde{\gamma}\sin(\theta),&\text{if }\theta\in[\theta_{2},\pi).\end{cases}\]
Finally, using (3.1), we can write
\[\partial_{\theta}\Psi_{\mathrm{rc2}}(\theta)=\begin{cases}\frac{\omega_{N}}{ \sin(\theta)}\big{(}1-\cos(\theta)\big{)}-\widetilde{\gamma}\sin(\theta),& \text{if }\theta\in(0,\theta_{1}),\\ \frac{\omega_{C}}{\sin(\theta)}\big{(}\cos(\theta_{2})-\cos(\theta)\big{)}- \frac{\omega_{S}}{\sin(\theta)}\big{(}1+\cos(\theta_{2})\big{)}-\widetilde{ \gamma}\sin(\theta),&\text{if }\theta\in[\theta_{1},\theta_{2}),\\ -\frac{\omega_{S}}{\sin(\theta)}\big{(}1+\cos(\theta)\big{)}-\widetilde{ \gamma}\sin(\theta),&\text{if }\theta\in[\theta_{2},\pi).\end{cases}\]
At \(\theta=\theta_{1}\), we find
\[\partial_{\theta}\Psi_{\mathrm{rc2}}(\theta_{1}) =\frac{\omega_{N}}{\sin(\theta_{1})}\big{(}1-\cos(\theta_{1})\big{)}-\widetilde{\gamma}\sin(\theta_{1})\] \[=\omega_{N}\tan\big{(}\tfrac{\theta_{1}}{2}\big{)}-\widetilde{\gamma}\sin(\theta_{1}).\]
At \(\theta=\theta_{2}\), we find
\[\partial_{\theta}\Psi_{\mathrm{rc2}}(\theta_{2}) =-\frac{\omega_{S}}{\sin(\theta_{2})}\big{(}1+\cos(\theta_{2}) \big{)}-\widetilde{\gamma}\sin(\theta_{2})\] \[=-\omega_{S}\cot\big{(}\tfrac{\theta_{2}}{2}\big{)}-\widetilde{ \gamma}\sin(\theta_{2}).\]
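In the last two computations we used the elementary half-angle identities
\[\frac{1-\cos(\theta)}{\sin(\theta)}=\tan\big{(}\tfrac{\theta}{2}\big{)},\qquad\frac{1+\cos(\theta)}{\sin(\theta)}=\cot\big{(}\tfrac{\theta}{2}\big{)},\qquad\theta\in(0,\pi).\]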
From now on, we fix
\[0<\theta_{1}<\theta_{2}<\pi, \tag{3.3}\]
and consider a vortex cap solution close to \(\overline{\Omega}_{\mathrm{rc2}}\) in the form
\[\overline{\Omega}(t,\theta,\varphi)=\omega_{N}\mathbf{1}_{0<\theta<\theta_{1}+f _{1}(t,\varphi)}+\omega_{C}\mathbf{1}_{\theta_{1}+f_{1}(t,\varphi)\leqslant \theta<\theta_{2}+f_{2}(t,\varphi)}+\omega_{S}\mathbf{1}_{\theta_{2}+f_{2}(t, \varphi)\leqslant\theta<\pi},\]
with \(\omega_{N},\omega_{C},\omega_{S}\in\mathbb{R}\) satisfying (3.1) and
\[\forall k\in\{1,2\},\quad|f_{k}(t,\varphi)|\ll 1.\]
For \(k\in\{1,2\}\), the interface oscillating around \(\theta=\theta_{k}\) can be parametrized by
\[z_{k}(t,\varphi)\triangleq\begin{pmatrix}\sin\left(\theta_{k}+f_{k}(t, \varphi)\right)\cos(\varphi)\\ \sin\left(\theta_{k}+f_{k}(t,\varphi)\right)\sin(\varphi)\\ \cos\left(\theta_{k}+f_{k}(t,\varphi)\right)\end{pmatrix}.\]
In view of (1.21), the parametrizations \(z_{1}\) and \(z_{2}\) must satisfy the following equations
\[\forall k\in\{1,2\},\quad\partial_{t}z_{k}(t,\varphi)\cdot\big{(}J\partial_{ \varphi}z_{k}(t,\varphi)\big{)}=\partial_{\varphi}\Big{(}\Psi\big{(}t,z_{k}( t,\varphi)\big{)}\Big{)}.\]
Proceeding as in Section 2.1, we obtain
\[\partial_{t}z_{k}(t,\varphi)\cdot\big{(}J\partial_{\varphi}z_{k}(t,\varphi) \big{)}=\sin\big{(}\theta_{k}+f_{k}(t,\varphi)\big{)}\partial_{t}f_{k}(t, \varphi).\]
Consequently, the unknowns \(f_{1}\) and \(f_{2}\) have to solve the following (coupled) system
\[\forall k\in\{1,2\},\quad\partial_{t}f_{k}(t,\varphi)=\frac{\partial_{\varphi}\Big{(}\Psi\big{(}t,z_{k}(t,\varphi)\big{)}\Big{)}}{\sin\big{(}\theta_{k}+f_{k}(t,\varphi)\big{)}}.\]
Now, the stream function reads
\[\Psi\big{(}t,z_{k}(t,\varphi)\big{)} =\frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{1}+f_{1}(t,\varphi^{\prime})}\log\Big{(}D\big{(}\theta_{k}+f_{k}(t,\varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}\Big{)}\sin(\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[\quad+\frac{\omega_{C}}{4\pi}\int_{0}^{2\pi}\int_{\theta_{1}+f_{1}(t,\varphi^{\prime})}^{\theta_{2}+f_{2}(t,\varphi^{\prime})}\log\Big{(}D\big{(}\theta_{k}+f_{k}(t,\varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}\Big{)}\sin(\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[\quad+\frac{\omega_{S}}{4\pi}\int_{0}^{2\pi}\int_{\theta_{2}+f_{2}(t,\varphi^{\prime})}^{\pi}\log\Big{(}D\big{(}\theta_{k}+f_{k}(t,\varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}\Big{)}\sin(\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[\quad+\frac{\widetilde{\gamma}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\log\Big{(}D\big{(}\theta_{k}+f_{k}(t,\varphi),\theta^{\prime},\varphi,\varphi^{\prime}\big{)}\Big{)}\sin(2\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}.\]
Figure 4: Representation of two interfaces (in red and blue) vortex cap solutions with 6-fold symmetry.
Remark that the unperturbed stream function can be written as follows
\[\Psi_{\text{rc2}}(\theta) =\frac{\omega_{N}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\theta_{1}}\log\left(D(\theta,\theta^{\prime},0,\varphi^{\prime})\right)\sin(\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[\quad+\frac{\omega_{C}}{4\pi}\int_{0}^{2\pi}\int_{\theta_{1}}^{\theta_{2}}\log\left(D(\theta,\theta^{\prime},0,\varphi^{\prime})\right)\sin(\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[\quad+\frac{\omega_{S}}{4\pi}\int_{0}^{2\pi}\int_{\theta_{2}}^{\pi}\log\left(D(\theta,\theta^{\prime},0,\varphi^{\prime})\right)\sin(\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}\] \[\quad+\frac{\widetilde{\gamma}}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\log\left(D(\theta,\theta^{\prime},0,\varphi^{\prime})\right)\sin(2\theta^{\prime})d\theta^{\prime}d\varphi^{\prime}.\]
Appealing to Chasles' relation, we can write
\[\Psi\big{(}t,z_{k}(t,\varphi)\big{)} =\Psi_{\text{rc2}}\big{(}\theta_{k}+f_{k}(t,\varphi)\big{)}+\Psi_ {p,2}\{f_{1},f_{2}\}\big{(}\theta_{k}+f_{k}(t,\varphi),\varphi\big{)},\] \[\Psi_{p,2}\{f_{1},f_{2}\}(\theta,\varphi) \triangleq\frac{\omega_{N}-\omega_{C}}{4\pi}\int_{0}^{2\pi}\int_{ \theta_{1}}^{\theta_{1}+f_{1}(t,\varphi^{\prime})}\log\left(D(\theta,\theta^{ \prime},\varphi,\varphi^{\prime})\right)\sin(\theta^{\prime})d\theta^{\prime}d \varphi^{\prime}\] \[\quad+\frac{\omega_{C}-\omega_{S}}{4\pi}\int_{0}^{2\pi}\int_{ \theta_{2}}^{\theta_{2}+f_{2}(t,\varphi^{\prime})}\log\left(D(\theta,\theta^{ \prime},\varphi,\varphi^{\prime})\right)\sin(\theta^{\prime})d\theta^{\prime}d \varphi^{\prime}.\]
Therefore, the vortex cap equation (1.21) becomes
\[\forall k\in\{1,2\},\quad\partial_{t}f_{k}(t,\varphi)=\frac{\partial_{\varphi }\Big{(}\Psi_{\text{rc2}}\big{(}\theta_{k}+f_{k}(t,\varphi)\big{)}+\Psi_{p,2} \{f_{1},f_{2}\}\big{(}\theta_{k}+f_{k}(t,\varphi),\varphi\big{)}\Big{)}}{\sin \big{(}\theta_{k}+f_{k}(t,\varphi)\big{)}}. \tag{3.4}\]
We look for traveling solutions at speed \(c\in\mathbb{R}\)
\[\forall k\in\{1,2\},\qquad f_{k}(t,\varphi)=f_{k}(\varphi-ct).\]
Thus, we shall solve
\[\mathscr{G}(c,f_{1},f_{2})=0,\qquad\mathscr{G}\triangleq(\mathscr{G}_{1}, \mathscr{G}_{2}),\]
where
\[\mathscr{G}_{k}(c,f_{1},f_{2})(\varphi)\triangleq c\,\partial_{\varphi}f_{k}( \varphi)+\frac{\partial_{\varphi}\Big{(}\Psi_{\text{rc2}}\big{(}\theta_{k}+f_{k }(\varphi)\big{)}+\Psi_{p,2}\{f_{1},f_{2}\}\big{(}\theta_{k}+f_{k}(\varphi), \varphi\big{)}\Big{)}}{\sin\big{(}\theta_{k}+f_{k}(\varphi)\big{)}}.\]
Observe that
\[\forall c\in\mathbb{R},\quad\mathscr{G}(c,0,0)=0.\]
This leads again to implement bifurcation theory.
### Spectral properties and proof of the main result
We check here the hypotheses of the Crandall-Rabinowitz Theorem.
**Proposition 3.1**.: _Let \(\alpha\in(0,1)\) and \(\mathbf{m}\in\mathbb{N}^{*}\). There exists \(r>0\) such that_
1. _The function_ \(\mathscr{G}:\mathbb{R}\times B_{r,\mathbf{m}}^{1+\alpha}\times B_{r,\mathbf{m} }^{1+\alpha}\to Y_{\mathbf{m}}^{\alpha}\times Y_{\mathbf{m}}^{\alpha}\) _is well-defined and of class_ \(C^{1}\)_._
2. _The partial derivative_ \(\partial_{c}d_{(f_{1},f_{2})}\mathscr{G}:\mathbb{R}\times B_{r,\mathbf{m}}^{1 +\alpha}\times B_{r,\mathbf{m}}^{1+\alpha}\to\mathcal{L}(X_{\mathbf{m}}^{1+ \alpha}\times X_{\mathbf{m}}^{1+\alpha},Y_{\mathbf{m}}^{\alpha}\times Y_{ \mathbf{m}}^{\alpha})\) _exists and is continuous._
3. _At the equilibrium_ \((f_{1},f_{2})=(0,0)\)_, the linearized operator admits the following Fourier representation_ \[d_{(f_{1},f_{2})}\mathscr{G}(c,0,0)\left[\sum_{n=1}^{\infty}h_{n}^{(1)}\cos(\mathbf{m}n\varphi),\sum_{n=1}^{\infty}h_{n}^{(2)}\cos(\mathbf{m}n\varphi)\right]\] \[\quad=\sum_{n=1}^{\infty}\mathbf{m}nM_{\mathbf{m}n}(c,\theta_{1},\theta_{2})\begin{pmatrix}h_{n}^{(1)}\\ h_{n}^{(2)}\end{pmatrix}\sin(\mathbf{m}n\varphi),\] (3.5) _with_ \[M_{n}(c,\theta_{1},\theta_{2})\triangleq\begin{pmatrix}-c+\frac{\omega_{N}-\omega_{C}}{2n}-\frac{\omega_{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2}\right)}+\widetilde{\gamma}&\frac{\omega_{C}-\omega_{S}}{2n}\frac{\sin(\theta_{2})}{\sin(\theta_{1})}\tan^{n}\left(\frac{\theta_{1}}{2}\right)\cot^{n}\left(\frac{\theta_{2}}{2}\right)\\ \frac{\omega_{N}-\omega_{C}}{2n}\frac{\sin(\theta_{1})}{\sin(\theta_{2})}\tan^{n}\left(\frac{\theta_{1}}{2}\right)\cot^{n}\left(\frac{\theta_{2}}{2}\right)&-c+\frac{\omega_{C}-\omega_{S}}{2n}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+\widetilde{\gamma}\end{pmatrix}.\] (3.6) _In addition, if_ \(c\not\in\left\{\widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2}\right)},\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}\right\},\) _then the operator_ \(d_{(f_{1},f_{2})}\mathscr{G}(c,0,0):X_{\mathbf{m}}^{1+\alpha}\times X_{\mathbf{m}}^{1+\alpha}\to Y_{\mathbf{m}}^{\alpha}\times Y_{\mathbf{m}}^{\alpha}\) _is of Fredholm type with index zero._
Proof.: **(i)** The proof is very close to Proposition 2.1-(i). Indeed, the functional involves terms corresponding to the self-interaction of each boundary, which correspond to the one-interface analysis. It also involves new terms corresponding to the interaction between the two boundaries, which are non-singular (smooth kernels). Therefore, we omit the details and just give the expression of the linearized operator. For \(k\in\{1,2\}\),
\[d_{f_{k}}\mathscr{G}_{k}(c,f_{1},f_{2})[h_{k}](\varphi)\] \[=c\,\partial_{\varphi}h_{k}(\varphi)-h_{k}(\varphi)\frac{\cos\big{(}\theta_{k}+f_{k}(\varphi)\big{)}}{\sin^{2}\big{(}\theta_{k}+f_{k}(\varphi)\big{)}}\partial_{\varphi}\Big{(}\Psi_{\text{rc2}}\big{(}\theta_{k}+f_{k}(\varphi)\big{)}+\Psi_{p,2}\{f_{1},f_{2}\}\big{(}\theta_{k}+f_{k}(\varphi),\varphi\big{)}\Big{)}\] \[\quad+\frac{1}{\sin\big{(}\theta_{k}+f_{k}(\varphi)\big{)}}\partial_{\varphi}\Big{(}\partial_{\theta}\Psi_{\text{rc2}}\big{(}\theta_{k}+f_{k}(\varphi)\big{)}h_{k}(\varphi)+\partial_{\theta}\Psi_{p,2}\{f_{1},f_{2}\}\big{(}\theta_{k}+f_{k}(\varphi),\varphi\big{)}h_{k}(\varphi)+\big{(}d_{f_{k}}\Psi_{p,2}\{f_{1},f_{2}\}[h_{k}]\big{)}\big{(}\theta_{k}+f_{k}(\varphi),\varphi\big{)}\Big{)}\]
and
\[d_{f_{3-k}}\mathscr{G}_{k}(c,f_{1},f_{2})[h_{3-k}](\varphi)=\frac{1}{\sin \big{(}\theta_{k}+f_{k}(\varphi)\big{)}}\partial_{\varphi}\Big{(}\big{(}d_{f _{3-k}}\Psi_{p,2}\{f_{1},f_{2}\}[h_{3-k}]\big{)}\big{(}\theta_{k}+f_{k}( \varphi),\varphi\big{)}\Big{)}.\]
If we denote \((\omega_{N},\omega_{C},\omega_{S})=(\omega_{1},\omega_{2},\omega_{3})\), then for \(k\in\{1,2\}\), we have,
\[\big{(}d_{f_{k}}\Psi_{p,2}\{f_{1},f_{2}\}[h_{k}]\big{)}(\theta,\varphi)=\frac{ \omega_{k}-\omega_{k+1}}{4\pi}\int_{0}^{2\pi}h_{k}(\varphi^{\prime})\log \Big{(}D\big{(}\theta,\theta_{k}+f_{k}(\varphi^{\prime}),\varphi,\varphi^{ \prime}\big{)}\Big{)}\sin\big{(}\theta_{k}+f_{k}(\varphi^{\prime})\big{)}d \varphi^{\prime}.\]
**(ii)** Immediate since
\[\partial_{c}d_{(f_{1},f_{2})}\mathscr{G}(c,f_{1},f_{2})[h_{1},h_{2}]=\begin{pmatrix}\partial_{\varphi}h_{1}\\ \partial_{\varphi}h_{2}\end{pmatrix}. \tag{3.7}\]
**(iii)** We assume now that \((f_{1},f_{2})=(0,0).\) Since \(\Psi_{p,2}\{0,0\}=0\), we have
\[d_{f_{k}}\mathscr{G}_{k}(c,0,0)[h_{k}](\varphi) =c\,\partial_{\varphi}h_{k}(\varphi)+\frac{1}{\sin(\theta_{k})}\partial_{\varphi}\Big{(}\partial_{\theta}\Psi_{\text{rc2}}(\theta_{k})h_{k}(\varphi)+\big{(}d_{f_{k}}\Psi_{p,2}\{0,0\}[h_{k}]\big{)}(\theta_{k},\varphi)\Big{)},\] \[d_{f_{3-k}}\mathscr{G}_{k}(c,0,0)[h_{3-k}](\varphi) =\frac{1}{\sin(\theta_{k})}\partial_{\varphi}\Big{(}\big{(}d_{f_{3-k}}\Psi_{p,2}\{0,0\}[h_{3-k}]\big{)}(\theta_{k},\varphi)\Big{)}.\]
From (3.2), we deduce
\[\frac{1}{\sin(\theta_{1})}\partial_{\varphi}\Big{(}\partial_{\theta}\Psi_{\text{rc2}}(\theta_{1})h_{1}\Big{)} =\left(\frac{\omega_{N}\tan\big{(}\frac{\theta_{1}}{2}\big{)}}{\sin(\theta_{1})}-\widetilde{\gamma}\right)\partial_{\varphi}h_{1}\] \[=\left(\frac{\omega_{N}}{2\cos^{2}\big{(}\frac{\theta_{1}}{2}\big{)}}-\widetilde{\gamma}\right)\partial_{\varphi}h_{1}\] \[=\left(\widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\big{(}\frac{\theta_{1}}{2}\big{)}}\right)\sum_{n=1}^{\infty}\mathbf{m}nh_{n}^{(1)}\sin(\mathbf{m}n\varphi) \tag{3.8}\]
and
\[\frac{1}{\sin(\theta_{2})}\partial_{\varphi}\Big{(}\partial_{\theta}\Psi_{\text{rc2}}(\theta_{2})h_{2}\Big{)} =-\left(\frac{\omega_{S}\cot\big{(}\frac{\theta_{2}}{2}\big{)}}{\sin(\theta_{2})}+\widetilde{\gamma}\right)\partial_{\varphi}h_{2}\] \[=-\left(\frac{\omega_{S}}{2\sin^{2}\big{(}\frac{\theta_{2}}{2}\big{)}}+\widetilde{\gamma}\right)\partial_{\varphi}h_{2}\] \[=\left(\frac{\omega_{S}}{2\sin^{2}\big{(}\frac{\theta_{2}}{2}\big{)}}+\widetilde{\gamma}\right)\sum_{n=1}^{\infty}\mathbf{m}nh_{n}^{(2)}\sin(\mathbf{m}n\varphi). \tag{3.9}\]
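In the two previous computations we used, for \(\theta\in(0,\pi)\), the identities
\[\frac{\tan\big{(}\tfrac{\theta}{2}\big{)}}{\sin(\theta)}=\frac{1}{2\cos^{2}\big{(}\tfrac{\theta}{2}\big{)}},\qquad\frac{\cot\big{(}\tfrac{\theta}{2}\big{)}}{\sin(\theta)}=\frac{1}{2\sin^{2}\big{(}\tfrac{\theta}{2}\big{)}},\]
which follow from \(\sin(\theta)=2\sin\big{(}\tfrac{\theta}{2}\big{)}\cos\big{(}\tfrac{\theta}{2}\big{)}\).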
After straightforward simplifications and using Lemma A.1, we get for \(k,\ell\in\{1,2\}\),
\[\big{(}d_{f_{\ell}}\Psi_{p,2}\{0,0\}[h_{\ell}]\big{)}(\theta_{k},\varphi) =\frac{\omega_{\ell}-\omega_{\ell+1}}{4\pi}\sin(\theta_{\ell})\int _{0}^{2\pi}h_{\ell}(\varphi^{\prime})\log\Big{(}D(\theta_{k},\theta_{\ell}, \varphi,\varphi^{\prime})\Big{)}d\varphi^{\prime}\] \[=\frac{\omega_{\ell}-\omega_{\ell+1}}{2}\sin(\theta_{\ell})\sum_{n= 1}^{\infty}h_{n}^{(\ell)}I_{\mathbf{m}n}(\theta_{k},\theta_{\ell})\cos( \mathbf{m}n\varphi)\] \[=-\frac{\omega_{\ell}-\omega_{\ell+1}}{2}\sin(\theta_{\ell})\sum_{ n=1}^{\infty}\frac{h_{n}^{(\ell)}}{\mathbf{m}n}\tan^{\mathbf{m}n}\left(\frac{\min( \theta_{k},\theta_{\ell})}{2}\right)\cot^{\mathbf{m}n}\left(\frac{\max(\theta_{ k},\theta_{\ell})}{2}\right)\cos(\mathbf{m}n\varphi).\]
Therefore,
\[\frac{\partial_{\varphi}\Big{(}\big{(}d_{f_{\ell}}\Psi_{p,2}\{0,0\}[h_{\ell}]\big{)}(\theta_{k},\varphi)\Big{)}}{\sin(\theta_{k})}=\frac{\omega_{\ell}-\omega_{\ell+1}}{2}\frac{\sin(\theta_{\ell})}{\sin(\theta_{k})}\sum_{n=1}^{\infty}h_{n}^{(\ell)}\tan^{\mathbf{m}n}\left(\frac{\min(\theta_{k},\theta_{\ell})}{2}\right)\cot^{\mathbf{m}n}\left(\frac{\max(\theta_{k},\theta_{\ell})}{2}\right)\sin(\mathbf{m}n\varphi).\]
Putting together the foregoing calculations, we get (3.5)-(3.6). Now, denoting
\[\mathcal{Q}(\varphi)\triangleq\log\left(1-\cos(\theta_{1})\cos(\theta_{2})- \sin(\theta_{1})\sin(\theta_{2})\cos(\varphi)\right),\]
we have
\[d_{(f_{1},f_{2})}\mathscr{G}(c,0,0)=I+K,\] \[I\triangleq\begin{pmatrix}\left(c+\frac{\omega_{N}}{2\cos^{2} \left(\frac{\theta_{1}}{2}\right)}-\widetilde{\gamma}\right)\partial_{ \varphi}&0\\ 0&\left(c-\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}- \widetilde{\gamma}\right)\partial_{\varphi}\end{pmatrix},\] \[K\triangleq\begin{pmatrix}\frac{\omega_{N}-\omega_{C}}{2} \mathcal{H}&\frac{\omega_{C}-\omega_{S}}{2}\frac{\sin(\theta_{2})}{\sin( \theta_{1})}\partial_{\varphi}\mathcal{Q}*\cdot\\ \frac{\omega_{N}-\omega_{C}}{2}\frac{\sin(\theta_{1})}{\sin(\theta_{2})} \partial_{\varphi}\mathcal{Q}*\cdot&\frac{\omega_{C}-\omega_{S}}{2}\mathcal{ H}\end{pmatrix}. \tag{3.10}\]
If \(c\not\in\left\{\widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2}\right)},\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}\right\},\) then \(I:X_{\mathbf{m}}^{1+\alpha}\times X_{\mathbf{m}}^{1+\alpha}\to Y_{\mathbf{m}}^{\alpha}\times Y_{\mathbf{m}}^{\alpha}\) is an isomorphism. We have already studied the compactness of the Hilbert transform in the proof of Proposition 2.1, so we are left with the anti-diagonal terms. Actually, the corresponding symbol decays exponentially fast in \(n\), which implies that \(\partial_{\varphi}\mathcal{Q}*\cdot\) is smoothing at every order (so a fortiori compact in the considered functional framework). Thus, the operator \(K:X_{\mathbf{m}}^{1+\alpha}\times X_{\mathbf{m}}^{1+\alpha}\to Y_{\mathbf{m}}^{\alpha}\times Y_{\mathbf{m}}^{\alpha}\) is compact. We deduce the desired Fredholmness property.
We shall now study the spectrum.
**Lemma 3.2**.: _Let \(\widetilde{\gamma}\in\mathbb{R}.\) There exists \(N(\theta_{1},\theta_{2})\triangleq N(\theta_{1},\theta_{2},\omega_{N},\omega _{S},\omega_{C})\in\mathbb{N}^{*}\) such that for any \(n\in\mathbb{N}^{*}\) with \(n\geqslant N(\theta_{1},\theta_{2}),\) there exist two velocities_
\[c_{n}^{\pm}(\widetilde{\gamma},\theta_{1},\theta_{2})\triangleq \widetilde{\gamma}+\frac{\omega_{S}}{4\sin^{2}\left(\frac{\theta_{2}}{2}\right)}-\frac{\omega_{N}}{4\cos^{2}\left(\frac{\theta_{1}}{2}\right)}+\frac{\omega_{N}-\omega_{S}}{4n} \tag{3.11}\] \[\quad\pm\frac{1}{4}\sqrt{\left(\frac{\omega_{S}}{\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+\frac{\omega_{N}}{\cos^{2}\left(\frac{\theta_{1}}{2}\right)}-\frac{\omega_{N}+\omega_{S}-2\omega_{C}}{n}\right)^{2}+\frac{1}{n^{2}}(\omega_{N}-\omega_{C})(\omega_{C}-\omega_{S})\tan^{2n}\left(\frac{\theta_{1}}{2}\right)\cot^{2n}\left(\frac{\theta_{2}}{2}\right)}\]
_for which the matrix \(M_{n}\big{(}c_{n}^{\pm}(\widetilde{\gamma},\theta_{1},\theta_{2}),\theta_{1}, \theta_{2}\big{)}\) is singular. The sequences \(\big{(}c_{n}^{\pm}(\widetilde{\gamma},\theta_{1},\theta_{2})\big{)}_{n\geqslant N (\theta_{1},\theta_{2})}\) are strictly monotone and_
\[\mathbb{L}\triangleq\left\{\lim_{n\to\infty}c_{n}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})\,,\,\lim_{n\to\infty}c_{n}^{-}(\widetilde{\gamma},\theta_{1},\theta_{2})\right\}=\left\{\widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2}\right)}\,,\,\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}\right\}. \tag{3.12}\]
_Moreover,_
1. _If_ \[\omega_{S}\cos^{2}\left(\frac{\theta_{1}}{2}\right)+\omega_{N}\sin^{2}\left(\frac{\theta_{2}}{2}\right)\neq 0.\] (3.13) _then_ \(|\mathbb{L}|=2\) _and the following equations have no solution_ \[c_{p}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})=c_{q}^{-}(\widetilde{\gamma},\theta_{1},\theta_{2}),\qquad p,q\geqslant N(\theta_{1},\theta_{2}).\] (3.14)
2. _If_ \[\omega_{S}\cos^{2}\left(\frac{\theta_{1}}{2}\right)+\omega_{N}\sin^{2}\left( \frac{\theta_{2}}{2}\right)=0.\] (3.15) _then_ \[|\mathbb{L}|=1,\qquad\omega_{N}+\omega_{S}=\omega_{C}\neq 0,\qquad\omega_{N} \omega_{S}<0.\] _In particular, (_3.11_) simplifies into_ \[c_{n}^{\pm}(\widetilde{\gamma},\theta_{1},\theta_{2})=\widetilde{\gamma}+\frac{ \omega_{N}-\omega_{S}}{4n}-\frac{\omega_{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2} \right)}\pm\frac{1}{4n}\sqrt{\omega_{C}^{2}-\omega_{N}\omega_{S}\tan^{2n}\left( \frac{\theta_{1}}{2}\right)\cot^{2n}\left(\frac{\theta_{2}}{2}\right)}.\] _In addition, for any_ \(\mathbf{m}\in\mathbb{N}\) _with_ \(\mathbf{m}\geqslant N(\theta_{1},\theta_{2}),\) _under one of the additional constraints_
\[\begin{split}(\mathbf{H1}+)&\ \omega_{C}>0,\qquad\omega_{N}>0, \qquad\omega_{S}<0,\\ (\mathbf{H2}+)&\ \omega_{C}>0,\qquad\omega_{N}<0, \qquad\omega_{S}>0\qquad\text{and}\qquad 2\cos^{2}\left(\tfrac{\theta_{1}}{2} \right)>\sin^{2}\left(\tfrac{\theta_{2}}{2}\right),\\ (\mathbf{H3}+)&\ \omega_{C}<0,\qquad\omega_{N}>0, \qquad\omega_{S}<0,\\ (\mathbf{H4}+)&\ \omega_{C}<0,\qquad\omega_{N}<0, \qquad\omega_{S}>0\qquad\text{and}\qquad 2\sin^{2}\left(\tfrac{\theta_{2}}{2} \right)>\cos^{2}\left(\tfrac{\theta_{1}}{2}\right),\end{split}\]
_the following equations have no solution_
\[\begin{split} c_{\mathbf{m}}^{+}(\widetilde{\gamma},\theta_{1}, \theta_{2})=c_{\mathbf{km}}^{-}(\widetilde{\gamma},\theta_{1},\theta_{2}), \qquad k\in\mathbb{N}^{*}.\end{split}\]
_And under one of the additional constraints_
\[\begin{split}(\mathbf{H1}-)&\ \omega_{C}>0,\qquad\omega_{N}<0, \qquad\omega_{S}>0,\\ (\mathbf{H2}-)&\ \omega_{C}>0,\qquad\omega_{N}>0, \qquad\omega_{S}<0\qquad\text{and}\qquad 2\sin^{2}\left(\tfrac{\theta_{2}}{2} \right)>\cos^{2}\left(\tfrac{\theta_{1}}{2}\right),\\ (\mathbf{H3}-)&\ \omega_{C}<0,\qquad\omega_{N}<0, \qquad\omega_{S}>0,\\ (\mathbf{H4}-)&\ \omega_{C}<0,\qquad\omega_{N}>0, \qquad\omega_{S}<0\qquad\text{and}\qquad 2\cos^{2}\left(\tfrac{\theta_{1}}{2} \right)>\sin^{2}\left(\tfrac{\theta_{2}}{2}\right),\end{split}\]
_the following equations have no solution_
\[\begin{split} c_{\mathbf{km}}^{+}(\widetilde{\gamma},\theta_{1}, \theta_{2})=c_{\mathbf{m}}^{-}(\widetilde{\gamma},\theta_{1},\theta_{2}), \qquad k\in\mathbb{N}^{*}.\end{split}\]
_This means that there is no \(\mathbf{m}\)-fold spectral collision, i.e. the \(\mathbf{m}\)-fold spectrum is simple._
Proof.: From (3.6), we have that the determinant of \(M_{n}(c,\theta_{1},\theta_{2})\) is
\[\begin{split}&\det\left(M_{n}(c,\theta_{1},\theta_{2})\right)\triangleq c^{2}-\beta_{n}(\widetilde{\gamma},\theta_{1},\theta_{2})c+\gamma_{n}(\widetilde{\gamma},\theta_{1},\theta_{2})\in\mathbb{R}_{2}[c],\\ \beta_{n}(\widetilde{\gamma},\theta_{1},\theta_{2})&\triangleq\frac{\omega_{N}-\omega_{S}}{2n}-\frac{\omega_{N}}{2\cos^{2}\left(\tfrac{\theta_{1}}{2}\right)}+\frac{\omega_{S}}{2\sin^{2}\left(\tfrac{\theta_{2}}{2}\right)}+2\widetilde{\gamma},\\ \gamma_{n}(\widetilde{\gamma},\theta_{1},\theta_{2})&\triangleq\left(\frac{\omega_{C}-\omega_{S}}{2n}+\frac{\omega_{S}}{2\sin^{2}\left(\tfrac{\theta_{2}}{2}\right)}+\widetilde{\gamma}\right)\left(\frac{\omega_{N}-\omega_{C}}{2n}-\frac{\omega_{N}}{2\cos^{2}\left(\tfrac{\theta_{1}}{2}\right)}+\widetilde{\gamma}\right)\\ &\qquad-\frac{1}{4n^{2}}(\omega_{N}-\omega_{C})(\omega_{C}-\omega_{S})\tan^{2n}\left(\tfrac{\theta_{1}}{2}\right)\cot^{2n}\left(\tfrac{\theta_{2}}{2}\right).\end{split} \tag{3.17}\]
The discriminant of the previous polynomial is
\[\begin{split}\Delta_{n}(\theta_{1},\theta_{2})& \triangleq\beta_{n}^{2}(\widetilde{\gamma},\theta_{1},\theta_{2})-4 \gamma_{n}(\widetilde{\gamma},\theta_{1},\theta_{2})\\ &=\frac{1}{4}\left[\left(\frac{\omega_{S}}{\sin^{2}\left(\tfrac{ \theta_{2}}{2}\right)}+\frac{\omega_{N}}{\cos^{2}\left(\tfrac{\theta_{1}}{2} \right)}-\frac{\omega_{N}+\omega_{S}-2\omega_{C}}{n}\right)^{2}+\frac{1}{n^{2}} (\omega_{N}-\omega_{C})(\omega_{C}-\omega_{S})\tan^{2n}\left(\tfrac{\theta_{1}} {2}\right)\cot^{2n}\left(\tfrac{\theta_{2}}{2}\right)\right].\end{split}\]
Notice that \(\Delta_{n}(\theta_{1},\theta_{2})\) is independent of \(\widetilde{\gamma}\). We shall now prove that
\[\exists\,N(\theta_{1},\theta_{2})\in\mathbb{N}^{*},\quad\forall n\in\mathbb{N },\quad n\geqslant N(\theta_{1},\theta_{2})\quad\Rightarrow\quad\Delta_{n}( \theta_{1},\theta_{2})>0. \tag{3.18}\]
Assuming that (3.18) is true, then we conclude that, for \(n\) large enough, we have two distinct real roots
\[\begin{split} c_{n}^{\pm}(\widetilde{\gamma},\theta_{1},\theta_{2} )&\triangleq\frac{1}{2}\beta_{n}(\widetilde{\gamma},\theta_{1}, \theta_{2})\pm\frac{1}{2}\sqrt{\Delta_{n}(\theta_{1},\theta_{2})}\\ &=\widetilde{\gamma}+\frac{\omega_{S}}{4\sin^{2}\left(\tfrac{ \theta_{2}}{2}\right)}-\frac{\omega_{N}}{4\cos^{2}\left(\tfrac{\theta_{1}}{2} \right)}+\frac{\omega_{N}-\omega_{S}}{4n}\\ &\pm\frac{1}{4}\sqrt{\left(\frac{\omega_{S}}{\sin^{2}\left(\tfrac{ \theta_{2}}{2}\right)}+\frac{\omega_{N}}{\cos^{2}\left(\tfrac{\theta_{1}}{2} \right)}-\frac{\omega_{N}+\omega_{S}-2\omega_{C}}{n}\right)^{2}+\frac{1}{n^{2}} (\omega_{N}-\omega_{C})(\omega_{C}-\omega_{S})\tan^{2n}\left(\tfrac{\theta_{1}} {2}\right)\cot^{2n}\left(\tfrac{\theta_{2}}{2}\right)}.\end{split}\]
\(\mathbf{1}\)**.** First assume that (3.13) holds. From the proof of Lemma A.1, we know that \(\tan\left(\tfrac{\theta_{1}}{2}\right)\cot\left(\tfrac{\theta_{2}}{2}\right)<1\), then
\[\forall k\in\mathbb{N},\quad\frac{1}{n^{2}}(\omega_{N}-\omega_{C})(\omega_{C}- \omega_{S})\tan^{2n}\left(\tfrac{\theta_{1}}{2}\right)\cot^{2n}\left(\tfrac{ \theta_{2}}{2}\right)\underset{n\rightarrow\infty}{=}O_{\theta_{1},\theta_{2}} \left(\frac{1}{n^{k}}\right). \tag{3.19}\]
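Indeed, setting
\[q\triangleq\tan\left(\tfrac{\theta_{1}}{2}\right)\cot\left(\tfrac{\theta_{2}}{2}\right)\in(0,1),\]
one has \(n^{k}q^{2n}\underset{n\to\infty}{\longrightarrow}0\) for every \(k\in\mathbb{N}\), which gives (3.19).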
Then,
\[\Delta_{\infty}(\theta_{1},\theta_{2})=\frac{1}{4}\left(\frac{\omega_{S}}{\sin^{2}\left(\tfrac{\theta_{2}}{2}\right)}+\frac{\omega_{N}}{\cos^{2}\left(\tfrac{\theta_{1}}{2}\right)}\right)^{2}>0, \tag{3.20}\]
and (3.18) is true. Factorizing, we can write for any \(n\) sufficiently large
\[c_{n}^{\pm}(\widetilde{\gamma},\theta_{1},\theta_{2})=\widetilde{\gamma}+\frac{\omega_{S}}{4\sin^{2}\left(\frac{\theta_{2}}{2}\right)}-\frac{\omega_{N}}{4\cos^{2}\left(\frac{\theta_{1}}{2}\right)}+\frac{\omega_{N}-\omega_{S}}{4n}\pm\frac{1}{4}\left|\frac{\omega_{S}}{\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+\frac{\omega_{N}}{\cos^{2}\left(\frac{\theta_{1}}{2}\right)}-\frac{\omega_{N}+\omega_{S}-2\omega_{C}}{n}\right|\pm\mathbf{r}_{n}(\theta_{1},\theta_{2}),\]
with
\[\mathbf{r}_{n}(\theta_{1},\theta_{2})\triangleq\frac{1}{4}\left|\frac{\omega_{ S}}{\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+\frac{\omega_{N}}{\cos^{2} \left(\frac{\theta_{1}}{2}\right)}-\frac{\omega_{N}+\omega_{S}-2\omega_{C}}{ n}\right|\left|\sqrt{1+\frac{\left(\omega_{N}-\omega_{C}\right)\left(\omega_{C}- \omega_{S}\right)\tan^{2n}\left(\frac{\theta_{1}}{2}\right)\cot^{2n}\left( \frac{\theta_{2}}{2}\right)}{\left(\left[\frac{\omega_{S}}{\sin^{2}\left( \frac{\theta_{2}}{2}\right)}+\frac{\omega_{N}}{\cos^{2}\left(\frac{\theta_{1} }{2}\right)}\right]n-\left(\omega_{N}+\omega_{S}-2\omega_{C}\right)\right)^{2}} }-1\right|.\]
Notice that (3.19) implies
\[\forall k\in\mathbb{N},\quad\mathbf{r}_{n}(\theta_{1},\theta_{2})\underset{n \rightarrow\infty}{=}O_{\theta_{1},\theta_{2}}\left(\frac{1}{n^{k}}\right). \tag{3.21}\]
We have the following dichotomy.
* If \(\omega_{S}\cos^{2}\left(\frac{\theta_{1}}{2}\right)+\omega_{N}\sin^{2}\left( \frac{\theta_{2}}{2}\right)>0\), then for \(n\) large enough we have \[\left|\frac{\omega_{S}}{\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+\frac{\omega _{N}}{\cos^{2}\left(\frac{\theta_{1}}{2}\right)}-\frac{\omega_{N}+\omega_{S}- 2\omega_{C}}{n}\right|=\frac{\omega_{S}}{\sin^{2}\left(\frac{\theta_{2}}{2} \right)}+\frac{\omega_{N}}{\cos^{2}\left(\frac{\theta_{1}}{2}\right)}-\frac{ \omega_{N}+\omega_{S}-2\omega_{C}}{n},\] and therefore \[\begin{cases}c_{n}^{+}(\theta_{1},\theta_{2})=\widetilde{\gamma}+\frac{\omega _{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+\frac{\omega_{C}-\omega_{S} }{2n}+\mathbf{r}_{n}(\theta_{1},\theta_{2}),\\ c_{n}^{-}(\theta_{1},\theta_{2})=\widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2 }\left(\frac{\theta_{1}}{2}\right)}+\frac{\omega_{N}-\omega_{C}}{2n}-\mathbf{ r}_{n}(\theta_{1},\theta_{2}).\end{cases}\] As a consequence, \[\begin{cases}c_{n+1}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})-c_{n}^{+} (\widetilde{\gamma},\theta_{1},\theta_{2})\underset{n\rightarrow\infty}{=} \frac{\omega_{S}-\omega_{C}}{2n(n+1)}+O_{\theta_{1},\theta_{2}}\left(\frac{1 }{n^{k}}\right),\\ c_{n+1}^{-}(\widetilde{\gamma},\theta_{1},\theta_{2})-c_{n}^{-}(\widetilde{ \gamma},\theta_{1},\theta_{2})\underset{n\rightarrow\infty}{=}\frac{\omega _{C}-\omega_{S}}{2n(n+1)}+O_{\theta_{1},\theta_{2}}\left(\frac{1}{n^{k}} \right).\end{cases}\] Since \(\omega_{N}\neq\omega_{C}\) and \(\omega_{C}\neq\omega_{S}\), then we conclude the asymptotic (strict) monotonicity of \(n\mapsto c_{n}^{\pm}(\widetilde{\gamma},\theta_{1},\theta_{2})\). In addition, \[\lim_{n\rightarrow\infty}c_{n}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})= \widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2} \right)},\qquad\text{and}\qquad\lim_{n\rightarrow\infty}c_{n}^{-}( \widetilde{\gamma},\theta_{1},\theta_{2})=\widetilde{\gamma}-\frac{\omega_{N}} {2\cos^{2}\left(\frac{\theta_{1}}{2}\right)}.\]
* If \(\omega_{S}\cos^{2}\left(\frac{\theta_{1}}{2}\right)+\omega_{N}\sin^{2}\left( \frac{\theta_{2}}{2}\right)<0\), then for \(n\) large enough we have \[\left|\frac{\omega_{S}}{\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+\frac{\omega _{N}}{\cos^{2}\left(\frac{\theta_{1}}{2}\right)}-\frac{\omega_{N}+\omega_{S}- 2\omega_{C}}{n}\right|=\frac{\omega_{N}+\omega_{S}-2\omega_{C}}{n}-\frac{ \omega_{S}}{\sin^{2}\left(\frac{\theta_{2}}{2}\right)}-\frac{\omega_{N}}{\cos^ {2}\left(\frac{\theta_{1}}{2}\right)}\] and therefore \[\begin{cases}c_{n}^{+}(\theta_{1},\theta_{2})=\widetilde{\gamma}-\frac{\omega _{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2}\right)}+\frac{\omega_{N}-\omega_{C} }{2n}-\mathbf{r}_{n}(\theta_{1},\theta_{2}),\\ c_{n}^{-}(\theta_{1},\theta_{2})=\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2} \left(\frac{\theta_{2}}{2}\right)}+\frac{\omega_{C}-\omega_{S}}{2n}+\mathbf{ r}_{n}(\theta_{1},\theta_{2}).\end{cases}\] As a consequence, \[\begin{cases}c_{n+1}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})-c_{n}^{+} (\widetilde{\gamma},\theta_{1},\theta_{2})\underset{n\rightarrow\infty}{=}\frac{ \omega_{C}-\omega_{N}}{2n(n+1)}+O_{\theta_{1},\theta_{2}}\left(\frac{1}{n^{k}} \right),\\ c_{n+1}^{-}(\widetilde{\gamma},\theta_{1},\theta_{2})-c_{n}^{-}(\widetilde{\gamma}, \theta_{1},\theta_{2})\underset{n\rightarrow\infty}{=}\frac{\omega_{S}-\omega_{C} }{2n(n+1)}+O_{\theta_{1},\theta_{2}}\left(\frac{1}{n^{k}}\right).\end{cases}\] The monotonicity conclusion is still valid. In addition, in this case, the limits are exchanged \[\lim_{n\rightarrow\infty}c_{n}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})= \widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2}\right)} \qquad\text{and}\qquad\lim_{n\rightarrow\infty}c_{n}^{-}(\widetilde{\gamma}, \theta_{1},\theta_{2})=\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{ \theta_{2}}{2}\right)}.\]
The condition (3.13) implies that the above limits are well-separated. Together with the strict monotonicity property, we conclude (3.14).
**2.** Assume now that (3.15) holds. This condition can also be written
\[\omega_{S}=-\omega_{N}\frac{1-\cos(\theta_{2})}{1+\cos(\theta_{1})}. \tag{3.22}\]
Notice that the Gauss constraint (3.1) can be written as follows
\[\omega_{C}=\frac{\omega_{N}\big{(}1-\cos(\theta_{1})\big{)}+\omega_{S}\big{(}1+ \cos(\theta_{2})\big{)}}{\cos(\theta_{2})-\cos(\theta_{1})}. \tag{3.23}\]
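Indeed, (3.23) is obtained by rearranging (3.1),
\[\omega_{N}+\omega_{S}=(\omega_{N}-\omega_{C})\cos(\theta_{1})+(\omega_{C}-\omega_{S})\cos(\theta_{2})\quad\Longleftrightarrow\quad\omega_{C}\big{(}\cos(\theta_{2})-\cos(\theta_{1})\big{)}=\omega_{N}\big{(}1-\cos(\theta_{1})\big{)}+\omega_{S}\big{(}1+\cos(\theta_{2})\big{)},\]
and dividing by \(\cos(\theta_{2})-\cos(\theta_{1})\neq 0\).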
Plugging (3.22) into (3.23) gives
\[\omega_{C} =\omega_{N}\frac{\big{(}1-\cos(\theta_{1})\big{)}\big{(}1+\cos( \theta_{1})\big{)}-\big{(}1-\cos(\theta_{2})\big{)}\big{(}1+\cos(\theta_{2}) \big{)}}{\big{(}\cos(\theta_{2})-\cos(\theta_{1})\big{)}\big{(}1+\cos(\theta_ {1})\big{)}}\] \[=\omega_{N}\frac{\cos^{2}(\theta_{2})-\cos^{2}(\theta_{1})}{ \big{(}\cos(\theta_{2})-\cos(\theta_{1})\big{)}\big{(}1+\cos(\theta_{1}) \big{)}}\] \[=\omega_{N}\frac{\cos(\theta_{1})+\cos(\theta_{2})}{1+\cos( \theta_{1})}. \tag{3.24}\]
From (3.22) and (3.24), we get
\[\omega_{N}+\omega_{S}=\frac{\omega_{N}}{1+\cos(\theta_{1})}\Big{[}1+\cos( \theta_{1})-\big{(}1-\cos(\theta_{2})\big{)}\Big{]}=\omega_{N}\frac{\cos( \theta_{1})+\cos(\theta_{2})}{1+\cos(\theta_{1})}=\omega_{C}.\]
This last expression implies that \(\omega_{N}\neq 0\) (resp. \(\omega_{S}\neq 0\)) otherwise \(\omega_{S}=\omega_{C}\) (resp. \(\omega_{N}=\omega_{C}\)) which is excluded by construction. We also deduce
\[\omega_{N}-\omega_{C}=-\omega_{S},\qquad\omega_{C}-\omega_{S}=\omega_{N}.\]
Now, using (3.22), we infer
\[\omega_{N}\omega_{S}=-\omega_{N}^{2}\frac{1-\cos(\theta_{2})}{1+\cos(\theta_ {1})}<0. \tag{3.25}\]
In particular \(\omega_{N}\neq\omega_{S}\) and have opposite sign. Now, assume for the sake of contradiction that \(\omega_{C}=0\), i.e. \(\omega_{N}=-\omega_{S}.\) Combined with (3.15) and the fact that \(\frac{\theta_{1}}{2},\frac{\theta_{2}}{2}\in(0,\frac{\pi}{2})\), we deduce
\[\cos^{2}\big{(}\frac{\theta_{1}}{2}\big{)}=\sin^{2}\big{(}\frac{\theta_{2}}{2} \big{)}\,,\qquad\text{i.e.}\qquad\theta_{1}=\theta_{2}.\]
This enters in contradiction with (3.3). Thus,
\[\omega_{C}\neq 0\qquad\text{and}\qquad\omega_{N}+\omega_{S}-2\omega_{C}=- \omega_{C}\neq 0. \tag{3.26}\]
In this case, the discriminant becomes
\[\forall n\in\mathbb{N}^{*},\quad\Delta_{n}(\theta_{1},\theta_{2})=\frac{1}{4n ^{2}}\left[\omega_{C}^{2}-\omega_{N}\omega_{S}\tan^{2n}\big{(}\frac{\theta_{1 }}{2}\big{)}\cot^{2n}\big{(}\frac{\theta_{2}}{2}\big{)}\right]>0.\]
This implies in particular (3.18). Factorizing, we can write
\[c_{n}^{\pm}(\widetilde{\gamma},\theta_{1},\theta_{2}) =\widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\big{(}\frac{\theta_{1}}{2}\big{)}}+\frac{\omega_{N}-\omega_{S}}{4n}\pm\frac{|\omega_{C}|}{4n}\pm\mathbf{r}_{n}(\theta_{1},\theta_{2})\] \[=\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\big{(}\frac{\theta_{2}}{2}\big{)}}+\frac{\omega_{N}-\omega_{S}}{4n}\pm\frac{|\omega_{C}|}{4n}\pm\mathbf{r}_{n}(\theta_{1},\theta_{2}),\]
with
\[\mathbf{r}_{n}(\theta_{1},\theta_{2})\triangleq\frac{|\omega_{C}|}{4n}\left[ \sqrt{1-\frac{\omega_{N}\omega_{S}}{\omega_{C}^{2}}\tan^{2n}\big{(}\frac{\theta _{1}}{2}\big{)}\cot^{2n}\big{(}\frac{\theta_{2}}{2}\big{)}}-1\right].\]
We have the following dichotomy.
* If \(\omega_{C}>0\), then \(|\omega_{C}|=\omega_{C}=\omega_{N}+\omega_{S}\) and therefore \[\begin{cases}c_{n}^{+}(\theta_{1},\theta_{2})=\widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\big{(}\frac{\theta_{1}}{2}\big{)}}+\frac{\omega_{N}}{2n}+\mathbf{r}_{n}(\theta_{1},\theta_{2})=\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\big{(}\frac{\theta_{2}}{2}\big{)}}+\frac{\omega_{N}}{2n}+\mathbf{r}_{n}(\theta_{1},\theta_{2}),\\ c_{n}^{-}(\theta_{1},\theta_{2})=\widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\big{(}\frac{\theta_{1}}{2}\big{)}}-\frac{\omega_{S}}{2n}-\mathbf{r}_{n}(\theta_{1},\theta_{2})=\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\big{(}\frac{\theta_{2}}{2}\big{)}}-\frac{\omega_{S}}{2n}-\mathbf{r}_{n}(\theta_{1},\theta_{2}).\end{cases}\] (3.27)
As a consequence,
\[\begin{cases}c_{n+1}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})-c_{n}^{+}( \widetilde{\gamma},\theta_{1},\theta_{2})\underset{n\to\infty}{=}-\frac{\omega_ {N}}{2n(n+1)}+O_{\theta_{1},\theta_{2}}\left(\frac{1}{n^{3}}\right),\\ c_{n+1}^{-}(\widetilde{\gamma},\theta_{1},\theta_{2})-c_{n}^{-}( \widetilde{\gamma},\theta_{1},\theta_{2})\underset{n\to\infty}{=}\frac{\omega _{N}}{2n(n+1)}+O_{\theta_{1},\theta_{2}}\left(\frac{1}{n^{3}}\right).\end{cases}\]
This is sufficient to conclude the asymptotic strict monotonicity. In addition, since \(\omega_{N}\omega_{S}<0,\) both sequences have the same monotonicity asymptotically. Nevertheless, in this case, both parts of the spectrum accumulate at the same point
\[\lim_{n\to\infty}c_{n}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})= \widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2} \right)}=\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2} }{2}\right)}=\lim_{n\to\infty}c_{n}^{-}(\widetilde{\gamma},\theta_{1},\theta_ {2}).\]
Therefore, one must avoid the spectral collisions by a more careful analysis.
Let us first study the equation
\[c_{\mathbf{m}}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})=c_{k\mathbf{m}}^ {-}(\widetilde{\gamma},\theta_{1},\theta_{2}),\qquad k\in\mathbb{N}^{*}.\]
According to (3.27), this equation is equivalent to
\[-k\big{(}\omega_{N}+\widetilde{\mathbf{r}}_{\mathbf{m}}\big{)}=\omega_{S}+ \widetilde{\mathbf{r}}_{k\mathbf{m}},\qquad\widetilde{\mathbf{r}}_{n}\triangleq 2 n\mathbf{r}_{n}(\theta_{1},\theta_{2})>0. \tag{3.28}\]
Observe that \(n\mapsto\widetilde{\mathbf{r}}_{n}\) is asymptotically decreasing and satisfies (3.21).
If \(\omega_{N}>0,\) then the equation (3.28) can be written
\[0=\omega_{C}+2(k-1)\omega_{N}+k\widetilde{\mathbf{r}}_{\mathbf{m}}+ \widetilde{\mathbf{r}}_{k\mathbf{m}}.\]
Each term on the right-hand side is non-negative and \(\omega_{C}>0,\) so this equation has no solution.
Assume now that \(\omega_{N}<0.\) By virtue of (3.25), we have \(\omega_{S}>0.\) According to (3.21) and (3.28), we can select \(\mathbf{m}\) large enough to ensure
\[\widetilde{\mathbf{r}}_{\mathbf{m}}<|\omega_{N}|.\]
Added to the asymptotic decay property of \(n\mapsto\widetilde{\mathbf{r}}_{n}\) we get
\[\forall k\in\mathbb{N}\setminus\{0,1\},\quad-k\big{(}\omega_{N}+\widetilde{ \mathbf{r}}_{\mathbf{m}}\big{)}\geqslant 2\big{(}|\omega_{N}|-\widetilde{ \mathbf{r}}_{\mathbf{m}}\big{)}\qquad\text{and}\qquad\omega_{S}+\widetilde{ \mathbf{r}}_{2\mathbf{m}}\geqslant\omega_{S}+\widetilde{\mathbf{r}}_{k\mathbf{ m}}.\]
Hence, it suffices to impose
\[2\big{(}|\omega_{N}|-\widetilde{\mathbf{r}}_{\mathbf{m}}\big{)}>\omega_{S}+ \widetilde{\mathbf{r}}_{2\mathbf{m}},\qquad\text{i.e.}\qquad 2|\omega_{N}|> \omega_{S}+2\widetilde{\mathbf{r}}_{\mathbf{m}}+\widetilde{\mathbf{r}}_{2 \mathbf{m}},\]
so that the equations (3.28) admit no solution for any \(k\in\mathbb{N}^{*}\) (recall that \(c_{\mathbf{m}}^{+}\neq c_{\mathbf{m}}^{-}\)). Using (3.21), we deduce that, up to taking \(\mathbf{m}\) large enough, the following condition is sufficient
\[2|\omega_{N}|>\omega_{S}. \tag{3.29}\]
But, according to (3.15), the constraint (3.29) is equivalent to
\[2\cos^{2}\left(\frac{\theta_{1}}{2}\right)>\sin^{2}\left(\frac{\theta_{2}}{2} \right).\]
Now, we turn to the study of the equation
\[c_{k\mathbf{m}}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})=c_{\mathbf{m}}^ {-}(\widetilde{\gamma},\theta_{1},\theta_{2}),\qquad k\in\mathbb{N}^{*}.\]
Using again (3.27), this equation is equivalent to
\[\omega_{N}+\widetilde{\mathbf{r}}_{k\mathbf{m}}=-k\big{(}\omega_{S}+ \widetilde{\mathbf{r}}_{\mathbf{m}}\big{)}.\]
This is basically the same equation as (3.28) where \(\omega_{S}\) and \(\omega_{N}\) have been exchanged. So either \(\omega_{S}>0\) and there is no solution, or \(\omega_{S}<0\) and there is no solution provided that \(\mathbf{m}\) is large enough and
\[2|\omega_{S}|>\omega_{N},\qquad\text{i.e.}\qquad 2\sin^{2}\left(\frac{\theta_{2}}{2} \right)>\cos^{2}\left(\frac{\theta_{1}}{2}\right).\]
* If \(\omega_{C}<0,\) then \(|\omega_{C}|=-\omega_{C}=-\omega_{N}-\omega_{S}\) and therefore \[\begin{cases}c_{n}^{+}(\theta_{1},\theta_{2})=\widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2}\right)}-\frac{\omega_{S}}{2n}+\mathbf{r}_{n}(\theta_{1},\theta_{2})=\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}-\frac{\omega_{S}}{2n}+\mathbf{r}_{n}(\theta_{1},\theta_{2}),\\ c_{n}^{-}(\theta_{1},\theta_{2})=\widetilde{\gamma}-\frac{\omega_{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2}\right)}+\frac{\omega_{N}}{2n}-\mathbf{r}_{n}(\theta_{1},\theta_{2})=\widetilde{\gamma}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+\frac{\omega_{N}}{2n}-\mathbf{r}_{n}(\theta_{1},\theta_{2}).\end{cases}\] (3.30)
As a consequence,
\[\begin{cases}c_{n+1}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})-c_{n}^{+}( \widetilde{\gamma},\theta_{1},\theta_{2})\underset{n\to\infty}{=}\frac{\omega_{ S}}{2n(n+1)}+O_{\theta_{1},\theta_{2}}\left(\frac{1}{n^{2}}\right),\\ c_{n+1}^{-}(\widetilde{\gamma},\theta_{1},\theta_{2})-c_{n}^{-}( \widetilde{\gamma},\theta_{1},\theta_{2})\underset{n\to\infty}{=}-\frac{ \omega_{N}}{2n(n+1)}+O_{\theta_{1},\theta_{2}}\left(\frac{1}{n^{3}}\right). \end{cases}\]
As before, we can conclude the asymptotic strict monotonicity with the same limit.
\(\blacktriangleright\) According to (3.30), the equation
\[c_{\mathbf{m}}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})=c_{\mathbf{km}} ^{-}(\widetilde{\gamma},\theta_{1},\theta_{2}),\qquad k\in\mathbb{N}^{*}\]
is equivalent to
\[-k\big{(}-\omega_{S}+\widetilde{\mathbf{r}}_{\mathbf{m}}\big{)}=-\omega_{N}+ \widetilde{\mathbf{r}}_{k\mathbf{m}}\]
which is (3.28) with \((\omega_{N},\omega_{S})\) replaced by \((-\omega_{S},-\omega_{N})\). So there is no solution either for \(\omega_{S}<0\), or for \(\omega_{S}>0\) provided that \(2\sin^{2}\left(\frac{\theta_{2}}{2}\right)>\cos^{2}\left(\frac{\theta_{1}}{2}\right).\)
\(\blacktriangleright\) The equation
\[c_{k\mathbf{m}}^{+}(\widetilde{\gamma},\theta_{1},\theta_{2})=c_{\mathbf{m}} ^{-}(\widetilde{\gamma},\theta_{1},\theta_{2}),\qquad k\in\mathbb{N}^{*}\]
is equivalent to
\[-\omega_{S}+\widetilde{\mathbf{r}}_{k\mathbf{m}}=-k\big{(}-\omega_{N}+ \widetilde{\mathbf{r}}_{\mathbf{m}}\big{)}\]
which is (3.28) with \((\omega_{N},\omega_{S})\) replaced by \((-\omega_{N},-\omega_{S})\). So there is no solution either for \(\omega_{N}<0\), or for \(\omega_{N}>0\) provided that \(2\cos^{2}\left(\frac{\theta_{1}}{2}\right)>\sin^{2}\left(\frac{\theta_{2}}{2}\right).\)
In the following proposition, we gather all the remaining conditions required to apply the Crandall-Rabinowitz Theorem. Then, Theorem 1.2 follows immediately.
**Proposition 3.2**.: _Let \(\alpha\in(0,1)\), \(\kappa\in\{+,-\}\) and \(\mathbf{m}\in\mathbb{N}^{*}\) with \(\mathbf{m}\geqslant N(\theta_{1},\theta_{2})\). Assume that (3.13) holds or assume that (3.15) with \((\mathbf{H}\mathbf{k}\kappa)\) for some \(k\in\llbracket 1,4\rrbracket\) holds._
1. _The linear operator_ \(d_{(f_{1},f_{2})}\mathscr{G}\big{(}c_{\mathbf{m}}^{\kappa}(\widetilde{\gamma},\theta_{1},\theta_{2}),0,0\big{)}\) _is of Fredholm type with index zero._
2. _The kernel of_ \(d_{(f_{1},f_{2})}\mathscr{G}\big{(}c_{\mathbf{m}}^{\kappa}(\widetilde{\gamma},\theta_{1},\theta_{2}),0,0\big{)}\) _is one dimensional. More precisely,_ \[\ker\Big{(}d_{(f_{1},f_{2})}\mathscr{G}\big{(}c_{\mathbf{m}}^{\kappa}( \widetilde{\gamma},\theta_{1},\theta_{2}),0,0\big{)}\Big{)}=\mathtt{span} \big{(}u_{0}\big{)},\] (3.31) _with_ \[u_{0}:\varphi\mapsto\begin{pmatrix}-c_{\mathbf{m}}^{\kappa}(\widetilde{\gamma},\theta_{1},\theta_{2})+\frac{\omega_{C}-\omega_{S}}{2\mathbf{m}}+\frac{\omega _{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+\widetilde{\gamma}\\ \frac{\omega_{N}-\omega_{C}}{2\mathbf{m}}\frac{\sin(\theta_{1})}{\sin(\theta_{ 2})}\tan^{\mathbf{m}}\left(\frac{\theta_{1}}{2}\right)\cot^{\mathbf{m}}\left( \frac{\theta_{2}}{2}\right)\end{pmatrix}\cos(\mathbf{m}\varphi).\]
3. _The transversality condition is satisfied, namely_ \[\partial_{c}d_{(f_{1},f_{2})}\mathscr{G}\big{(}c_{\mathbf{m}}^{\kappa}( \widetilde{\gamma},\theta_{1},\theta_{2}),0,0\big{)}[u_{0}]\not\in\mathrm{Im} \Big{(}d_{(f_{1},f_{2})}\mathscr{G}\big{(}c_{\mathbf{m}}^{\kappa}(\widetilde{ \gamma},\theta_{1},\theta_{2}),0,0\big{)}\Big{)}.\] (3.32)
Proof.: **(i)** The strict monotonicity of \(n\mapsto c_{n}^{\kappa}(\widetilde{\gamma},\theta_{1},\theta_{2})\) gives \(c_{\mathbf{m}}^{\kappa}(\widetilde{\gamma},\theta_{1},\theta_{2})\not\in \mathbb{L}\), where \(\mathbb{L}\) is defined in (3.12). Together with Proposition 3.1-(iii) this implies the desired Fredholmness property.
**(ii)** The non-degeneracy conditions imply that
\[\forall n\in\mathbb{N}\setminus\{0,1\},\quad\det\Big{(}M_{\mathbf{mn}}\big{(} c_{\mathbf{m}}^{\kappa}(\widetilde{\gamma},\theta_{1},\theta_{2}),\theta_{1}, \theta_{2}\big{)}\Big{)}\neq 0.\]
Together with the fact that the matrix \(M_{\mathbf{m}}\big{(}c_{\mathbf{m}}^{\kappa}(\widetilde{\gamma},\theta_{1}, \theta_{2}),\theta_{1},\theta_{2}\big{)}\) is singular and non-zero, we obtain from (3.5) the desired result because
\[\ker\Big{(}M_{\mathbf{m}}\big{(}c_{\mathbf{m}}^{\kappa}(\widetilde{\gamma}, \theta_{1},\theta_{2}),\theta_{1},\theta_{2}\big{)}\Big{)}=\mathtt{span} \begin{pmatrix}-c_{\mathbf{m}}^{\kappa}(\widetilde{\gamma},\theta_{1},\theta_{2 })+\frac{\omega_{C}-\omega_{S}}{2\mathbf{m}}+\frac{\omega_{S}}{2\sin^{2}\left( \frac{\theta_{2}}{2}\right)}+\widetilde{\gamma}\\ \frac{\omega_{C}-\omega_{N}}{2\mathbf{m}}\frac{\sin(\theta_{1})}{\sin(\theta_{ 2})}\tan^{\mathbf{m}}\left(\frac{\theta_{1}}{2}\right)\cot^{\mathbf{m}}\left( \frac{\theta_{2}}{2}\right)\end{pmatrix}.\]
**(iii)** The next step is to describe the range. For this aim, we introduce on \(Y_{\mathbf{m}}^{\alpha}\times Y_{\mathbf{m}}^{\alpha}\) the scalar product
\[\left(\left(\sum_{n=1}^{\infty}a_{n}\sin(\mathbf{m}n\varphi),\sum_{n=1}^{\infty}c_ {n}\sin(\mathbf{m}n\varphi)\right)\Big{|}\left(\sum_{n=1}^{\infty}b_{n}\sin( \mathbf{m}n\varphi),\sum_{n=1}^{\infty}d_{n}\sin(\mathbf{m}n\varphi)\right) \right)_{2}\triangleq\sum_{n=1}^{\infty}a_{n}b_{n}+c_{n}d_{n}.\]
Now we claim that
\[\text{Im}\Big{(}d_{(f_{1},f_{2})}\mathscr{G}\big{(}c^{\kappa}_{\mathbf{m}}( \widetilde{\gamma},\theta_{1},\theta_{2}),0,0\big{)}\Big{)}=\text{span}^{\perp_{ (\cdot)_{2}}}\left(g_{0}:\varphi\mapsto\begin{pmatrix}-c^{\kappa}_{\mathbf{m}}( \widetilde{\gamma},\theta_{1},\theta_{2})+\frac{\omega_{S}-\omega_{C}}{2 \mathbf{m}}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+ \widetilde{\gamma}\right)\\ \frac{\omega_{S}-\omega_{C}}{2\mathbf{m}}\frac{\sin(\theta_{2})}{\sin(\theta_{ 1})}\tan^{\mathbf{m}}\big{(}\frac{\theta_{2}}{2}\big{)}\cot^{\mathbf{m}}\big{(} \frac{\theta_{2}}{2}\big{)}\end{pmatrix}\sin(\mathbf{m}\varphi)\right). \tag{3.33}\]
Indeed, as in the proof of Proposition 2.2-(iii), we shall prove the first inclusion and the second one is obtained by the Fredholmness property and the previous point. First observe that
\[v^{\kappa}_{\mathbf{m}}(\theta_{1},\theta_{2})\triangleq\begin{pmatrix}-c^{ \kappa}_{\mathbf{m}}(\widetilde{\gamma},\theta_{1},\theta_{2})+\frac{\omega_{ C}-\omega_{S}}{2\mathbf{m}}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2} \right)}+\widetilde{\gamma}\\ \frac{\omega_{S}-\omega_{C}}{2\mathbf{m}}\frac{\sin(\theta_{2})}{\sin(\theta_ {1})}\tan^{\mathbf{m}}\big{(}\frac{\theta_{2}}{2}\big{)}\cot^{\mathbf{m}} \big{(}\frac{\theta_{2}}{2}\big{)}\end{pmatrix}\in\ker\Big{(}M^{\top}_{ \mathbf{m}}\big{(}c^{\kappa}_{\mathbf{m}}(\widetilde{\gamma},\theta_{1}, \theta_{2}),\theta_{1},\theta_{2}\big{)}\Big{)}.\]
Now, consider
\[g:\varphi\mapsto\sum_{n=1}^{\infty}\mathbf{m}nM_{\mathbf{mn}}\big{(}c^{\kappa} _{\mathbf{m}}(\widetilde{\gamma},\theta_{1},\theta_{2}),\theta_{1},\theta_{2 }\big{)}\begin{pmatrix}h^{(1)}_{n}\\ h^{(2)}_{n}\end{pmatrix}\sin(\mathbf{m}n\varphi)\in\text{Im}\Big{(}d_{(f_{1},f _{2})}\mathscr{G}\big{(}c^{\kappa}_{\mathbf{m}}(\widetilde{\gamma},\theta_{1},\theta_{2}),0,0\big{)}\Big{)}.\]
Then, denoting \(\cdot\) the usual scalar product on \(\mathbb{R}^{2}\), we have
\[\big{(}g\,|\,g_{0}\big{)}_{2} =\left(\mathbf{m}M_{\mathbf{m}}\big{(}c^{\kappa}_{\mathbf{m}}( \widetilde{\gamma},\theta_{1},\theta_{2}),\theta_{1},\theta_{2}\big{)} \begin{pmatrix}h^{(1)}_{1}\\ h^{(2)}_{1}\end{pmatrix}\right)\cdot v^{\kappa}_{\mathbf{m}}(\theta_{1},\theta_ {2})\] \[=\mathbf{m}\begin{pmatrix}h^{(1)}_{1}\\ h^{(2)}_{1}\end{pmatrix}\cdot\Big{(}M^{\top}_{\mathbf{m}}\big{(}c^{\kappa}_{ \mathbf{m}}(\widetilde{\gamma},\theta_{1},\theta_{2}),\theta_{1},\theta_{2} \big{)}v^{\kappa}_{\mathbf{m}}(\theta_{1},\theta_{2}\big{)}\Big{)}\] \[=0.\]
This proves the claim. Now, we turn to the transversality condition. We shall prove that the following quantity does not vanish
\[\Big{(}\partial_{c}d_{(f_{1},f_{2})}\mathscr{G}\big{(}c^{\kappa}_ {\mathbf{m}}(\widetilde{\gamma},\theta_{1},\theta_{2}),0,0\big{)}[u_{0}]\,|\,g_ {0}\Big{)}_{2}\] \[=\mathbf{m}\left[\left(-c^{\kappa}_{\mathbf{m}}(\widetilde{ \gamma},\theta_{1},\theta_{2})+\frac{\omega_{C}-\omega_{S}}{2\mathbf{m}}+ \frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2}\right)}+\widetilde{\gamma }\right)^{2}-\frac{1}{4\mathbf{m}^{2}}(\omega_{N}-\omega_{C})(\omega_{C}- \omega_{S})\tan^{2\mathbf{m}}\big{(}\frac{\theta_{1}}{2}\big{)}\cot^{2\mathbf{ m}}\big{(}\frac{\theta_{2}}{2}\big{)}\right].\]
Using the fact that \(\det\Big{(}M_{\mathbf{m}}\big{(}c^{\kappa}_{\mathbf{m}}(\widetilde{\gamma}, \theta_{1},\theta_{2}),\theta_{1},\theta_{2}\big{)}\Big{)}=0\), we obtain
\[\left(-c^{\kappa}_{\mathbf{m}}(\widetilde{\gamma},\theta_{1}, \theta_{2})+\frac{\omega_{C}-\omega_{S}}{2\mathbf{m}}+\frac{\omega_{S}}{2\sin^{ 2}\left(\frac{\theta_{2}}{2}\right)}+\widetilde{\gamma}\right)^{2}-\frac{1}{4 \mathbf{m}^{2}}(\omega_{N}-\omega_{C})(\omega_{C}-\omega_{S})\tan^{2\mathbf{m}} \left(\frac{\theta_{1}}{2}\right)\cot^{2\mathbf{m}}\left(\frac{\theta_{2}}{2}\right)\] \[=\left(-c^{\kappa}_{\mathbf{m}}(\widetilde{\gamma},\theta_{1}, \theta_{2})+\frac{\omega_{C}-\omega_{S}}{2\mathbf{m}}+\frac{\omega_{S}}{2\sin^{2} \left(\frac{\theta_{2}}{2}\right)}+\widetilde{\gamma}\right)\left(\frac{2 \omega_{C}-\omega_{N}-\omega_{S}}{2\mathbf{m}}+\frac{\omega_{S}}{2\sin^{2} \left(\frac{\theta_{2}}{2}\right)}+\frac{\omega_{N}}{2\cos^{2}\left(\frac{ \theta_{1}}{2}\right)}\right).\]
According to (3.13) or (3.15)-(3.26), up to taking \(\mathbf{m}\) large enough, we can ensure
\[\frac{2\omega_{C}-\omega_{N}-\omega_{S}}{2\mathbf{m}}+\frac{\omega_{S}}{2\sin^{ 2}\left(\frac{\theta_{2}}{2}\right)}+\frac{\omega_{N}}{2\cos^{2}\left(\frac{ \theta_{1}}{2}\right)}\neq 0.\]
Besides, we can write
\[-c^{\kappa}_{\mathbf{m}}(\widetilde{\gamma},\theta_{1},\theta_{2})+\frac{ \omega_{C}-\omega_{S}}{2\mathbf{m}}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{ \theta_{2}}{2}\right)}+\widetilde{\gamma}=\frac{\omega_{N}}{4\cos^{2}\left(\frac{ \theta_{1}}{2}\right)}+\frac{\omega_{S}}{4\sin^{2}\left(\frac{\theta_{2}}{2} \right)}-\frac{\omega_{N}+\omega_{S}-2\omega_{C}}{4\mathbf{m}}+\kappa\frac{1}{2} \sqrt{\Delta_{\mathbf{m}}(\theta_{1},\theta_{2})}.\]
Assume for the sake of contradiction that
\[-c^{\kappa}_{\mathbf{m}}(\widetilde{\gamma},\theta_{1},\theta_{2})+\frac{\omega_{C} -\omega_{S}}{2\mathbf{m}}+\frac{\omega_{S}}{2\sin^{2}\left(\frac{\theta_{2}}{2} \right)}+\widetilde{\gamma}=0.\]
This equation is equivalent to
\[\frac{\omega_{N}}{2\cos^{2}\left(\frac{\theta_{1}}{2}\right)}+\frac{\omega_{S}}{2 \sin^{2}\left(\frac{\theta_{2}}{2}\right)}-\frac{\omega_{N}+\omega_{S}-2 \omega_{C}}{2\mathbf{m}}=-\kappa\sqrt{\Delta_{\mathbf{m}}(\theta_{1},\theta_{2 })}.\]
Taking the square, we end up with
\[\frac{1}{\mathbf{m}^{2}}(\omega_{N}-\omega_{C})(\omega_{C}-\omega_{S})\tan^{2 \mathbf{m}}\left(\tfrac{\theta_{1}}{2}\right)\cot^{2\mathbf{m}}\left(\tfrac{ \theta_{2}}{2}\right)=0.\]
But, by construction \(\omega_{N}\neq\omega_{C}\) and \(\omega_{C}\neq\omega_{S}.\) In addition \(\tan\left(\tfrac{\theta_{1}}{2}\right)\cot\left(\tfrac{\theta_{2}}{2}\right) \in(0,1)\). Contradiction. Consequently,
\[\left(\partial_{c}d_{(f_{1},f_{2})}\mathscr{G}\big{(}c_{\mathbf{m}}^{\kappa}(\widetilde{\gamma},\theta_{1},\theta_{2}),0,0\big{)}[u_{0}]\,\big{|}\,g_{0}\right)_{2}\neq 0.\]
This ends the proof of Proposition 3.2.
## Appendix A Appendix
### An integral
In this appendix, we give an explicit value for a parameter-dependent integral that is used frequently in this work.
**Lemma A.1**.: _Let \(n\in\mathbb{N}^{*}\) and \(a,b\in(0,\pi)\). We define_
\[I_{n}(a,b)\triangleq\frac{1}{2\pi}\int_{0}^{2\pi}\cos(nx)\log\big{(}1-\cos(a )\cos(b)-\sin(a)\sin(b)\cos(x)\big{)}dx.\]
_Then_
\[I_{n}(a,b)=I_{n}(b,a)=-\frac{1}{n}\tan^{n}\left(\frac{\min(a,b)}{2}\right)\cot ^{n}\left(\frac{\max(a,b)}{2}\right).\]
Proof.: \(\blacktriangleright\) Let us first begin with the case \(a=b.\) Observe that
\[\log\big{(}1-\cos^{2}(a)-\sin^{2}(a)\cos(x)\big{)} =\log\big{(}1-\cos(x)\big{)}+\log\big{(}\sin^{2}(a)\big{)}\] \[=\log\Big{(}\sin^{2}\left(\tfrac{x}{2}\right)\Big{)}+\log(2)+\log \big{(}\sin^{2}(a)\big{)}.\]
Hence, using [15, Lem. A.3], we get
\[I_{n}(a,a)=\frac{1}{2\pi}\int_{0}^{2\pi}\log\Big{(}\sin^{2}\left(\tfrac{x}{2} \right)\Big{)}\cos(nx)dx=-\frac{1}{n}.\]
\(\blacktriangleright\) Now, we assume \(a\neq b\) and, without loss of generality, \(a<b.\) We can write
\[I_{n}(a,b)=\frac{1}{\pi}\int_{0}^{\pi}\cos(nx)\log\big{(}1-\mu_{a,b}\cos(x)\big{)}dx,\qquad\mu_{a,b}\triangleq\frac{\sin(a)\sin(b)}{1-\cos(a)\cos(b)}>0.\]
\[I_{n}(a,b)=\frac{1}{\pi}\int_{0}^{\pi}\cos(nx)\log\big{(}1-\mu_{a,b}\cos(x) \big{)}dx,\qquad\mu_{a,b}\triangleq\frac{\sin(a)\sin(b)}{1-\cos(a)\cos(b)}>0.\]
Notice that
\[a\neq b\Leftrightarrow\cos(a-b)<1 \Leftrightarrow\cos(a)\cos(b)+\sin(a)\sin(b)<1\] \[\Leftrightarrow\sin(a)\sin(b)<1-\cos(a)\cos(b)\] \[\Leftrightarrow\mu_{a,b}<1.\]
Performing an integration by parts yields
\[I_{n}(a,b)=-\frac{\mu_{a,b}}{n\pi}\int_{0}^{\pi}\frac{\sin(nx)\sin(x)}{1-\mu_ {a,b}\cos(x)}dx.\]
Now we shall use the following result which can be found in [42, p. 391]
\[\frac{1}{1+\alpha^{2}}\int_{0}^{\pi}\frac{\sin(nx)\sin(x)}{1-\frac{2\alpha}{ 1+\alpha^{2}}\cos(x)}dx=\int_{0}^{\pi}\frac{\sin(nx)\sin(x)}{1-2\alpha\cos(x) +\alpha^{2}}dx=\begin{cases}\frac{\pi}{2}\alpha^{n-1},&\text{if }\alpha^{2}<1,\\ \frac{\pi}{2\alpha^{n+1}},&\text{if }\alpha^{2}>1.\end{cases}\]
We apply it with
\[\frac{2\alpha}{1+\alpha^{2}}=\mu_{a,b},\qquad\text{i.e.}\qquad\mu_{a,b} \alpha^{2}-2\alpha+\mu_{a,b}=0.\]
The discriminant of the previous second order polynomial equation is \(\Delta=4(1-\mu_{a,b}^{2})>0\), so we can take
\[\alpha=\alpha_{a,b}\triangleq\frac{1-\sqrt{1-\mu_{a,b}^{2}}}{\mu_{a,b}}\in(0,1).\]
Observe that
\[\alpha_{a,b}<1 \Leftrightarrow 1-\mu_{a,b}<\sqrt{1-\mu_{a,b}^{2}}\] \[\Leftrightarrow(1-\mu_{a,b})^{2}<1-\mu_{a,b}^{2}\] \[\Leftrightarrow\mu_{a,b}<1.\]
Hence,
\[I_{n}(a,b)=-\frac{\mu_{a,b}(1+\alpha_{a,b}^{2})\alpha_{a,b}^{n-1}}{2n}=-\frac {\alpha_{a,b}^{n}}{n}.\]
We can also write
\[\alpha_{a,b} =\frac{1}{\mu_{a,b}}-\sqrt{\frac{1}{\mu_{a,b}^{2}}-1}\] \[=\frac{1-\cos(a)\cos(b)-\sqrt{\big{(}1-\cos(a)\cos(b)\big{)}^{2}- \sin^{2}(a)\sin^{2}(b)}}{\sin(a)\sin(b)}.\]
But
\[\big{(}1-\cos(a)\cos(b)\big{)}^{2}-\sin^{2}(a)\sin^{2}(b) =1+\cos^{2}(a)\cos^{2}(b)-2\cos(a)\cos(b)-\big{(}1-\cos^{2}(a) \big{)}\big{(}1-\cos^{2}(b)\big{)}\] \[=\cos^{2}(a)+\cos^{2}(b)-2\cos(a)\cos(b)\] \[=\big{(}\cos(a)-\cos(b)\big{)}^{2}.\]
Since \(a,b\in(0,\pi)\) with \(a<b\), then \(\cos(a)>\cos(b)\). Consequently,
\[\alpha_{a,b} =\frac{1-\cos(a)\cos(b)-\big{(}\cos(a)-\cos(b)\big{)}}{\sin(a) \sin(b)}\] \[=\frac{\big{(}1-\cos(a)\big{)}\big{(}1+\cos(b)\big{)}}{\sin(a) \sin(b)}\] \[=\frac{2\sin^{2}\big{(}\frac{a}{2}\big{)}\,2\cos^{2}\big{(}\frac{ b}{2}\big{)}}{2\sin\big{(}\frac{a}{2}\big{)}\cos\big{(}\frac{a}{2}\big{)}\,2\sin \big{(}\frac{b}{2}\big{)}\cos\big{(}\frac{b}{2}\big{)}}\] \[=\tan\big{(}\frac{a}{2}\big{)}\cot\big{(}\frac{b}{2}\big{)}\,.\]
Finally, we have
\[\forall n\in\mathbb{N}^{*},\quad\forall\,0<a<b<\pi,\quad I_{n}(a,b)=-\frac{ \tan^{n}\big{(}\frac{a}{2}\big{)}\cot^{n}\big{(}\frac{b}{2}\big{)}}{n}.\]
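For the reader who wishes to verify the closed form numerically, the following short script (Python with NumPy, given here only as a sanity check and not as part of the proof) compares a direct quadrature of \(I_{n}(a,b)\) with the formula above for a few values of \(n\), \(a\) and \(b\).

```python
import numpy as np

def I_n_quadrature(n, a, b, num_points=200000):
    # (1/2pi) * integral over [0, 2pi) of cos(n x) log(1 - cos a cos b - sin a sin b cos x)
    x = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    integrand = np.cos(n * x) * np.log(1.0 - np.cos(a) * np.cos(b)
                                       - np.sin(a) * np.sin(b) * np.cos(x))
    return integrand.mean()   # periodic rectangle rule equals (1/2pi) times the integral

def I_n_closed_form(n, a, b):
    lo, hi = min(a, b), max(a, b)
    return -(np.tan(lo / 2.0) / np.tan(hi / 2.0)) ** n / n

for n, a, b in [(1, 0.7, 1.9), (3, 1.2, 2.5), (5, 0.4, 0.9)]:
    print(n, a, b, I_n_quadrature(n, a, b), I_n_closed_form(n, a, b))
```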
### Potential theory
This section is devoted to some results on the continuity of specific operators with singular kernels. The proof of the next result can be found in [58, Lem. 2.6].
**Proposition A.1**.: _Let \(\alpha\in(0,1).\) Consider a kernel \(K:\mathbb{T}\times\mathbb{T}\to\mathbb{R}\) smooth out of the diagonal and satisfying, for some \(C_{0}>0\),_
\[\forall\varphi\neq\varphi^{\prime}\in\mathbb{T},\quad|K(\varphi, \varphi^{\prime})| \leqslant C_{0}\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2} \right)\right|^{-(1-\alpha)},\] (A.1) \[\forall\varphi\neq\varphi^{\prime}\in\mathbb{T},\quad|\partial_{ \varphi}K(\varphi,\varphi^{\prime})| \leqslant C_{0}\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2} \right)\right|^{-(2-\alpha)}.\] (A.2)
_Then, the integral operator \(\mathcal{K}\) defined by_
\[\forall\varphi\in\mathbb{T},\quad\mathcal{K}(f)(\varphi)\triangleq\int_{0}^{ 2\pi}K(\varphi,\varphi^{\prime})f(\varphi^{\prime})d\varphi^{\prime}\]
_is bounded from \(L^{\infty}(\mathbb{T})\) into \(C^{\alpha}(\mathbb{T}).\) More precisely, we have the following estimate_
\[\forall f\in L^{\infty}(\mathbb{T}),\quad\|\mathcal{K}(f)\|_{C^{\alpha}( \mathbb{T})}\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})},\]
_with \(C>0\) an absolute constant._
In some cases, the above proposition cannot be applied directly because the kernel \(K\) contains a non-differentiable factor, so that the condition (A.2) does not make sense. For those cases, we give the following alternative result.
**Proposition A.2**.: _Let \(\alpha\in(0,1)\) and \(g\in C^{\alpha}(\mathbb{T})\), consider a kernel \(K:\mathbb{T}\times\mathbb{T}\to\mathbb{R}\) smooth out of the diagonal and satisfying_
\[\forall\varphi\neq\varphi^{\prime}\in\mathbb{T},\quad|K(\varphi, \varphi^{\prime})| \leqslant C_{0}\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2} \right)\right|^{-1},\] (A.3) \[\forall\varphi\neq\varphi^{\prime}\in\mathbb{T},\quad|\partial_{ \varphi}K(\varphi,\varphi^{\prime})| \leqslant C_{0}\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2} \right)\right|^{-2}.\] (A.4)
_Then, the integral operator \(\widetilde{\mathcal{K}}_{g}\) defined by_
\[\forall\varphi\in\mathbb{T},\quad\widetilde{\mathcal{K}}_{g}(f)(\varphi) \triangleq\int_{0}^{2\pi}K(\varphi,\varphi^{\prime})\big{(}g(\varphi)-g( \varphi^{\prime})\big{)}f(\varphi^{\prime})d\varphi^{\prime}\]
_is bounded from \(L^{\infty}(\mathbb{T})\) into \(C^{\alpha}(\mathbb{T}).\) More precisely, we have the following estimate_
\[\forall f\in L^{\infty}(\mathbb{T}),\quad\|\widetilde{\mathcal{K}}_{g}(f)\|_ {C^{\alpha}(\mathbb{T})}\leqslant CC_{0}\|g\|_{C^{\alpha}(\mathbb{T})}\|f\|_{L ^{\infty}(\mathbb{T})},\]
_with \(C>0\) an absolute constant._
Proof.: The \(L^{\infty}\) norm of \(\widetilde{\mathcal{K}}_{g}(f)\) can be estimated as
\[\left|\widetilde{\mathcal{K}}_{g}(f)(\varphi)\right| \leqslant C\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}( \mathbb{T})}\int_{0}^{2\pi}|K(\varphi,\varphi^{\prime})||\varphi-\varphi^{ \prime}|^{\alpha}d\varphi^{\prime}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}( \mathbb{T})}\sup_{\varphi\in\mathbb{T}}\int_{0}^{2\pi}\left|\sin\left(\tfrac {\varphi-\varphi^{\prime}}{2}\right)\right|^{-1}|\varphi-\varphi^{\prime}|^{ \alpha}d\varphi^{\prime}.\]
Since we work on the torus, we can always assume that
\[\forall\varphi\neq\varphi^{\prime}\in\mathbb{T},\quad 0<|\varphi-\varphi^{ \prime}|\leqslant\pi.\]
As a consequence, we have the following classical convexity estimate
\[\forall\varphi\neq\varphi^{\prime}\in\mathbb{T},\quad\tfrac{2}{\pi}|\varphi- \varphi^{\prime}|\leqslant 2\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2} \right)\right|\leqslant|\varphi-\varphi^{\prime}|.\] (A.5)
Therefore, by using (A.5), a change of variable and the fact that \(\alpha\in(0,1)\), we get
\[\forall\varphi\in\mathbb{T},\quad\int_{0}^{2\pi}\left|\sin\left(\tfrac{ \varphi-\varphi^{\prime}}{2}\right)\right|^{-1}|\varphi-\varphi^{\prime}|^{ \alpha}d\varphi^{\prime}\leqslant C\int_{0}^{2\pi}\left|\sin\left(\tfrac{ \varphi^{\prime}}{2}\right)\right|^{-(1-\alpha)}d\varphi^{\prime}<\infty.\]
Hence, we obtain
\[\|\widetilde{\mathcal{K}}_{g}(f)\|_{L^{\infty}(\mathbb{T})}\leqslant CC_{0} \|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}.\]
For the Hölder regularity, take \(\varphi_{1}\neq\varphi_{2}\in\mathbb{T}\). Define
\[d\triangleq 2\left|\sin\left(\tfrac{\varphi_{1}-\varphi_{2}}{2}\right)\right| =\left|e^{\mathrm{i}\varphi_{1}}-e^{\mathrm{i}\varphi_{2}}\right|\]
and for \(\varphi\in\mathbb{T}\) and \(r>0\),
\[B_{\varphi}(r)\triangleq\left\{\varphi^{\prime}\in\mathbb{T}\quad\text{s.t.} \quad 2\left|\sin\left(\tfrac{\varphi-\varphi^{\prime}}{2}\right)\right|<r \right\},\qquad B_{\varphi}^{c}(r)\triangleq\mathbb{T}\setminus B_{\varphi}(r).\]
Hence
\[\widetilde{\mathcal{K}}_{g}(f)(\varphi_{1})-\widetilde{\mathcal{ K}}_{g}(f)(\varphi_{2})= \int_{0}^{2\pi}K(\varphi_{1},\varphi^{\prime})\big{(}g(\varphi_{1})-g( \varphi^{\prime})\big{)}f(\varphi^{\prime})d\varphi^{\prime}-\int_{0}^{2\pi }K(\varphi_{2},\varphi^{\prime})\big{(}g(\varphi_{2})-g(\varphi^{\prime}) \big{)}f(\varphi^{\prime})d\varphi^{\prime}\] \[= \int_{B_{\varphi_{1}}(3d)}K(\varphi_{1},\varphi^{\prime})\big{(}g (\varphi_{1})-g(\varphi^{\prime})\big{)}f(\varphi^{\prime})d\varphi^{\prime}\] \[-\int_{B_{\varphi_{1}}(3d)}K(\varphi_{2},\varphi^{\prime})\big{(} g(\varphi_{2})-g(\varphi^{\prime})\big{)}f(\varphi^{\prime})d\varphi^{\prime}\] \[+\int_{B_{\varphi_{1}}^{c}(3d)}(K(\varphi_{1},\varphi^{\prime})-K( \varphi_{2},\varphi^{\prime}))\big{(}g(\varphi_{1})-g(\varphi^{\prime})\big{)}f( \varphi^{\prime})d\varphi^{\prime}\] \[+\int_{B_{\varphi_{1}}^{c}(3d)}K(\varphi_{2},\varphi^{\prime}) \big{(}g(\varphi_{1})-g(\varphi_{2})\big{)}f(\varphi^{\prime})d\varphi^{\prime}\] \[\triangleq I_{1}+I_{2}+I_{3}+I_{4}.\]
Using (A.1), (A.5) and a change of variables, we arrive at
\[|I_{1}| \leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}\int_{B_{\varphi_{1}}(3d)}\left|\sin\left(\tfrac{\varphi_{1}-\varphi^{\prime}}{2}\right)\right|^{-1}|\varphi_{1}-\varphi^{\prime}|^{\alpha}d\varphi^{\prime}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}\int_{B_{\varphi_{1}}(3d)}\left|\sin\left(\tfrac{\varphi_{1}-\varphi^{\prime}}{2}\right)\right|^{-(1-\alpha)}d\varphi^{\prime}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}\int_{0}^{\frac{3}{4}d}\frac{dw}{|w|^{1-\alpha}\sqrt{1-w^{2}}}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}d^{\alpha}\] \[=CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}|\varphi_{1}-\varphi_{2}|^{\alpha}.\]
In order to work with \(I_{2}\), note that \(B_{\varphi_{1}}(3d)\subset B_{\varphi_{2}}(4d)\). Thus, proceeding as before, we infer
\[|I_{2}| \leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}\int_{B_{\varphi_{1}}(3d)}\left|\sin\left(\tfrac{\varphi_{2}-\varphi^{\prime}}{2}\right)\right|^{-1}|\varphi_{2}-\varphi^{\prime}|^{\alpha}d\varphi^{\prime}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}\int_{B_{\varphi_{2}}(4d)}\left|\sin\left(\tfrac{\varphi_{2}-\varphi^{\prime}}{2}\right)\right|^{-1}|\varphi_{2}-\varphi^{\prime}|^{\alpha}d\varphi^{\prime}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}\int_{B_{\varphi_{2}}(4d)}\left|\sin\left(\tfrac{\varphi_{2}-\varphi^{\prime}}{2}\right)\right|^{-(1-\alpha)}d\varphi^{\prime}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}|\varphi_{1}-\varphi_{2}|^{\alpha}.\]
For the third term \(I_{3}\) we use the mean value theorem and (A.2) achieving
\[|I_{3}| \leqslant C\left|(\varphi_{1}-\varphi_{2})\int_{0}^{1}\int_{B_{\varphi_{1}}^{c}(3d)}(\partial_{x}K)\big{(}\varphi_{1}+(1-s)(\varphi_{2}-\varphi_{1}),\varphi^{\prime}\big{)}\big{(}g(\varphi_{1})-g(\varphi^{\prime})\big{)}f(\varphi^{\prime})d\varphi^{\prime}ds\right|\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}|\varphi_{1}-\varphi_{2}|\int_{0}^{1}\int_{B_{\varphi_{1}}^{c}(3d)}\left|\sin\left(\tfrac{\varphi_{1}+(1-s)(\varphi_{2}-\varphi_{1})-\varphi^{\prime}}{2}\right)\right|^{-2}|\varphi_{1}-\varphi^{\prime}|^{\alpha}d\varphi^{\prime}ds.\]
Note that if \(\varphi^{\prime}\in B_{\varphi_{1}}^{c}(3d)\) and \(s\in[0,1]\), then
\[2\left|\sin\left(\tfrac{\varphi_{1}+(1-s)(\varphi_{2}-\varphi_{ 1})-\varphi^{\prime}}{2}\right)\right| =\left|e^{\mathrm{i}(\varphi_{1}-\varphi^{\prime})}-e^{\mathrm{i }(1-s)(\varphi_{1}-\varphi_{2})}\right|\] \[\geqslant\left|e^{\mathrm{i}(\varphi_{1}-\varphi^{\prime})}-1 \right|-\left|e^{\mathrm{i}(1-s)(\varphi_{1}-\varphi_{2})}-1\right|\] \[\geqslant\tfrac{2}{3}\left|e^{\mathrm{i}\varphi_{1}}-e^{\mathrm{ i}\varphi^{\prime}}\right|=\tfrac{4}{3}\left|\sin\left(\tfrac{\varphi_{1}- \varphi^{\prime}}{2}\right)\right|\] (A.6)
which implies, through (A.5) and a change of variables,
\[|I_{3}| \leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}|\varphi_{1}-\varphi_{2}|\int_{B_{\varphi_{1}}^{c}(3d)}\left|\sin\left(\tfrac{\varphi_{1}-\varphi^{\prime}}{2}\right)\right|^{-2}|\varphi_{1}-\varphi^{\prime}|^{\alpha}d\varphi^{\prime}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}|\varphi_{1}-\varphi_{2}|\int_{B_{\varphi_{1}}^{c}(3d)}\left|\sin\left(\tfrac{\varphi_{1}-\varphi^{\prime}}{2}\right)\right|^{-(2-\alpha)}d\varphi^{\prime}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}|\varphi_{1}-\varphi_{2}|\int_{d}^{1}\frac{dw}{|w|^{2-\alpha}\sqrt{1-w^{2}}}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}|\varphi_{1}-\varphi_{2}|\frac{1}{|\varphi_{1}-\varphi_{2}|^{1-\alpha}}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}|\varphi_{1}-\varphi_{2}|^{\alpha}.\]
Let us finish with \(I_{4}\), using that \(g\in C^{\alpha}(\mathbb{T})\),
\[|I_{4}|\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}\int_{B_{\varphi_{1}}^{c}(3d)}\left|\sin\left(\tfrac{\varphi_{2}-\varphi^{\prime}}{2}\right)\right|^{-1}|\varphi_{1}-\varphi_{2}|^{\alpha}d\varphi^{\prime}.\]
Applying (A.6) with \(s=0\), we get
\[2\left|\sin\left(\tfrac{\varphi_{2}-\varphi^{\prime}}{2}\right)\right|\geqslant \tfrac{4}{3}\left|\sin\left(\tfrac{\varphi_{1}-\varphi^{\prime}}{2}\right) \right|.\]
Besides, for \(\varphi^{\prime}\in B^{c}_{\varphi_{1}}(3d)\), we have
\[\left|\sin\left(\frac{\varphi_{1}-\varphi_{2}}{2}\right)\right|\leqslant\frac{1} {3}\left|\sin\left(\frac{\varphi_{1}-\varphi^{\prime}}{2}\right)\right|.\]
Combining the foregoing facts, we end up with
\[|I_{4}| \leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}\int_{B^{c}_{\varphi_{1}}(3d)}\left|\sin\left(\frac{\varphi_{1}-\varphi^{\prime}}{2}\right)\right|^{-(1-\alpha)}d\varphi^{\prime}\] \[\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}|\varphi_{1}-\varphi_{2}|^{\alpha}.\]
Putting together the preceding estimates yields
\[\left|\widetilde{\mathcal{K}}_{g}(f)(\varphi_{1})-\widetilde{\mathcal{K}}_{g}(f)(\varphi_{2})\right|\leqslant CC_{0}\|f\|_{L^{\infty}(\mathbb{T})}\|g\|_{C^{\alpha}(\mathbb{T})}|\varphi_{1}-\varphi_{2}|^{\alpha},\]
concluding the proof.
### Crandall-Rabinowitz theorem
In this last appendix, we recall the classical Crandall-Rabinowitz Theorem whose proof can be found in [20].
**Theorem A.1** (Crandall-Rabinowitz Theorem).: _Let \(\lambda_{0}\in\mathbb{R},X,Y\) be two Banach spaces, \(V\) be a neighborhood of \(0\) in \(X\) and \(\mathscr{F}:\mathbb{R}\times V\to Y\) be a function with the properties,_
1. \(\mathscr{F}(\lambda,0)=0\) _for all_ \(\lambda\in\mathbb{R}\)_._
2. _The partial derivatives_ \(\partial_{\lambda}\mathscr{F}\)_,_ \(d_{f}\mathscr{F}\) _and_ \(\partial_{\lambda}d_{f}\mathscr{F}\) _exist and are continuous._
3. _The operator_ \(d_{f}\mathscr{F}(\lambda_{0},0)\) _is Fredholm of zero index and_ \(\ker\big{(}d_{f}\mathscr{F}(\lambda_{0},0)\big{)}=\mathtt{span}(f_{0})\) _is one-dimensional._
4. _Transversality assumption:_ \(\partial_{\lambda}d_{f}\mathscr{F}(\lambda_{0},0)[f_{0}]\notin\mathrm{Im} \big{(}d_{f}\mathscr{F}(\lambda_{0},0)\big{)}\)_._
_If \(Z\) is any complement of \(\ker\big{(}d_{f}\mathscr{F}(\lambda_{0},0)\big{)}\) in \(X\), then there is a neighborhood \(U\) of \((\lambda_{0},0)\) in \(\mathbb{R}\times X\), an interval \((-a,a)\) with \(a>0\), and two continuous functions \(\Phi:(-a,a)\to\mathbb{R}\), \(\beta:(-a,a)\to Z\) such that \(\Phi(0)=\lambda_{0}\) and \(\beta(0)=0\) and_
\[\mathscr{F}^{-1}(0)\cap U=\big{\{}\big{(}\Phi(s),sf_{0}+s\beta(s)\big{)}:|s|< a\big{\}}\cup\{(\lambda,0):(\lambda,0)\in U\}.\]
### Proof of Lemma 1.2
Fix \(\alpha\in\mathbb{R}\). Since \(\mathcal{R}(\alpha)\in SO_{3}(\mathbb{R})\), then it preserves the Euclidean norm \(|\cdot|_{\mathbb{R}^{3}}\), i.e.
\[\forall\xi\in\mathbb{R}^{3},\quad\left|\mathcal{R}(\alpha)\xi\right|_{ \mathbb{R}^{3}}=|\xi|_{\mathbb{R}^{3}}.\]
As a consequence,
\[\forall(\xi,\xi^{\prime})\in(\mathbb{S}^{2})^{2},\quad\forall \alpha\in\mathbb{R},\quad G\big{(}\mathcal{R}(\alpha)\xi,\mathcal{R}(\alpha) \xi^{\prime}\big{)} =\frac{1}{2\pi}\log\left(\frac{\left|\mathcal{R}(\alpha)\xi- \mathcal{R}(\alpha)\xi^{\prime}\right|_{\mathbb{R}^{3}}}{2}\right)\] \[=\frac{1}{2\pi}\log\left(\frac{\left|\mathcal{R}(\alpha)(\xi-\xi^ {\prime})\right|_{\mathbb{R}^{3}}}{2}\right)\] \[=\frac{1}{2\pi}\log\left(\frac{|\xi-\xi^{\prime}|_{\mathbb{R}^{3} }}{2}\right)\] \[=G(\xi,\xi^{\prime}).\]
Hence, using the change of variables \(\xi^{\prime}\mapsto\mathcal{R}(\alpha)\xi^{\prime}\in SO(\mathbb{R}^{3})\) (which preserves \(\mathbb{S}^{2}\)), we get for any \(\xi\in\mathbb{S}^{2}\),
\[\Psi\big{(}\mathcal{R}(\alpha)\xi\big{)} =\int_{\mathbb{S}^{2}}G\big{(}\mathcal{R}(\alpha)\xi,\xi^{\prime }\big{)}\Omega(\xi^{\prime})d\xi^{\prime}\] \[=\int_{\mathbb{S}^{2}}G\big{(}\mathcal{R}(\alpha)\xi,\mathcal{R}( \alpha)\xi^{\prime}\big{)}\Omega\big{(}\mathcal{R}(\alpha)\xi^{\prime}\big{)}d \xi^{\prime}\] \[=\int_{\mathbb{S}^{2}}G(\xi,\xi^{\prime})\Omega(\xi^{\prime})d\xi^ {\prime}\] \[=\Psi(\xi).\] (A.7)
This achieves the proof of Lemma 1.2. |
2306.17428 | Pure-state photon-pair source with a long coherence time for large-scale
quantum information processing | The Hong-Ou-Mandel interference between independent photons plays a pivotal
role in the large-scale quantum networks involving distant nodes. Photons need
to work in a pure state for indistinguishability to reach high-quality
interference. Also, they need to have a sufficiently long coherence time to
reduce the time synchronization requirements in practical application. In this
paper, we discuss a scheme for generating a pure-state photon-pair source with
a long coherence time in periodically poled potassium titanyl phosphate (PPKTP)
crystals. By selecting the appropriate pump laser and filter, we could
simultaneously eliminate the frequency correlation of the parametric photons
while achieving a long coherence time. We experimentally developed this
pure-state photon-pair source of 780 nm on PPKTP crystals pumped by a 390 nm
pulsed laser. The source provided a coherence time of tens of picoseconds, and
it showed to have the potential to be applied in long-distance quantum
interference. Furthermore, we experimentally demonstrated the Hong-Ou-Mandel
(HOM) interference between two photon sources with visibility exceeding the
classical limit. | Bo Li, Yu-Huai Li, Yuan Cao, Juan Yin, Cheng-Zhi Peng | 2023-06-30T06:57:36Z | http://arxiv.org/abs/2306.17428v1 | Pure-state photon-pair source with a long coherence time for large-scale quantum information processing
###### Abstract
The Hong-Ou-Mandel interference between independent photons plays a pivotal role in the large-scale quantum networks involving distant nodes. Photons need to work in a pure state for indistinguishability to reach high-quality interference. Also, they need to have a sufficiently long coherence time to reduce the time synchronization requirements in practical application. In this paper, we discuss a scheme for generating a pure-state photon-pair source with a long coherence time in periodically poled potassium titanyl phosphate (PPKTP) crystals. By selecting the appropriate pump laser and filter, we could simultaneously eliminate the frequency correlation of the parametric photons while achieving a long coherence time. We experimentally developed this pure-state photon-pair source of 780 nm on PPKTP crystals pumped by a 390 nm pulsed laser. The source provided a coherence time of tens of picoseconds, and it showed to have the potential to be applied in long-distance quantum interference. Furthermore, we experimentally demonstrated the Hong-Ou-Mandel (HOM) interference between two photon sources with visibility exceeding the classical limit.
## I Introduction
Photon-pair sources are essential in quantum information processing, including quantum communication [1; 2], quantum computation [3], and Bell tests [4; 5]. Complex quantum communication protocols such as teleportation [6] and swapping [7] involve more than two photons, where interference is required between independent photon sources. Thus, independent photon-pair sources should be prepared with high spectral purity. To implement these schemes on a large scale, it is crucial for the coherence time of the photon source to be sufficiently long (on the order of picoseconds), which helps mitigate the challenges associated with time synchronization over long distances. At the same time, other key parameters such as brightness and purity must be thoroughly addressed. Moreover, the source implementation should provide the flexibility and robustness required for practical applications.
Nowadays, the most extensively used approach to generate photon pairs is spontaneous parametric down-conversion (SPDC) in nonlinear crystals [8]. The pulse duration, as well as the material, thickness, and cut angle of a nonlinear medium, or the length and poling period of a periodically poled nonlinear medium, should be carefully considered to deal with the group velocity match (GVM) issue [9]. In conventional multiphoton setups, the commonly used crystal is \(\beta\)-barium borate (BBO) or bismuth borate (BiBO) with a thickness of a few millimeters, resulting in a photon bandwidth of 3 (or 8) nm [10; 11; 12]. This configuration, operated with a high-power (\(\sim\) Watt-level) femtosecond (\(\sim\) 100 fs) pump laser, lacks flexibility when applied in practical field experiments or power-starved scenarios such as satellites. With a \(\sim 100\) fs pump pulse [10], the synchronization precision between remote parties needs to reach the level of ten femtoseconds, which requires state-of-the-art technology involving optical clocks [13], cavity-stabilized lasers [14], and frequency-comb-assisted time transfer [15]; such systems are not currently practical. For practical applications, it is advisable to ensure a coherence time of the source comparable to the time synchronization precision achievable with commercial atomic clocks or oven-protected crystal oscillators [16]. Typically, time synchronization systems utilizing these methods yield a precision of tens of picoseconds.
By taking advantage of quasi-phase-matching (QPM), periodically poled nonlinear crystals provide a flexible approach to almost arbitrary phase-matching angles and wavelengths [17]. Thus, with a collinear configuration, the crystal can be fairly long, giving a high generation rate of photon pairs without unwanted walk-off. QPM-SPDC sources have a variety of applications, such as space-borne [18; 19; 20; 21] and airborne payloads [22; 23] in long-distance quantum communication, and photonic boson sampling [24] in quantum computation. Considerable effort has been devoted to telecom photon pairs generated from PPKTP [25; 26] for multiphoton applications. When preparing degenerate parametric photons at around 1560 nm in type-II PPKTP crystals, the GVM condition is intrinsically satisfied. Thus, the time and frequency correlations between the signal and idler photons are naturally eliminated even without any filters. However, due to the longer wavelength, the emission rate of photon pairs at the telecom band is lower than that of the configuration producing visible parametric photons [27]. PPKTP-based photon-pair sources with a wavelength of 780 nm, which are suitable for non-classical interference, have been reported [28]. However, that configuration employs a short crystal (1 mm) and ultrafast pump pulses (150 fs), leading to a short coherence time of \(\sim\) 1 ps.
In this paper, by carefully examining the GVM condition, we find that filtered SPDC sources can still be beneficial for field applications with an acceptable loss. We show that a 780-nm photon-pair source with a bandwidth of 30 pm can be constructed by carefully selecting the pump laser, the nonlinear crystal parameters, and proper filters on the parametric photons. This implies that a coherence time of tens of picoseconds and a pair generation rate on the order of \(\sim 2\times 10^{5}\) pairs/mW/s can be reached. With such a configuration, a 30-mW pump laser with a repetition rate of 76 MHz is sufficient to produce \(\sim\) 0.1 photon pairs per pulse, which lowers the requirement for a high-power pump. Furthermore, a proof-of-principle experiment of the Hong-Ou-Mandel (HOM) interference [29] between two heralded single-photon sources was performed to demonstrate the feasibility of the proposed method in multiphoton applications.
## II Design of the photon source
When developing a photon-pair source, it is crucial to consider the photon spectral emission and phase-matching conditions. The photon yield depends on several factors, including the wavelength employed, the crystal's length, and the focusing parameters in bulk crystals. The phase-matching conditions affect the spectral distribution of the generated photons. Understanding these conditions is essential for establishing the ideal filtering approach and other specific applications.
### General concepts
#### ii.1.1 Spectral emission
According to the theory of the SPDC process in bulk crystals, in the case of collinear emission at the degenerate wavelength, the spectral generation rate of parametric photons can be written as follows [30]:
\[\frac{dP_{s}}{P_{p}\cdot d\omega_{s}}=\frac{\hbar d^{2}L\omega_{s}^{4}}{2\pi c ^{4}\varepsilon_{0}n_{p}^{2}}f\left(\lambda_{s}\right)=\frac{\hbar d^{2}}{2\pi c ^{4}\varepsilon_{0}}f\left(\lambda_{s}\right)\cdot\frac{1}{n_{p}^{2}}\cdot L \omega_{s}^{4} \tag{1}\]
where \(\omega_{s}\), \(\omega_{i}\) and \(\omega_{p}\) are the angular frequencies of the signal, idler and pump photons, respectively. \(P_{s}\) is the signal photon power integrated over all emission angles. \(d\) is the effective nonlinear coefficient, \(L\) is the crystal length, \(\omega\) is the angular frequency, \(c\) is the speed of light in vacuum, \(\varepsilon_{0}\) is the permittivity of vacuum, \(n_{p}\) is the refractive index of the pump laser in the crystal, and \(P_{p}\) is the pump power. \(f(\lambda_{s})\) is a geometry-related function, and it can be regarded as a constant in this paper. The first term \(\frac{\hbar d^{2}}{2\pi c^{4}\varepsilon_{0}}f\left(\lambda_{s}\right)\) is a constant. The value of the second term \(\frac{1}{n_{p}^{2}}\) depends on \(\omega_{p}\). For a photon polarized along the y-axis in a PPKTP crystal, the refractive index varies from 1.844 to 1.727 over the wavelength range of 0.4-2.0 \(\mu\)m [31]. This variation is negligible, as it is less than 7%. As indicated by the last term, the spectral brightness is directly proportional to the crystal length and to the fourth power of \(\omega_{s}\). Therefore, the spectral brightness can be increased by using a longer crystal and a shorter wavelength. It should be noted that, in our scheme, the filter does not change the spectral emission of the crystal. Thus, maximizing the initial spectral emission rate is imperative to increase the number of photons available after a narrow filter.
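As an illustration of the scaling in Eq. 1, the following minimal sketch (Python, with the prefactor and \(n_{p}\) treated as constant, as assumed above) compares the relative spectral brightness of a degenerate 780 nm source with a degenerate 1560 nm source for the same crystal length; the crystal length is an example value.

```python
# Relative spectral brightness ~ L * omega_s^4 (prefactor and n_p treated as constant).
L_mm = 30.0                                # crystal length in mm (example value)
for lam_nm in (780.0, 1560.0):             # degenerate signal wavelengths to compare
    omega_rel = 1.0 / lam_nm               # omega_s is proportional to 1/lambda_s
    print(f"{lam_nm:6.0f} nm: relative brightness = {L_mm * omega_rel ** 4:.3e}")
# The 780 nm configuration gains a factor of (1560/780)**4 = 16 over the telecom configuration.
```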
#### ii.1.2 Spectral correlation
This part details the spectral correlation effect of the general SPDC process. Theoretically, the wave function of the parametric photons can be expressed as follows [32]
\[|\psi\rangle=N\int\int d\omega_{s}d\omega_{i}f(\omega_{s},\omega_{i})\hat{a} _{s}^{\dagger}(\omega_{s})\hat{a}_{i}^{\dagger}(\omega_{i})|vac\rangle, \tag{2}\]
where \(\hat{a}_{s}^{\dagger}(\omega_{s})\) and \(\hat{a}_{i}^{\dagger}(\omega_{i})\) are the photon creation operators for the signal and idler beams, respectively, \(|vac\rangle\) is the vacuum state, and \(N\) is a normalization constant. \(f(\omega_{s},\omega_{i})\) is the joint spectral amplitude (JSA), which represents the spectral correlation between the signal and idler photons, and it can be expressed as the product of the pump envelope function \(\alpha(\omega_{s},\omega_{i})\) and the QPM function \(\phi(\omega_{s},\omega_{i})\):
\[f(\omega_{s},\omega_{i})=\alpha(\omega_{s}+\omega_{i})\phi(\omega_{s},\omega_{ i}). \tag{3}\]
In the case of pumping by a Gaussian line shaped laser with a central frequency of \(\overline{\omega}_{p}\) and a bandwidth of \(\sigma_{p}\), the pump envelope function can be written as follows:
\[\alpha(\omega_{s}+\omega_{i})\propto exp[-\frac{(\omega_{s}+\omega_{i}- \overline{\omega}_{p})^{2}}{2\sigma_{p}^{2}}]. \tag{4}\]
The phase-matching function is given by the following:
\[\phi(\omega_{s},\omega_{i})=\mathrm{sinc}\big{[}\frac{L}{2}\Delta k\big{]}, \tag{5}\]
where \(\Delta k=k_{p}-k_{s}-k_{i}-m\frac{2\pi}{\Lambda}\), \(L\) is the crystal length, \(k\) is the wavenumber, \(\Lambda\) is the poling period, and \(m\) is the order of QPM.
The spectral correlation of photon pairs is determined by the joint spectral intensity (JSI) \(S(\omega_{s},\omega_{i})=|f(\omega_{s},\omega_{i})|^{2}\), which is the product of the pump envelope
intensity (PEI) \(|\alpha(\omega_{s}+\omega_{i})|^{2}\) and the phase-matching intensity (PMI) \(|\phi(\omega_{s},\omega_{i})|^{2}\). When there is a strong correlation between the photons, the initial purity of the generated photon pair will be low. One way to quantify the purity of the photon pair is to apply the Schmidt decomposition to the JSI; see Appendix A.
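A minimal numerical sketch of this construction is given below (Python/NumPy). It builds the JSA of Eqs. (3)-(5) on a detuning grid and estimates the Schmidt number via a singular value decomposition; the phase mismatch is linearised with assumed inverse-group-velocity differences rather than the actual PPKTP dispersion, so the resulting numbers are illustrative only.

```python
import numpy as np

c = 3e8                                         # speed of light, m/s
L = 30e-3                                       # crystal length, m
# pump bandwidth: 15 pm FWHM at 390 nm converted to an angular-frequency sigma
sigma_p = 2 * np.pi * c * 15e-12 / (390e-9) ** 2 / 2.355

nu = np.linspace(-4 * sigma_p, 4 * sigma_p, 400)        # detunings from degeneracy, rad/s
NS, NI = np.meshgrid(nu, nu, indexing="ij")

alpha = np.exp(-((NS + NI) ** 2) / (2 * sigma_p ** 2))  # pump envelope, Eq. (4)

# Eq. (5) with Delta_k linearised as tau_s*nu_s + tau_i*nu_i; the slopes below are
# assumed inverse-group-velocity differences (s/m), not fitted PPKTP values.
tau_s, tau_i = 0.8e-9, 1.6e-9
phi = np.sinc(L * (tau_s * NS + tau_i * NI) / (2 * np.pi))   # np.sinc(x) = sin(pi x)/(pi x)

f = alpha * phi                                 # JSA, Eq. (3)
sv = np.linalg.svd(f, compute_uv=False)         # Schmidt decomposition via SVD
lam = (sv / np.linalg.norm(sv)) ** 2            # Schmidt coefficients
K = 1.0 / np.sum(lam ** 2)                      # Schmidt number
print(f"Schmidt number K = {K:.2f}, heralded purity P = 1/K = {1 / K:.2f}")
```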
#### ii.1.3 Filtering method
Various technical approaches are available to select the frequency mode, thus enhance the purity, one of which is the cavity-enhanced SPDC technology [33, 34, 35, 36, 37, 38, 39, 40, 41]. In this technique, the photon has a typical bandwidth of several hundred MHz, making it well-suited for quantum storage. There are studies focused on developing it towards long-range quantum communication [36, 39]. However, manipulating cavity-enhanced SPDC is relatively complex, as it requires handling issues such as cavity design, stability, compensation, loss control, and mode selection. The current research limit is the source brightness and efficiency, making it less effective for practical long-range quantum communication.
In comparison, using a narrow-bandwidth filter to tailor the broadband SPDC photons is a straightforward and flexible way to increase the photon purity and coherence length. Suppose the interference filter is a Gaussian-type filter with a bandwidth of \(\sigma\); its amplitude function then takes the following form:
\[g(\Delta\omega)=e^{-\Delta\omega^{2}/2\sigma^{2}}. \tag{6}\]
By introducing this to both signal and idler photons, the filtered JSA can be represented as follows:
\[f^{\prime}(\omega_{s},\omega_{i})=g(\omega_{s}-\overline{\omega}_{s})g(\omega_ {i}-\overline{\omega}_{i})f(\omega_{s},\omega_{i}). \tag{7}\]
Similarly, the purity of the filtered photons can be obtained by Schmidt decomposition, which allows an optimal filtering strategy to be determined.
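Continuing the numerical sketch above, the snippet below applies identical Gaussian filters (Eq. (6)) to the signal and idler detunings and recomputes the Schmidt number of the filtered JSA of Eq. (7); the 30 pm filter width is again converted to an angular-frequency sigma and the result is illustrative only.

```python
# Gaussian filters on signal and idler, Eq. (6); 30 pm FWHM at 780 nm (assumed value)
sigma_f = 2 * np.pi * c * 30e-12 / (780e-9) ** 2 / 2.355
g_s = np.exp(-NS ** 2 / (2 * sigma_f ** 2))
g_i = np.exp(-NI ** 2 / (2 * sigma_f ** 2))

f_filtered = g_s * g_i * f                      # filtered JSA, Eq. (7)
sv_f = np.linalg.svd(f_filtered, compute_uv=False)
lam_f = (sv_f / np.linalg.norm(sv_f)) ** 2
K_f = 1.0 / np.sum(lam_f ** 2)
print(f"filtered Schmidt number K = {K_f:.2f}, purity = {1 / K_f:.2f}")
```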
### Detailed design
Below we use a long PPKTP crystal as an example to analyze how to design the source.
In engineering a photon-pair source for practical multi-photon use, achieving high purity is crucial. This can be accomplished either by manipulating the natural SPDC condition to meet specific requirements, or by filtering the spectrum afterwards. A proper filter not only increases purity, but also enhances coherence time. However, the use of filters may result in photon loss, which can be mitigated by selecting high-performance periodically poled crystals that possess a high initial photon flux.
The factors that affect the spectral brightness are represented in Eq. 1. To achieve a high photon flux, a type-II PPKTP crystal with a length of 30 mm and a poling period of 7.825 \(\mu\)m is used. The crystal was designed for degenerate emission with a pump laser of 390 nm, and it is crucial that the pump beam is focused onto the center of the crystal with an appropriate waist. A loose focus increases the coupling efficiency, while a tight focus increases the pair emission rate. A trade-off between coupling efficiency and emission rate is necessary depending on the specific application [24, 42]. For further detailed analysis, refer to [17, 27].
The pump laser bandwidth is another important factor to consider. The bandwidth has little effect on the absolute photon emission, but it greatly affects the distribution of the JSI and thus the final filter selection. In our scheme, we aim to preserve as many photons as possible, so an optimal bandwidth of around 10-15 pm is chosen at the lowest Schmidt number, as shown in Fig. 1. A lower Schmidt number corresponds to fewer modes in the initial spectrum, thus reducing the filtering loss. From this, the optimal bandwidth of the pump laser can be obtained.
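The bandwidth optimisation can be mimicked with the same toy model introduced above: the sketch below scans the pump bandwidth and reports the resulting Schmidt number, in the spirit of Fig. 1. The absolute values depend on the assumed dispersion slopes and the fixed detuning grid, so they are only illustrative.

```python
def schmidt_number(sigma_pump):
    # pump envelope for this bandwidth; the grid NS, NI and the phase-matching
    # function phi are reused from the sketch above (adequate for a rough scan)
    a = np.exp(-((NS + NI) ** 2) / (2 * sigma_pump ** 2))
    sv = np.linalg.svd(a * phi, compute_uv=False)
    lam = (sv / np.linalg.norm(sv)) ** 2
    return 1.0 / np.sum(lam ** 2)

for fwhm_pm in (5, 10, 15, 25, 50):
    sig = 2 * np.pi * c * fwhm_pm * 1e-12 / (390e-9) ** 2 / 2.355
    print(f"pump FWHM {fwhm_pm:2d} pm -> K = {schmidt_number(sig):.2f}")
```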
The corresponding JSI distribution is shown in Fig. 2. The slope of the PEI is fixed to \(135^{\circ}\) due to energy conservation, and its width is proportional to the bandwidth of the pump laser \(\sigma_{p}\). The width of the PMI is inversely proportional to the length of the PPKTP crystal \(L\), while its slope is highly dependent on the wavelengths of the pump, signal, and idler photons. For the type-II SPDC process of \(390~{}nm\to 780~{}nm+780~{}nm\), the slope of the PMI is \(124.8^{\circ}\), which is close to the slope of the PEI, resulting in a highly correlated JSI. This type of photon source is intrinsically spectrally correlated. Thus, it cannot be directly applied in multiphoton engineering without first eliminating the correlation to achieve high purity.
To further push the Schmidt number toward 1, a filtering method is proposed in which filters are applied to the signal and idler photons symmetrically. As shown in Fig. 3, by applying a filter with an appropriate bandwidth (\(\sim\)30 pm), the Schmidt number can be decreased to 1.05. The Schmidt mode distributions for the filtered and the original frequency correlations are shown in Fig. 3(c).
According to the study by Meyer-Scott et al. [43], there is a trade-off between purity and heralding efficiency, which can be characterized by examining the JSI distribution. The filter heralding efficiency is defined as the ratio of the probability that both photons pass through their respective filters to the probability that each individual photon passes through its filter, denoted \(\eta_{f,\{s,i\}}=\Gamma_{both}/\Gamma_{\{s,i\}}\). Fig. 4 illustrates this trade-off for parametric photons. When the filter bandwidth is approximately 30 pm, the source achieves a high purity of \(P=0.95\). However, the corresponding filter heralding efficiencies for the signal and idler photon reduce to \(\eta_{f,s}=55.6\%\) and \(\eta_{f,i}=46.5\%\), respectively, with an average of 50.8%.
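Within the same toy model, the filter heralding efficiency defined above can be estimated directly from the filtered and unfiltered JSA, as in the sketch below. The values are illustrative only; the experimental efficiencies also include coupling and detection losses that are not modelled here.

```python
# eta_f,s = Gamma_both / Gamma_s and eta_f,i = Gamma_both / Gamma_i from the toy JSA
P_both = np.sum(np.abs(f_filtered) ** 2)        # both photons pass their filters
P_s = np.sum(np.abs(g_s * f) ** 2)              # signal passes its filter (idler unconstrained)
P_i = np.sum(np.abs(g_i * f) ** 2)              # idler passes its filter (signal unconstrained)
print(f"eta_f,s = {P_both / P_s:.2f}, eta_f,i = {P_both / P_i:.2f}")
```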
Table 1 presents one set of optimal parameters for the filtered SPDC process. The first column presents data measured using 30 mm bulk PPKTP crystals. The typical generation rate is 3.1 M pairs/mW/s, and the bandwidths of the idler and signal photons are 0.21 and 0.26 nm, respectively.
The heralding efficiency is about 20%, which includes the single-mode coupling efficiency of about 40% and a typical Si detector efficiency of about 50%. The second column presents the predicted data with an ideal 30 pm filter employed symmetrically on both photons. The filtered SPDC designed in this paper requires a pump power of approximately 30 mW to produce an average photon number of \(\overline{n}\approx 0.08\). The two-fold coincidence rate has the potential to reach 237,000 counts per second. Our analysis indicates that filtered SPDC is still feasible for creating high-purity photon sources, which could have important implications for quantum information processing.
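The quoted operating point follows from simple arithmetic; the sketch below uses the filtered generation rate and the overall heralding efficiency stated in the text (the 20% figure already folds in coupling and detector efficiency).

```python
rep_rate = 76e6            # pump repetition rate, Hz
pump_mw = 30.0             # pump power, mW
rate_per_mw = 2e5          # filtered pair generation rate, pairs/mW/s
eta = 0.20                 # overall heralding efficiency per arm

pairs_per_s = rate_per_mw * pump_mw
n_bar = pairs_per_s / rep_rate                 # mean photon-pair number per pulse
twofold = pairs_per_s * eta ** 2               # expected two-fold coincidence rate, counts/s
print(f"n_bar = {n_bar:.3f} pairs/pulse, twofold = {twofold:.0f} cps")
# yields n_bar of about 0.079 and a two-fold rate of about 2.4e5 cps, consistent with the text
```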
It's worth noting that waveguides are emerging as a practical alternative to bulk crystals in the field of photonics [44; 45]. By utilizing long waveguides instead of bulk crystals, photon emission in SPDC can be enhanced by one order of magnitude or more. This enhancement is attributed to waveguides offering a higher confinement of the pump field, which leads to an increased photon-pair emission rate. Advancements in waveguide technology are anticipated to address concerns regarding the uniformity of long crystal manufacturing, the efficiency of fiber coupling, and integration with free-space optics, which ultimately paves the way for more versatile applications.
In recent years, novel quantum photon sources, such as semiconductor quantum dots [46] and atomic ensembles [47; 48; 49], have emerged as promising ways to reach
Figure 1: Schmidt numbers for different pump bandwidths. A lower Schmidt number implies a better natural purity of the generated photon pairs and a smaller discarded fraction when filtering to reach a high purity. The interval with the lowest Schmidt number of \(\sim\) 7 is shown enlarged in the figure.
Figure 4: The relationship between filter bandwidth and spectral purity, and the heralding efficiency for signal and idler photons. The filter is applied symmetrically to both signal and idler photons. As the filter becomes narrower, the purity (indicated by a solid black line) increases gradually from a low level to 1. However, simultaneously, the heralding efficiency (indicated by red dashed lines) decreases towards 0. Therefore, one must consider the trade-off between the purity and the heralding efficiency. For our application, an optimal filter bandwidth of roughly 30 pm is recommended, as this allows the purity to reach 0.95.
Figure 3: Frequency correlation and Schmidt mode for the signal and idler photons. (a) The original JSI is highly correlated in frequency, with a Schmidt number of K = 6.99. (b) By filtering both the signal and idler photons with a 30 pm filter, the frequency correlation is almost completely removed, where the Schmidt number K = 1.05. (c) Comparison of the Schmidt mode distributions for the original and the filtered JSI.
Figure 2: Example of the (a) pump envelope intensity, (b) phase-matching intensity, and (c) joint spectral intensity for a specific configuration: type-II PPKTP with a length of 30 mm, a pump laser with a central wavelength of 390 nm and a full width at half maximum (FWHM) of 15 pm. A strong anti-correlation between the frequencies of the signal and idler photons can be seen in (c).
high brightness, high purity, and long coherence time, which are essential for many quantum applications. The semiconductor chip construction is complex and requires a low-temperature bath. While semiconductor quantum dots offer a near-perfect solution to the problem of low average photon number per excitation pulse, they are still in early development stages and have not yet surpassed mature SPDC technology in terms of flexibility and simplicity.
## III Experimental implementation
Based on the above analysis, we experimentally realized a frequency-uncorrelated photon-pair source. We employed a type-II PPKTP crystal with dimensions of 1 mm \(\times\) 2 mm \(\times\) 30 mm for SPDC. The incident laser was focused to a waist diameter of \(\sim 50\)\(\mu\)m at the center of the PPKTP crystal. For the polarization, the pump and signal photons were aligned along the crystal y-axis, while the idler photons were aligned along the z-axis. Thus, the signal and idler photons could be separated by a polarizing beam splitter, coupled into single-mode fibers, and detected by Si avalanche photodiodes (APDs), respectively.
To perform the experiment, a pump laser with a pulse width of \(\sim\)15 ps is required. Since such a laser was not available to us, we converted a femtosecond laser into a transform-limited picosecond laser by spectral filtering. As illustrated on the left side of Fig. 5, the fs pulsed laser with a wavelength of 780 nm was frequency-doubled in a 1-mm-thick lithium triborate (LBO) crystal to generate a pulsed laser at 390 nm. The 780-nm fs pulses came from a Ti:sapphire laser with a repetition rate of 76 MHz. The 390-nm laser had an FWHM of 1.6 nm and an average power of 1.6 W. An air-spaced Fabry-Perot (FP) cavity with a finesse of 20 and an FWHM of 15 pm was selected as the narrow filter. The free spectral range of the FP filter was \(\sim\) 1.6 nm, so coarse filtering was necessary to select a single transmission order within the pump bandwidth. Because the fs pump beam carried high power, we employed a pair of dispersing prisms with an angular dispersion d\(\theta\)/d\(\lambda\) = 0.65 mrad/nm together with a slit as the coarse filter. Before entering the first prism, the pump beam was expanded by a lens to a diameter of \(\sim\) 5 mm with a divergence angle of \(\sim\) 100 \(\mu\)rad. An adjustable slit was placed at the focal plane to block unwanted spectral components. Afterward, the beam was reshaped, collimated, and passed through the FP cavity for fine filtering. According to the time-bandwidth product limit, the final pump pulse corresponded to a pulse width of \(\sim\)15 ps. The remaining available pump power was \(\sim\)800 \(\mu\)W in total.
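As a quick sanity check on the quoted pulse width, the transform-limited duration implied by a 15-pm filter at 390 nm can be estimated from the time-bandwidth product. The sketch below assumes a Gaussian pulse shape (time-bandwidth product \(\approx\) 0.441); the actual FP line shape differs, so this is only an order-of-magnitude estimate.

```python
# Estimate the transform-limited pulse width implied by a 15 pm filter at 390 nm.
# Assumes a Gaussian time-bandwidth product of ~0.441; the true FP line shape differs,
# so this is only an order-of-magnitude sanity check.
c = 3e8                      # speed of light, m/s
lam = 390e-9                 # central wavelength, m
dlam = 15e-12                # filter FWHM, m
dnu = c * dlam / lam**2      # spectral FWHM in Hz (~30 GHz)
dt = 0.441 / dnu             # Gaussian time-bandwidth limit
print(f"Spectral width: {dnu/1e9:.1f} GHz, transform-limited duration: {dt*1e12:.1f} ps")
# -> roughly 15 ps, consistent with the pulse width quoted above.
```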
To improve the spectral purity of the parametric photons, Fabry-Perot (FP) cavities with a full width at half maximum (FWHM) of 30 pm and a finesse of 20 were employed as filters. To assess the purity of the parametric photons, the second-order autocorrelation function \(g^{(2)}(t)\) was measured in a Hanbury Brown and Twiss-like configuration [50], where photon coincidences were detected after a balanced beam splitter. The purity is related to the second-order autocorrelation function through \(g^{(2)}(0)=1+P\), where \(P\) denotes the purity. For an ideal decorrelated state, \(g^{(2)}(0)=2\) with \(P=1\), while a strongly correlated state exhibits Poissonian statistics with \(g^{(2)}(0)=1\) and \(P=0\). In this study, numerical estimates indicated spectral purities of \(P_{s}=0.7\) and \(P_{i}=0.9\) for perfect filtering of the signal and idler photons, respectively. The measured value of \(g^{(2)}(0)\) was \(1.01\pm 0.02\) for unfiltered photons, indicating a very low purity. When the filter was applied to the idler photons, \(g^{(2)}(0)\) increased to \(1.72\pm 0.02\), indicating a purity of approximately \(P_{i}\sim 0.72\). To further enhance the purity, another filter could be employed on the signal photons. It is estimated that the purity could reach as high as 0.95 in theory when both photons are filtered.
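Given the relation \(g^{(2)}(0)=1+P\) quoted above, the measured autocorrelation values translate directly into purity estimates. A minimal sketch (the direct carry-over of the measurement uncertainty is our own simplification):

```python
# Convert measured unheralded g2(0) values into spectral-purity estimates via P = g2(0) - 1.
def purity_from_g2(g2, g2_err):
    return g2 - 1.0, g2_err   # for P = g2 - 1, the uncertainty carries over unchanged

for label, g2, err in [("unfiltered", 1.01, 0.02), ("idler filtered", 1.72, 0.02)]:
    p, perr = purity_from_g2(g2, err)
    print(f"{label}: P = {p:.2f} +/- {perr:.2f}")
# -> P ~ 0.01 (unfiltered) and P ~ 0.72 (idler filtered), matching the values in the text.
```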
After the filtering process, the observed coincidence count rate was \(1.5\times 10^{3}\) pairs/mW/s, corresponding to an estimated photon-pair generation rate of \(2\times 10^{5}\) pairs/mW/s (\(\sim 0.003\) pairs/mW per pulse). For most multiphoton experiments and applications, the optimal generation rate of the parametric photon pairs is below 0.1 per pulse in view of the signal-to-noise ratio, so a 30-mW pump laser is sufficient. The measured heralding efficiency of each parametric photon was \(\sim 4.3\%\), lower than the predicted value listed in Table 1. The extra loss can be attributed to the spectral line shape of the FP cavity, its transmittance, and the distortion of the wavefront by the cavity. By improving the filtering performance and increasing the pump power, a higher coincidence count rate can be expected in the future.
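One plausible back-of-the-envelope accounting that connects these numbers is sketched below. It assumes the per-arm optical collection efficiency equals the measured heralding efficiency divided by the \(\sim\)0.5 APD detection efficiency quoted in Fig. 5; this accounting is our assumption and is not stated explicitly in the text.

```python
# Hypothetical consistency check of the quoted rates (assumptions flagged in comments).
rep_rate = 76e6            # Hz, pump repetition rate
C = 1.5e3                  # measured coincidence rate, pairs/mW/s
eta_heralding = 0.043      # measured heralding efficiency per arm (includes detection)
eta_det = 0.5              # APD detection efficiency quoted in Fig. 5
eta_arm = eta_heralding / eta_det   # assumed per-arm optical collection efficiency
R = C / eta_arm**2                  # inferred pair generation rate, pairs/mW/s
pairs_per_pulse_per_mW = R / rep_rate
print(f"R ~ {R:.2e} pairs/mW/s, {pairs_per_pulse_per_mW:.4f} pairs/mW per pulse")
print(f"At 30 mW: {30 * pairs_per_pulse_per_mW:.3f} pairs per pulse")
# -> R ~ 2e5 pairs/mW/s, ~0.003 pairs/mW per pulse, and ~0.09 pairs per pulse at 30 mW,
#    consistent with the values quoted in the text.
```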
HOM interference is a direct way to verify the purity of an SPDC photon-pair source; the underlying theory can be found in Appendix B. It is essential to recognize
\begin{table}
\begin{tabular}{l c c}
Parameters & Without filter & With filter\({}^{\mathrm{a}}\) \\ \hline
Pump power (mW) & 30 & 30 \\
\(\Delta\lambda_{p}\) (nm) & 0.015 & 0.015 \\
\(\Delta\lambda_{\{s,i\}}\) (nm) & 0.21/0.26 & 0.03/0.03 \\
Generation rate (s\({}^{-1}\)) & 93.9 M & 6.0 M \\
Coincidence rate (s\({}^{-1}\)) & 3.75 M & 0.237 M \\
Heralding efficiency & 20.0\% & \(\sim\)10.2\% \\
\(\overline{n}\)\({}^{\mathrm{b}}\) & & 0.079 \\
Purity & 0.14 & 0.95 \\
\end{tabular}
\({}^{\mathrm{a}}\) Ideal Gaussian filters are symmetrically applied to both signal and idler photons.
\({}^{\mathrm{b}}\) Average photon number per pulse with the repetition rate set to 76 MHz.
\end{table}
Table 1: The optimal parameters for filtered SPDC.
that spectral correlation is associated with the joint distribution of photon pairs. Since the HOM interference is heralded by the idler photons, filtering them improves the spectral purity and thus enhances the visibility of the signal photons' HOM interference. We built a HOM interference configuration by preparing and combining two photon-pair sources, as shown on the right side of Fig. 5. The idler photons worked as triggers, and the signal photons were combined on a beam splitter for interference. A thermoelectric cooler controlled the temperature of each PPKTP crystal to ensure that the signal (idler) photons had the same central wavelength. All photon arrival times were recorded by a time-digital converter (TDC) to generate the four-fold coincidence events. When the signal photons were indistinguishable in all degrees of freedom, they bunched and left through the same port of the BS. Thus, the four-fold coincidence rate ideally dropped to near zero.
Digital delay scanning was employed to evaluate the visibility of the HOM interference in the experiment. Before calculating the four-fold coincidence rate, a digital time delay \(\tau\) was applied to two of the detectors, as shown in Fig. 5. The digital time delay was set to integer multiples of the pulse interval. The four-fold coincidence counts were recorded as a function of the digital time delay \(\tau\), and the corresponding experimental results are presented in Fig. 6. Without filtering, the HOM interference showed a visibility close to 0. When only the signal photons were filtered by the FP cavities and the trigger photons were left unfiltered, the central bin dropped relative to the side bins, and the visibility increased to 46.01 \(\pm\) 2.95%. Further, when the photons in all four paths were filtered by the FP filters, the visibility reached 73.48 \(\pm\) 2.38%, exceeding the classical limit of 0.5 [51, 52], which provides solid evidence of the nonclassical interference between the two photon-pair sources.
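For completeness, a minimal sketch of how the visibility can be extracted from a delay scan like Fig. 6. The count values, the side-bin averaging, and the Poissonian error bars below are our own illustrative choices, not the exact analysis used in the experiment.

```python
import numpy as np

# Illustrative extraction of HOM visibility from four-fold counts vs. digital delay.
# Delays are in units of the 13.16 ns pulse interval; the counts are hypothetical numbers.
delays = np.array([-3, -2, -1, 0, 1, 2, 3])
counts = np.array([190, 205, 198, 52, 201, 196, 208])

side = counts[delays != 0]
N_side = side.mean()                 # estimate of the far-delay coincidence level P(infinity)
N_zero = counts[delays == 0][0]      # coincidence level at zero delay P(0)
V = 1.0 - N_zero / N_side            # visibility, as defined in Appendix B

# Poissonian (sqrt(N)) error propagation on the ratio N_zero / N_side
V_err = (N_zero / N_side) * np.sqrt(1.0 / N_zero + 1.0 / (N_side * len(side)))
print(f"V = {V:.3f} +/- {V_err:.3f}")
```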
## IV Conclusion
In summary, using a filtering method, we proposed an approach for building a frequency-uncorrelated photon
Figure 5: Experimental demonstration of the HOM interference. LBO, 1-mm thick lithium triborate; PBS, polarizing beam splitter; HWP, half-wave plate; LP, long-pass filter to block 390 nm; BS, 50:50 nonpolarization beam splitter; APD, Si avalanche photodiodes with a detection efficiency of \(\sim\) 0.5. \(\tau\) is the digital delay applied to photons 3 and 4.
Figure 6: Four-fold coincidence counts at different time delays. The x-axis represents the number of delayed pulse intervals. Since the repetition rate of the pump laser was 76 MHz, the interval of each pulse was 13.16 ns. Without purification by filters, the visibility of the HOM interference is almost 0. When both signal and idler photons were filtered by FP cavities with an FWHM of 30 pm, the visibility increased to 73.5%, exceeding the classical limit.
pair source at 780 nm. We then experimentally realized this approach and demonstrated its potential for multiphoton applications by verifying HOM interference. The coherence time of the photon source is as long as tens of picoseconds, which is suitable for large-scale multiphoton applications in which remote parties must maintain independently synchronized clocks. In the future, by improving the efficiency of our configuration and upgrading the pump laser, we hope to develop the photon-pair source into a resource for long-distance quantum communication.
Furthermore, the filtered SPDC technique presented in this study can be applied to research in the telecom frequency regime. The spectral brightness of SPDC is known to decrease significantly as the wavelength increases. To overcome this limitation and achieve high brightness and long coherence length, ultrabright SPDC sources based on periodically poled lithium niobate (PPLN) waveguides can be explored using the approach discussed in this paper. In addition, commercial telecom dense wavelength division multiplexing (DWDM) technologies can be combined for enhanced versatility in constructing a multiplexed photon source, as demonstrated in previous studies [53; 54].
## Acknowledgements
This work was supported by the National Key R&D Program of China (Grants No. 2017YFA0303900), the National Natural Science Foundation of China (Grants No. U1738201, No. 11904358, No. 61625503, and No. 11822409), the Chinese Academy of Sciences (CAS), Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01), and Anhui Initiative in Quantum Information Technologies. Y. C. was supported by the Youth Innovation Promotion Association of CAS (under Grant No. 2018492).
## Appendix A Schmidt decomposition
The purity of the photons is defined as \(P_{s}=Tr(\hat{\rho}_{s}^{2})\), where \(\hat{\rho}_{s}=Tr_{i}(\ket{\Psi}\bra{\Psi})\) is the reduced density operator of the signal photon [55]. The factorability of the JSA determines the purity. When the JSA exhibits a strong correlation, the Schmidt decomposition can be used to quantify the purity [56]. The Schmidt decomposition expresses the joint function in terms of a set of orthogonal basis functions. The JSA can be expressed as follows:
\[f(\omega_{s},\omega_{i})=\sum_{k}\sqrt{\lambda_{k}}u_{k}(\omega_{s})v_{k}(\omega_{i}), \tag{10}\]
where \(u_{k}(\omega_{s})\) and \(v_{k}(\omega_{i})\) are two orthogonal basis sets of spectral functions, known as Schmidt modes. \(\lambda_{k}\) is the weight of the \(k\)-th Schmidt mode, and \(\sum_{k}\lambda_{k}=1\). The Schmidt number K is defined as follows:
\[K=\frac{1}{\sum_{k}\lambda_{k}^{2}}. \tag{11}\]
K is an indicator of entanglement: it represents the effective number of Schmidt modes present in the two-photon state. The spectral purity is the inverse of the Schmidt number K:
\[P=P_{s}=P_{i}=\frac{1}{K}. \tag{12}\]
For a two-photon state \(\ket{\psi}\) with no spectral correlation, the JSA can be written in a completely factorizable form:
\[f(\omega_{s},\omega_{i})=f_{s}(\omega_{s})f_{i}(\omega_{i}). \tag{13}\]
Thus, K = 1.
For the spectrally correlated case, the Schmidt modes can be calculated numerically by discretizing the spectrum. The JSA can then be expressed as a large square matrix \(F\), whose element \(F_{mn}\) in the \(m\)th row and \(n\)th column is the value of \(f(\omega_{s,m},\omega_{i,n})\) at the discrete frequencies \(\omega_{s,m}\) and \(\omega_{i,n}\). Singular value decomposition (SVD) can then be used to decompose the matrix \(F\) as follows:
\[F=UDV=U\left[\begin{matrix}d_{1}&&\\ &d_{2}&\\ &&\ddots\end{matrix}\right]V, \tag{14}\]
where \(U\) and \(V\) are unitary matrices; \(U\) acts on the signal frequencies \(\omega_{s}\) and \(V\) on the idler frequencies \(\omega_{i}\). The diagonal matrix \(D\) is normalized by the coefficient \(\frac{1}{\sum_{n} d_{n}^{2}}\), so the matrix of Schmidt-mode weights is
\[\left[\begin{matrix}\lambda_{1}&&\\ &\lambda_{2}&\\ &&\ddots\end{matrix}\right]=\frac{1}{\sum d_{n}^{2}}\left[\begin{matrix}d_{1} ^{2}&&\\ &d_{2}^{2}&\\ &&\ddots\end{matrix}\right]. \tag{15}\]
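The discretized decomposition above maps directly onto a numerical SVD. Below is a minimal sketch using a toy double-Gaussian JSA; the amplitude model and its widths are illustrative placeholders, not the full PPKTP phase-matching calculation from the main text.

```python
import numpy as np

# Discretize a toy joint spectral amplitude and extract the Schmidt number via SVD.
# The double-Gaussian JSA below is an illustrative stand-in for the PPKTP calculation.
w = np.linspace(-3.0, 3.0, 400)                    # detuning grid (arbitrary units)
Ws, Wi = np.meshgrid(w, w, indexing="ij")
sigma_plus, sigma_minus = 0.3, 2.0                 # pump-envelope vs. phase-matching widths
F = np.exp(-((Ws + Wi) ** 2) / (2 * sigma_plus**2)) * \
    np.exp(-((Ws - Wi) ** 2) / (2 * sigma_minus**2))

d = np.linalg.svd(F, compute_uv=False)             # singular values of the JSA matrix
lam = d**2 / np.sum(d**2)                          # normalized Schmidt weights
K = 1.0 / np.sum(lam**2)                           # Schmidt number
print(f"Schmidt number K = {K:.2f}, purity P = 1/K = {1.0/K:.2f}")
```

For this strongly anti-correlated toy amplitude the resulting K is well above 1, mirroring the unfiltered case of Fig. 3(a).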
## Appendix B HOM interference theory
HOM interference plays an essential role in revealing the nonclassical multiphoton phenomenon. Its visibility is directly related to the purity of the photon source. Down-converted photon pairs can serve as a single-photon source, where one of the photons (the idler in this work) serves as a trigger. A HOM interference can be arranged by combining two independent heralded sources on a beam splitter. The four-fold coincidence probability can be written as follows [57]:
\[P(t)= \int d\omega_{s_{1}}d\omega_{s_{2}}d\omega_{i_{1}}d\omega_{i_{2} }|f^{\prime}(\omega_{s_{1}},\omega_{i_{1}})f^{\prime}(\omega_{s_{2}},\omega_{ i_{2}}) \tag{16}\] \[-f^{\prime}(\omega_{s_{2}},\omega_{i_{1}})f^{\prime}(\omega_{s_{1 }},\omega_{i_{2}})\cdot e^{-i(\omega_{s_{2}}-\omega_{s_{1}})t}|^{2},\]
where \(t\) denotes the path delay between two independent sources.
When the independent single photons are indistinguishable, the HOM interference generates a dip in four-fold coincidence when the time delay is 0. Also, the visibility is defined as follows:
\[V=\frac{P(\infty)-P(0)}{P(\infty)}=\mathcal{E}/\mathcal{A}, \tag{17}\]
\[\mathcal{A}=\int d\omega_{s_{1}}d\omega_{s_{2}}d\omega_{i_{1}}d\omega_{i_{2}}\left|f^{\prime}(\omega_{s1},\omega_{i1})f^{\prime}(\omega_{s2},\omega_{i2})\right|^{2}, \tag{18}\]
\[\mathcal{E}=\int d\omega_{s_{1}}d\omega_{s_{2}}d\omega_{i_{1}}d\omega_{i_{2}}f^{\prime}(\omega_{s1},\omega_{i1})f^{\prime}(\omega_{s2},\omega_{i2})f^{\prime*}(\omega_{s1},\omega_{i2})f^{\prime*}(\omega_{s2},\omega_{i1}), \tag{19}\]
where \(f^{\prime*}\) denotes the complex conjugate of \(f^{\prime}\).
The visibility is theoretically equal to the state purity, \(V=P=\frac{1}{K}\)[58], which provides a straightforward method for evaluating the spectral purity.
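The \(\mathcal{A}\) and \(\mathcal{E}\) integrals above can likewise be evaluated numerically on a discretized JSA. A minimal sketch reusing the toy double-Gaussian amplitude from Appendix A (illustrative parameters, real amplitudes assumed):

```python
import numpy as np

# Evaluate the HOM visibility V = E/A on a discretized (real) JSA and compare it with 1/K.
w = np.linspace(-3.0, 3.0, 200)
Ws, Wi = np.meshgrid(w, w, indexing="ij")
F = np.exp(-((Ws + Wi) ** 2) / (2 * 0.3**2)) * np.exp(-((Ws - Wi) ** 2) / (2 * 2.0**2))

G = F.T @ F                              # overlap matrix over the idler index
A = np.trace(G) ** 2                     # discretized A integral (real amplitudes)
E = np.trace(G @ G)                      # discretized E integral
V = E / A

d = np.linalg.svd(F, compute_uv=False)
K = (d**2).sum() ** 2 / (d**4).sum()     # Schmidt number from the singular values
print(f"V = {V:.3f}, 1/K = {1.0/K:.3f}") # the two agree, as stated in the text
```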
|